This document provides an overview of using Docker containers on Amazon Web Services (AWS). It discusses the benefits of containers, how Amazon ECS provides container management and scheduling capabilities, and how to run containerized services on ECS. Key points covered include how ECS handles resource management and scheduling across a cluster, along with its APIs, tasks, services, load balancing, and deployment updates. The document concludes with a reminder to complete an evaluation.
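To make the tasks/services vocabulary concrete, here is a minimal sketch of the two core ECS objects as the request payloads you would pass to boto3's `ecs.register_task_definition()` and `ecs.create_service()`. The names ("web", the image tag, the port numbers) are illustrative assumptions, not values from the document.

```python
# A task definition describes one or more containers that run together.
task_definition = {
    "family": "web",                      # versioned name; each registration bumps the revision
    "containerDefinitions": [
        {
            "name": "web",
            "image": "example/web:1.0",   # hypothetical image in a registry
            "cpu": 256,                   # CPU units (1024 = one vCPU)
            "memory": 512,                # hard memory limit in MiB
            "essential": True,            # the task stops if this container stops
            "portMappings": [{"containerPort": 8080, "hostPort": 0}],  # dynamic host port for the load balancer
        }
    ],
}

# A service keeps a desired number of copies of a task running, replacing
# failed tasks and registering them behind a load balancer.
service = {
    "cluster": "default",
    "serviceName": "web-service",
    "taskDefinition": "web",   # family (latest revision) or "family:revision"
    "desiredCount": 2,
}
```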
This document provides an overview and summary of DevOps, microservices, and serverless architecture. It explains DevOps and how it relates to software delivery, the rise of microservices as a way to build loosely coupled services, and how serverless architecture abstracts away infrastructure management. It also summarizes AWS services that can be used to build microservices and serverless applications, such as ECS, Lambda, and API Gateway, and provides examples of architectures using these services.
Keeping consistent environments across your development, test, and production systems can be a complex task. Docker containers offer a way to develop and test your application in the same environment in which it runs in production. You can use tools such as Docker Compose for local testing of applications; Jenkins and AWS CodePipeline for code builds and workflow automation; and Amazon EC2 Container Service (ECS) to manage and scale containers.
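As a sketch of what local testing with Docker Compose looks like, the stack below is expressed as a Python dict and serialized with `json.dumps`; because YAML is a superset of JSON, the resulting file is one Compose can read. The service names and images ("web", "redis") are assumptions for illustration.

```python
import json

# A two-service local test stack: the app built from the local Dockerfile,
# plus the same dependency image it would talk to in production.
compose = {
    "version": "3",
    "services": {
        "web": {
            "build": ".",                    # build from the Dockerfile in this directory
            "ports": ["8080:8080"],
            "depends_on": ["redis"],
        },
        "redis": {"image": "redis:alpine"},  # same backing service as production
    },
}

text = json.dumps(compose, indent=2)
# Save `text` as docker-compose.json, then run:
#   docker compose -f docker-compose.json up
```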
AWS DevDay San Francisco, June 21, 2016.
Presenter: Nate Slater, Sr. Manager, Solutions Architecture
AWS is an elastic, secure, flexible, and developer-centric ecosystem that serves as an ideal platform for Docker deployments. AWS offers the scalable infrastructure, APIs, and SDKs that integrate tightly into a development lifecycle and accentuate the benefits of the lightweight and portable containers that Docker offers to its users. This session will familiarize you with the benefits of containers, introduce Amazon EC2 Container Service (ECS), and demonstrate how to use Amazon ECS for your applications.
AWS DevDay San Francisco, June 21, 2016.
Presenter: Asha Chakrabarty, Senior Solutions Architect
This document discusses continuous delivery/deployment strategies on AWS using various services. It begins with an introduction to continuous integration and continuous delivery/deployment. It then covers CD strategies such as blue-green deployments and red-black deployments. The rest of the document discusses various AWS services that can be used for application management like Elastic Beanstalk, OpsWorks, CloudFormation, and EC2 Container Service. It also covers services for application lifecycle management including CodeCommit, CodePipeline, and CodeDeploy.
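The essence of the blue-green strategy mentioned above can be shown in a few lines: two identical environments, a router pointing at one, and a cutover that is a single atomic switch with the old environment kept warm for rollback. This is a toy model of the idea, not an AWS API call.

```python
def cut_over(router, new_color):
    """Atomically repoint traffic and remember the old environment for instant rollback."""
    old = router["active"]
    router["active"] = new_color
    router["standby"] = old
    return router

router = {"active": "blue", "standby": None}
cut_over(router, "green")
# router -> {"active": "green", "standby": "blue"}
# Rolling back is the same operation in reverse: cut_over(router, "blue").
```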
The document provides an overview of setting up and managing infrastructure on Amazon ECS. It discusses setting up ECS clusters with CloudFormation templates and AWS OpsWorks, setting up container image repositories with ECR, monitoring clusters with CloudWatch, auto-scaling clusters with Auto Scaling, and service discovery options like Route 53 and Consul. It also covers security configurations, PaaS options like Elastic Beanstalk and Convox, and Remind's Empire for deploying Docker images to ECS.
Running Microservices and Docker with AWS Elastic Beanstalk - Amazon Web Services
In this session, we introduce you to a solution for easily running a Docker-powered microservices architecture on AWS using Elastic Beanstalk. We will also cover the fundamentals of Elastic Beanstalk and how it benefits developers looking for a quick and scalable way to get their applications running on AWS with no infrastructure work required.
Building a microservices architecture using Docker can require a lot of work, from launching and operating the underlying infrastructure to installing and maintaining cluster management software. With AWS Elastic Beanstalk’s multicontainer support feature, many of these tasks are simplified and abstracted away so you can focus on your application code. AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. Elastic Beanstalk leverages Amazon EC2 Container Service for its container management capabilities.
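Elastic Beanstalk's multicontainer Docker platform is driven by a `Dockerrun.aws.json` file (version 2) whose `containerDefinitions` mirror an ECS task definition. The sketch below shows its shape; the container names and images are illustrative assumptions.

```python
# Equivalent Python dict for a Dockerrun.aws.json (version 2) file that runs
# an app container behind an nginx proxy container on one Beanstalk instance.
dockerrun = {
    "AWSEBDockerrunVersion": 2,
    "containerDefinitions": [
        {
            "name": "php-app",
            "image": "example/php-app",   # hypothetical application image
            "essential": True,
            "memory": 256,
        },
        {
            "name": "nginx-proxy",
            "image": "nginx",
            "essential": True,
            "memory": 128,
            "portMappings": [{"hostPort": 80, "containerPort": 80}],
            "links": ["php-app"],         # proxy forwards to the app container
        },
    ],
}
```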
One of the core principles behind the design of Amazon ECS is the separation of the scheduling logic from the state management. This allows you to use the Amazon ECS schedulers, write your own schedulers, or integrate with third party schedulers.
In this session we will explore the advanced cluster management capabilities of Amazon ECS and dive deep into the Amazon ECS Service Scheduler, which supports long-running applications by monitoring container health, restarting failed containers, and load balancing across containers. We will explain how you can communicate with the Amazon ECS API in order to integrate your own custom schedulers. We will then walk through how we built an Apache Mesos scheduler driver that enables you to integrate Mesos scheduling frameworks to work with Amazon ECS without requiring a Mesos cluster. We will also demo using Marathon to schedule Docker containers on an Amazon ECS cluster.
AWS DevDay San Francisco, June 21, 2016.
Presenter: Dan Gerdesmeier, Sr. Software Development Engineer
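The "write your own scheduler" idea from this session reduces to: read cluster state from the ECS API (DescribeContainerInstances), pick an instance, and call StartTask against it. Below is a toy placement function only, assuming each instance reports its remaining CPU units and memory; the binpack strategy and instance IDs are illustrative.

```python
def place_task(instances, cpu_needed, mem_needed):
    """Return the instance that fits the task with the least CPU headroom left
    (a simple binpack strategy), or None if nothing fits."""
    candidates = [
        i for i in instances
        if i["cpu_free"] >= cpu_needed and i["mem_free"] >= mem_needed
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda i: i["cpu_free"])

# Hypothetical cluster state, as a custom scheduler might assemble it
# from DescribeContainerInstances responses.
instances = [
    {"id": "i-aaa", "cpu_free": 512, "mem_free": 900},
    {"id": "i-bbb", "cpu_free": 300, "mem_free": 400},
]
chosen = place_task(instances, cpu_needed=256, mem_needed=300)
# chosen["id"] == "i-bbb": the tightest fit that still satisfies the request.
```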
The document provides an overview of application lifecycle management (ALM) in a serverless world. It discusses key concepts like continuous integration/delivery and testing practices for serverless applications. Serverless architectures using AWS Lambda and API Gateway are highlighted, along with how to manage deployments, configurations, and monitor applications.
AWS re:Invent 2016: Securing Container-Based Applications (CON402) - Amazon Web Services
Containers have had an incredibly large adoption rate since Docker was launched, especially from the developer community, as it provides an easy way to package, ship, and run applications. Securing your container-based application is now becoming a critical issue as applications move from development into production. In this session, you learn ways to implement storing secrets, distributing AWS privileges using IAM roles, protecting your container-based applications with vulnerability scans of container images, and incorporating automated checks into your continuous delivery workflow.
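One concrete shape these practices take in an ECS task definition: a per-task IAM role (`taskRoleArn`) so containers receive only the AWS privileges they need, and secrets injected by reference rather than baked into the image. The ARNs and names below are placeholders, and the `secrets` field reflects ECS's SSM/Secrets Manager integration, which postdates this 2016 session.

```python
# Task definition sketch combining a least-privilege task role with
# launch-time secret injection.
task_definition = {
    "family": "payments",
    "taskRoleArn": "arn:aws:iam::123456789012:role/payments-task",  # scoped IAM role for this task only
    "containerDefinitions": [
        {
            "name": "app",
            "image": "example/payments:1.4",   # hypothetical image
            "essential": True,
            "memory": 512,
            "secrets": [
                # Value is fetched at container launch and exposed as an
                # environment variable; it never appears in the image or
                # the task definition itself.
                {"name": "DB_PASSWORD",
                 "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/payments/db_password"},
            ],
        }
    ],
}
```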
AWS is an elastic, secure, flexible, and developer-centric ecosystem that serves as an ideal platform for Docker deployments. AWS offers the scalable infrastructure, APIs, and SDKs that integrate tightly into a development lifecycle and accentuate the benefits of the lightweight and portable containers that Docker offers to its users. This session familiarizes you with the benefits of containers, introduces Amazon EC2 Container Service, and demonstrates how to use Amazon ECS to run containerized applications at scale in production.
(DVO313) Building Next-Generation Applications with Amazon ECS - Amazon Web Services
Two trends are driving app development: The shift from the server-based web to rich applications that run on a diverse set of mobile devices and modern browsers, and the growth of microservices running in the cloud that serve these clients. The results are “connected clients” - apps with the processing power of the device that are statefully connected and scaled to the cloud. In this session, you will learn about the architecture for Meteor's JavaScript app platform, Galaxy, which uses Amazon ECS, Elastic Load Balancing, and AWS CloudFormation to provide highly available, scalable, isolated environments for stateful apps across browsers and devices. We will discuss the essential characteristics of the platform, how those are provided for, and why we decided to use Amazon ECS instead of alternatives, such as Kubernetes. We will also demonstrate the Galaxy system in production.
Building a CI/CD Pipeline for Containers - DevDay Los Angeles 2017 - Amazon Web Services
What to expect:
- Review continuous integration, delivery, and deployment
- Using Docker images, Amazon ECS, and Amazon ECR for CI/CD
- Deployment strategies with Amazon ECS
- Building Docker container images with AWS CodeBuild
- Orchestrating deployment pipelines with AWS CodePipeline
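The bullets above chain together as a single pipeline. The sketch below models the stage structure CodePipeline orchestrates, simplified to stage names and action descriptions; a real pipeline declaration also carries role ARNs, artifact stores, and provider configuration.

```python
# Simplified model of a container CI/CD pipeline: a source change triggers
# a run that flows through each stage in order.
pipeline = {
    "name": "containers-pipeline",
    "stages": [
        {"name": "Source", "actions": ["Check out application source"]},
        {"name": "Build",  "actions": ["CodeBuild: docker build, then push image to ECR"]},
        {"name": "Deploy", "actions": ["Point the ECS service at the new task definition revision"]},
    ],
}

stage_names = [stage["name"] for stage in pipeline["stages"]]
# stage_names == ["Source", "Build", "Deploy"]
```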
AWS is an elastic, secure, flexible, and developer-centric ecosystem that serves as an ideal platform for Docker deployments. AWS offers the scalable infrastructure, APIs, and SDKs that integrate tightly into a development lifecycle and accentuate the benefits of the lightweight and portable containers that Docker offers to its users. This session will familiarize you with the benefits of containers, introduce Amazon EC2 Container Service (ECS), and demonstrate how to use Amazon ECS for your applications.
Containers are a developer's new best friend. For all the non-developers, what does this mean? This session will demystify the abstraction called containers and dive deep into how it changes the way we provision, deliver, deploy, and manage applications.
Speaker: Shiva Narayanaswamy, Solutions Architect, Amazon Web Services
Continuous Delivery with AWS Lambda - AWS April 2016 Webinar Series - Amazon Web Services
Managing the deployment of code to multiple AWS Lambda functions and updating your API Gateway methods can be manual and time-consuming.
In this webinar, we will show you how to build a deployment pipeline to AWS Lambda using AWS CodePipeline. We will discuss how to use versioning, allowing you to better manage the different variations of your Lambda function and API Gateway methods in your development workflow, such as development, staging, and production. We will walk through how to automate the entire release process of your application from development to staging and finally to production, performing automated integration tests at each stage.
Learning Objectives:
Understand the basics of AWS CodePipeline
Learn how to version AWS Lambda functions and API Gateway methods
Build a deployment pipeline to AWS Lambda
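Lambda versioning in miniature: publishing freezes an immutable, numbered version of the function, and aliases ("staging", "prod") are movable pointers into that history, so promotion is just repointing an alias. The toy model below captures that mechanic; it is an analogy for the behavior, not a Lambda API call.

```python
# Published versions are append-only and numbered from 1, like Lambda's.
versions = []

def publish(code):
    """Freeze the current code as a new immutable version; return its number."""
    versions.append(code)
    return len(versions)

# Aliases are named pointers to versions; API Gateway stages would invoke
# the function through an alias rather than a fixed version.
aliases = {}

def promote(alias, version):
    aliases[alias] = version

v1 = publish("handler v1")
promote("staging", v1)
v2 = publish("handler v2")
promote("staging", v2)   # staging moves forward to v2...
promote("prod", v1)      # ...while prod stays pinned to v1
# aliases == {"staging": 2, "prod": 1}
```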
(DVO306) AWS CodeDeploy: Automating Your Software Deployments - Amazon Web Services
So you’ve written some code. Now what? How do you make it available to your customers in an efficient and reliable manner? Learn how you can use AWS CodeDeploy to easily and quickly push your application updates. This talk will introduce you to the basics of CodeDeploy: key concepts, how it works, where it fits in your release process, and some deployment strategies to get you started on the right foot. We’ll walk through several demos, going from a basic sample deployment to a live update of a large multi-instance fleet, giving you a sense for how CodeDeploy can grow with your needs.
The document discusses AWS Code services that can be used to automate the software release process. It describes CodeCommit for source control, CodeBuild for building and testing code, and CodeDeploy for deploying builds to EC2/on-premises servers. CodePipeline allows orchestrating builds and deployments across different environments through a visual workflow.
(DVO305) Turbocharge Your Continuous Deployment Pipeline with Containers - Amazon Web Services
This document outlines best practices for using containers in a continuous delivery pipeline. It recommends using containers with tools like Docker, Docker Compose, Amazon ECS, Jenkins, and AWS CodePipeline to build, test, and deploy applications. The workflow involves developing code in a source code repository, building Docker images, running tests inside containers, and deploying containers to production using Amazon ECS and AWS services for automation and orchestration of the pipeline. Demo applications and architectures are presented to illustrate container-based continuous delivery.
AWS re:Invent 2016: Chalk Talk: Succeeding at Infrastructure-as-Code (GPSCT312) - Amazon Web Services
- Infrastructure as code is the practice of provisioning and managing infrastructure using code and software development techniques like version control. This allows infrastructure changes to be tested and deployed in a consistent, repeatable way.
- AWS services like CloudFormation, OpsWorks, and CodeDeploy allow defining infrastructure as code templates and automating the deployment of applications and infrastructure changes across environments like development, testing, and production.
- CloudFormation templates define AWS resources and their dependencies and can be used to create matching environments in different stages. OpsWorks and CodeDeploy help manage application deployments and ongoing configuration of running systems.
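A minimal sketch of a CloudFormation template as data: resources plus a parameter, so the same template stamps out matching dev, test, and production stacks. The resource and parameter names are illustrative assumptions.

```python
import json

# Skeleton template: one parameter selects the environment, and the same
# resource definitions produce an identical stack in each one.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "EnvName": {"Type": "String"}   # e.g. dev / test / prod
    },
    "Resources": {
        "EcsCluster": {
            "Type": "AWS::ECS::Cluster",
            "Properties": {"ClusterName": {"Fn::Sub": "${EnvName}-cluster"}},
        }
    },
}

body = json.dumps(template, indent=2)
# Pass `body` to CreateStack with EnvName=dev, then again with EnvName=prod,
# to get two environments built from identical code.
```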
(CMP406) Amazon ECS at Coursera: A general-purpose microservice - Amazon Web Services
"Coursera has helped millions of students learn computer science through MOOCs ranging from Introduction to Python, to state-of-the-art Functional-Reactive Programming in Scala. Our interactive educational experience relies upon an automated grading platform for programming assignments. But, because anyone can sign up for a course on Coursera for free, our systems must defend against arbitrary code execution.
Come learn how Coursera uses AWS services such as Amazon EC2 Container Service (ECS), and Amazon Virtual Private Cloud (VPC) to power a defense-in-depth strategy to secure our infrastructure against bad actors. We have modified the Amazon ECS Agent to support security layers including kernel privilege de-escalation, and enabling mandatory access control systems. Additionally, we post-process uploaded grading container images to defang binaries.
At the core of automated grading is a general-purpose near-line & batch scheduling and execution microservice built on top of the Amazon ECS APIs. We use this flexible system to power a variety of internal services across the company including data exports for instructors, course announcement emails, data reconciliation jobs, and more.
In this session, we detail aspects of our success from implementing Docker and Amazon ECS in production, providing ideas for your own scheduling, execution and hardening requirements."
Docker, Unikernels and Docker for Mac discusses how Docker spans the continuum of compute by enabling the building, shipping, and running of applications across Linux containers, Windows containers, and soon unikernels. Docker for Mac embeds a hypervisor and extends it with improvements for native packaging, enabling Docker containers to run seamlessly on Mac systems. Unikernels compile application source code into custom operating systems including only required functionality for high performance, efficiency, and security. Docker aims to incorporate unikernels onto a continuum with Linux and Windows containers to allow applications to run from datacenters to clouds to IoT.
AWS January 2016 Webinar Series - Introduction to Deploying Applications on AWS - Amazon Web Services
Based on your specific needs and the nature of your application, AWS offers a variety of services for getting your application up and running. You may want to launch and scale a web application or you may want to host a microservices application using Docker containers. How do you decide which service to use and when?
In this webinar, we will provide an overview of the AWS services that help simplify launching and running your application in the cloud. We will discuss the strengths of each service and provide a framework for understanding when to use them.
Learning Objectives:
Understand the primary services for deploying your application on AWS
Learn the basics of AWS Elastic Beanstalk, AWS CodeDeploy, and Amazon EC2 Container Service
Gain an understanding of the strengths of each service and when to use them
Who Should Attend:
Developers, DevOps Engineers, IT Professionals
Automating Software Deployments with AWS CodeDeploy by Matthew Trescot, Manag... - Amazon Web Services
This document discusses AWS CodeDeploy, a service that automates software deployments to EC2 instances and on-premises servers. It provides an overview of CodeDeploy's key concepts including applications, deployment groups, deployment configurations, and hooks. It also shows examples of how CodeDeploy can be used for automated deployments across development, test, and production environments. The document suggests additional features like CloudFormation support and integration with CI/CD tools.
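CodeDeploy's unit of configuration is the AppSpec file: what to copy where, and which lifecycle hooks run scripts at each phase of a deployment. The sketch below expresses it as the equivalent Python dict (the real file is YAML, named `appspec.yml`); the script paths are illustrative assumptions.

```python
# AppSpec sketch for an EC2/on-premises deployment: file mappings plus
# lifecycle hooks, each running a script with a timeout.
appspec = {
    "version": 0.0,
    "os": "linux",
    "files": [
        {"source": "/build/app", "destination": "/opt/myapp"},
    ],
    "hooks": {
        "ApplicationStop":  [{"location": "scripts/stop.sh",   "timeout": 60}],   # stop the old revision
        "BeforeInstall":    [{"location": "scripts/deps.sh",   "timeout": 120}],  # install dependencies
        "ApplicationStart": [{"location": "scripts/start.sh",  "timeout": 60}],   # start the new revision
        "ValidateService":  [{"location": "scripts/health.sh", "timeout": 60}],   # fail the deploy if unhealthy
    },
}
```

A failing `ValidateService` hook is what lets CodeDeploy halt or roll back a rolling deployment before it reaches the whole fleet.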
DevOps at Amazon: A Look at Our Tools and Processes by Matthew Trescot, Manag... - Amazon Web Services
Matthew Trescot discusses DevOps and new AWS developer tools. He explains that DevOps aims to speed up the software development lifecycle through efficiencies. AWS has adopted microservices and continuous delivery to deploy code 50 million times per year across thousands of teams. The new AWS Code services - CodeCommit, CodePipeline, and CodeDeploy - help automate deployments and release processes. CodeCommit provides version control, CodePipeline builds pipelines, and CodeDeploy automates deployments.
AWS re:Invent 2016: Getting Started with Docker on AWS (CMP209) - Amazon Web Services
AWS is an elastic, secure, flexible, and developer-centric ecosystem that serves as an ideal platform for Docker deployments. AWS offers the scalable infrastructure, APIs, and SDKs that integrate tightly into a development lifecycle and accentuate the benefits of the lightweight and portable containers that Docker offers to its users.
This session familiarizes you with the benefits of containers, introduce Amazon EC2 Container Service, and demonstrates how to use Amazon ECS to run containerized applications at scale in production.
This is a basic workshop for Amazon ECS. In this workshop you will learn:
- AWS computing services overview
- Monoliths and microservices
- What is Docker
- How to dockerize your app on your local laptop
- How to run your Docker app on Amazon ECS using Amazon ECR
- How to use ecs-cli
- Best practices for designing your Dockerfile
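The "dockerize your app" step above, sketched as a minimal Dockerfile for a hypothetical Python web app (the base image and filenames are assumptions, not from the workshop):

```python
# Dockerfile contents held as a string so the layer-ordering point is inspectable.
dockerfile = """\
FROM python:3-slim
WORKDIR /app
# Copy the dependency list first: the pip layer stays cached across
# code-only changes, one of the Dockerfile best practices the outline names.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
"""
# Save as `Dockerfile`, then build and run locally:
#   docker build -t myapp .
#   docker run -p 8080:8080 myapp
```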
One of the core principles behind the design of Amazon ECS is the separation of the scheduling logic from the state management. This allows you to use the Amazon ECS schedulers, write your own schedulers, or integrate with third party schedulers.
In this session we will explore the advanced cluster management capabilities of Amazon ECS and dive deep into the Amazon ECS Service Scheduler, which supports long-running applications by monitoring container health, restarting failed containers, and load balancing across containers. We will explain how you can communicate with the Amazon ECS API in order to integrate your own custom schedulers. We will then walk through how we built an Apache Mesos scheduler driver that enables you to integrate Mesos scheduling frameworks to work with Amazon ECS without requiring a Mesos cluster. We will also demo using Marathon to schedule Docker containers on an Amazon ECS cluster.
AWS DevDay San Francisco, June 21, 2016.
Presenter: Dan Gerdesmeier, Sr. Software Development Engineer
The document provides an overview of application lifecycle management (ALM) in a serverless world. It discusses key concepts like continuous integration/delivery and testing practices for serverless applications. Serverless architectures using AWS Lambda and API Gateway are highlighted, along with how to manage deployments, configurations, and monitor applications.
AWS re:Invent 2016: Securing Container-Based Applications (CON402)Amazon Web Services
Containers have had an incredibly large adoption rate since Docker was launched, especially from the developer community, as it provides an easy way to package, ship, and run applications. Securing your container-based application is now becoming a critical issue as applications move from development into production. In this session, you learn ways to implement storing secrets, distributing AWS privileges using IAM roles, protecting your container-based applications with vulnerability scans of container images, and incorporating automated checks into your continuous delivery workflow.
AWS is an elastic, secure, flexible, and developer-centric ecosystem that serves as an ideal platform for Docker deployments. AWS offers the scalable infrastructure, APIs, and SDKs that integrate tightly into a development lifecycle and accentuate the benefits of the lightweight and portable containers that Docker offers to its users. This session familiarizes you with the benefits of containers, introduce Amazon EC2 Container Service, and demonstrates how to use Amazon ECS to run containerized applications at scale in production.
(DVO313) Building Next-Generation Applications with Amazon ECSAmazon Web Services
Two trends are driving app development: The shift from the server-based web to rich applications that run on a diverse set of mobile devices and modern browsers, and the growth of microservices running in the cloud that serve these clients. The results are “connected clients” - apps with the processing power of the device that are statefully connected and scaled to the cloud. In this session, you will learn about the architecture for Meteor's JavaScript app platform, Galaxy, which uses Amazon ECS, Elastic Load Balancing, and AWS CloudFormation to provide highly available, scalable, isolated environments for stateful apps across browsers and devices. We will discuss the essential characteristics of the platform, how those are provided for, and why we decided to use Amazon ECS instead of alternatives, such as Kubernetes. We will also demonstrate the Galaxy system in production.
Building a CI/CD Pipeline for Containers - DevDay Los Angeles 2017Amazon Web Services
What to expect:
- Review continuous integration, delivery, and deployment
- Using Docker images, Amazon ECS, and Amazon ECR for CI/CD
- Deployment strategies with Amazon ECS
- Building Docker container images with AWS CodeBuild
- Orchestrating deployment pipelines with AWS CodePipeline
AWS is an elastic, secure, flexible, and developer-centric ecosystem that serves as an ideal platform for Docker deployments. AWS offers the scalable infrastructure, APIs, and SDKs that integrate tightly into a development lifecycle and accentuate the benefits of the lightweight and portable containers that Docker offers to its users. This session will familiarize you with the benefits of containers, introduce Amazon EC2 Container Service (ECS), and demonstrate how to use Amazon ECS for your applications.
Containers are a developer's new best friend. For all the non-developers, what does this mean? This session will demystify this abstraction called containers, and dive deep on how it changes the way we provision, deliver, deploy and manage applications.
Speaker: Shiva Narayanaswamy, Solutions Architect, Amazon Web Services
Continuous Delivery with AWS Lambda - AWS April 2016 Webinar SeriesAmazon Web Services
Managing the deployment of code to multiple AWS Lambda functions and updating your API Gateway methods can be manual and time consuming.
In this webinar, we will show you how to build a deployment pipeline to AWS Lambda using AWS CodePipeline. We will discuss how to use versioning, allowing you to better manage the different variations of your Lambda function and API Gateway methods in your development workflow, such as development, staging, and production. We will walk through how to automate the entire release process of your application from development to staging and finally to production, performing automated integration tests at each stage.
Learning Objectives:
Understand the basics of AWS CodePipeline
Learn how to version AWS Lambda functions and API Gateway methods
Build a deployment pipeline to AWS Lambda
(DVO306) AWS CodeDeploy: Automating Your Software DeploymentsAmazon Web Services
So you’ve written some code. Now what? How do you make it available to your customers in an efficient and reliable manner? Learn how you can use AWS CodeDeploy to easily and quickly push your application updates. This talk will introduce you to the basics of CodeDeploy: key concepts, how it works, where it fits in your release process, and some deployment strategies to get you started on the right foot. We’ll walk through several demos, going from a basic sample deployment to a live update of a large multi-instance fleet, giving you a sense for how CodeDeploy can grow with your needs.
The document discusses AWS Code services that can be used to automate the software release process. It describes CodeCommit for source control, CodeBuild for building and testing code, and CodeDeploy for deploying builds to EC2/on-premises servers. CodePipeline allows orchestrating builds and deployments across different environments through a visual workflow.
(DVO305) Turbocharge YContinuous Deployment Pipeline with ContainersAmazon Web Services
This document outlines best practices for using containers in a continuous delivery pipeline. It recommends using containers with tools like Docker, Docker Compose, Amazon ECS, Jenkins, and AWS CodePipeline to build, test, and deploy applications. The workflow involves developing code in a source code repository, building Docker images, running tests inside containers, and deploying containers to production using Amazon ECS and AWS services for automation and orchestration of the pipeline. Demo applications and architectures are presented to illustrate container-based continuous delivery.
AWS re:Invent 2016: Chalk Talk: Succeeding at Infrastructure-as-Code (GPSCT312)Amazon Web Services
- Infrastructure as code is the practice of provisioning and managing infrastructure using code and software development techniques like version control. This allows infrastructure changes to be tested and deployed in a consistent, repeatable way.
- AWS services like CloudFormation, OpsWorks, and CodeDeploy allow defining infrastructure as code templates and automating the deployment of applications and infrastructure changes across environments like development, testing, and production.
- CloudFormation templates define AWS resources and their dependencies and can be used to create matching environments in different stages. OpsWorks and CodeDeploy help manage application deployments and ongoing configuration of running systems.
(CMP406) Amazon ECS at Coursera: A general-purpose microserviceAmazon Web Services
"Coursera has helped millions of students learn computer science through MOOCs ranging from Introduction to Python, to state-of-the-art Functional-Reactive Programming in Scala. Our interactive educational experience relies upon an automated grading platform for programming assignments. But, because anyone can sign up for a course on Coursera for free, our systems must defend against arbitrary code execution.
Come learn how Coursera uses AWS services such as Amazon EC2 Container Service (ECS), and Amazon Virtual Private Cloud (VPC) to power a defense-in-depth strategy to secure our infrastructure against bad actors. We have modified the Amazon ECS Agent to support security layers including kernel privilege de-escalation, and enabling mandatory access control systems. Additionally, we post-process uploaded grading container images to defang binaries.
At the core of automated grading is a general-purpose near-line & batch scheduling and execution microservice built on top of the Amazon ECS APIs. We use this flexible system to power a variety of internal services across the company including data exports for instructors, course announcement emails, data reconciliation jobs, and more.
In this session, we detail aspects of our success from implementing Docker and Amazon ECS in production, providing ideas for your own scheduling, execution and hardening requirements."
Keeping consistent environments across your development, test, and production systems can be a complex task. Docker containers offer a way to develop and test your application in the same environment in which it runs in production. You can use tools such as Docker Compose for local testing of applications; Jenkins and AWS CodePipeline for code builds and workflow automation; and Amazon EC2 Container Service (ECS) to manage and scale containers.
Docker, Unikernels and Docker for Mac discusses how Docker spans the continuum of compute by enabling the building, shipping, and running of applications across Linux containers, Windows containers, and soon unikernels. Docker for Mac embeds a hypervisor and extends it with improvements for native packaging, enabling Docker containers to run seamlessly on Mac systems. Unikernels compile application source code into custom operating systems including only required functionality for high performance, efficiency, and security. Docker aims to incorporate unikernels onto a continuum with Linux and Windows containers to allow applications to run from datacenters to clouds to IoT.
AWS January 2016 Webinar Series - Introduction to Deploying Applications on AWS – Amazon Web Services
Based on your specific needs and the nature of your application, AWS offers a variety of services for getting your application up and running. You may want to launch and scale a web application or you may want to host a microservices application using Docker containers. How do you decide which service to use and when?
In this webinar, we will provide an overview of the AWS services that help simplify launching and running your application in the cloud. We will discuss the strengths of each service and provide a framework for understanding when to use them.
Learning Objectives:
Understand the primary services for deploying your application on AWS
Learn the basics of AWS Elastic Beanstalk, AWS CodeDeploy, and Amazon EC2 Container Service
Gain an understanding of the strengths of each service and when to use them
Who Should Attend:
Developers, DevOps Engineers, IT Professionals
Automating Software Deployments with AWS CodeDeploy by Matthew Trescot, Manag... – Amazon Web Services
This document discusses AWS CodeDeploy, a service that automates software deployments to EC2 instances and on-premises servers. It provides an overview of CodeDeploy's key concepts including applications, deployment groups, deployment configurations, and hooks. It also shows examples of how CodeDeploy can be used for automated deployments across development, test, and production environments. The document suggests additional features like CloudFormation support and integration with CI/CD tools.
DevOps at Amazon: A Look at Our Tools and Processes by Matthew Trescot, Manag... – Amazon Web Services
Matthew Trescot discusses DevOps and new AWS developer tools. He explains that DevOps aims to speed up the software development lifecycle through efficiencies. AWS has adopted microservices and continuous delivery to deploy code 50 million times per year across thousands of teams. The new AWS Code services - CodeCommit, CodePipeline, and CodeDeploy - help automate deployments and release processes. CodeCommit provides version control, CodePipeline builds pipelines, and CodeDeploy automates deployments.
AWS re:Invent 2016: Getting Started with Docker on AWS (CMP209) – Amazon Web Services
AWS is an elastic, secure, flexible, and developer-centric ecosystem that serves as an ideal platform for Docker deployments. AWS offers the scalable infrastructure, APIs, and SDKs that integrate tightly into a development lifecycle and accentuate the benefits of the lightweight and portable containers that Docker offers to its users.
This session familiarizes you with the benefits of containers, introduces Amazon EC2 Container Service (ECS), and demonstrates how to use Amazon ECS to run containerized applications at scale in production.
This is a basic workshop for Amazon ECS. In this workshop you will learn:
AWS computing services overview
Monolith and Microservices
What is Docker
How to dockerize your app on your local laptop
How to run your Docker app in Amazon ECS and ECR
How to use ecs-cli
Best practices for designing your Dockerfile
The document discusses containers and container management on AWS. It provides an overview of containers and microservices, then describes how Amazon ECS can be used to manage container clusters at scale. Key benefits of Amazon ECS include easily managing container clusters of any size, flexible container placement, and integration with other AWS services. It also discusses task definitions, services, scheduling, updating services, and the task placement engine.
The document provides an overview of running Docker containers on AWS using ECS. It discusses:
- Why containers are useful for building scalable microservices applications.
- How ECS handles cluster management, scheduling containers across a cluster, and integrates with other AWS services.
- Common workflows for using ECS, such as pushing images to ECR, defining tasks, running tasks/services, updating services, and monitoring with CloudWatch.
- Security considerations like IAM roles for containers and tasks.
- Examples of task placement strategies and a customer case study on using ECS at scale.
The document concludes by noting other AWS services that complement ECS and taking questions.
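The task placement strategies mentioned above determine which cluster instance receives each task; ECS supports strategies such as binpack and spread. As a toy sketch of the idea (illustrative only, not the real ECS placement engine; the instance records and field names here are made up):

```python
# Toy sketch of two ECS-style task placement strategies (not the actual
# ECS placement engine): "binpack" packs tasks onto the instance with the
# least free memory that still fits, while "spread" balances task counts
# evenly across instances.

def place_task(instances, required_memory, strategy="binpack"):
    """Pick an instance id for a task needing `required_memory` MiB."""
    candidates = [i for i in instances if i["free_memory"] >= required_memory]
    if not candidates:
        return None  # no instance can host the task
    if strategy == "binpack":
        # Least remaining memory first: fill instances before using new ones
        chosen = min(candidates, key=lambda i: i["free_memory"])
    elif strategy == "spread":
        # Fewest running tasks first: even distribution across the cluster
        chosen = min(candidates, key=lambda i: i["task_count"])
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    chosen["free_memory"] -= required_memory
    chosen["task_count"] += 1
    return chosen["id"]

cluster = [
    {"id": "i-1", "free_memory": 4096, "task_count": 0},
    {"id": "i-2", "free_memory": 1024, "task_count": 0},
]
print(place_task(cluster, 512, "binpack"))  # → i-2: least free memory that fits
```

With binpack the smaller instance fills up first, leaving the larger one free for bigger tasks; with spread the same task would land on the less-loaded instance instead.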
"In recent years, Docker containers have become a key component of modern application design. Increasingly, developers are breaking their applications apart into smaller components and distributing them across a pool of compute resources. Using Docker on your local development machine is simple, but running Docker applications at scale in production can be difficult.
In this session, we will discuss the difficulties of running Docker in production and how Amazon EC2 Container Service (ECS) can be used to reduce the operational burdens. We will give an overview of the core architectural principles underlying Amazon ECS, and we will walk through a number of patterns used by our customers to run their microservices platforms, to run batch jobs, and for deployments and continuous integration. We will also demonstrate how to define multi-container applications with Docker Compose and deploy and scale them seamlessly on a cluster with Amazon ECS."
This session provides the attendee with an overview of our Amazon EC2 Container Service (Amazon ECS) and the benefits of running a managed cluster on AWS. We also discuss the benefits from a customer perspective.
Containers are the new buzzword. You have been playing with them for a while but want to explore a better way to orchestrate, manage, and deploy them? Find out how Amazon ECS can help you run applications on a managed cluster of EC2 instances and help you leverage familiar features like security groups, Elastic Load Balancing, EBS volumes, and IAM roles at scale.
Speaker: Ninad Phatak
Solutions Architect, Amazon India
AWS January 2016 Webinar Series - Introduction to Docker on AWS – Amazon Web Services
Using Docker on your local development machine is simple, but running Docker applications at scale in production can be difficult.
In this webinar, we will discuss the difficulties of running Docker in production and how Amazon EC2 Container Service (ECS) can be used to reduce the operational burdens, and we will give an overview of the architecture powering Amazon ECS. We will also demo how to define multi-container applications with Docker Compose and deploy and scale them seamlessly to a cluster with Amazon ECS.
Learning Objectives:
Understand the benefits and architecture of Amazon ECS
Learn how to deploy and scale Docker containers on Amazon ECS
Who Should Attend:
Developers
The document discusses getting started with Docker containers on Amazon Web Services (AWS). It introduces containers and their benefits like portability and efficiency. It then describes Amazon Elastic Container Service (ECS), a fully managed container orchestration service that allows users to run and scale containerized applications on AWS. Key aspects of ECS covered include clusters, tasks, services, scheduling, load balancing, and updating deployments.
AWS April Webinar Series - Getting Started with Amazon EC2 Container Service – Amazon Web Services
How do you deploy and manage containerized applications at scale? Amazon ECS is a new AWS service that makes it easy to run and manage Docker-enabled applications across a cluster of Amazon EC2 instances. This webinar will familiarize you with the benefits of containers, introduce Amazon EC2 Container Service (ECS), and demonstrate how to use Amazon ECS for your applications. You will learn how to define, schedule, and stop sets of containers. You will also learn how to access the state of your resources to view running tasks and EC2 instance utilization in your cluster.
Learning Objectives:
• Understand the benefits of containers
• Define and deploy containers on Amazon ECS
• Access cluster state information to track utilization and running tasks
• Integrate Amazon ECS into your existing software release process or CI/CD (Continuous Integration / Continuous Delivery) pipeline
Who Should Attend:
• Developers, system administrators, Docker users, container users
During this session, you will get an update on what’s happening at AWS, APN program announcements, new services, and how to position AWS to your customers. You will also get introduced to some of our ISV solutions that could help you strengthen your value proposition to your customers.
This document provides an overview of using Docker containers on Amazon Web Services (AWS). It begins with an introduction to containers and Docker, explaining how containers allow applications to be easily deployed across different environments. It then discusses Amazon EC2 Container Service (ECS), a highly scalable and managed container orchestration service that supports Docker containers. The document outlines key components of ECS including clusters, tasks, services, scheduling, and integration with other AWS services. It provides examples of how to use ECS to deploy containers as tasks or long-running services behind a load balancer, updating services, and automatically scaling them.
ECS in action provides an overview of using Amazon ECS for container deployment and management. Key features of ECS include a good web console, auto recovery of failed containers, and rolling upgrades. With ECS, containers are deployed across a cluster of Amazon EC2 instances with ECS agents that interface with the Docker daemon. The persistence layer is kept outside of containers for easier management. While ECS met their needs, the author notes some requested features like global services and improved logging/monitoring integration.
This document provides an overview of running Docker containers on Amazon ECS. It discusses the benefits of containers and microservices architectures and how Amazon ECS can help manage containerized applications at scale. Key points include:
1) Amazon ECS is a fully managed container orchestration service that allows users to easily run and scale containerized applications on EC2 instances without having to manage the underlying infrastructure.
2) With ECS, users can define tasks, services, and clusters to deploy their containerized applications across a cluster of EC2 instances managed by ECS.
3) ECS provides benefits like elastic scaling, integration with other AWS services for load balancing, storage, networking, etc., and optimized scheduling of containers across the cluster.
This document provides an overview of Docker and containers on AWS. It discusses the benefits of containers including portability and efficiency. It also describes how microservices architectures are a natural fit for containers. The document then discusses using Amazon ECS for container scheduling and orchestration, including task definitions, services, task placement strategies, and consuming real-time events. Finally, it introduces Blox, an open source project that provides an alternative scheduler and cluster management experience on ECS.
This document discusses containers and Amazon ECS. It provides an overview of containers and their benefits like portability and efficiency. It then describes Amazon ECS as a highly scalable and performant container management service that supports Docker containers. It discusses how ECS runs applications on a managed cluster of EC2 instances using tasks, services, and scheduling. It also outlines some key benefits of ECS like being fully managed, integration with other AWS services, and application load balancing. Finally, it provides examples of commands to create an ECS cluster, register a task definition, and create a service to run tasks.
Amazon EC2 Container Service: Manage Docker-Enabled Apps in EC2 – Amazon Web Services
Amazon EC2 Container Service (Amazon ECS) is a new AWS service that makes it easy to run and manage Docker-enabled applications across a cluster of Amazon EC2 instances. Amazon ECS lets you define, schedule, and stop sets of containers. You have access to the state of your resources, making it easy to confirm that tasks are running or view the utilization of EC2 instances in your cluster. This session will describe the benefits of containers, introduce ECS, and demonstrate how to use ECS for your applications.
This document discusses containers and Amazon ECS. It provides an overview of containers and their benefits like portability and efficiency. It then describes Amazon ECS and how it provides a fully managed container orchestration service. Key aspects covered include clusters, tasks, services, and scheduling. It also outlines some benefits of Amazon ECS like elastic scaling and integration with other AWS services. Finally, it provides steps to run services using the ECS CLI to register a task definition and create a cluster and service.
Containers have become key in modern application design. It is relatively easy to run a few containers on your laptop, but building and maintaining an entire infrastructure to run and manage containerized apps is hard and requires a lot of undifferentiated heavy lifting.
In this session, we will discuss some of the core architectural principles underlying Amazon ECS, a highly scalable, high performance service to run and manage distributed applications using the Docker container engine. We will explore the advanced scheduling capabilities of Amazon ECS and dive deep into the Amazon ECS Service Scheduler, which optimizes for long-running applications by monitoring container health, restarting failed containers, and load balancing.
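As a rough mental model of what the service scheduler described above does (a toy sketch for intuition, not Amazon ECS code; all names are hypothetical), a reconciliation loop compares the service's desired task count against the healthy running tasks and emits start/stop actions:

```python
# Toy reconciliation loop in the spirit of the ECS Service Scheduler
# (a sketch for intuition, not the actual implementation): keep
# `desired_count` healthy copies of a task running, replacing failures.

def reconcile(desired_count, running_tasks):
    """Return the actions needed to converge on `desired_count` healthy tasks.

    `running_tasks` is a list of dicts like {"id": "t1", "healthy": True}.
    """
    actions = []
    healthy = [t for t in running_tasks if t["healthy"]]
    for task in running_tasks:
        if not task["healthy"]:
            actions.append(("stop", task["id"]))          # replace failed containers
    for n in range(desired_count - len(healthy)):
        actions.append(("start", f"replacement-{n}"))     # scale back up to desired
    for task in healthy[desired_count:]:
        actions.append(("stop", task["id"]))              # scale down any extras
    return actions

tasks = [{"id": "t1", "healthy": True}, {"id": "t2", "healthy": False}]
print(reconcile(desired_count=2, running_tasks=tasks))
# → [('stop', 't2'), ('start', 'replacement-0')]
```

The real scheduler adds health checks via the load balancer, placement constraints, and deployment configuration on top of this basic converge-to-desired-state loop.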
7. Services evolve to microservices
[Diagram: a monolithic application with Order UI, User UI, and Shipping UI on top of an Order Service, a User Service, and a Shipping Service, all sharing one Data Access layer. Alongside it, the microservices version distributes the services across hosts: Host 1 runs Service A and Service B; Host 2 runs Service B and Service D; Host 3 runs Service A and Service C; Host 4 runs Service B and Service C.]
8. Containers are natural for microservices
Simple to model
Any app, any language
Image is the version
Test & deploy same artifact
Stateless servers decrease change risk
11. Scheduling a cluster is hard
[Diagram: a fleet of dozens of identical servers, each running its own guest OS, illustrating the scale at which containers must be scheduled.]
12. What is Amazon ECS?
Amazon EC2 Container Service (ECS) is a highly scalable, high performance container management service. You can use Amazon ECS to schedule the placement of containers across your cluster. You can also integrate your own scheduler or a third-party scheduler to meet business- or application-specific requirements.
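The unit ECS schedules is a task, described by a task definition. A minimal one might look like the sketch below, built as a Python dict (the field names follow the ECS task definition format; the family name, image, and port values are made up for illustration):

```python
# Minimal ECS-style task definition as a Python dict (a sketch of the task
# definition format; the family name, image, and ports are illustrative).
task_definition = {
    "family": "web-app",                      # logical name shared by revisions
    "containerDefinitions": [
        {
            "name": "web",
            "image": "my-repo/web-app:1.0",   # the image *is* the version
            "cpu": 256,                        # CPU units to reserve
            "memory": 512,                     # hard memory limit in MiB
            "essential": True,                 # task stops if this container stops
            "portMappings": [
                {"containerPort": 80, "hostPort": 0},  # 0 = dynamic host port
            ],
        }
    ],
}
print(task_definition["containerDefinitions"][0]["image"])
```

Registering this document with ECS (for example via the AWS CLI's `aws ecs register-task-definition`) produces a revision that services and run-task calls can then reference.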
27. Designed for use with other AWS services
Elastic Load Balancing
Amazon Elastic Block Store
Amazon Virtual Private Cloud
Amazon CloudWatch
AWS Identity and Access Management
AWS CloudTrail
37. Create Service
Load balance traffic across containers, automatically recover unhealthy containers, and discover services.
[Diagram: Elastic Load Balancing in front of three groups of containers, each group with a shared data volume.]
38. Scale Service
Scale up; scale down.
[Diagram: Elastic Load Balancing distributing traffic across four groups of containers, each group with a shared data volume.]
40. Update Service
Deploy new version; drain connections.
[Diagram: three new container groups started alongside the three old groups, each with a shared data volume, behind Elastic Load Balancing.]
41. Update Service (cont.)
[Diagram: connections drain from the old containers while the new containers take over traffic.]
42. Update Service (cont.)
[Diagram: only the new containers remain behind Elastic Load Balancing.]
43. Update Service (cont.)
Specify a deployment configuration for your service:
• minimumHealthyPercent: lower limit (as a percentage of the service's desiredCount) on the number of running tasks that must remain running in a service during a deployment.
• maximumPercent: upper limit (as a percentage of the service's desiredCount) on the number of running tasks that can be running in a service during a deployment.
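These two percentages translate into concrete bounds on the task count mid-deployment; for example, with desiredCount = 4, minimumHealthyPercent = 50, and maximumPercent = 200, the scheduler may run between 2 and 8 tasks during the rollout. A quick sketch of the arithmetic (the rounding behavior shown is an assumption; the example values divide evenly so it does not matter here):

```python
import math

# Bounds on the number of running tasks during a deployment, derived from
# the service's deployment configuration. Rounding is an assumption of this
# sketch; the examples below use values that divide evenly.
def deployment_bounds(desired_count, minimum_healthy_percent, maximum_percent):
    lower = math.floor(desired_count * minimum_healthy_percent / 100)
    upper = math.floor(desired_count * maximum_percent / 100)
    return lower, upper

# desiredCount=4, minimumHealthyPercent=50, maximumPercent=200:
print(deployment_bounds(4, 50, 200))  # → (2, 8)
```

A maximumPercent above 100 lets the scheduler start new tasks before stopping old ones (extra capacity during the swap), while a minimumHealthyPercent below 100 lets it stop old tasks first on a capacity-constrained cluster.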
So we are going to briefly recap why containers, the challenges you may face in production, and some of the usage patterns
We will then talk about cluster management and how Amazon ECS fits into all of this
Then we will close with a demo
Containers are similar to hardware virtualization (like EC2); however, instead of partitioning a machine, containers isolate the processes running on a single operating system
This is a useful concept that lets you use the OS kernel to create multiple isolated user-space processes that can have constraints on them, like CPU and memory.
The Docker CLI makes using containers easy, with commands like docker run.
Docker images make it easy to define what runs in a container and versions the entire app
These concepts enable automation – you can define your app, build & share the image, and deploy that image.
You may be thinking – this sounds interesting but why would I want to use containers? There are 4 key benefits to using containers.
1.) The first is that containers are portable.
the image is consistent and immutable -- no matter where I run it, or when I start it, it’s the same.
This makes the dev lifecycle simpler – an image works the same on the developer’s desktop and in prod, whether I start it today or scale my environment tomorrow, so there are no surprises.
The entire Application is self-contained -- The image is the version, which makes deployments and scaling easier because the image includes the dependencies.
Images are small, usually tens of MB, and very shareable.
2.) Containers are flexible.
You can create clean, reproducible, and modular environments.
Whereas in the past multiple processes (e.g., Ruby, caching, log pushing) would run on the same OS, containers now make it easy to decompose an app into smaller chunks, like microservices, reducing complexity and letting teams move faster while still running the processes on the same host, with no library conflicts
This streamlines both code deployment and infrastructure management
Simply stating that Docker containers start fast sells the technology short; the speed shows up both in performance characteristics and in application lifecycle and deployment benefits
So yes, containers start quickly because the operating system is already running, but
Every container can be a single-threaded dev stream, with fewer interdependencies
Also ops benefits - Example: IT updates the base image, I just do a new docker build – I can just focus on my app, meaning it’s faster for me to build & release.
4.) Finally, containers are efficient. You can allocate exactly the resources you want: specific CPU, RAM, disk, network
Since they share the same OS kernel and libraries, containers use fewer resources than running the same processes on different virtual machines (a different way to get isolation)
So I want to tell you a story about Amazon.com and the evolution of its architecture.
Over 10 years ago, Amazon had a large monolithic application running its website. Everything from its UI, ordering systems, and recommendations engine to the shopping cart was one big application with one large code base. The problem with that was there were a lot of code interdependencies that had to be resolved. Another problem Amazon experienced was that it was hard to scale the website. If one service was memory intensive and another CPU intensive, the servers had to be provisioned with enough memory and CPU to handle that baseline load. So if the CPU-intensive service received a heavy load, you had to provision a large machine and ended up with a lot of underutilized resources
In order to scale better, Amazon decomposed its architecture into individual services that could be deployed separately. This allowed it to scale each service independently. It was able to have smaller teams that worked on each of the services and controlled that service's codebase. This allowed the website to evolve faster because new updates could be delivered independently of other teams. This architecture is what is now known as microservices.
Containers & Docker are natural for this pattern of microservices
It makes services simple to model; The application and all its dependencies are packaged into an image using a Dockerfile.
It supports Any app, any language
The Image is a versioned artifact that can be stored in a repository just like your source code.
This makes applications easy to test & deploy because they are the same artifacts
Containers also simplify deployment -- Stateless servers are natural with Docker and each deployment is a new set of containers
This Decreases risk of change – rollback is simple
This all makes it easy to decompose applications to microservices. Every microservice is self contained allowing you to reduce dependency conflicts and decouple deployments.
So lets talk about scheduling
The Docker CLI is great if you want to run a container on your laptop for example “docker run myimage”.
But it’s challenging to scale to 100s of containers. Now you’re suddenly managing a cluster & cluster management is hard.
You need a way to intelligently place your containers on the instances that have the resources and that means you need to know the state of everything in your system. For example…
what instances have available resources like memory and ports?
How do I know if a container dies?
How do I hook into other resources like ELB?
Can I extend whatever system I use, e.g., a CD pipeline, third-party schedulers, etc.?
Do I need to operate another piece of software?
These are the questions and challenges that our customers had which led us to build Amazon ECS
These are snippets taken from the ECS landing page.
At its core, ECS is a container management service which enables schedulers to run concurrently on top. We currently offer a built-in scheduler, known as the Amazon ECS Service Scheduler, to make running services on ECS easy.
Resource Manager is responsible for keeping track of resources like memory, CPU, and storage and their availability at any given time in the cluster.
Next, the Scheduler is responsible for scheduling containers or tasks for execution.
The scheduler contains algorithms for assigning tasks to nodes in the cluster based on the resources required to execute the task.
To properly schedule you need to:
Know your constraints like memory, CPU
Find resources from your cluster that meet the constraints
Request a resource
Confirm the resource
The scheduler is also responsible for the task execution lifecycle.
Is the task alive or dead, and should it be rescheduled?
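Those scheduling steps can be sketched in a few lines. This is a toy illustration with hypothetical data structures, not the ECS implementation: find an instance whose free CPU and memory meet the task's constraints, then reserve the resources.

```python
def place_task(task, instances):
    """Find the first instance satisfying the task's CPU/memory constraints,
    reserve the resources, and return the instance id (None if no fit)."""
    for inst in instances:
        if inst["cpu"] >= task["cpu"] and inst["memory"] >= task["memory"]:
            inst["cpu"] -= task["cpu"]        # request the resources
            inst["memory"] -= task["memory"]  # confirm by committing the reservation
            return inst["id"]
    return None

cluster = [{"id": "i-1", "cpu": 512, "memory": 1024},
           {"id": "i-2", "cpu": 2048, "memory": 4096}]
print(place_task({"cpu": 1024, "memory": 2048}, cluster))  # i-2
```

A real scheduler would also weigh placement strategies (bin packing vs. spreading across Availability Zones), but the core loop is the same: match constraints against cluster state, then commit the reservation.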
ECS provides a simple solution to cluster management:
We have a cluster management engine that coordinates the cluster of instances, which is just a pool of CPU, memory, storage, and networking resources
The instances are just EC2 instances that are running our agent that have been checked into a cluster. You own them and can SSH into them if you want
Dynamically scalable. Possible to have a 1 instance cluster, and then a 100 or even 1000 instance cluster.
Segment for particular purposes, e.g. dev/test
On each instance, we have the ECS agent, which communicates with the engine, processes ECS commands, and turns them into Docker commands
It instructs the EC2 instance to start and stop containers and monitors the used and available resources
It’s all open source on Github and we develop in the open, so we’d love to see you involved through pull requests.
To coordinate this cluster we need a single source of truth for all the instances in the cluster, tasks running on the instances, and containers that make up the task, and the resources available. This is known as cluster state
So at the heart of ECS is a key/value store that stores all of this cluster state
To be robust and scalable, this key/value store needs to be distributed for durability and availability
But because the key/value store is distributed, making sure data is consistent and handling concurrent changes becomes more difficult
For example, if two developers request all the remaining memory resources from a certain EC2 instance for their container, only one container can actually receive those resources and the other would have to be told their request could not be completed.
As such, some form of concurrency control has to be put in place in order to make sure that multiple state changes don’t conflict.
Lets talk a bit how we achieve this concurrency control under the hood
We implemented Amazon ECS using one of Amazon’s core distributed systems primitives:
a Paxos-based transactional journaled data store that keeps a record of every change made to a data entry.
Any write to the data store is committed as a transaction in the journal with a specific order-based ID.
The current value in a data store is the sum of all transactions made as recorded by the journal.
Any read from the data store is only a snapshot in time of the journal.
For a write to succeed, the write proposed must be the latest transaction since the last read.
So if a user made a read, a few writes happened after that, and the user then tried to write based on the last seen ID, the write wouldn't succeed
This primitive allows Amazon ECS to store its cluster state information with optimistic concurrency,
which is ideal in environments where constantly changing data is shared.
This architecture affords Amazon ECS high availability, low latency, and high throughput because the data store is never pessimistically locked.
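A minimal sketch of optimistic concurrency over an ordered journal (illustrative only, not Amazon's implementation): every write carries the transaction ID from the caller's last read, and conflicting writes are rejected rather than blocking on a lock.

```python
class JournaledStore:
    """Toy journaled key/value store with optimistic concurrency control."""
    def __init__(self):
        self.journal = []  # ordered list of (key, value) transactions

    def read(self, key):
        """Return (value, transaction_id): a snapshot in time of the journal."""
        value = None
        for k, v in self.journal:
            if k == key:
                value = v  # current value is the sum of all transactions
        return value, len(self.journal)

    def write(self, key, value, last_seen_id):
        """Commit only if no transaction happened since the caller's read."""
        if last_seen_id != len(self.journal):
            return False  # stale read: caller must re-read and retry
        self.journal.append((key, value))
        return True

store = JournaledStore()
_, tid = store.read("i-1.free_memory")
store.write("i-1.free_memory", 0, tid)       # first writer claims the memory
ok = store.write("i-1.free_memory", 0, tid)  # second writer's stale ID is rejected
print(ok)  # False
```

This mirrors the two-developers example above: both read the same free-memory snapshot, but only the first write commits; the second gets a rejection and must re-read the cluster state.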
But what is unique about ECS is we decouple the container scheduling from the cluster management.
We have opened up the Amazon ECS cluster manager through a set of API actions that allow customers to access all the cluster state information stored in our key/value store
This set of API actions form the basis of solutions that customers can build on top of Amazon ECS such as connecting your CICD system or schedulers
This API allows you to connect different schedulers to ECS
A scheduler just provides logic around how, when, and where to start and stop containers.
Amazon ECS’ architecture is designed to share the state of the cluster and allow customers to run as many varieties of schedulers (e.g., bin packing, spread, etc) as needed for their applications.
The reason we developed ECS was customers had been running containers and Docker on EC2 for quite some time.
What customers told us about was the difficulty of running these containers at scale, which generally involved installing and managing cluster management software
Eliminates cluster management software
Manages cluster state
Manages containers
Control and monitoring
Scale from one to tens of thousands of containers
Earlier this year we ran a load test
Over a 3-day period we scaled our cluster from 200 to over 1,000 instances, as represented by the purple line
The green and red lines show the p99 and p50 latencies
As you can see, they are relatively flat, demonstrating that ECS is stable and will scale regardless of your cluster size
So Amazon ECS has two built-in schedulers to help find the optimal instance placement based on your resource needs, isolation policies, and availability requirements:
A scheduler for long running applications and services
A scheduler for short running tasks like batch jobs
As discussed before, because ECS provides you a powerful set of APIs, you can integrate your own custom scheduler as well as open source schedulers.
All of these allow you to have very flexible methods to do scheduling on ECS
Amazon ECS is built to work with the AWS services you value. You can set up each cluster in its own Virtual Private Cloud and use security groups to control network access to your EC2 instances. You can store persistent information using EBS, and you can route traffic to containers using ELB. CloudTrail integration captures every API access for security analysis, resource change tracking, and compliance auditing
As discussed before ECS has a simple set of APIs that allows it to be very easy to integrate and extend
You can use your own container scheduler or connect ECS into your existing software delivery process (e.g., continuous integration and delivery systems)
Our container agent and CLI is open source and available on GitHub. We look forward to hearing your input and pull requests.
Summing everything up, ECS reduces the amount of code you need to go from idea to implementation when building distributed systems.
So, rather than having Mesos or other cluster management software having to manage a set of machines directly, ECS manages your instances.
Much of the undifferentiated heavy lifting and housekeeping has been abstracted behind a set of APIs.
The ability to run multiple tasks on a shared pool of resources can also lead to higher utilization and faster task completion than if compute resources are statically partitioned.
You can model your app using a file called a Task Definition
This file defines the containers you want to run together.
A task definition also lets you specify Docker concepts like links to establish network channels between the containers and the volumes your containers need.
Task definitions are tracked by name and revision, just like source code
To create a task definition, you can use the console to specify the Docker image to use for the containers
You can specify resources like CPU and memory, ports and volumes for each container.
You can specify what command to run when the container starts.
And the essential flag specifies whether the task should fail if the container stops running.
You can also type everything as JSON if you want
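As an illustrative sketch (hypothetical names and image tags, trimmed to the fields discussed above), a two-container task definition in JSON might look like this:

```json
{
  "family": "rails-with-nginx",
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "my-repo/nginx:latest",
      "cpu": 256,
      "memory": 128,
      "portMappings": [{"containerPort": 80, "hostPort": 80}],
      "links": ["rails"],
      "essential": true
    },
    {
      "name": "rails",
      "image": "my-repo/rails:v1",
      "cpu": 512,
      "memory": 512,
      "essential": true,
      "mountPoints": [{"sourceVolume": "shared-data", "containerPath": "/data"}]
    }
  ],
  "volumes": [{"name": "shared-data", "host": {}}]
}
```

Here the `links` entry creates the network channel between nginx and rails, and `essential: true` means the task fails if either container stops running.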
Once your task definition is created, scheduling it onto an instance with available resources creates a task
A task is an instantiation of a task definition.
You can have a task with just 1 container…or up to 10 that work together on a single machine. Maybe nginx in front of rails, or redis behind rails.
You can run as many tasks on an instance as will fit.
Often people wonder about cross-host links; those don't go in your task. Put the containers behind an ELB or a discovery system and make multiple tasks.
ECS has a scheduler that is good for long-running applications called the service scheduler
You reference a task definition and the number of tasks you want to run and then can optionally place it behind an ELB.
The scheduler will then launch the number of tasks that you requested
The scheduler will maintain the number of tasks you want to run and will automatically load balance across them
Scaling up and down is simple. You just tell the scheduler how many tasks you need and the scheduler will automatically launch more tasks or terminate tasks
Amazon EC2 Container Service (Amazon ECS) can now automatically scale container-based applications by dynamically growing and shrinking the number of tasks run by an Amazon ECS service.
Now, you can automatically scale an Amazon ECS service based on any Amazon CloudWatch metric. For example, you can use CloudWatch metrics published by Amazon ECS, such as each service’s average CPU and memory usage. You can also use CloudWatch metrics published by other services or use custom metrics that are specific to your application. For example, a web service could increase the number of tasks based on Elastic Load Balancing metrics like SurgeQueueLength, while a batch job could increase the number of tasks based on Amazon SQS metrics like ApproximateNumberOfMessagesVisible.
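As a toy sketch of the idea (hypothetical thresholds and step size, not the Application Auto Scaling API), a step-scaling decision driven by a CloudWatch metric could look like:

```python
def desired_tasks(current, metric_value, high=75, low=25, step=2, minimum=1, maximum=20):
    """Step-scaling sketch: grow the task count when the metric (e.g., average
    service CPU %) is high, shrink when it is low, clamped to [minimum, maximum]."""
    if metric_value > high:
        current += step
    elif metric_value < low:
        current -= step
    return max(minimum, min(maximum, current))

print(desired_tasks(4, 90))  # 6: scale out on high CPU
print(desired_tasks(4, 10))  # 2: scale in on low CPU
```

In the real service, a CloudWatch alarm on the chosen metric (CPU, SurgeQueueLength, ApproximateNumberOfMessagesVisible, or a custom metric) triggers a scaling policy that adjusts the service's desired count in exactly this spirit.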
Updating a service is easy
You deploy the new version, and the scheduler will launch tasks with the new application version
It will drain connections from the old containers and remove them
Leaving only the newest containers running
minimumHealthyPercent represents the minimum number of running tasks during a deployment.
maximumPercent represents an upper limit on the number of running tasks during a deployment, enabling you to define the deployment batch size.