Elastic Load Balancing provides a scalable, highly available load balancer that automatically distributes incoming application traffic across multiple Amazon EC2 instances. It enables you to achieve even greater fault tolerance in your applications, seamlessly providing the load balancing capacity needed in response to incoming traffic. In this session, we take a deeper look at some of the existing and newer features that enable application developers to build highly available architectures that are resilient to load spikes and application failures. We also explore features that allow seamless integration with services such as Auto Scaling and Amazon Route 53 to further improve the scalability and resilience of your applications.
All You Need to Know about AWS Elastic Load Balancer - Cloudlytics
Elastic Load Balancer (ELB) distributes incoming application traffic across multiple Amazon EC2 instances, performs health checks on the instances, and directs traffic away from unhealthy instances to ensure application availability. ELBs scale automatically to match the incoming application traffic load, distributing traffic evenly across healthy EC2 instances. ELBs can distribute traffic to instances across availability zones for high availability.
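The health-check-and-route behavior described above can be sketched as a toy model (hypothetical Python, not the ELB API): a round-robin router that simply skips targets a health check has marked unhealthy.

```python
import itertools

class LoadBalancer:
    """Toy model of ELB behavior: round-robin over healthy targets only."""

    def __init__(self, targets):
        # health state for each registered target, keyed by instance id
        self.healthy = {t: True for t in targets}
        self._cycle = itertools.cycle(targets)

    def set_health(self, target, is_healthy):
        # a failed/passed health check flips a target's state
        self.healthy[target] = is_healthy

    def route(self):
        # skip unhealthy targets, as ELB does after failed health checks
        for _ in range(len(self.healthy)):
            t = next(self._cycle)
            if self.healthy[t]:
                return t
        raise RuntimeError("no healthy targets")

lb = LoadBalancer(["i-a", "i-b", "i-c"])
lb.set_health("i-b", False)
print([lb.route() for _ in range(4)])  # ['i-a', 'i-c', 'i-a', 'i-c']
```

Note that once "i-b" is marked unhealthy it receives no traffic at all; the real service likewise drains and excludes failing targets until they pass health checks again.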
Elastic Load Balancing Deep Dive and Best Practices - Pop-up Loft Tel Aviv - Amazon Web Services
The document provides an overview of Elastic Load Balancing (ELB) on AWS. It discusses how ELB automatically distributes traffic across EC2 instances and provides high availability and fault tolerance. It covers key ELB concepts like public/private load balancing, health checks, cross-zone load balancing, and integration with other AWS services like CloudWatch, Route 53, ACM, and VPC. The document also provides best practices for ELB configuration and monitoring to ensure high performance and reliability of applications.
Using the New Network Load Balancer with Amazon ECS - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Choosing the right Elastic Load Balancer for your architecture
- How to use Network Load Balancer with Amazon ECS
- How to configure your tasks and services to take advantage of the Network Load Balancer
Slide deck for ELB (Elastic Load Balancer), a topic in AWS Architect Associate and AWS SysOps certification training, offered for individual, group, or corporate training.
Designing Fault Tolerant Applications on AWS - Janakiram MSV - Amazon Web Services
This document discusses how to design fault-tolerant applications on AWS. It describes key AWS building blocks like EC2, EBS, ELB, and RDS that provide redundancy and high availability. EC2 instances can be launched across availability zones and attached to EBS volumes for persistent storage. Load balancers distribute traffic across instances. RDS supports multi-AZ deployments for database fault tolerance. Together these services allow applications to automatically recover from failures with minimal downtime.
Amazon Elastic Load Balancing (ELB) distributes traffic across multiple Amazon EC2 instances and monitors the health of the instances. It ensures traffic is only routed to healthy instances. ELB uses listeners to check for connection requests and target groups to route requests to registered targets like EC2 instances. There are three types of load balancers - Application Load Balancer, Network Load Balancer, and Classic Load Balancer - that operate at different layers and are suited for different use cases. ELB integrates with other AWS services like EC2, Route 53, CloudWatch, and Auto Scaling to improve availability and scalability.
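The listener/target-group relationship described above can be modeled in a short hypothetical sketch (path-based rules in the style of an Application Load Balancer; the class and field names are invented for illustration, not an AWS SDK):

```python
class TargetGroup:
    """A named set of registered targets (e.g. EC2 instance ids)."""
    def __init__(self, name, targets):
        self.name, self.targets = name, list(targets)

class Listener:
    """Toy ALB listener: checks the port, then maps requests to target
    groups via path-prefix rules, falling back to a default group."""
    def __init__(self, port, rules, default):
        self.port = port
        self.rules = rules      # list of (path_prefix, TargetGroup)
        self.default = default  # default TargetGroup

    def route(self, port, path):
        if port != self.port:
            return None  # a listener only accepts its configured port
        for prefix, tg in self.rules:
            if path.startswith(prefix):
                return tg
        return self.default

api = TargetGroup("api", ["i-1", "i-2"])
web = TargetGroup("web", ["i-3"])
listener = Listener(443, [("/api", api)], default=web)
print(listener.route(443, "/api/users").name)   # api
print(listener.route(443, "/index.html").name)  # web
```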
(SDD423) Elastic Load Balancing Deep Dive and Best Practices | AWS re:Invent ... - Amazon Web Services
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances for fault tolerance and load distribution. In this session, we go into detail about Elastic Load Balancing's configuration and day-to-day management, as well as its use in conjunction with Auto Scaling. We explain how to make decisions about the service's many customization choices. We also share best practices and useful tips for success.
AWS re:Invent 2016: Lessons Learned from a Year of Using Spot Fleet (CMP205) - Amazon Web Services
Over the last year, Yelp has transitioned its scalable and reliable parallel task execution system, Seagull, from On-Demand and Reserved Instances entirely to Spot Fleet. Seagull runs over 28 million tests per day, launches more than 2.5 million Docker containers per day, and uses over 10,000 vCPUs in Spot Fleet at peak capacity. To deal with rising infrastructure costs for Seagull, we have extended our in-house Auto Scaling Engine called FleetMiser to scale the Spot Fleet in response to demand. FleetMiser has reduced Seagull’s cluster costs by 60% in the past year and saved Yelp thousands of dollars every month.
In this session, we describe how Yelp uses Spot Fleet for Seagull and lessons we’ve learned over the past year, along with our recommendations on how to use it reliably (pro tip: don’t get outbid for your whole Spot Fleet). We conclude by looking at our future plans for extending Spot Fleet usage at Yelp.
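A demand-responsive scaling engine like the one described can be reduced to a target-tracking calculation: size the fleet so a per-unit metric returns to its target. The sketch below is a hypothetical illustration (the metric, target, and clamping behavior are assumptions, not FleetMiser's actual logic):

```python
import math

def desired_capacity(current, metric, target, min_cap, max_cap):
    """Target-tracking style scaling: if the per-unit metric (e.g. queue
    depth per vCPU) is twice the target, roughly double the fleet."""
    if metric <= 0:
        want = min_cap  # no demand: fall back to the floor
    else:
        want = math.ceil(current * metric / target)
    # clamp to the fleet's configured bounds
    return max(min_cap, min(max_cap, want))

# demand doubled relative to target -> roughly double the fleet
print(desired_capacity(current=100, metric=2.0, target=1.0,
                       min_cap=10, max_cap=500))  # 200
```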
Building a CICD Pipeline for Containers - DevDay Austin 2017 - Amazon Web Services
This document summarizes a presentation on building a continuous integration and continuous deployment (CI/CD) pipeline for deploying containers. It discusses using Docker images, Amazon ECS, and Amazon ECR for CI/CD. It covers deployment strategies like blue/green deployments and canary deployments with Amazon ECS. It also describes building Docker images with AWS CodeBuild and orchestrating pipelines with AWS CodePipeline.
Getting Started with Docker on AWS - DevDay Los Angeles 2017 - Amazon Web Services
The document discusses getting started with Docker on AWS. It provides an overview of containers and Docker, and introduces Amazon ECS for managing Docker containers on AWS. Key points include: Docker provides portable application environments; Amazon ECS manages container scheduling across EC2 instances; and services in ECS allow deploying containers behind a load balancer for long-running applications.
by Nathan Wray, Sr. Technical Account Manager, AWS
Managing the code testing and deployment lifecycle for containerized applications is a complex task. In this session, we will explore how to build effective CI/CD workflows to manage containerized code deployments using Amazon EC2 Container Service, Amazon EC2 Container Registry, and the AWS Code Suite tools. We will explore best practices for CI/CD architectures used by our customers to deploy containers onto AWS, including how to create an accessible CI/CD platform and how to execute blue/green and canary deployments for containerized apps. Level 300
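Canary deployments like those mentioned above come down to sending a small fraction of traffic to the new version and growing that fraction as confidence builds. A deterministic toy sketch (the function name and batch-based split are illustrative assumptions; real load balancer weighting is probabilistic, per request):

```python
def split_traffic(requests, canary_weight):
    """Send the first `canary_weight` fraction of a request batch to the
    canary (new) version; the remainder go to the stable fleet."""
    cut = int(len(requests) * canary_weight)
    return requests[:cut], requests[cut:]

# a canary rollout might step 5% -> 25% -> 100%
reqs = list(range(100))
canary, stable = split_traffic(reqs, 0.05)
print(len(canary), len(stable))  # 5 95
```

In a blue/green deployment the "weight" jumps straight from 0.0 to 1.0 once the green fleet passes its checks, which is what makes rollback a one-step traffic flip.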
This document provides guidance on troubleshooting issues with EC2 instances and Elastic Load Balancers (ELB) on AWS. It begins by recommending monitoring the AWS service health dashboard and CloudWatch metrics. Potential causes and resolutions are outlined for common problems with EC2 instance launching, health, networking, and EBS volumes. For ELBs, error messages, response metrics, health checks, and other potential problems are covered. The document concludes by listing information needed for support cases and additional resources.
This document provides an overview of using Docker containers on Amazon Web Services (AWS). It begins with an introduction to containers and Docker, explaining how containers allow applications to be easily deployed across different environments. It then discusses Amazon EC2 Container Service (ECS), a highly scalable and managed container orchestration service that supports Docker containers. The document outlines key components of ECS including clusters, tasks, services, scheduling, and integration with other AWS services. It provides examples of how to use ECS to deploy containers as tasks or long-running services behind a load balancer, updating services, and automatically scaling them.
AWS November Webinar Series - From Local Development to Production Using the ... - Amazon Web Services
Running and managing large-scale multi-container applications in production usually requires different tools than those used for development.
In this webinar, we will show you how to use the Amazon EC2 Container Service CLI with Docker Compose to define and run multi-container applications in a local development environment. We will also show how you can eliminate the need to install, operate, and scale your own cluster management infrastructure by using Amazon ECS. We will then demonstrate how to schedule your multi-container application as defined by Compose across a production Amazon ECS cluster. We will also walk through some best practice patterns used by customers for running their microservices platforms or batch jobs.
Learning Objectives:
- Understand the basics of the Amazon ECS CLI
- Run multi-container applications defined by Docker Compose using the Amazon ECS CLI
- Learn how to run and manage production applications using Amazon ECS
Who Should Attend:
Developers, system administrators, Docker users, container users
Different containerized services have different needs. You may want to deploy containers to ensure availability, maximize resource utilization, or ensure data security. As you build and run production microservices based on containers, having powerful tools to manage the placement and scheduling of these workloads is critical. In this talk, we will focus on the capabilities of the Amazon EC2 Container Service task placement engine, options for task scheduling, and explore the use cases and construction of custom task schedulers.
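The binpack and spread strategies offered by the task placement engine can be approximated in a few lines (a simplified sketch; the real ECS engine also considers memory, ports, and placement constraints):

```python
def binpack(instances, cpu_needed):
    """Binpack strategy: pick the instance with the LEAST remaining CPU
    that still fits the task, packing hosts tightly to maximize
    resource utilization."""
    fits = [i for i in instances if i["cpu_free"] >= cpu_needed]
    return min(fits, key=lambda i: i["cpu_free"]) if fits else None

def spread(instances, task_counts):
    """Spread strategy: pick the instance currently running the FEWEST
    tasks, distributing load evenly for availability."""
    return min(instances, key=lambda i: task_counts[i["id"]])

hosts = [{"id": "i-1", "cpu_free": 512},
         {"id": "i-2", "cpu_free": 128},
         {"id": "i-3", "cpu_free": 1024}]
print(binpack(hosts, 100)["id"])  # i-2 (tightest fit that still works)
```

A custom scheduler of the kind the talk describes would layer extra policy (data locality, AZ balance, security constraints) on top of primitives like these.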
This document provides an overview of Docker containers and Amazon ECS for running containers on AWS. It discusses how containers can help deploy applications across different environments consistently. It then introduces Docker and how images are built. Amazon ECS is presented as a fully managed service for running Docker containers across a cluster of EC2 instances, providing scheduling and management of containers. Key features of ECS like load balancing, scaling, and integration with other AWS services are highlighted.
This document discusses how to scale applications using AWS Elastic Beanstalk. It begins with an introduction to AWS and Elastic Beanstalk. It then covers how to deploy applications using Elastic Beanstalk, including creating environments and configuring auto scaling. It also discusses how to implement load balancing across availability zones and regions using Elastic Load Balancing and Route 53 for fault tolerance and high availability. The key takeaways are that Elastic Beanstalk makes deploying and scaling applications on AWS easy, and cross-zone and cross-region configurations can improve availability.
AWS Batch and Amazon ECS are two options for running batch workloads on AWS. AWS Batch provides a fully managed batch processing environment where users focus on applications and resource requirements, while AWS handles the rest. Amazon ECS allows for more customization by letting users build and run containerized applications at scale using containers and tasks. The presentation covered key considerations for batch workloads and best practices like storing state in S3, minimizing dependencies between tasks, and using Spot Instances for cost savings.
This document provides an overview of a workshop on building serverless web applications. It discusses the scenario of building a website for the fictional company Wild Rydes. The workshop consists of four labs: 1) hosting a static website using Amazon S3, 2) managing user registration and authentication using Amazon Cognito, 3) creating a serverless backend service using AWS Lambda and Amazon DynamoDB, and 4) exposing the Lambda function as a RESTful API using Amazon API Gateway. The document provides background on serverless computing and describes the AWS services that will be used, including AWS Lambda, Amazon DynamoDB, Amazon API Gateway, Amazon Cognito, and Amazon S3.
This is a basic workshop for Amazon ECS. In this workshop you will learn:
- AWS computing services overview
- Monoliths and microservices
- What is Docker
- How to dockerize your app on your local laptop
- How to run your Docker app on Amazon ECS and ECR
- How to use ecs-cli
- Best practices for designing your Dockerfile
This document provides an overview and instructions for setting up and managing infrastructure and applications on Amazon EC2 Container Service (ECS). It covers the key components of ECS including tasks, containers, clusters, and container instances. It also discusses setting up ECS infrastructure with CloudFormation, monitoring with CloudWatch, service discovery with Route 53 and Weaveworks, and security with IAM roles and policies and image scanning. The document demonstrates deploying applications to ECS, including scheduling containers for batch jobs and long-running apps. It shows automating deployments with Jenkins and Shippable and using platform-as-a-service options like Elastic Beanstalk, Convox, and Remind Empire. Finally, it provides instructions for using the ECS CLI.
Deep Dive with Amazon EC2 Container Service Hands-on Workshop - Amazon Web Services
This is an advanced workshop for Amazon ECS. In this workshop you will learn:
- How to provision your Amazon ECS cluster with CloudFormation
- Amazon ECS with Windows containers
- Amazon ECS CI/CD
- Amazon ECS service auto scaling and host auto scaling design patterns and best practices
- Amazon ECS log consolidation design patterns
- Secure credential management with IAM and EC2 Parameter Store
- Amazon ECS events and design patterns
- Service discovery with a fully managed etcd3 cluster on Amazon ECS
AWS re:Invent 2016: Securing Container-Based Applications (CON402) - Amazon Web Services
Containers have had an incredibly large adoption rate since Docker was launched, especially from the developer community, as it provides an easy way to package, ship, and run applications. Securing your container-based application is now becoming a critical issue as applications move from development into production. In this session, you learn ways to implement storing secrets, distributing AWS privileges using IAM roles, protecting your container-based applications with vulnerability scans of container images, and incorporating automated checks into your continuous delivery workflow.
AWS Batch is a fully managed batch computing service that makes it easy to run batch computing workloads on AWS. It dynamically provisions compute resources and schedules jobs across EC2 instances. Users can define jobs, job queues, compute environments and have AWS Batch automatically manage the underlying infrastructure. The service is integrated with other AWS services and has no upfront costs, with users only paying for the resources used to run jobs.
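The job/queue/compute-environment relationship described above can be sketched as a toy priority scheduler (a deliberate simplification: real AWS Batch also handles job dependencies, retries, array jobs, and multiple compute environments; all names here are invented for illustration):

```python
import heapq

class JobQueue:
    """Toy model of batch scheduling: submitted jobs wait in a priority
    queue and start when the compute environment has enough free vCPUs."""

    def __init__(self, vcpus):
        self.free_vcpus = vcpus
        self._q = []  # min-heap; priority is negated so higher runs first

    def submit(self, name, priority, vcpus):
        heapq.heappush(self._q, (-priority, name, vcpus))

    def schedule(self):
        started = []
        # start jobs in priority order while the head of the queue fits
        while self._q and self._q[0][2] <= self.free_vcpus:
            _, name, vcpus = heapq.heappop(self._q)
            self.free_vcpus -= vcpus
            started.append(name)
        return started

q = JobQueue(vcpus=8)
q.submit("render", priority=1, vcpus=4)
q.submit("etl", priority=5, vcpus=4)
q.submit("big-sim", priority=3, vcpus=16)
print(q.schedule())  # ['etl'] -- big-sim at the head waits for capacity
```

In this simplified model the oversized "big-sim" job blocks lower-priority work until capacity frees up; a managed service would instead provision more compute or schedule around it.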
AWS July Webinar Series - Deploying and Scaling Web Application with AWS Elasti... - Amazon Web Services
Want to easily deploy your Node.js, Ruby, Python, .NET, Tomcat, PHP or Docker web applications to AWS?
AWS Elastic Beanstalk is a service that makes it easy to deploy, scale, and manage your web applications on AWS by providing preconfigured application stacks and managing the underlying infrastructure on your behalf. You simply select the application stack you want and upload your code to get started.
This webinar will familiarize you with how to deploy your web application to AWS using AWS Elastic Beanstalk (EB). We will walk through the steps necessary to debug, test and scale your application to handle millions of web requests. Also, you will learn about deployment options, cost management, and ongoing monitoring and maintenance.
Learning Objectives:
- Understand the benefits of AWS Elastic Beanstalk versus do-it-yourself
- Deploy a sample Node.js application using the Elastic Beanstalk command line interface
- Modify application stack configuration and extend your application to use additional AWS resources (e.g., DynamoDB, SNS)
- Debug, load test, and scale the sample application to handle millions of web requests
- Use deployment options available for zero-downtime deployments (in-place and blue/green)
- Use tags for cost management
Building a CI/CD Pipeline for Containers - DevDay Los Angeles 2017 - Amazon Web Services
What to expect:
- Review continuous integration, delivery, and deployment
- Using Docker images, Amazon ECS, and Amazon ECR for CI/CD
- Deployment strategies with Amazon ECS
- Building Docker container images with AWS CodeBuild
- Orchestrating deployment pipelines with AWS CodePipeline
Building and Scaling a Containerized Microservice - DevDay Austin 2017 - Amazon Web Services
This document summarizes a presentation about building and scaling the first containerized microservice on AWS. It discusses microservices architecture, Amazon ECS, task placement, the Twelve-Factor App methodology, and reference architectures for microservices on AWS including automatic service scaling, continuous deployment, secrets management, and service discovery.
by Madhuri Peri, DevOps Consultant, AWS Professional Services
Different containerized services have different needs. You may want to deploy containers to ensure availability, maximize resource utilization, or ensure data security. As you build and run production microservices based on containers, having powerful tools to manage the placement and scheduling of these workloads is critical. In this talk, we will focus on the capabilities of the Amazon EC2 Container Service task placement engine, options for task scheduling, and explore the use cases and construction of custom task schedulers. Level 300
(SDD408) Amazon Route 53 Deep Dive: Delivering Resiliency, Minimizing Latency... - Amazon Web Services
Learn how to utilize Amazon Route 53 latency-based routing, weighted round-robin, and other features in conjunction with DNS failover to direct traffic to the least latent, most available endpoints across a global infrastructure. We explore topics such as balancing traffic between endpoints in terms of load and latency, and discuss how to provide multi-record answers to improve client-side resiliency. As part of this session, Loggly will present how they utilize Route 53 for their traffic management needs.
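Weighted round-robin answers of the kind discussed here amount to sampling endpoints in proportion to their weights. A minimal sketch (illustrative only; Route 53 evaluates weights server-side per query, and these record names are made up):

```python
import random

def weighted_answer(records):
    """Toy weighted round-robin: `records` is a list of
    (endpoint, weight) pairs; pick one endpoint with probability
    proportional to its weight."""
    total = sum(w for _, w in records)
    r = random.uniform(0, total)
    for endpoint, w in records:
        r -= w
        if r <= 0:
            return endpoint
    return records[-1][0]  # guard against floating-point edge cases

# ~75% of answers go to us-east-1, ~25% to eu-west-1
records = [("us-east-1.example.com", 3), ("eu-west-1.example.com", 1)]
print(weighted_answer(records))
```

Setting a record's weight to 0 removes it from rotation, which is one simple way such weights get used for failover and for draining a region during maintenance.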
This document discusses architecting applications on AWS for high availability across multiple regions. It begins by reviewing some notable outages and what is covered by typical SLAs. It then provides an overview of initial steps like using auto scaling, ELB, and CloudWatch. It discusses moving beyond a single availability zone to multiple zones. The main topic is setting up applications across multiple AWS regions for redundancy in case an entire region fails. Key services mentioned for high availability architectures are S3, CloudFront, ELB, CloudWatch, and SQS.
Building a CICD Pipeline for Containers - DevDay Austin 2017Amazon Web Services
This document summarizes a presentation on building a continuous integration and continuous deployment (CI/CD) pipeline for deploying containers. It discusses using Docker images, Amazon ECS, and Amazon ECR for CI/CD. It covers deployment strategies like blue/green deployments and canary deployments with Amazon ECS. It also describes building Docker images with AWS CodeBuild and orchestrating pipelines with AWS CodePipeline.
Getting Started with Docker on AWS - DevDay Los Angeles 2017Amazon Web Services
The document discusses getting started with Docker on AWS. It provides an overview of containers and Docker, and introduces Amazon ECS for managing Docker containers on AWS. Key points include: Docker provides portable application environments; Amazon ECS manages container scheduling across EC2 instances; and services in ECS allow deploying containers behind a load balancer for long-running applications.
by Nathan Wray, Sr. Technical Account Manager, AWS
Managing the code testing and deployment lifecycle for containerized applications is a complex task. In this session, we will explore how to build effective CICD workflows to manage containerized code deployments using Amazon EC2 Container Service, Amazon EC2 Container Registry, and AWS Code Suite tools. We will explore best practices for CICD architectures used by our customers to deploy containers onto AWS, including how to create an accessible CICD platform and how to execute Blue/Green and Canary deployments for containerized apps. Level 300
This document provides guidance on troubleshooting issues with EC2 instances and Elastic Load Balancers (ELB) on AWS. It begins by recommending monitoring the AWS service health dashboard and CloudWatch metrics. Potential causes and resolutions are outlined for common problems with EC2 instance launching, health, networking, and EBS volumes. For ELBs, error messages, response metrics, health checks, and other potential problems are covered. The document concludes by listing information needed for support cases and additional resources.
This document provides an overview of using Docker containers on Amazon Web Services (AWS). It begins with an introduction to containers and Docker, explaining how containers allow applications to be easily deployed across different environments. It then discusses Amazon EC2 Container Service (ECS), a highly scalable and managed container orchestration service that supports Docker containers. The document outlines key components of ECS including clusters, tasks, services, scheduling, and integration with other AWS services. It provides examples of how to use ECS to deploy containers as tasks or long-running services behind a load balancer, updating services, and automatically scaling them.
AWS November Webinar Series - From Local Development to Production Using the ...Amazon Web Services
Running and managing large scale multi-container applications in production usually requires different tools than what is used for development.
In this webinar, we will show you how to use the Amazon EC2 Container Service CLI with Docker Compose to define and run multi-container applications in a local development environment. We will also show how you can eliminate the need to install, operate, and scale your own cluster management infrastructure by using Amazon ECS. We will then demonstrate how to schedule your multi-container application as defined by Compose across a production Amazon ECS cluster. We will also walk through some best practice patterns used by customers for running their microservices platforms or batch jobs.
Learning Objectives:
Understand the basics of the Amazon ECS CLI
Run multi-container applications defined by Docker Compose using the Amazon ECS CLI
Learn how to run and manage production applications using Amazon ECS
Who Should Attend:
Developers, system administrators, Docker users, container users
Different containerized services have different needs. You may want to deploy containers to ensure availability, maximize resource utilization, or ensure data security. As you build and run production microservices based on containers, having powerful tools to manage the placement and scheduling of these workloads is critical. In this talk, we will focus on the capabilities of the Amazon EC2 Container Service task placement engine, options for task scheduling, and explore the use cases and construction of custom task schedulers.
This document provides an overview of Docker containers and Amazon ECS for running containers on AWS. It discusses how containers can help deploy applications across different environments consistently. It then introduces Docker and how images are built. Amazon ECS is presented as a fully managed service for running Docker containers across a cluster of EC2 instances, providing scheduling and management of containers. Key features of ECS like load balancing, scaling, and integration with other AWS services are highlighted.
This document discusses how to scale applications using AWS Elastic Beanstalk. It begins with an introduction to AWS and Elastic Beanstalk. It then covers how to deploy applications using Elastic Beanstalk, including creating environments and configuring auto scaling. It also discusses how to implement load balancing across availability zones and regions using Elastic Load Balancing and Route 53 for fault tolerance and high availability. The key takeaways are that Elastic Beanstalk makes deploying and scaling applications on AWS easy, and cross-zone and cross-region configurations can improve availability.
AWS Batch and Amazon ECS are two options for running batch workloads on AWS. AWS Batch provides a fully managed batch processing environment where users focus on applications and resource requirements, while AWS handles the rest. Amazon ECS allows for more customization by letting users build and run containerized applications at scale using containers and tasks. The presentation covered key considerations for batch workloads and best practices like storing state in S3, minimizing dependencies between tasks, and using Spot Instances for cost savings.
This document provides an overview of a workshop on building serverless web applications. It discusses the scenario of building a website for the fictional company Wild Rydes. The workshop consists of four labs: 1) hosting a static website using Amazon S3, 2) managing user registration and authentication using Amazon Cognito, 3) creating a serverless backend service using AWS Lambda and Amazon DynamoDB, and 4) exposing the Lambda function as a RESTful API using Amazon API Gateway. The document provides background on serverless computing and describes the AWS services that will be used, including AWS Lambda, Amazon DynamoDB, Amazon API Gateway, Amazon Cognito, and Amazon S3.
This is a basic workshop for Amazon ECS. In this workshop you will learn:
AWS computing services overview
Monolith and Microservices
What is Docker
How to dockerize your app in your local laptop
How to run your Docker app in Amazon ECS and ECR
How to use ecs-cli
Best Practices designing your Dockerfile
This document provides an overview and instructions for setting up and managing infrastructure and applications on Amazon EC2 Container Service (ECS). It covers the key components of ECS including tasks, containers, clusters and container instances. It also discusses setting up ECS infrastructure with CloudFormation, monitoring with CloudWatch, service discovery with Route 53 and Weaveworks, security with IAM roles and policies and image scanning. The document demonstrates deploying applications to ECS including scheduling containers for batch jobs and long-running apps. It shows automating deployments with Jenkins and Shippable and using platform as a service options like Elastic Beanstalk, Convox and Remind Empire. Finally, it provides instructions for using the ECS CLI
Deep Dive with Amazon EC2 Container Service Hands-on WorkshopAmazon Web Services
This is an advanced workshop for Amazon ECS. In this workshop you will learn:
How to provision your Amazon ECS with CloudFormation
Amazon ECS with Windows Container
Amazon ECS CI/CD
Amazon ECS service autoscaling and host autoscaling design pattern and best practices
Amazon ECS log consolidation design patterns
Secure credential management with IAM and EC2 Parameter Store
Amazon ECS Events and design patterns
Service Discovery with fully-managed etcd3 cluster on Amazon ECS
AWS re:Invent 2016: Securing Container-Based Applications (CON402)Amazon Web Services
Containers have had an incredibly large adoption rate since Docker was launched, especially from the developer community, as it provides an easy way to package, ship, and run applications. Securing your container-based application is now becoming a critical issue as applications move from development into production. In this session, you learn ways to implement storing secrets, distributing AWS privileges using IAM roles, protecting your container-based applications with vulnerability scans of container images, and incorporating automated checks into your continuous delivery workflow.
AWS Batch is a fully managed batch computing service that makes it easy to run batch computing workloads on AWS. It dynamically provisions compute resources and schedules jobs across EC2 instances. Users can define jobs, job queues, compute environments and have AWS Batch automatically manage the underlying infrastructure. The service is integrated with other AWS services and has no upfront costs, with users only paying for the resources used to run jobs.
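The job-queue model just described can be sketched in a few lines. The class and method names below are illustrative inventions, not the AWS Batch API, which layers compute-environment provisioning, scheduling, and managed infrastructure on top of this idea.

```python
import heapq

# A toy sketch of the queue/priority model behind AWS Batch: jobs are
# submitted to a queue and dispatched in priority order against compute
# resources. Names and fields here are ours, not the Batch API.
class JobQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # FIFO tie-break among equal priorities

    def submit(self, name, priority=0):
        # As with Batch job queues, a higher priority number runs sooner.
        heapq.heappush(self._heap, (-priority, self._counter, name))
        self._counter += 1

    def next_job(self):
        return heapq.heappop(self._heap)[2]

q = JobQueue()
q.submit("nightly-etl", priority=1)
q.submit("urgent-reprocess", priority=10)
```

Dispatching drains the queue highest-priority first, with submission order breaking ties.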
AWS July Webinar Series - Deploying and Scaling Web Application with AWS Elasti... (Amazon Web Services)
Want to easily deploy your Node.js, Ruby, Python, .NET, Tomcat, PHP or Docker web applications to AWS?
AWS Elastic Beanstalk is a service that makes it easy to deploy, scale, and manage your web applications on AWS by providing preconfigured application stacks and managing the underlying infrastructure on your behalf. You simply select the application stack you want and upload your code to get started.
This webinar will familiarize you with how to deploy your web application to AWS using AWS Elastic Beanstalk (EB). We will walk through the steps necessary to debug, test and scale your application to handle millions of web requests. Also, you will learn about deployment options, cost management, and ongoing monitoring and maintenance.
Learning Objectives:
Understand the benefits of AWS Elastic Beanstalk versus a do-it-yourself approach
Deploy a sample Node.js application using the Elastic Beanstalk command line interface
Modify application stack configuration and extend your application to use additional AWS resources (e.g.: DynamoDB, SNS, etc.)
Debug, load test, and scale the sample application to handle millions of web requests
Use deployment options available for zero-downtime deployments (in-place and blue/green)
Use tags for cost management
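The blue/green option in the objectives above amounts to a CNAME swap between two environments. This is a minimal sketch of that cutover, with illustrative environment names and URLs (not the Elastic Beanstalk API):

```python
def swap_environment_urls(env_urls, env_a, env_b):
    """Sketch of a blue/green cutover in the style of Elastic Beanstalk's
    'swap environment URLs': exchanging the CNAMEs of two environments
    shifts traffic to the new version without downtime."""
    env_urls[env_a], env_urls[env_b] = env_urls[env_b], env_urls[env_a]
    return env_urls

envs = {
    "my-app-blue": "my-app.elasticbeanstalk.com",      # serving production
    "my-app-green": "my-app-v2.elasticbeanstalk.com",  # staged new version
}
swap_environment_urls(envs, "my-app-blue", "my-app-green")
```

After the swap, the green environment answers on the production CNAME, and the old version stays running for instant rollback.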
Building a CI/CD Pipeline for Containers - DevDay Los Angeles 2017 (Amazon Web Services)
What to expect:
- Review continuous integration, delivery, and deployment
- Using Docker images, Amazon ECS, and Amazon ECR for CI/CD
- Deployment strategies with Amazon ECS
- Building Docker container images with AWS CodeBuild
- Orchestrating deployment pipelines with AWS CodePipeline
Building and Scaling a Containerized Microservice - DevDay Austin 2017 (Amazon Web Services)
This document summarizes a presentation about building and scaling the first containerized microservice on AWS. It discusses microservices architecture, Amazon ECS, task placement, the Twelve-Factor App methodology, and reference architectures for microservices on AWS including automatic service scaling, continuous deployment, secrets management, and service discovery.
by Madhuri Peri, DevOps Consultant, AWS Professional Services
Different containerized services have different needs. You may want to deploy containers to ensure availability, maximize resource utilization, or ensure data security. As you build and run production microservices based on containers, having powerful tools to manage the placement and scheduling of these workloads is critical. In this talk, we will focus on the capabilities of the Amazon EC2 Container Service task placement engine, options for task scheduling, and explore the use cases and construction of custom task schedulers. Level 300
(SDD408) Amazon Route 53 Deep Dive: Delivering Resiliency, Minimizing Latency... (Amazon Web Services)
Learn how to utilize Amazon Route 53 latency-based routing, weighted round-robin, and other features in conjunction with DNS failover to direct traffic to the least latent, most available endpoints across a global infrastructure. We explore topics such as balancing traffic between endpoints in terms of load and latency, and discuss how to provide multi-record answers to improve client-side resiliency. As part of this session, Loggly will present how they utilize Route 53 for their traffic management needs.
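Weighted round-robin, mentioned above, answers each DNS query with a record chosen in proportion to its weight. A minimal sketch of that selection follows; the record names and weights are illustrative, not the Route 53 API.

```python
# Route 53 weighted routing answers each query with a record chosen with
# probability weight / total_weight. This models that selection.

def pick_weighted(records, point):
    """Return the record whose cumulative-weight interval contains
    `point`, where `point` lies in [0, total_weight)."""
    total = sum(weight for _, weight in records)
    if not 0 <= point < total:
        raise ValueError("point must lie in [0, total weight)")
    cumulative = 0
    for endpoint, weight in records:
        cumulative += weight
        if point < cumulative:
            return endpoint

records = [("blue.example.com", 90), ("green.example.com", 10)]
# Drawing `point` uniformly gives a 90/10 split, e.g. for canary traffic.
```

Points 0–89 land on the heavier record and 90–99 on the lighter one, which is how a gradual traffic shift between stacks is expressed.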
This document discusses architecting applications on AWS for high availability across multiple regions. It begins by reviewing some notable outages and what is covered by typical SLAs. It then provides an overview of initial steps like using auto scaling, ELB, and CloudWatch. It discusses moving beyond a single availability zone to multiple zones. The main topic is setting up applications across multiple AWS regions for redundancy in case an entire region fails. Key services mentioned for high availability architectures are S3, CloudFront, ELB, CloudWatch, and SQS.
In this presentation, created for a webinar recorded on 4/26/2012, we demo'd Amazon Route 53's new Latency Based Routing (LBR) feature. LBR is one of Amazon Route 53’s most requested features and helps improve your application’s performance for a global audience. LBR works by routing your customers to the AWS endpoint (e.g. EC2 instances, Elastic IPs or ELBs) that provides the fastest experience based on actual performance measurements of the different AWS regions where your application is running.
Come learn about new and existing Amazon S3 features that can help you better protect your data, save on cost, and improve usability, security, and performance. We will cover a wide variety of Amazon S3 features and go into depth on several newer features with configuration and code snippets, so you can apply the learnings on your object storage workloads.
How I learned to stop worrying and love the cloud (Shlomo Swidler)
The document discusses how to successfully adopt cloud computing. It notes that cloud computing involves adjusting to changing technological, economic, organizational, and risk factors over time. To fully realize the benefits of cloud computing, organizations must be prepared to effectively handle this dynamic nature of cloud computing and be adept at change. The key to success is recognizing that cloud computing is about more than just technology and having a strong organizational culture that can flexibly adapt to changing requirements.
Advanced Approaches to Amazon VPC and Amazon Route 53 | AWS Public Sector Sum... (Amazon Web Services)
This session provides attendees with approaches to their VPC, including creating and protecting subnets, routing, performing VPC peering, and leveraging the latest features in Amazon VPC. Additionally, we'll discuss Amazon Route 53 for delivering traffic.
AWS re:Invent 2016: Elastic Load Balancing Deep Dive and Best Practices (NET403) (Amazon Web Services)
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances for fault tolerance and load distribution. In this session, we go into detail about Elastic Load Balancing configuration and day-to-day management, as well as its use in conjunction with Auto Scaling. We explain how to make decisions about the service and share best practices and useful tips for success.
Amazon Route 53 is a highly available, scalable, and easy-to-use cloud Domain Name System (DNS) web service. With an SLA of 100% availability, Route 53 is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications. By using Route 53 as your DNS provider, you can ensure your application's uptime, run an architecture that delivers better performance, and provide your end users with a better experience through lower latency and faster load times, all at a fraction of the cost of other DNS providers.
Learning Objectives: In this webinar, you will learn the following:
- A general overview of DNS, and how Route 53 is built to provide reliable and secure DNS
- Using the Route 53 console to manage your DNS easily and seamlessly
- Utilizing health checks and failover to ensure high availability
- Configuring advanced routing policies, including running your application in multiple regions with LBR and Geo for better performance for your end users
- Saving costs by using Route 53
- Registering or transferring your domains into Route 53 to manage all of your domain resources from one place
- How to start using Route 53, including migrating your DNS without experiencing any downtime
AWS Webcast - High Availability with Route 53 DNS Failover (Amazon Web Services)
This webinar discusses how to apply DNS Failover across a range of high-availability architectures, from a simple backup website to advanced multi-region architectures.
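The failover pattern described here rests on health checks: the primary answer is served while its endpoint is healthy, and a backup answer takes over otherwise. This sketch mirrors that shape; the threshold and endpoint names are illustrative, not Route 53's actual defaults or API.

```python
def is_healthy(observations, failure_threshold=3):
    """An endpoint counts as unhealthy after `failure_threshold`
    consecutive failed probes, mirroring the shape of a Route 53
    health check (the threshold value here is illustrative)."""
    tail = observations[-failure_threshold:]
    return not (len(tail) == failure_threshold and not any(tail))

def resolve(primary, secondary, primary_obs):
    """DNS failover: answer with the primary while it is healthy,
    otherwise with the secondary (e.g. a static backup site)."""
    return primary if is_healthy(primary_obs) else secondary
```

A run of recent probe failures flips resolution to the backup; a single failure amid successes does not.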
High Availability Application Architectures in Amazon VPC (ARC202) | AWS re:I... (Amazon Web Services)
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual data center that you define. In this session you learn how to leverage the VPC networking constructs to configure a highly available and secure virtual data center on AWS for your application. We cover best practices around choosing an IP range for your VPC, creating subnets, configuring routing, securing your VPC, establishing VPN connectivity, and much more. The session culminates in creating a highly available web application stack inside of VPC and testing its availability with Chaos Monkey.
Amazon S3 hosts trillions of objects and is used for storing a wide range of data, from system backups to digital media. In this presentation from the Amazon S3 Masterclass webinar, we explain the features of Amazon S3, from static website hosting through server-side encryption to Amazon Glacier integration. This webinar dives deep into the feature set of Amazon S3 to give a rounded overview of its capabilities, looking at common use cases, APIs, and best practice.
See a recording of this video here on YouTube: http://youtu.be/VC0k-noNwOU
Check out future webinars in the Masterclass series here: http://aws.amazon.com/campaigns/emea/masterclass/
View the Journey Through the Cloud webinar series here: http://aws.amazon.com/campaigns/emea/journey/
Amazon EC2 forms the backbone compute platform for hundreds of thousands of AWS customers, but how do you go beyond starting an instance and manually configuring it? This presentation will take you on a journey starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies.
Access a recorded version of the webinar based on this presentation on YouTube here: http://youtu.be/jLVPqoV4YjU
You can find the rest of the Masterclass webinar series for 2015 here: http://aws.amazon.com/campaigns/emea/masterclass/
If you are interested in learning how to apply a variety of AWS services to specific challenges, please check out the Journey Through the Cloud series, which you can find here: http://aws.amazon.com/campaigns/emea/journey/
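The dynamic policies mentioned in the Auto Scaling description above can be sketched as the capacity math of a target-tracking-style policy. The function and parameter names below are illustrative; the real service adds cooldowns, instance warm-up, and health checks on top of this calculation.

```python
import math

def desired_capacity(current, metric_value, target_value,
                     min_size, max_size):
    """Scale the group so per-instance load approaches the target:
    e.g. 4 instances at 80% average CPU with a 50% target suggests
    ceil(4 * 80 / 50) = 7 instances, clamped to the group's bounds."""
    desired = math.ceil(current * metric_value / target_value)
    return max(min_size, min(max_size, desired))
```

Capacity rises when the fleet runs hotter than the target and shrinks back toward the minimum when demand falls, which is how capacity and cost track demand.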
Introduction to AWS VPC, Guidelines, and Best Practices (Gary Silverman)
I crafted this presentation for the AWS Chicago Meetup. This deck covers the rationale, building blocks, guidelines, and several best practices for Amazon Web Services Virtual Private Cloud. I classify it as somewhere between a 101- and 201-level presentation.
If you like the presentation, I would appreciate you clicking the Like button.
PyData - Consuming and publishing web APIs with Python (Bruno Rocha)
Presented at the NuBank auditorium in São Paulo on March 28, 2017 - PyData Meetup.
- What Web APIs are
- Consuming web APIs with Python
- What to do with the data?
- Publishing web APIs with Python.
http://github.com/rochacbruno/flasgger
Route 53 is AWS's DNS service that provides low-latency DNS resolution through anycast with nameservers located in over 50 edge locations globally. It allows for various routing types including latency-based routing to route traffic to the lowest latency region, geo DNS routing based on query location, failover, and health checks to monitor endpoint health. Route 53 also offers private and public hosted zones for DNS records and integrates with other AWS services.
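Latency-based routing, as described above, favors whichever region has the lowest measured latency for the querying resolver. A one-function sketch of that choice; the measurements dict is illustrative (Route 53 maintains its own latency data, not values you supply).

```python
def lowest_latency_region(measured_rtt_ms):
    """Given {region: measured round-trip time in ms}, return the region
    that latency-based routing would favor for this resolver."""
    return min(measured_rtt_ms, key=measured_rtt_ms.get)

# Hypothetical measurements as seen from one resolver in Europe.
rtt = {"us-east-1": 85, "eu-west-1": 20, "ap-southeast-1": 240}
```

Different resolvers see different measurements, so the same record set routes each user to their fastest region.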
There are several points that architects and engineers should take into account when building new applications (or redesigning existing ones) in order to achieve high elasticity on AWS. The presentation reveals some best practices related to elasticity, redundancy, and cost-effectiveness on AWS learned from the past.
Talk on Amazon Redshift, Meetup Les Nouvelles Organisations, 11/02/2016, Paris - http://www.meetup.com/fr-FR/lesnouvellesorganisations/events/227195680/
Amazon Simple Queue Service (SQS) is a message queue service that allows applications to exchange messages asynchronously. It offers reliable and scalable hosted queues that allow components to communicate without being available at the same time. SQS provides advantages like asynchronicity, decoupling of applications, redundancy, and scalability. Some disadvantages are latency due to asynchronous processing and potential load issues if jobs take too long to process. Common uses of message queues include communicating with APIs, sending emails, and generating reports.
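The decoupling described above can be shown with the standard library's in-process queue; SQS provides the same pattern as a durable, hosted service with visibility timeouts and scaling. The message fields and function names here are illustrative.

```python
import queue

# A stdlib sketch of the pattern SQS offers as a managed service:
# producers enqueue work and move on; consumers process when ready.

jobs = queue.Queue()

def submit_report_request(q, user_id):
    # Producer side: hand off the slow job instead of doing it inline.
    q.put({"type": "generate_report", "user": user_id})

def worker_step(q):
    # Consumer side: pull one message and process it. With SQS this would
    # be receive_message, then delete_message once processing succeeds.
    msg = q.get()
    result = "report for " + msg["user"]
    q.task_done()
    return result

submit_report_request(jobs, "alice")
submit_report_request(jobs, "bob")
```

The producer returns immediately after enqueueing, which is exactly the asynchronicity (and the processing latency) the summary describes.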
Building a data warehouse with Amazon Redshift … and a quick look at Amazon ... (Julien SIMON)
This document provides a summary of a presentation about building data warehouses with Amazon Redshift and using Amazon Machine Learning. The presentation discusses how Amazon Redshift can be used to build a petabyte-scale data warehouse with SQL and no system administration. Case studies are presented showing companies saving on total cost of ownership by migrating to Amazon Redshift. It also briefly introduces Amazon Machine Learning for building predictive models with managed services. Demo examples are shown of loading data into Redshift and using ML to train a regression model and create a real-time prediction API.
This document discusses designing fault tolerant web services on AWS. It covers motivation for fault tolerance on AWS, inherent fault tolerant AWS components like availability zones and Elastic IPs, and how to implement redundancy using Auto Scaling and Elastic Load Balancing. It then discusses designing high availability at the web/app, load balancing, and database layers, providing options for session synchronization and load balancing. It proposes a new load balancing algorithm that dynamically adapts strategies based on real-time server stats.
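The stats-driven algorithm the summary mentions is not specified, but a common form of adapting to real-time server stats is least-outstanding-requests selection. This sketch (our names, not taken from the document) picks the backend with the fewest in-flight requests:

```python
def pick_backend(in_flight):
    """Least-outstanding-requests: route the next request to the server
    with the fewest requests currently in flight (ties broken by name
    so the choice is deterministic)."""
    return min(in_flight, key=lambda server: (in_flight[server], server))

# Hypothetical live stats: server name -> requests currently in flight.
stats = {"web-a": 3, "web-b": 1, "web-c": 2}
```

Unlike plain round-robin, this keeps a slow or overloaded server from accumulating a backlog, since new requests flow to the least-busy backend.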
This presentation discusses how you can optimize your application architecture on the AWS Cloud and create a fault-tolerant architecture with zero downtime, along with best practices for a fault-tolerant web application.
This document provides an overview of deploying Oracle E-Business Suite on AWS. It describes key AWS services like EC2, S3, EBS, and VPC that can be used. It also summarizes the core components of Oracle E-Business Suite and provides a reference architecture for running it on AWS. Some benefits mentioned are using AWS's scalable infrastructure, paying for only what you use, and gaining high availability.
Writing JavaScript Applications with the AWS SDK (TLS303) | AWS re:Invent 2013 (Amazon Web Services)
We give a guided tour of using the AWS SDK for JavaScript to create powerful web applications. Learn the best practices for configuration, credential management, streaming requests, as well as how to use some of the higher level features in the SDK. We also show a live demonstration of using AWS services with the SDK to build a real-world JavaScript application.
This document provides an overview of running Docker containers on Amazon ECS. It discusses the benefits of containers and microservices architectures and how Amazon ECS can help manage containerized applications at scale. Key points include:
1) Amazon ECS is a fully managed container orchestration service that allows users to easily run and scale containerized applications on EC2 instances without having to manage the underlying infrastructure.
2) With ECS, users can define tasks, services, and clusters to deploy their containerized applications across a cluster of EC2 instances managed by ECS.
3) ECS provides benefits like elastic scaling, integration with other AWS services for load balancing, storage, networking etc., and optimizes scheduling of
AWS is an elastic, secure, flexible, and developer-centric ecosystem that serves as an ideal platform for Docker deployments. AWS offers the scalable infrastructure, APIs, and SDKs that integrate tightly into a development lifecycle and accentuate the benefits of the lightweight and portable containers that Docker offers to its users. This session will familiarize you with the benefits of containers, introduce Amazon EC2 Container Service (ECS), and demonstrate how to use Amazon ECS for your applications.
AWS DevDay San Francisco, June 21, 2016.
Presenter: Asha Chakrabarty, Senior Solutions Architect
Application Delivery on Amazon Web Services for Developers (Amazon Web Services)
Every developer has gone through the frustration of creating new features, fixing bugs, or refactoring beautiful code, and then waiting for it to reach the promised land of production. Come and learn how to get your changes into the hands of your customers with more speed, reliability, security, and quality.
Speaker: Daniel Zoltak & Shiva Narayanaswamy, Solutions Architects, Amazon Web Services
This document discusses designing applications for high availability on AWS. It provides best practices for designing systems to be fault tolerant and self-healing. The key principles discussed are: 1) design for failure by avoiding single points of failure, 2) use multiple availability zones for redundancy, 3) implement auto-scaling for flexibility and fault tolerance, 4) incorporate self-healing techniques like health checks and auto-scaling policies, and 5) loosely couple components. The document explores how various AWS services like EC2, EBS, RDS, ELB, auto-scaling, S3 and Route 53 can be leveraged together to build highly available, fault tolerant systems on AWS infrastructure.
This document discusses containers and Amazon ECS. It provides an overview of containers and their benefits like portability and efficiency. It then describes Amazon ECS as a highly scalable and performant container management service that supports Docker containers. It discusses how ECS runs applications on a managed cluster of EC2 instances using tasks, services, and scheduling. It also outlines some key benefits of ECS like being fully managed, integration with other AWS services, and application load balancing. Finally, it provides examples of commands to create an ECS cluster, register a task definition, and create a service to run tasks.
This document discusses containers and Amazon ECS. It provides an overview of containers and their benefits like portability and efficiency. It then describes Amazon ECS and how it provides a fully managed container orchestration service. Key aspects covered include clusters, tasks, services, and scheduling. It also outlines some benefits of Amazon ECS like elastic scaling and integration with other AWS services. Finally, it provides steps to run services using the ECS CLI to register a task definition and create a cluster and service.
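Registering a task definition, as the steps above describe, means submitting a document of a known shape to ECS. Below is an illustrative example of that shape (family, container name and image, cpu/memory, port mappings); the values are examples only, and a real definition would be sent to the ECS API or passed to the CLI via `--cli-input-json`.

```python
import json

# An illustrative task definition of the shape that
# `aws ecs register-task-definition` accepts. hostPort 0 requests a
# dynamic host port, which pairs with load-balancer target groups.
task_definition = {
    "family": "web",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "cpu": 256,
            "memory": 512,
            "essential": True,
            "portMappings": [{"containerPort": 80, "hostPort": 0}],
        }
    ],
}

payload = json.dumps(task_definition, indent=2)
```

A service then references this family (and revision) to keep the desired number of copies of the task running on the cluster.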
The document discusses how Riot Games standardized their application deployments using Amazon ECS. It describes how they broke their infrastructure into modular components defined through Terraform to provision resources like ECS clusters, services, load balancers and monitoring in a consistent and reproducible way. It also discusses lessons learned like breaking stacks apart but not overdoing it, being liberal with cluster provisioning, centralizing logs, and staying up to date with ECS and application releases.
The AWS Cloud offers infrastructure resources optimized for running containers, as well as a set of orchestration services that make it easy for you to build and run containerized applications in production. In this session we will review how Docker containers are used to build microservices and how Amazon Elastic Container Service (Amazon ECS) and AWS Fargate are used for container orchestration to help customers like FINRA run and scale containerized applications on AWS.
This document compares and contrasts Amazon Web Services and Windows Azure cloud platforms. It provides an overview of the different types of cloud services offered, including infrastructure as a service, platform as a service and private clouds. It then details the specific services available in each platform, such as compute, storage, databases and networking. Examples are given of how to architect applications for fault tolerance across cloud services.
NWCloud Cloud Track - Best Practices for Architecting in the Cloud (nwcloud)
The document discusses best practices for cloud architecture based on lessons learned from Amazon Web Services customers. It provides guidance on designing systems for failure, loose coupling, elasticity, security, leveraging constraints, parallelism, and different storage options. The key lessons are applied to migrating a sample web application architecture to AWS.
This document provides an overview of using Docker containers on Amazon Web Services (AWS). It discusses the benefits of containers, how Amazon ECS provides container management and scheduling capabilities, and how to run containerized services on ECS. Key points covered include how ECS handles resource management and scheduling across a cluster, its use of APIs, tasks, services, load balancing, and updating deployments. The document concludes with a reminder to complete an evaluation.
AWS re:Invent 2016: Getting Started with Docker on AWS (CMP209) (Amazon Web Services)
AWS is an elastic, secure, flexible, and developer-centric ecosystem that serves as an ideal platform for Docker deployments. AWS offers the scalable infrastructure, APIs, and SDKs that integrate tightly into a development lifecycle and accentuate the benefits of the lightweight and portable containers that Docker offers to its users.
This session familiarizes you with the benefits of containers, introduces Amazon EC2 Container Service, and demonstrates how to use Amazon ECS to run containerized applications at scale in production.
This document provides a quick introduction to Amazon Web Services (AWS) and how it can be used to meet common requirements such as scalability, geographical spread, and redundancy/availability. It describes key AWS services like EC2, auto scaling, Elastic Load Balancing, regions, availability zones, S3, EBS, and their features for scaling infrastructure on demand, distributing content globally, and ensuring high availability through redundancy.
Similar to Availability & Scalability with Elastic Load Balancing & Route 53 (CPN204) | AWS re:Invent 2013
How to build forecasting services using ML and deep learn... algorithms (Amazon Web Services)
Forecasting is an important process for many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial presentations, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session, we will show how to pre-process data that contains a temporal component and then use an algorithm that produces an accurate forecast from the type of data analyzed.
Big Data for Startups: how to create Big Data applications in Server... mode (Amazon Web Services)
The variety and quantity of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud and, in particular, serverless services let us break through these limits.
Let's see, then, how to develop Big Data applications rapidly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and show how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing the pace of innovation. In that time we learned how changing our approach to application development allowed us to greatly increase agility and release speed and, ultimately, to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only application architecture, but also organizational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances (Amazon Web Services)
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand Instances. In this session we will explore the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of different kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. AWS and FinConecta therefore invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda :
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's offering unique in the market with Machine Lea... services (Amazon Web Services)
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative components built ad hoc.
AWS provides ready-to-use services and, at the same time, lets you customize and create the differentiating elements of your offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of... (Amazon Web Services)
With the traditional approach to IT, it was difficult for many years to implement DevOps techniques, which until now have often involved manual activities, occasionally leading to application downtime that interrupted users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach at low cost for any kind of workload, guaranteeing greater system reliability and yielding significant improvements in business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Find out how to take advantage of AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads (Amazon Web Services)
Want to know the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and running Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis leveraging artificial intelligence techniques is evolving and being refined at a fast pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are organizing a free virtual event next Wednesday, October 14, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a wide range of AWS services, taking full advantage of the AWS cloud while protecting your existing VMware investments.
Build your first serverless ledger-based app with QLDB and NodeJS (Amazon Web Services)
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply-chain flow of their products.
At the core of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build custom, complex systems by providing a fully managed serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for giving end users a great user experience. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dig into several scenarios, looking at how AppSync can help address these use cases by building modern APIs with real-time and offline data-update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Database Oracle e VMware Cloud™ on AWS: i miti da sfatareAmazon Web Services
Molte organizzazioni sfruttano i vantaggi del cloud migrando i propri carichi di lavoro Oracle e assicurandosi notevoli vantaggi in termini di agilità ed efficienza dei costi.
La migrazione di questi carichi di lavoro, può creare complessità durante la modernizzazione e il refactoring delle applicazioni e a questo si possono aggiungere rischi di prestazione che possono essere introdotti quando si spostano le applicazioni dai data center locali.
In queste slide, gli esperti AWS e VMware presentano semplici e pratici accorgimenti per facilitare e semplificare la migrazione dei carichi di lavoro Oracle accelerando la trasformazione verso il cloud, approfondiranno l’architettura e dimostreranno come sfruttare a pieno le potenzialità di VMware Cloud ™ on AWS.
1) The document discusses building a minimum viable product (MVP) using Amazon Web Services (AWS).
2) It provides an example of an MVP for an omni-channel messenger platform that was built from 2017 to connect ecommerce stores to customers via web chat, Facebook Messenger, WhatsApp, and other channels.
3) The founder discusses how they started with an MVP in 2017 with 200 ecommerce stores in Hong Kong and Taiwan, and have since expanded to over 5000 clients across Southeast Asia using AWS for scaling.
This document discusses pitch decks and fundraising materials. It explains that venture capitalists will typically spend only 3 minutes and 44 seconds reviewing a pitch deck. Therefore, the deck needs to tell a compelling story to grab their attention. It also provides tips on tailoring different types of decks for different purposes, such as creating a concise 1-2 page teaser, a presentation deck for pitching in-person, and a more detailed read-only or fundraising deck. The document stresses the importance of including key information like the problem, solution, product, traction, market size, plans, team, and ask.
This document discusses building serverless web applications using AWS services like API Gateway, Lambda, DynamoDB, S3 and Amplify. It provides an overview of each service and how they can work together to create a scalable, secure and cost-effective serverless application stack without having to manage servers or infrastructure. Key services covered include API Gateway for hosting APIs, Lambda for backend logic, DynamoDB for database needs, S3 for static content, and Amplify for frontend hosting and continuous deployment.
This document provides tips for fundraising from startup founders Roland Yau and Sze Lok Chan. It discusses generating competition to create urgency for investors, fundraising in parallel rather than sequentially, having a clear fundraising narrative focused on what you do and why it's compelling, and prioritizing relationships with people over firms. It also notes how the pandemic has changed fundraising, with examples of deals done virtually during this time. The tips emphasize being fully prepared before fundraising and cultivating connections with investors in advance.
AWS_HK_StartupDay_Building Interactive websites while automating for efficien...Amazon Web Services
This document discusses Amazon's machine learning services for building conversational interfaces and extracting insights from unstructured text and audio. It describes Amazon Lex for creating chatbots, Amazon Comprehend for natural language processing tasks like entity extraction and sentiment analysis, and how they can be used together for applications like intelligent call centers and content analysis. Pre-trained APIs simplify adding machine learning to apps without requiring ML expertise.
Amazon Elastic Container Service (Amazon ECS) è un servizio di gestione dei container altamente scalabile, che semplifica la gestione dei contenitori Docker attraverso un layer di orchestrazione per il controllo del deployment e del relativo lifecycle. In questa sessione presenteremo le principali caratteristiche del servizio, le architetture di riferimento per i differenti carichi di lavoro e i semplici passi necessari per poter velocemente migrare uno o più dei tuo container.
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...Zilliz
Join us to introduce Milvus Lite, a vector database that can run on notebooks and laptops, share the same API with Milvus, and integrate with every popular GenAI framework. This webinar is perfect for developers seeking easy-to-use, well-integrated vector databases for their GenAI apps.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
5. Elastic Load Balancing and Amazon Route 53 are critical components when building scalable and highly-available applications.
Thursday, November 21, 13
18. [ Request Routing ]
Least-conns is used to spread requests across healthy instances.
• Equal utilization on each instance
(Diagram: Client → Elastic Load Balancing → EC2 instances)
19. [ Request Routing ]
Least-conns is used to spread requests across healthy instances.
• Targets instances with the fewest outstanding requests
• Equal utilization on each instance
• Adjusts to request response times
• Smooths request load across all instances
(Diagram: Client → Elastic Load Balancing → EC2 instances)
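The least-outstanding-requests policy described on this slide can be sketched as a tiny simulation. This is a toy model with made-up instance IDs, not ELB's actual internals:

```python
# Minimal sketch of least-outstanding-requests routing: each new request
# goes to the healthy instance with the fewest requests in flight.
# Instance IDs and counts are illustrative only.

def pick_instance(outstanding):
    """Return the instance with the fewest in-flight requests."""
    return min(outstanding, key=outstanding.get)

# Three instances with different numbers of outstanding requests.
outstanding = {"i-aaaa": 4, "i-bbbb": 1, "i-cccc": 2}

target = pick_instance(outstanding)  # "i-bbbb" has the fewest
outstanding[target] += 1             # the new request is now in flight
```

Because the choice tracks in-flight counts rather than a fixed rotation, slow responses naturally steer new requests toward faster instances, which is the "adjusts to request response times" point above.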
20. Instances that fail can be replaced seamlessly while other instances continue to operate.
21. [ Health Checks ]
Application-level health checks ensure that request traffic is shifted away from a failed instance.
(Diagram: Client → Elastic Load Balancing → EC2 instances)
22. [ Health Checks ]
Failure detected: application-level health checks ensure that request traffic is shifted away from the failed instance.
(Diagram: one EC2 instance marked as failed)
23. [ Health Checks ]
Failure detected, traffic shifted: request traffic moves away from the failed instance.
(Diagram: the failed EC2 instance no longer receives traffic)
24. [ Health Checks ]
Failure detected, traffic shifted: the remaining healthy instances carry the additional request load.
(Diagram: traffic redistributed to the healthy EC2 instances)
25. [ Health Checks ]
Application-level health checks ensure that request traffic is shifted away from a failed instance.
• TCP and HTTP checks are used to determine the health of the instance and application
• Consider the depth and accuracy of your health checks
• Customize frequency and failure thresholds
• Healthy instances carry the additional request load
• 503 errors are returned if there are no healthy instances
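The health-check lifecycle summarized on these slides can be modelled in a few lines. The threshold value and instance names below are illustrative only; in practice, frequency and failure thresholds are configured on the load balancer itself:

```python
# Toy health-check tracker: an instance is marked unhealthy after a
# configurable number of consecutive failed checks, and traffic is only
# routed to healthy instances. Threshold and names are illustrative.

UNHEALTHY_THRESHOLD = 2  # consecutive failures before marking unhealthy

class Target:
    def __init__(self, name):
        self.name = name
        self.failures = 0
        self.healthy = True

    def record_check(self, passed):
        self.failures = 0 if passed else self.failures + 1
        if self.failures >= UNHEALTHY_THRESHOLD:
            self.healthy = False

def route(targets):
    """Return the healthy targets, or HTTP 503 when none remain."""
    healthy = [t for t in targets if t.healthy]
    if not healthy:
        return 503  # as on the slide: 503 if there are no healthy instances
    return healthy

a, b = Target("i-aaaa"), Target("i-bbbb")
b.record_check(False)
b.record_check(False)  # second consecutive failure: b is now unhealthy
pool = route([a, b])   # only i-aaaa carries traffic
```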
26. Auto Scaling can be used to automatically adjust instance capacity up or down depending on conditions you define.
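The "conditions you define" might look like the following sketch. The CPU thresholds and instance bounds are invented for illustration; real Auto Scaling policies are driven by CloudWatch alarms rather than a function like this:

```python
# Hypothetical scale-out/scale-in decision based on average CPU, mimicking
# a simple threshold-style Auto Scaling policy. All numbers are invented.

def scaling_decision(avg_cpu, current, minimum=2, maximum=10):
    """Return the desired instance count given average CPU utilization."""
    if avg_cpu > 70 and current < maximum:
        return current + 1   # scale out under load
    if avg_cpu < 30 and current > minimum:
        return current - 1   # scale in when idle
    return current           # otherwise keep current capacity

scaling_decision(85, current=4)  # high CPU: add an instance
```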
34. Availability Zones are distinct geographical locations that are engineered to be insulated from failures in other zones.
38. [ Availability Zone Redundancy ]
The load balancer is used to balance across instances in multiple Availability Zones.
(Diagram: Client → Elastic Load Balancing → EC2 instances in Zone 1a and Zone 1b)
39. Each load balancer will contain one or more DNS records, one for each load balancer node.
41. [ Understanding DNS ]
• DNS round robin is used to balance traffic between Availability Zones
• Each load balancer domain name may contain multiple A records (e.g. 192.0.2.1, 192.0.2.2)
• Expect DNS records to change over time
(Diagram: Client resolving the load balancer name, then reaching EC2 instances in each zone)
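The DNS behaviour above can be illustrated with a small round-robin resolver. The two IPs come from the slide's example; the rotation mechanics are a simplified model of what a DNS server does, not Route 53 internals:

```python
# Each load balancer name resolves to multiple A records (one per load
# balancer node). DNS round robin rotates the answer order so successive
# clients favor different records, spreading traffic between zones.

from collections import deque

class RoundRobinDNS:
    def __init__(self, a_records):
        self.records = deque(a_records)

    def resolve(self):
        answer = list(self.records)
        self.records.rotate(-1)  # next client sees a different first record
        return answer

dns = RoundRobinDNS(["192.0.2.1", "192.0.2.2"])
first = dns.resolve()   # ["192.0.2.1", "192.0.2.2"]
second = dns.resolve()  # ["192.0.2.2", "192.0.2.1"]
```

Note that clients typically connect to the first record in the answer, which is why client-side DNS caching (next slide) can freeze this rotation and skew traffic toward one zone.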
43. [ Multiple Zone Challenges ]
Availability Zones may see traffic imbalances due to clients caching DNS records.
(Chart: requests / minute over time, diverging between zones)
44. [ Multiple Zone Challenges ]
An unequal number of instances per zone can lead to over-utilization of instances in a zone.
(Diagram: Client → Elastic Load Balancer → 2 EC2 instances in Zone 1a, 3 EC2 instances in Zone 1b)
46. Cross-Zone Load Balancing distributes traffic across all healthy instances, regardless of Availability Zone.
47. [ Cross-Zone Load Balancing ]
Effectively balances the request load across all instances behind the load balancer.
(Diagram: Client → Elastic Load Balancing → 2 EC2 instances in Zone 1a, 3 EC2 instances in Zone 1b)
48. [ Cross-Zone Load Balancing ]
Traffic is spread evenly across each of the active Availability Zones.
(Chart: requests / minute over time, converging between zones)
49. [ Cross-Zone Load Balancing ]
• Requests are distributed equally to all instances regardless of zone
• Eliminates imbalances in instance utilization
• Reduces the impact of clients caching DNS records
• No bandwidth charge for cross-zone traffic
(Chart: requests / minute over time)
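The imbalance from slide 44 and the fix above can be checked numerically. The 2-versus-3 instance split comes from the slides; the request count and the assumption that DNS round robin sends exactly half the traffic to each zone are simplifications:

```python
# Toy comparison of per-instance load with and without cross-zone load
# balancing, using the slides' example of 2 instances in Zone 1a and
# 3 instances in Zone 1b. DNS round robin halves traffic between zones.

REQUESTS = 600
zones = {"1a": 2, "1b": 3}  # instances per zone

# Without cross-zone: each zone receives half the traffic, split only
# among that zone's own instances.
without_cz = {zone: (REQUESTS / len(zones)) / n for zone, n in zones.items()}
# {'1a': 150.0, '1b': 100.0}: Zone 1a instances run 50% hotter

# With cross-zone: every instance receives an equal share, regardless of zone.
with_cz = REQUESTS / sum(zones.values())
# 120.0 requests per instance everywhere
```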
51. Elastic Load Balancing and Amazon Route 53 have been integrated to support a single application across multiple regions.
53. [ What is Amazon Route 53? ]
• AWS's authoritative Domain Name Service (DNS)
• Health checking service
• Highly available and scalable
• Offers tools that provide flexible, high-performance, and highly available architectures on AWS
54. [ What is Amazon Route 53? ]
Improves availability by …
• health checking load balancer nodes and rerouting traffic to avoid failures
• supporting multi-region and backup architectures for high availability
55. [ What is DNS failover? ]
Health Checks: automated requests sent over the Internet to your application to verify that your application is reachable, available, and functional.
+
Failover: only returns answers for resources that are healthy and reachable from the outside world, so end users are routed away from a failed application.
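A minimal sketch of that answer logic: only records whose health check is passing are returned, preferring the primary. The record names, roles, and health flags below are hypothetical; Route 53 evaluates real health checks against your endpoints:

```python
# DNS failover in miniature: answers only include resources that are
# healthy, preferring the primary record and falling back to a healthy
# secondary. Names and health states are invented for illustration.

records = [
    {"value": "elb-region-1.example.com", "role": "PRIMARY"},
    {"value": "elb-region-2.example.com", "role": "SECONDARY"},
]

def answer(records, health):
    """Prefer a healthy primary; otherwise return any healthy record."""
    healthy = [r for r in records if health[r["value"]]]
    primaries = [r for r in healthy if r["role"] == "PRIMARY"]
    chosen = primaries or healthy
    return [r["value"] for r in chosen]

# Primary healthy: the answer points at the primary endpoint.
answer(records, {"elb-region-1.example.com": True,
                 "elb-region-2.example.com": True})
# Primary failing its health check: end users are routed to the secondary.
answer(records, {"elb-region-1.example.com": False,
                 "elb-region-2.example.com": True})
```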
56. [ How does it work? ]
Work on Failure: when nothing is failing, the volume of API calls is zero; when failure occurs, the volume of API calls spikes.
Constant Work: health checkers and edge locations perform the same volume of activity whether endpoints are healthy or unhealthy.
(Charts: system activity and time to react over time for each approach)
57. [ Global Health Check Network ]
Amazon Route 53 conducts health checks from within each AWS region.
59. [ How does it work? ]
150 seconds, vs. manual failover:
• operator receives an alarm
• operator manually configures DNS update
• wait for DNS changes to propagate
60. [ How does it work? ]
150 seconds, vs. manual failover (operator receives an alarm, manually configures a DNS update, waits for DNS changes to propagate).
• No control plane involvement required for failover to occur
• Edge locations pull health results directly from a globally distributed health checker fleet
• Don't have to wait for API requests to succeed and then propagate
• Failover happens entirely within the Amazon Route 53 data plane
61. [ Simple Failover Scenario ]
• E-commerce site: example.com
• Running application stack in multiple Availability Zones in a single AWS region
• Wants a backup in case:
  - own application goes down across multiple Availability Zones
  - some parts of the world experience degraded connectivity to this AWS region
(Diagram: Region → Elastic Load Balancing → EC2 instances)
64. [ Static Backup Site Options ]
Static site: static vs. dynamic content.
65. [ Latency Based Routing ]
• Provides your globally-distributed end users with faster performance
• Tag each destination end-point to the Amazon EC2 region that it's located in
• Amazon Route 53 will route end users to the end-point that provides the lowest latency
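At its core, Latency Based Routing reduces to picking the region with the lowest measured latency for a given user. The latency figures below are invented; Route 53 bases its choice on its own network measurements, not numbers you supply:

```python
# Toy latency-based routing: route a user to the regional endpoint with
# the lowest measured latency. Measurements are invented for illustration.

measured_ms = {
    "us-east-1": 95,
    "eu-west-1": 22,
    "ap-southeast-1": 180,
}

def lowest_latency_region(measured):
    """Return the region with the smallest measured latency."""
    return min(measured, key=measured.get)

lowest_latency_region(measured_ms)  # this user is served from eu-west-1
```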
66. [ LBR Benefits ]
• Better performance than running in a single region
• Improved reliability relative to running in one region
• Easier implementation than traditional DNS solutions
• Much lower prices than traditional DNS solutions
"Our customers bid on video ad inventory in real time and our system must evaluate the content they're sponsoring and respond with a decision in less than 50ms, or they'll lose the auction. Route 53's Latency Based Routing lets us easily run multiple stacks of our whole targeting platform in each AWS region so we can meet our customers' latency needs."
- Jonathan Dodson, Vice President of Engineering at Affine
67. [ Multi-Region Failover ]
• example.com wants faster page load for customers
• Launches application stack in additional AWS regions
• Uses Amazon Route 53 Latency Based Routing
• Amazon Route 53 DNS Failover ensures that end users are only routed to a region where the application is healthy
(Diagram: Region 1 and Region 2, each with Elastic Load Balancing in front of EC2 instances)
68. [ Multi-Region Failover ]
(Diagram: Route 53 runs a primary health check against the Elastic Load Balancing endpoint in each of Region 1 and Region 2, each fronting EC2 instances)
69. [ Multi-Region Failover ]
The health check fails and traffic shifts away.
(Diagram: the Region 1 health check fails; Route 53 routes traffic to Region 2)
78. Types of Users
Search Site Users
• 400 million queries per month
• Broad geographical distribution
79. Types of Users
Search Site Users
• 400 million queries per month
• Broad geographical distribution
Search API Partners
• 150+ partners worldwide
• Located primarily in US and EU
• 2 billion queries/month
80. Types of Users
Search Site Users
• 400 million queries per month
• Broad geographical distribution
Search API Partners
• 150+ partners worldwide
• Located primarily in US and EU
• 2 billion queries/month
Click Users
• 6.5 billion clicks/month
• Broad geographical distribution
83. Global Distribution of Traffic
(Diagram: traffic distributed across nine Availability Zones)
89. Key Statistics
• 4.5 billion requests/month
• Migrated from 2 data centers to AWS in 5 months
• Deployed in 4 regions
• Approximately 500 EC2 instances
• Approximately 50 load balancers
• Approximately 70 Amazon Route 53 zones
103. Results
• Regional failover in 150 seconds, consistently
• Decreased latency – 25% lower worldwide
• Can easily reroute individual partners to a different region to avoid routing problems
• Replaced expensive network gear from the datacenter
104. What next?
• Expanding to additional regions
• Integration of monitoring data with traffic routing
109. Please give us your feedback on this presentation (CPN104). As a thank you, we will select prize winners daily for completed surveys!
Thank You