- Elastic Load Balancing automatically distributes application traffic across multiple EC2 instances to improve availability and scalability.
- The Application Load Balancer provides advanced request routing features like path-based routing and integration with containers. It also offers improved security, performance, and monitoring capabilities compared to the Classic Load Balancer.
- Key components of Application Load Balancing include listeners, target groups, targets, rules, health checks, and metrics in CloudWatch. These components work together to route traffic, monitor instances, and scale capacity as needed.
A comprehensive walkthrough of how to manage infrastructure-as-code using Terraform. This presentation includes an introduction to Terraform, a discussion of how to manage Terraform state, how to use Terraform modules, an overview of best practices (e.g. isolation, versioning, loops, if-statements), and a list of gotchas to look out for.
For a written and more in-depth version of this presentation, check out the "Comprehensive Guide to Terraform" blog post series: https://blog.gruntwork.io/a-comprehensive-guide-to-terraform-b3d32832baca
Using HashiCorp’s Terraform to build your infrastructure on AWS - Pop-up Loft... – Amazon Web Services
Using Terraform to automate your infrastructure on AWS: what Terraform is, how it differs from Ansible, and how to control cloud deployments using Terraform.
Introduction to DevOps on AWS. A basic introduction to DevOps principles and practices, how they can be implemented on AWS, and an introduction to basic CloudFormation.
Introduction
Benefits
Concepts
Templates
CLI Tool
CloudFormation Demo
CloudFormer (Intro)
Questions
The tutorial covers an introduction to CloudFormation, the benefits of CloudFormation, CloudFormation concepts, the CLI tool, a CloudFormation demo, and an introduction to CloudFormer. It begins with an introduction to CloudFormation, followed by a section on the benefits of CloudFormation, including the services that CloudFormation uses.
The next section covers the core CloudFormation concepts: templates and stacks. The Template part covers the description, objects, a sample template, parameters, resources, the types of resources, and the steps to create a template, while the Stack part covers a stack as a collection of resources that are created or deleted together. After that comes the CLI Tool section, which covers the CLI tool called CFN.
The CLI tool section is followed by a CloudFormation demo, which shows CloudFormation in action and which templates are useful, and also covers the issues encountered in the demo. The last section introduces CloudFormer, describing which tool and architecture it uses and what is possible when using CloudFormer.
AWS CloudFormation is a comprehensive templating language that enables you to create managed 'stacks' of AWS resources, with a growing library of templates available for you to use. But how do you create one from scratch? This presentation will take you through building an AWS CloudFormation template from the ground up, so you can see all the essential template constructs in action.
Watch a recording of the webinar based on this presentation on YouTube here: http://youtu.be/6R44BADNJA8
Check out other upcoming webinars in the Masterclass Series here: http://aws.amazon.com/campaigns/emea/masterclass/
Day 5 - AWS Autoscaling Master Class - The New Capacity Plan – Amazon Web Services
Auto Scaling groups are the new ‘Capacity Plan’ for cloud-based applications. Auto Scaling enables all sorts of applications to scale seamlessly from day-one traffic to millions of users – all with no capital expenditure on extra hardware procurement. Never again be caught out unprepared for a surge in traffic or the traffic generated by a successful campaign. And why keep enough infrastructure running for peak loads during quieter periods, at night for example? Scale down your infrastructure to enjoy the significant cost savings that cloud computing affords you.
Reasons to attend:
- Learn how Autoscaling groups work and how they are configured and triggered.
- Learn how to architect your application in order to achieve zero impact to customers while scaling both up and down.
- Learn how to dynamically change the size of your infrastructure to match the changing capacity requirements.
Docker containers have become a key component of modern application design. Increasingly, developers are breaking their applications apart into smaller components and distributing them across a pool of compute resources.
While many organizations have started to automate their software development processes, many still engineer their infrastructure largely by hand. Treating your infrastructure just like any other piece of code creates a “programmable infrastructure” that allows you to take full advantage of the scalability and reliability of the AWS cloud. This session will walk through practical examples of how AWS customers have merged infrastructure configuration with application code to create application-specific infrastructure and a truly unified development lifecycle. You will learn how AWS customers have leveraged tools like CloudFormation, orchestration engines, and source control systems to enable their applications to take full advantage of the scalability and reliability of the AWS cloud, create self-reliant applications, and easily recover when things go seriously wrong with their infrastructure.
Amazon has proven its might in offering diverse cloud services and has excelled in almost all scenarios to date. Amazon EC2 came into play in 2006 and has gained immense popularity since then. Alongside it, AWS Lambda, which came out in 2014, is now walking side by side with EC2 in terms of popularity and adoption.
To learn the major differences between AWS Lambda and EC2, please visit https://www.whizlabs.com/blog/aws-lambda-vs-ec2/
(DVO308) Docker & ECS in Production: How We Migrated Our Infrastructure from ... – Amazon Web Services
This session will introduce you to Empire, a new self-hosted PaaS built on top of Amazon’s EC2 Container Service (ECS). Empire is a recently open-sourced project that provides a mostly Heroku-compatible API. It allows engineering teams to deploy and manage applications in a method similar to Heroku, but with the added flexibility and control of running your own ECS container instances. We'll talk about why Remind decided to move its infrastructure from Heroku to AWS, introduce you to ECS and the open source platform we built on top of it to make migration easier, and then we'll demo Empire to show you how you can try it today.
Adapting the capacity of your compute infrastructure to the demands of your applications is the domain of Auto Scaling. Adding and removing Amazon EC2 instances is only part of the story, though – there is more to it than first meets the eye. This session introduces the basics of how to use Auto Scaling before moving on to more advanced topics such as mixing Spot and On-Demand instances to optimize cost or strategies for blue/green deployments. If you have used Auto Scaling before, you can learn about useful new features like lifecycle hooks and step scaling policies that make Auto Scaling even more widely applicable.
An Introduction to the AWS Well-Architected Framework - Webinar – Amazon Web Services
The AWS Well-Architected Framework enables customers to understand best practices around security, reliability, performance, cost optimization and operational excellence when building systems on AWS. This approach helps customers make informed decisions and weigh the pros and cons of application design patterns for the cloud.
In this one hour webinar, you'll learn how to use the AWS Well-Architected Framework to follow guidelines and best practices for your architecture on AWS.
Do you want to run your code without the cost and effort of provisioning and managing servers? Find out how in this deep dive session on AWS Lambda, which allows you to run code for virtually any type of application or back end service – all with zero administration. During the session, we’ll look at a number of key AWS Lambda features and benefits, including automated application scaling with high availability; pay-as-you-consume billing; and the ability to automatically trigger your code from other AWS services or from any web or mobile app.
Amazon EC2 Container Service is a new AWS service that makes it easy to run and manage Docker-enabled applications across a cluster of Amazon EC2 instances. Amazon EC2 Container Service lets you define, schedule, and stop sets of containers. You have access to the state of your resources, making it easy to confirm that tasks are running or view the utilization of Amazon EC2 instances in your cluster. This session will describe the benefits of containers, introduce the Amazon EC2 Container Service, and demonstrate how to use Amazon EC2 Container Service for your applications.
Speakers:
Ian Massingham, AWS Technical Evangelist and
Boyan Dimitrov, Platform Automation Lead, Hailo Cabs
An overview of the Amazon ElastiCache managed service, with examples of how it can be used to increase performance, lower costs and augment other database services and databases to make things faster, easier and less expensive.
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances for fault tolerance and load distribution. In this session, we go into detail about Elastic Load Balancing's configuration and day-to-day management, as well as its use in conjunction with Auto Scaling. We explain how to make decisions about the service and share best practices and useful tips for success.
AWS re:Invent 2016: Elastic Load Balancing Deep Dive and Best Practices (NET403) – Amazon Web Services
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances for fault tolerance and load distribution. In this session, we go into detail about Elastic Load Balancing configuration and day-to-day management, as well as its use in conjunction with Auto Scaling. We explain how to make decisions about the service and share best practices and useful tips for success.
(SDD423) Elastic Load Balancing Deep Dive and Best Practices | AWS re:Invent ... – Amazon Web Services
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances for fault tolerance and load distribution. In this session, we go into detail about Elastic Load Balancing's configuration and day-to-day management, as well as its use in conjunction with Auto Scaling. We explain how to make decisions about the service's many customization choices. We also share best practices and useful tips for success.
Elastic Load Balancing Deep Dive and Best Practices - Pop-up Loft Tel Aviv – Amazon Web Services
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances for fault tolerance and load distribution. In this session, we go into detail about Elastic Load Balancing's configuration and day-to-day management, as well as its use in conjunction with Auto Scaling. We explain how to make decisions about the service's many customization choices. We also share best practices and useful tips for success.
Delivering High-Availability Web Services with NGINX Plus on AWS – NGINX, Inc.
Over 1/3 of websites running on Amazon Web Services (AWS) are delivered and accelerated using NGINX. In this webinar, NGINX and Amazon explain how to get started with NGINX Plus on AWS and how to further increase the performance and availability of large, dynamic, cloud-based applications by integrating with critical AWS services.
Slides on ELB (Elastic Load Balancer), a topic in AWS Architect Associate and AWS SysOps certification training for individual, group, or corporate training.
These slides are from the September 2017 group about the 3 types of Load Balancers in AWS - Classic Load Balancer, Application Load Balancer, and Network Load Balancer
Learning Objectives:
- Learn how to make decisions about the service, with best practices and useful tips for success
- Learn about content-based routing, HTTP/2, and WebSockets
- Secure your web applications using TLS termination and AWS WAF on Application Load Balancer
(CMP401) Elastic Load Balancing Deep Dive and Best Practices – Amazon Web Services
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances for fault tolerance and load distribution. In this session, we go into detail about Elastic Load Balancing's configuration and day-to-day management, as well as its use in conjunction with Auto Scaling. We explain how to make decisions about the service and share best practices and useful tips for success.
AWS re:Invent 2016: From EC2 to ECS: How Capital One uses Application Load Ba... – Amazon Web Services
Capital One began moving to AWS just two years ago. Every day, the amount of traffic we serve from the cloud continues to grow. With development teams having the freedom to choose their own technology stacks, many teams have quickly started moving applications to Docker. In this session, learn how Capital One uses a combination of the Elastic Load Balancing service along with Application Load Balancer features to increase deployment speed and reliability.
Elastic Load Balancing allows the incoming traffic to be distributed automatically across multiple healthy EC2 instances.
ELB serves as a single point of contact to the client.
ELB is transparent to clients and increases application availability by allowing EC2 instances to be added or removed across one or more Availability Zones, without disrupting the overall flow of information.
How to build Forecasting services using ML and deep learn... algorithms – Amazon Web Services
Forecasting is an important process for many companies and is used in various areas to accurately predict the growth and distribution of a product, the resources needed on production lines, financial reporting, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a temporal component and then use an algorithm that, starting from the type of data analyzed, produces an accurate forecast.
Big Data for Startups: how to create Big Data applications in Server... mode – Amazon Web Services
The variety and quantity of data created every day keeps accelerating and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters appears to be an investment accessible only to established companies. But the elasticity of the Cloud and, in particular, Serverless services allow us to break through these limits.
We will therefore see how it is possible to develop Big Data applications quickly, without worrying about the infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago Amazon went through a radical transformation with the goal of increasing the pace of innovation. Over this period we learned how changing our approach to application development allowed us to significantly increase agility and release speed and, ultimately, to build more reliable and scalable applications. In this session we will describe how we define modern applications and how building modern apps affects not only the application architecture, but also the organizational structure, the development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances – Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS and Kubernetes on EC2 can take advantage of Spot Instances, leading to an average saving of 70% compared to On-Demand instances. In this session we will look at the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run different kinds of applications, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda :
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's offering unique in the market with Machine Lea... services – Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and build the differentiating elements of your offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, also through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of... – Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques has been difficult for many years; they have often involved manual activities, occasionally leading to application downtime that interrupted users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach at low cost for any kind of workload, guaranteeing greater system reliability and resulting in significant improvements in business continuity.
AWS offers AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Learn how to leverage AWS OpsWorks to guarantee the reliability of your application installed on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads – Amazon Web Services
Do you want to know the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support Group Policy management, authentication, and authorization. In this session we will discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment with the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis powered by artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event next Wednesday, October 14, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a wide range of AWS services, taking full advantage of the AWS cloud while protecting existing VMware investments.
Build your first serverless ledger-based app with QLDB and NodeJS – Amazon Web Services
Many companies today build applications with ledger-type functionality, for example to verify the history of credits or debits in banking transactions, or to track the supply chain flow of their products.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will find out how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for giving end users an exceptional user experience. In this session we will learn how to tackle modern API design challenges with GraphQL, an open source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will go through several scenarios, understanding how AppSync can help solve these use cases by creating modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle databases and VMware Cloud™ on AWS: myths to debunk – Amazon Web Services
Many organizations take advantage of the cloud by migrating their Oracle workloads, securing significant benefits in terms of agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, in addition to performance risks that may be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads while accelerating the transformation to the cloud; they dig into the architecture and show how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies the management of Docker containers through an orchestration layer controlling deployment and the related lifecycle. In this session we will present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
8. Layer 4 (network) vs. Layer 7 (application)
Layer 4 (network):
- Supports TCP and SSL
- Incoming client connection bound to server connection
- No header modification
- Proxy Protocol prepends source and destination IP and ports to the request
Layer 7 (application):
- Supports HTTP and HTTPS
- Connection terminated at the load balancer and pooled to the server
- Headers may be modified
- X-Forwarded-For header contains the client IP address
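To make the layer-7 behavior concrete, a backend behind an HTTP/HTTPS listener can recover the original client address from the X-Forwarded-For header the load balancer inserts. A minimal sketch; the handler class and port 8080 are illustrative assumptions, not part of the deck:

    # Minimal backend that reads the client IP an ELB/ALB forwards.
    # X-Forwarded-For is standard layer-7 ELB behavior; handler name and port are made up.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class EchoClientIp(BaseHTTPRequestHandler):
        def do_GET(self):
            # The left-most entry in X-Forwarded-For is the original client.
            forwarded = self.headers.get("X-Forwarded-For", "")
            client_ip = forwarded.split(",")[0].strip() or self.client_address[0]
            body = f"client ip: {client_ip}\n".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), EchoClientIp).serve_forever()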
11. Application Load Balancer
- New, feature-rich, layer 7 load balancing platform
- Fully managed, scalable, and highly available load balancing platform
- Content-based routing allows requests to be routed to different applications behind a single load balancer
12. Application Load Balancer allows for multiple applications to be hosted behind a single load balancer.
17. Consider blast radius and isolation when grouping applications behind a single load balancer.
18. Application Load Balancer provides native support for microservice and container-based architectures.
19. Application Load Balancer
- Instances can be registered with multiple ports, allowing requests to be routed to multiple containers on a single instance
- ECS will automatically register tasks with the load balancer using a dynamic port mapping
- Can also be used with other container technologies
22. Application Load Balancer
- New API version provided for creating, configuring, and managing Application Load Balancers
- Follows latest AWS best practices for resource identifiers and API design
- Provides several new resource types, including target groups, targets, and rules
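As a sketch of what the new API version looks like in practice, the boto3 "elbv2" client can create an Application Load Balancer. The subnet and security-group IDs below are placeholders, not values from the deck:

    # Create an Application Load Balancer with the elbv2 API (boto3).
    import boto3

    elbv2 = boto3.client("elbv2")

    resp = elbv2.create_load_balancer(
        Name="demo-alb",
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholders; at least two AZs
        SecurityGroups=["sg-0123456789abcdef0"],          # placeholder
        Scheme="internet-facing",
        Type="application",
    )
    alb_arn = resp["LoadBalancers"][0]["LoadBalancerArn"]
    print("Created:", alb_arn)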
24. Listeners
- Define the protocol and port on which the load balancer listens for incoming connections
- Each load balancer needs at least one listener to accept incoming traffic, and can support up to 10 listeners
- Routing rules are defined on listeners
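A minimal sketch of adding a listener with boto3; the load balancer and target group ARNs are placeholders assumed to exist already:

    # Add an HTTP listener on port 80 whose default action forwards to a target group.
    import boto3

    elbv2 = boto3.client("elbv2")

    alb_arn = "<load-balancer-arn>"          # placeholder
    target_group_arn = "<target-group-arn>"  # placeholder

    elbv2.create_listener(
        LoadBalancerArn=alb_arn,
        Protocol="HTTP",
        Port=80,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
    )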
26. Target Groups
- Logical grouping of targets behind a load balancer
- Target groups can exist independently from the load balancer, and be associated with a load balancer when needed
- Regional construct that can be associated with an Auto Scaling group
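A sketch of creating a target group with boto3; the VPC ID and health check path are placeholders:

    # Create a target group for HTTP targets on port 80 in a given VPC.
    import boto3

    elbv2 = boto3.client("elbv2")

    resp = elbv2.create_target_group(
        Name="demo-targets",
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0123456789abcdef0",      # placeholder
        HealthCheckProtocol="HTTP",
        HealthCheckPath="/health",          # placeholder path
    )
    target_group_arn = resp["TargetGroups"][0]["TargetGroupArn"]
    print("Created:", target_group_arn)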
27. [Diagram: one load balancer with two listeners routing to three target groups (two of EC2 instances, one of ECS tasks), each target group with its own health check.]
28. Targets
- Logical load balancing target, which can be an EC2 instance, microservice, or container-based application
- EC2 instances can be registered with the same target group using multiple ports
- A single target can be registered with multiple target groups
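To illustrate registering one instance on several ports (for example, several containers on the same host), a boto3 sketch with a placeholder instance ID and target group ARN:

    # Register the same EC2 instance with a target group on two different ports.
    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.register_targets(
        TargetGroupArn="<target-group-arn>",              # placeholder
        Targets=[
            {"Id": "i-0123456789abcdef0", "Port": 8080},  # placeholder instance
            {"Id": "i-0123456789abcdef0", "Port": 8081},
        ],
    )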
29. [Diagram: the same topology, now with rules attached to the listeners: a default rule on each listener plus a path-based rule (*/img/*) on the first.]
30. Rules
- Provide the link between listeners and target groups, and consist of conditions and actions
- When a request meets the condition of the rule, the associated action is taken
- Today, rules can forward requests to a specified target group
31. Rules (continued)
- Conditions can be specified in path pattern format
- A path pattern is case sensitive, can be up to 128 characters in length, and can contain any of the following characters:
  • A-Z, a-z, 0-9
  • _ - . $ / ~ " ' @ : +
  • & (using &amp;)
  • * (matches 0 or more characters)
  • ? (matches exactly 1 character)
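A sketch of a path-based rule: requests whose path matches */img/* go to a separate target group. The priority value, listener ARN, and target group ARN are placeholders:

    # Route requests whose path matches */img/* to a dedicated target group.
    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.create_rule(
        ListenerArn="<listener-arn>",            # placeholder
        Priority=10,                             # illustrative priority
        Conditions=[{"Field": "path-pattern", "Values": ["*/img/*"]}],
        Actions=[{"Type": "forward", "TargetGroupArn": "<images-target-group-arn>"}],
    )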
32. [Diagram: the same load balancer topology, showing how the default and */img/* rules select among the three target groups.]
37. Application Load Balancer
- Native support for WebSockets, supporting full-duplex communication channels over a single TCP connection
- Support for HTTP/2 provides improved page load times from most of today’s browsers
- Improved performance for real-time and streaming applications
45. Health Checks
- HTTP and HTTPS health checks
- Customize the frequency, failure thresholds, and list of successful response codes
- Detailed reasons for health check failures are now returned via the API and displayed in the Management Console
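A sketch of tuning the health check on an existing target group with boto3; the threshold values here are illustrative, not recommendations from the deck:

    # Customize health check frequency, thresholds, and accepted response codes.
    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.modify_target_group(
        TargetGroupArn="<target-group-arn>",     # placeholder
        HealthCheckProtocol="HTTP",
        HealthCheckPath="/health",               # placeholder path
        HealthCheckIntervalSeconds=15,
        HealthCheckTimeoutSeconds=5,
        HealthyThresholdCount=3,
        UnhealthyThresholdCount=2,
        Matcher={"HttpCode": "200-299"},         # successful response codes
    )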
54. Cross-Zone Load Balancing
- Distributes requests evenly across multiple Availability Zones
- Absorbs impact of DNS caching and eliminates imbalances in backend instance utilization
- No additional bandwidth charge for cross-zone traffic
55. Cross-zone load balancing is enabled by default on all Application Load Balancers.
56. Auto Scaling now supports the scaling of applications at the target group level.
57. Application Load Balancer integrates with Auto Scaling to manage the scaling of each target group independently. [Diagram: requests to example.com reach one ELB; the /orders path routes to one group of EC2 instances and the /images path to another, each scaled on its own.]
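A sketch of wiring an Auto Scaling group to a target group so that each application behind the load balancer scales independently; the group name and ARN are placeholders:

    # Attach a target group to an Auto Scaling group; instances launched by the
    # group are registered with the target group automatically.
    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.attach_load_balancer_target_groups(
        AutoScalingGroupName="orders-asg",                 # placeholder
        TargetGroupARNs=["<orders-target-group-arn>"],     # placeholder
    )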
58. When using Auto Scaling, keep in mind that your application may be under load during quiet times.
60. SSL Offloading
- SSL Negotiation Policies provide a selection of ciphers and protocols that adhere to the latest industry best practices
- Optimized for a balance between security and client connectivity, as tested with Amazon.com traffic
- New: TLS 1.1, TLS 1.2, and WinXP policies
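A sketch of an HTTPS listener that offloads TLS at the load balancer using a predefined security (SSL negotiation) policy; the certificate and other ARNs are placeholders, and ELBSecurityPolicy-2016-08 is one of the AWS-managed policy names:

    # HTTPS listener terminating TLS at the ALB with a managed security policy.
    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.create_listener(
        LoadBalancerArn="<load-balancer-arn>",                              # placeholder
        Protocol="HTTPS",
        Port=443,
        SslPolicy="ELBSecurityPolicy-2016-08",                              # AWS-managed policy
        Certificates=[{"CertificateArn": "<acm-certificate-arn>"}],         # placeholder
        DefaultActions=[{"Type": "forward", "TargetGroupArn": "<target-group-arn>"}],
    )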
63. Web Application Firewall
- SSL Negotiation Policies provide a selection of ciphers and protocols that adhere to the latest industry best practices
- Optimized for a balance between security and client connectivity, as tested with Amazon.com traffic
65. Amazon CloudWatch Metrics
- CloudWatch metrics provided for each load balancer
- Provide detailed insight into the health of the load balancer and application stack
- All metrics provided at 1-minute granularity
66. Amazon CloudWatch Metrics
- Metrics provided at both the load balancer and target group level
- CloudWatch alarms can be configured to notify or take action should any metric go outside of the acceptable range
- Auto Scaling can use these metrics for scaling of the back-end fleet
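As an example of acting on these metrics, a CloudWatch alarm on the target group's unhealthy host count; the names, dimension values, thresholds, and SNS topic are placeholders:

    # Alarm when any target in the target group stays unhealthy for 5 minutes.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="alb-unhealthy-hosts",
        Namespace="AWS/ApplicationELB",
        MetricName="UnHealthyHostCount",
        Dimensions=[
            {"Name": "LoadBalancer", "Value": "app/demo-alb/1234567890abcdef"},            # placeholder
            {"Name": "TargetGroup", "Value": "targetgroup/demo-targets/abcdef1234567890"}, # placeholder
        ],
        Statistic="Maximum",
        Period=60,
        EvaluationPeriods=5,
        Threshold=0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:111122223333:alerts"],   # placeholder
    )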
67. HealthyHostCount
- The count of the number of healthy instances in each Availability Zone
- The most common cause of unhealthy hosts is the health check exceeding the allocated timeout
- Test by making repeated requests to the backend instance from another EC2 instance
- View at the zonal dimension
68. Latency
- Measures the elapsed time, in seconds, from when the request leaves the load balancer until the response is received
- Test by sending requests to the backend instance from another instance
- Use the min, average, and max CloudWatch stats to provide upper and lower bounds for latency
- Debug individual requests using access logs
69. Rejected Connection Count
- The number of connections that were rejected because the load balancer could not establish a connection with a healthy target in order to route the request
- This replaces the surge queue metrics which are used by the Classic Load Balancer
- Surge queues often impact client applications, which fast request rejection improves
- Normally a sign of an under-scaled application
70. Target Group Metrics
The following metrics are now provided at the target group level, allowing individual applications to be closely monitored:
• RequestCount
• HTTPCode_Target_2XX_Count
• HTTPCode_Target_3XX_Count
• HTTPCode_Target_4XX_Count
• HTTPCode_Target_5XX_Count
• TargetResponseTime (Latency)
• UnHealthyHostCount
• HealthyHostCount
71. CloudWatch Percentiles
- Load balancer request response times are now provided with percentile dimensions
- Provides visibility into the 90th, 95th, 99th, or 99.9th percentile of response times
- Allows for more meaningful, and aggressive, performance targets for applications
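A sketch of pulling the p99 response time with the percentile (extended) statistics; the load balancer dimension value is a placeholder:

    # Fetch the 99th-percentile target response time for the last hour.
    from datetime import datetime, timedelta
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/ApplicationELB",
        MetricName="TargetResponseTime",
        Dimensions=[{"Name": "LoadBalancer", "Value": "app/demo-alb/1234567890abcdef"}],  # placeholder
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=60,
        ExtendedStatistics=["p99"],
    )
    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["ExtendedStatistics"]["p99"])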
73. Access Logs
- Provide detailed information on each request processed by the load balancer
- Includes request time, client IP address, latencies, request path, server responses, ciphers and protocols, and user-agents
- Delivered to an S3 bucket every 5 or 60 minutes
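A sketch of turning on access log delivery to S3 via load balancer attributes; the bucket name and prefix are placeholders, and the bucket policy must already allow ELB log delivery:

    # Enable ALB access logs to an S3 bucket.
    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn="<load-balancer-arn>",      # placeholder
        Attributes=[
            {"Key": "access_logs.s3.enabled", "Value": "true"},
            {"Key": "access_logs.s3.bucket", "Value": "my-alb-logs-bucket"},   # placeholder
            {"Key": "access_logs.s3.prefix", "Value": "demo-alb"},             # placeholder
        ],
    )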
74. Request Tracing
- Application Load Balancers insert a unique trace identifier into each request using a custom header: X-Amzn-Trace-Id
- Trace identifiers are preserved through the request chain to allow for request tracing
- Trace identifiers are included in access logs and can also be logged by applications themselves
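To show how an application can participate in request tracing, a sketch that reads the trace header and forwards it on a downstream call; the downstream URL is a placeholder:

    # Read the ALB trace header and propagate it to a downstream service.
    import urllib.request

    def call_downstream(incoming_headers):
        # ALB inserts X-Amzn-Trace-Id; pass it along so logs can be correlated.
        trace_id = incoming_headers.get("X-Amzn-Trace-Id", "")
        req = urllib.request.Request("http://internal-service.local/orders")   # placeholder URL
        if trace_id:
            req.add_header("X-Amzn-Trace-Id", trace_id)
        with urllib.request.urlopen(req) as resp:
            return resp.read()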
We’ve all started here, a single instance serving a basic application. It does not take much to realize that this is not an architecture you’d want to take into production. From an availability point of view, you don’t have much hope. From a scalability point of view, you’re down to what a single EC2 instance can support with no plan to add capacity if required.
Elastic Load Balancing allows you to route application request traffic over one to many EC2 instances, and ensures that a failed instance does not impact your customers by removing it from service.
This is how you want your application to look
Elastic: scales dynamically as request load increases; we watch different metrics (throughput, CPU, memory) and scale accordingly
Secure: support for end-to-end traffic encryption using the latest protocols and ciphers; we handle the SSL for you so you can focus on building awesome applications
Integrated: Amazon EC2, Auto Scaling, Elastic Beanstalk, CloudWatch, Route 53, ECS
Cost Effective: cheaper to run an ELB than to try and do it yourself with EC2; you only pay for what you use (~$18.50 per month plus a bandwidth charge), and ELB becomes cheaper with scale, the larger you get
Explain that we want multiple AZs to handle failures.
EC2-VPC architecture for the load balancer:
Customer instances sit in their VPC, spread across two subnets (shown in blue).
Load balancer nodes sit in a separate VPC, owned by the ELB account.
The customer associates subnets with the ELB when it is created.
ELB takes two ENIs from the customer's account and attaches one to each load balancer node.
This is how we give you control using security groups, and how we get very secure access into your network.
If the ELB is public it gets a public IP; if internal, a private IP, which will only be accessible from inside the VPC.
Amazon Route 53 is used for DNS, with round robin directing traffic to each of the load balancer nodes.
You get the ELB DNS name from the API; you can CNAME to it or use the Route 53 alias feature.
We are HUGE supporters of Route 53… highly recommend you take a look at the health check feature.
Connections:
TCP: each connection is terminated at the load balancer but bound to a connection on the back end; we don't look at it, we just flip it to the backend.
If you want to do SSL on the backend, you can pass it through and do it yourself.
HTTP: a connection pool is used to the back-end instance.
Headers:
TCP: the headers are left unchanged and forwarded to the back-end instance.
HTTP: headers may be inserted depending on the features that are enabled on the load balancer, for example X-Forwarded-For.
Source IP: since ELB proxies all incoming connections, the back-end instance will see the connection coming from the ELB nodes themselves.
TCP: Proxy Protocol can be used to retrieve the source IP address and port; we prepend this to the front of the packet.
HTTP: the X-Forwarded-For header appended to the request contains the source IP address.
Algorithms:
TCP: round robin is used; the reason for this is that there is no connection pooling and we don't look at the packet.
HTTP: least outstanding requests, a request-based form of the leastconns algorithm, is used; the ELB node with the fewest outstanding requests gets the next request.
Sticky Sessions: although we always recommend architectures that keep session state off the instance, such as in ElastiCache, we do support cookie-based sticky sessions for HTTP listeners.
We built a special relationship with EC2 to get you your cross zone traffic for free
Elastic Load Balancing allows you to route application request traffic over 1 to many EC2 instances and ensures that any failed instances does not impact your customers by removing them from service.
This is how you want your application to look
Elastic Load Balancing allows you to route application request traffic over 1 to many EC2 instances and ensures that any failed instances does not impact your customers by removing them from service.
This is how you want your application to look
Elastic Load Balancing allows you to route application request traffic over 1 to many EC2 instances and ensures that any failed instances does not impact your customers by removing them from service.
This is how you want your application to look
TCP health checks are shallow, they really just require power to your network card, typically we see customers using these since they could not get HTTP to work with their application, good chance TCP could be working, but application not working
How deep should your check go? The should go deep enough to remove an individual node, this can be a whole additional talk
If you have a health check that goes too deep, and there is a shared dependency across all nodes, for example your health check actually makes a DB call, and the DB goes down, and execute an actual full customer path, all nodes will be removed from the ELB, and we’ll start throwing 503s, so make sure you remove your shared dependencies from your health checks and monitor those separately
Elastic Load Balancing allows you to route application request traffic over 1 to many EC2 instances and ensures that any failed instances does not impact your customers by removing them from service.
This is how you want your application to look
We built a special relationship with EC2 to get you your cross zone traffic for free
EXPLAIN WE WANT AZS FOR FAILURES….
EC2-VPC Architecture for the load balancer.
Customers instances in their VPC, spread across two subnets (shown in blue).
Load Balancer nodes in a separate VPC, owned by the ELB account.
Customer associates subnet with ELB when it is created.
ELB takes 2 ENIs from the customers account and attaches them to each load balancer node
This is how we give you control using security groups, and how we get very very secure access into your network
If public ELB put public IP, if internal, private, which will only be accesible from inside VPC
Amazon Route 53 used for DNS and used round robin to direct traffic to each of the load balancer nodes.
You get the ELB DNS name from the API, that you can CNAME to or use the R53 alias feature
We are HUGE supporters of R53… highly recommend you guys take a look at the health check feature
TCP health checks are shallow, they really just require power to your network card, typically we see customers using these since they could not get HTTP to work with their application, good chance TCP could be working, but application not working
How deep should your check go? The should go deep enough to remove an individual node, this can be a whole additional talk
If you have a health check that goes too deep, and there is a shared dependency across all nodes, for example your health check actually makes a DB call, and the DB goes down, and execute an actual full customer path, all nodes will be removed from the ELB, and we’ll start throwing 503s, so make sure you remove your shared dependencies from your health checks and monitor those separately
EXPLAIN WE WANT AZS FOR FAILURES….
EC2-VPC Architecture for the load balancer.
Customers instances in their VPC, spread across two subnets (shown in blue).
Load Balancer nodes in a separate VPC, owned by the ELB account.
Customer associates subnet with ELB when it is created.
ELB takes 2 ENIs from the customers account and attaches them to each load balancer node
This is how we give you control using security groups, and how we get very very secure access into your network
If public ELB put public IP, if internal, private, which will only be accesible from inside VPC
Amazon Route 53 used for DNS and used round robin to direct traffic to each of the load balancer nodes.
You get the ELB DNS name from the API, that you can CNAME to or use the R53 alias feature
We are HUGE supporters of R53… highly recommend you guys take a look at the health check feature
TCP health checks are shallow, they really just require power to your network card, typically we see customers using these since they could not get HTTP to work with their application, good chance TCP could be working, but application not working
How deep should your check go? The should go deep enough to remove an individual node, this can be a whole additional talk
If you have a health check that goes too deep, and there is a shared dependency across all nodes, for example your health check actually makes a DB call, and the DB goes down, and execute an actual full customer path, all nodes will be removed from the ELB, and we’ll start throwing 503s, so make sure you remove your shared dependencies from your health checks and monitor those separately
EXPLAIN WE WANT AZS FOR FAILURES….
EC2-VPC Architecture for the load balancer.
Customers instances in their VPC, spread across two subnets (shown in blue).
Load Balancer nodes in a separate VPC, owned by the ELB account.
Customer associates subnet with ELB when it is created.
ELB takes 2 ENIs from the customers account and attaches them to each load balancer node
This is how we give you control using security groups, and how we get very very secure access into your network
If public ELB put public IP, if internal, private, which will only be accesible from inside VPC
Amazon Route 53 used for DNS and used round robin to direct traffic to each of the load balancer nodes.
You get the ELB DNS name from the API, that you can CNAME to or use the R53 alias feature
We are HUGE supporters of R53… highly recommend you guys take a look at the health check feature
TCP health checks are shallow, they really just require power to your network card, typically we see customers using these since they could not get HTTP to work with their application, good chance TCP could be working, but application not working
How deep should your check go? The should go deep enough to remove an individual node, this can be a whole additional talk
If you have a health check that goes too deep, and there is a shared dependency across all nodes, for example your health check actually makes a DB call, and the DB goes down, and execute an actual full customer path, all nodes will be removed from the ELB, and we’ll start throwing 503s, so make sure you remove your shared dependencies from your health checks and monitor those separately
EXPLAIN WE WANT AZS FOR FAILURES….
EC2-VPC Architecture for the load balancer.
Customers instances in their VPC, spread across two subnets (shown in blue).
Load Balancer nodes in a separate VPC, owned by the ELB account.
Customer associates subnet with ELB when it is created.
ELB takes 2 ENIs from the customers account and attaches them to each load balancer node
This is how we give you control using security groups, and how we get very very secure access into your network
If public ELB put public IP, if internal, private, which will only be accesible from inside VPC
Amazon Route 53 used for DNS and used round robin to direct traffic to each of the load balancer nodes.
You get the ELB DNS name from the API, that you can CNAME to or use the R53 alias feature
We are HUGE supporters of R53… highly recommend you guys take a look at the health check feature
TCP health checks are shallow, they really just require power to your network card, typically we see customers using these since they could not get HTTP to work with their application, good chance TCP could be working, but application not working
How deep should your check go? The should go deep enough to remove an individual node, this can be a whole additional talk
If you have a health check that goes too deep, and there is a shared dependency across all nodes, for example your health check actually makes a DB call, and the DB goes down, and execute an actual full customer path, all nodes will be removed from the ELB, and we’ll start throwing 503s, so make sure you remove your shared dependencies from your health checks and monitor those separately
TCP health checks are shallow, they really just require power to your network card, typically we see customers using these since they could not get HTTP to work with their application, good chance TCP could be working, but application not working
How deep should your check go? The should go deep enough to remove an individual node, this can be a whole additional talk
If you have a health check that goes too deep, and there is a shared dependency across all nodes, for example your health check actually makes a DB call, and the DB goes down, and execute an actual full customer path, all nodes will be removed from the ELB, and we’ll start throwing 503s, so make sure you remove your shared dependencies from your health checks and monitor those separately
EXPLAIN WE WANT AZS FOR FAILURES….
EC2-VPC Architecture for the load balancer.
Customers instances in their VPC, spread across two subnets (shown in blue).
Load Balancer nodes in a separate VPC, owned by the ELB account.
Customer associates subnet with ELB when it is created.
ELB takes 2 ENIs from the customers account and attaches them to each load balancer node
This is how we give you control using security groups, and how we get very very secure access into your network
If public ELB put public IP, if internal, private, which will only be accesible from inside VPC
Amazon Route 53 used for DNS and used round robin to direct traffic to each of the load balancer nodes.
You get the ELB DNS name from the API, that you can CNAME to or use the R53 alias feature
We are HUGE supporters of R53… highly recommend you guys take a look at the health check feature
We built a special relationship with EC2 to get you your cross zone traffic for free
We’ve all started here, a single instance serving a basic application. It does not take much to realize that this is not an architecture you’d want to take into production. From an availability point of view, you don’t have much hope. From a scalability point of view, you’re down to what a single EC2 instance can support with no plan to add capacity if required.
DescribeInstanceHealth has to be called to see the health of each back-end instance.
This is one of our awesome features to help you maintain a good experience for your customers and proactively notify you of potential issues.
Mitigating failures is a hugely important feature of ELB.
One of the machines starts having issues.
The health check reaches out to the back end at an interval set by you, the customer.
Anything but a 200 is not healthy, and we will fail away from that instance in the event of a failure.
You get notified, fix the issue, the back end is marked as healthy again, and ELB starts routing traffic to it.
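A hedged sketch of wiring this up with boto3: configure an HTTP health check, then call DescribeInstanceHealth to see which back ends are in or out of service. The load balancer name, path, and thresholds below are illustrative, not prescribed.

```python
import boto3

elb = boto3.client("elb", region_name="eu-west-1")

# Ping an HTTP path every 15 seconds; two consecutive failures mark the instance
# unhealthy, two consecutive successes bring it back into service.
elb.configure_health_check(
    LoadBalancerName="my-web-elb",
    HealthCheck={
        "Target": "HTTP:80/health",
        "Interval": 15,
        "Timeout": 5,
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 2,
    },
)

# DescribeInstanceHealth shows which back ends are InService vs OutOfService.
for state in elb.describe_instance_health(LoadBalancerName="my-web-elb")["InstanceStates"]:
    print(state["InstanceId"], state["State"], state.get("Description", ""))
```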
TCP health checks are shallow; they really just require power to your network card. Typically we see customers using these because they could not get HTTP checks to work with their application, and there’s a good chance the TCP check passes while the application itself is not working.
How deep should your check go? It should go deep enough to remove an individual bad node; this could be a whole additional talk.
If you have a health check that goes too deep and there is a shared dependency across all nodes, for example your health check makes a DB call or exercises a full customer path, then when the DB goes down all nodes will be removed from the ELB and we’ll start throwing 503s. Make sure you remove shared dependencies from your health checks and monitor those separately.
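One way to follow that advice, sketched here with Flask purely as an illustration, is a /health endpoint that exercises only the local node and deliberately leaves out shared dependencies such as the database.

```python
from flask import Flask

app = Flask(__name__)

@app.route("/health")
def health():
    # Deep enough to prove this node can serve requests, but it deliberately
    # avoids shared dependencies (database, caches) so a single outage cannot
    # fail every node's health check at once.
    return "OK", 200

@app.route("/")
def index():
    return "hello from this instance"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
```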
Please, please make sure you are using multiple AZs for all of your applications!
The dotted lines are AZs.
Route 53, at the top left, does the load balancing between the AZs using round robin.
This works very well as long as clients are resolving DNS.
We believe so strongly in multiple AZs that we will use multiple AZs even if you don’t.
We will always use two AZs.
In order for this to work we need two subnets from you.
Even if you don’t have instances in that second AZ, that’s fine.
Here you can see one AZ is running hot since it only has one instance, while the other AZ has three instances.
Some customers might run like this; usually, though, it’s either a deployment in progress or a problem with some instances.
Ideally you want to allocate traffic evenly across ALL instances.
Cross-zone load balancing can solve this problem.
If a client does not obey DNS, we will absorb the imbalance: a badly behaved client might be hammering one AZ, but we will scale up and still distribute across all AZs.
We built a special relationship with EC2 to get you your cross zone traffic for free
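Assuming a Classic ELB named my-web-elb (a placeholder), enabling cross-zone load balancing is a one-call attribute change in boto3.

```python
import boto3

elb = boto3.client("elb", region_name="eu-west-1")

# Spread requests evenly across all registered instances in every AZ,
# not just the instances in the AZ a given load balancer node lives in.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-web-elb",
    LoadBalancerAttributes={
        "CrossZoneLoadBalancing": {"Enabled": True},
    },
)
```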
CW Metrics
We’re amazed at times that customers do not know how many metrics we actually make available.
With ELB we give you 1-minute metrics by default, all the time.
HealthyHostCount and UnHealthyHostCount: the sum should always equal the number of back-end instances behind the load balancer.
This comes back to the health check we discussed earlier.
The most common reason for unhealthy hosts is timeouts! To the instance it looks like the health check succeeded, but it took too long to respond.
Check from another EC2 instance to see why it is failing.
This is the other very interesting metric.
It measures the time from when we send the first byte to the back end until we receive the first byte of the response.
It is a very good indication of how your app is doing.
The surge queue is a queue in the load balancer where we will queue requests if you don’t have enough back-end capacity.
We will queue them as best we can; we can hold 1,024 requests, but beyond that we’ll start dropping them.
You can see clients timing out while their requests sit in the surge queue, and that causes problems downstream.
If you see this metric rising, it’s usually an indication that you are under-scaled.
You can auto scale on any CloudWatch metric; surge queue might be a late signal, but latency could be a great one.
You may be at peak multiple times a day!
It’s important to consider all possible bottlenecks: you may be scaling on CPU but also need to watch I/O, memory, etc.; different traffic patterns might use different resources.
Also, many people are under-scaled at the troughs; they remove too much capacity at the bottom of their curve.
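To poke at these numbers yourself, here is a rough boto3 sketch that pulls the last hour of per-minute Latency and SurgeQueueLength datapoints for a hypothetical load balancer name.

```python
import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch", region_name="eu-west-1")

def elb_metric(name, stat):
    """Fetch the last hour of a per-minute ELB metric for a hypothetical load balancer."""
    now = datetime.utcnow()
    points = cw.get_metric_statistics(
        Namespace="AWS/ELB",
        MetricName=name,
        Dimensions=[{"Name": "LoadBalancerName", "Value": "my-web-elb"}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=60,                # ELB publishes 1-minute datapoints
        Statistics=[stat],
    )["Datapoints"]
    return sorted(points, key=lambda p: p["Timestamp"])

for point in elb_metric("Latency", "Average"):
    print(point["Timestamp"], point["Average"])

for point in elb_metric("SurgeQueueLength", "Maximum"):
    print(point["Timestamp"], point["Maximum"])
```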
Access logs are very useful for diving deeper and knowing which request is driving high latency; they record every single request going through the load balancer.
Example of a customer with very high latencies who was able to diagnose the issue using access logs.
Integrated with log providers like Splunk to give near-real-time traffic analysis.
So if you see a latency spike, you can find THE request that caused the problem.
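Enabling access logs is another load balancer attribute; a minimal boto3 sketch, with a made-up bucket name and prefix, might look like this (the bucket policy must allow the ELB service account to write to it).

```python
import boto3

elb = boto3.client("elb", region_name="eu-west-1")

# Publish per-request access logs to S3 every 5 minutes.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-web-elb",
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-access-logs",   # placeholder bucket name
            "S3BucketPrefix": "production/web",
            "EmitInterval": 5,                      # minutes; 5 or 60
        },
    },
)
```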