Announcing AWS Batch - Run Batch Jobs At Scale - December 2016 Monthly Webina... - Amazon Web Services
AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. With AWS Batch, there is no need to install and manage batch computing software or server clusters that you use to run your jobs, allowing you to focus on analyzing results and solving problems.
Learning Objectives:
• Learn about the capabilities and features of AWS Batch
• Learn about the benefits of AWS Batch
• Learn about the different use cases
• Learn how to get started using AWS Batch
AWS Batch is a service that enables developers, scientists, and engineers to easily and efficiently run batch computing workloads at scale on AWS. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads, allowing you to focus on analyzing results and solving problems.
In this session, led by the AWS Batch service team, you will learn core concepts behind AWS Batch and details of how the service functions. We will cover multiple patterns used by customers to leverage storage and GPUs as part of their batch workloads. We will also cover how to integrate AWS Batch with other services such as AWS Step Functions for decision based workloads or Amazon CloudWatch Events to trigger batch jobs based on events or schedules.
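As a minimal sketch of what job submission looks like, the helper below assembles the keyword arguments for boto3's `batch.submit_job` call. The job, queue, and definition names are made-up placeholders; only the field names follow the Batch API.

```python
# Hypothetical helper: build the payload for boto3's batch.submit_job.
# Queue/definition names below are placeholders for illustration.
def build_submit_job_args(job_name, job_queue, job_definition,
                          vcpus=None, memory_mib=None, command=None):
    """Assemble a submit_job payload; containerOverrides is optional."""
    args = {
        "jobName": job_name,
        "jobQueue": job_queue,
        "jobDefinition": job_definition,
    }
    overrides = {}
    if command:
        overrides["command"] = command
    resources = []
    if vcpus is not None:
        resources.append({"type": "VCPU", "value": str(vcpus)})
    if memory_mib is not None:
        resources.append({"type": "MEMORY", "value": str(memory_mib)})
    if resources:
        overrides["resourceRequirements"] = resources
    if overrides:
        args["containerOverrides"] = overrides
    return args
```

In practice you would pass the result straight through, e.g. `boto3.client("batch").submit_job(**build_submit_job_args(...))`.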
SRV402: Deep Dive on Amazon EC2 Instances, Featuring Performance Optimization... - Amazon Web Services
Amazon EC2 provides a broad selection of instance types to accommodate a diverse mix of workloads. In this session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and Accelerated Computing (GPU and FPGA) instance families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
AWS re:Invent 2016: Get Technically Inspired by Container-Powered Migrations... - Amazon Web Services
This session is a technical journey through application migration and refactoring using containerized technologies. Flux7 recently worked with Rent-A-Center to perform a Hybris migration from their datacenter to AWS, and you can hear how they used Amazon ECS, the new Application Load Balancer, and Auto Scaling to meet the customer's business objectives.
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances for fault tolerance and load distribution. In this session, we go into detail about Elastic Load Balancing's configuration and day-to-day management, as well as its use in conjunction with Auto Scaling. We explain how to make decisions about the service and share best practices and useful tips for success.
Microservices is a software architectural method where you decompose complex applications into smaller, independent services. Containers are great for running small decoupled services, but how do you coordinate running microservices in production at scale and what AWS services do you use?
In this session, we will explore the reasoning and concepts behind microservices and how containers simplify building microservices based applications. We will also demonstrate how you can easily launch microservices on Amazon EC2 Container Service and how you can use ELB and Route 53 to easily do service discovery between microservices.
(CMP407) Lambda as Cron: Scheduling Invocations in AWS Lambda - Amazon Web Services
Do you need to run an AWS Lambda function on a schedule, without an event to trigger the invocation? This session shows how to use an Amazon CloudWatch metric and CloudWatch alarms, Amazon SNS, and Lambda so that Lambda triggers itself every minute—no external services required! From here, other Lambda jobs can be scheduled in crontab-like format, giving minute-level resolution to your Lambda scheduled tasks. During the session, we build this functionality up from scratch with a Lambda function, CloudWatch metric and alarms, sample triggers, and tasks.
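The crontab-like, minute-resolution matching that the session describes can be sketched as a small pure function. This is an illustration of the idea, not the session's actual code, and it only handles the minute and hour fields.

```python
from datetime import datetime

def cron_field_matches(field, value):
    """Match one crontab field ('*', '5', '*/15', '1,30') against a value."""
    for part in field.split(","):
        if part == "*":
            return True
        if part.startswith("*/"):
            if value % int(part[2:]) == 0:
                return True
        elif int(part) == value:
            return True
    return False

def should_fire(cron_expr, now):
    """Minute-resolution check for a 'minute hour' crontab-like expression."""
    minute, hour = cron_expr.split()
    return (cron_field_matches(minute, now.minute)
            and cron_field_matches(hour, now.hour))
```

A self-triggered Lambda running every minute would evaluate `should_fire` for each registered task and invoke the matching ones.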
AWS re:Invent 2016: Running Batch Jobs on Amazon ECS (CON310) - Amazon Web Services
Batch computing is a common way for developers, scientists, and engineers to run a series of jobs on a large pool of shared compute resources, such as servers, virtual machines, and containers. Amazon ECS makes it easy to run and manage Docker-enabled applications across a cluster of Amazon EC2 instances. In this session, we will show you how to run batch jobs using Amazon ECS together with other AWS services, such as AWS Lambda and Amazon SQS. We will see how you can leverage Amazon EC2 Spot Instances to power your ECS cluster and easily scale your batch workloads. You'll hear from Mapbox on how they use ECS to power their entire batch processing architecture, collecting and processing over 100 million miles of sensor data per day to power their maps. Mapbox will also discuss how they optimize their batch processing framework on ECS using Spot Instances and demo their open source framework that will help you get up and running with ECS in minutes.
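One piece of the SQS-plus-ECS pattern above can be sketched as a pure function that turns an SQS job message into task overrides for `ecs.run_task`. The message schema ("command", "env") is an assumption made for illustration, not a documented format.

```python
import json

# Illustrative sketch: translate an SQS batch-job message body (JSON)
# into the containerOverrides structure accepted by ecs.run_task.
# The "command"/"env" message fields are hypothetical.
def job_message_to_overrides(body, container_name="batch-worker"):
    """Turn an SQS message body into an overrides dict for ecs.run_task."""
    job = json.loads(body)
    return {
        "containerOverrides": [{
            "name": container_name,
            "command": job["command"],
            "environment": [
                {"name": key, "value": str(value)}
                for key, value in job.get("env", {}).items()
            ],
        }]
    }
```

A worker or Lambda function draining the queue would pass this dict as the `overrides` argument to `ecs.run_task`, along with the cluster and task definition.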
AWS re:Invent 2016: NEW LAUNCH! Lambda Everywhere (IOT309) - Amazon Web Services
You can now run Lambda functions almost anywhere: originating in the cloud, and on connected devices with AWS Greengrass. This advanced technical session explores Lambda functions and what it means to use them across these diverse environments. We will treat the cloud as the 'brain', using local Lambda functions for local execution. This way devices can react instinctively, much like the autonomic nervous system: operating in the periphery, responsible for collecting and filtering information, and implementing simple, time-sensitive local actions reflexively.
AWS Step Functions is a new, fully-managed service that makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Step Functions is a reliable way to connect and step through a series of AWS Lambda functions, so that you can build and run multi-step applications in a matter of minutes. This session shows how to use AWS Step Functions to create, run, and debug cloud state machines to execute parallel, sequential, and branching steps of your application, with automatic catch and retry conditions. Learn how easy it is to create Step Functions state machines and activities using CloudFormation Templates, and then start them with Amazon API Gateway. We share how customers are using AWS Step Functions to reliably orchestrate and scale multi-step applications such as order processing, report generation, and data transformation–all without managing any infrastructure.
Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
Microservices is a software architectural method where you decompose complex applications into smaller, independent services. Containers are great for running small decoupled services, but how do you coordinate running microservices in production at scale and what AWS services do you use?
In this session, we will explore the reasoning and concepts behind microservices and how containers simplify building microservices based applications. We will also demonstrate how you can easily deploy and monitor microservices on Amazon EC2 Container Service.
Batch Processing with Containers on AWS - June 2017 AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Learn about the options for running batch workloads on AWS
- Learn how to architect a containerized batch processing service on Amazon ECS
- Learn best practices for optimizing and scaling complex batch workload requirements
Batch processing is useful when you need to periodically analyze large amounts of data, but configuring and scaling a cluster of virtual machines to process complex batch jobs can be difficult. Containers provide a great solution for running batch jobs by providing easily managed, scalable, and portable code environments.
In this tech talk, we’ll show you how to use containers on AWS for batch processing jobs that can scale quickly and cost-effectively. We’ll discuss AWS Batch, our fully managed batch-processing service, and show you how to architect your own batch processing service using the Amazon EC2 Container Service. We’ll also discuss best practices for ensuring efficient and opportunistic scheduling, fine-grained monitoring, compute resource auto-scaling, and security for your batch jobs.
AWS Lambda allows any Node.js app to be run at scale in a massively parallel environment with no up-front costs or planning. This session shows how to use Lambda to build dynamic analytic data flows that can be tuned as they execute, based on initial results, to provide real-time output streamed to web clients. This process enables a cost-effective and responsive user experience for ad hoc big data jobs and lets developers focus on how data is consumed and presented, instead of how it is obtained.
What if there were an easier way to perform big data analysis with less setup, instant scaling, and no servers to provision and manage? With serverless computing, you can perform real-time stream processing of multiple data types without needing to spin up servers or install software. Come learn how you can use AWS Lambda with Amazon Kinesis to analyze streaming data in real-time and then store the results in a managed NoSQL database such as Amazon DynamoDB. You’ll learn tips and tricks for doing in-line processing, data manipulation, and even distributed MapReduce on large data sets.
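The decode step inside such a Kinesis-triggered Lambda handler can be sketched as follows. The event shape (`Records` containing base64-encoded `kinesis.data`) follows the Kinesis event structure Lambda delivers; the DynamoDB write is left as a comment since table names and credentials are deployment-specific.

```python
import base64
import json

def parse_kinesis_event(event):
    """Return the JSON payloads carried in a Kinesis-triggered Lambda event."""
    records = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        records.append(json.loads(payload))
    return records

def handler(event, context):
    """Sketch of a stream-processing Lambda handler."""
    items = parse_kinesis_event(event)
    # Persisting results would look roughly like (omitted here):
    # table = boto3.resource("dynamodb").Table("stream-results")
    # for item in items:
    #     table.put_item(Item=item)
    return {"processed": len(items)}
```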
This talk (delivered at QConLondon 2016) covers the evolution of Coursera's nearline architecture, delves into our latest generation system, and then covers the flagship application of the architecture (evaluating programming assignments).
Video and slides synchronized; mp3 and slide download available at http://bit.ly/1stYuc2.
Brennan Saeta covers aspects of Coursera’s architecture that enables them to rapidly build sophisticated features for their learning platform. Saeta discusses also their experience running containers in production, what works, what doesn’t, and why. He briefly touches upon container threat models, and how to architect a defense-in-depth strategy to mitigate both known and unknown vulnerabilities. Filmed at qconlondon.com.
Brennan Saeta is a Lead Infrastructure Engineer, leading the ‘Cour’ (core) group responsible for the development environment, core libraries, and the common infrastructure powering Coursera.
Netflix Container Scheduling and Execution - QCon New York 2016 - aspyker
Scheduling a Fuller House: Container Management At Netflix
Customers from all over the world streamed forty-two billion hours of Netflix content last year. Various Netflix batch jobs and an increasing number of service applications use containers for their processing. In this talk Netflix presents a deep dive on the motivations and the technology powering container deployment on top of the Amazon EC2 service. The talk covers our approach to cloud resource management and scheduling with the open source Fenzo library, along with details on the Docker execution engine that is part of Project Titus. The talk also shares some of the results so far and lessons learned, and ends with a brief look at the developer experience for containers.
With Cloud Functions you write simple functions that each do one unit of work. Cloud Functions can be written in JavaScript, Python 3, or Go: you simply deploy a function bound to the event you want, and you are done. In our case we will leverage Cloud Functions to manage our K8s clusters based on working hours in order to save budget.
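The scheduling policy inside such a Cloud Function can be sketched as a pure decision function. The working hours and node counts below are arbitrary examples, and the actual resize call (e.g. via the GKE node-pool API) is omitted.

```python
from datetime import datetime

# Illustrative working-hours policy for a scheduled Cloud Function.
# Hours and sizes are example values, not a recommendation.
def desired_node_count(now, workday_size=5, offhours_size=0,
                       start_hour=8, end_hour=19):
    """Return full size during weekday working hours, scaled down otherwise."""
    if now.weekday() >= 5:          # Saturday (5) or Sunday (6)
        return offhours_size
    if start_hour <= now.hour < end_hour:
        return workday_size
    return offhours_size
```

The function would run on a Cloud Scheduler trigger and apply the returned count to the cluster's node pool, which is the budget-saving lever described above.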
20211202 NADOG Adapting to Covid with Serverless - Craeg Strong, Ariel Partners
This case study describes how we leveraged serverless technology and the AWS Serverless Application Model (SAM) to support the needs of virtual training classes for a major US Federal agency. Our firm was excited to be selected as the main training partner to help a major US Federal government agency roll out Agile and DevOps processes across an organization comprising more than 1500 people. And then the pandemic hit, and what was to have been a series of in-person classes turned 100% virtual! We created a set of fully populated Docker images containing all of the test data, plugins, and scenarios required for the student exercises. For our initial implementation, we simply pre-loaded our Docker images into Elastic Beanstalk and then replicated them as many times as needed to provide the necessary number of instances for a given class. While this worked out fine at first, we found a number of shortcomings as we scaled up to more students and more classes. Eventually we came up with a much easier solution using serverless technology: we stood up a single-page application that could kick off tasks using AWS Step Functions to run Docker images in Elastic Container Service, all running under AWS Fargate. This application is a perfect fit for serverless technology, and describing our evolution to serverless and SAM may help you gain insights into how these technologies may be beneficial in your situation.
Amazon ECS at Coursera: A unified execution framework while defending against... - Brennan Saeta
In this talk, Frank Chen and Brennan Saeta discuss Coursera's use of Docker and Amazon ECS. We discuss the implementation of our unified processing framework and delve into the security challenges inherent in running untrusted code.
(CMP406) Amazon ECS at Coursera: A general-purpose microservice - Amazon Web Services
Coursera has helped millions of students learn computer science through MOOCs ranging from Introduction to Python to state-of-the-art Functional-Reactive Programming in Scala. Our interactive educational experience relies upon an automated grading platform for programming assignments. But, because anyone can sign up for a course on Coursera for free, our systems must defend against arbitrary code execution.
Come learn how Coursera uses AWS services such as Amazon EC2 Container Service (ECS) and Amazon Virtual Private Cloud (VPC) to power a defense-in-depth strategy to secure our infrastructure against bad actors. We have modified the Amazon ECS Agent to support security layers including kernel privilege de-escalation and mandatory access control systems. Additionally, we post-process uploaded grading container images to defang binaries.
At the core of automated grading is a general-purpose near-line and batch scheduling and execution microservice built on top of the Amazon ECS APIs. We use this flexible system to power a variety of internal services across the company, including data exports for instructors, course announcement emails, data reconciliation jobs, and more.
In this session, we detail aspects of our success from implementing Docker and Amazon ECS in production, providing ideas for your own scheduling, execution, and hardening requirements.
A group of Airflow core committers talk about what's coming with Airflow 2.0!
Speakers: Ash Berlin-Taylor, Kaxil Naik, Kamil Breguła, Jarek Potiuk, Daniel Imberman, and Tomasz Urbaszek.
Series of Unfortunate Netflix Container Events - QConNYC17 - aspyker
Project Titus is Netflix's container runtime on top of Amazon EC2. Titus powers algorithm research through massively parallel model training, media encoding, data research notebooks, ad hoc reporting, NodeJS UI services, stream processing, and general microservices. As an update from last year's talk, we will focus on the lessons learned operating one of the largest container runtimes on a public cloud. We'll cover the migration we've seen of applications and frameworks from VMs to containers. We will cover the operational issues with containers that only showed up after we reached the large scale (thousands of container hosts, hundreds of thousands of containers launched weekly) we are currently supporting. We'll touch on the unique features we have added to help both batch and microservices run across a variety of runtimes (Java, R, NodeJS, Python, etc.) and how higher-level frameworks have taken advantage of Titus's scheduling capabilities.
Scala-like distributed collections - dumping time-series data with Apache Spark - Demi Ben-Ari
Spark RDDs are almost identical to Scala collections, just in a distributed manner: all of the transformations and actions are derived from the Scala collections API.
As Martin Odersky put it, "Spark - The Ultimate Scala Collections" is the right way to look at RDDs. But with that great distributed power come a great many data problems: at first you'll tackle the concept of partitioning, then the actual data becomes the next thing to worry about.
In the talk we’ll go through an overview on Spark's architecture, and see how similar RDDs are to the Scala collections API. We'll then shift to the world of problems that you’ll be facing when using Spark for processing a vast volume of time-series data with multiple data stores (S3, MongoDB, Apache Cassandra, MySQL).
When you start tackling many scale and performance problems, many questions arise:
> How to handle missing data?
> Should the system handle both serving and backend processes, or should we separate them out?
> Which solution is cheaper?
> How do we get the best performance for money spent?
In the talk we will tell the tale of all of the transformations we've made to our data and review the multiple data persistence layers... and I'll try my best NOT to answer the question "which persistence layer is the best?", but I do promise to share our pains and lessons learned!
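One of the questions above, handling missing data, can be illustrated with a toy last-observation-carried-forward pass. This is plain Python for brevity; a Spark version would apply the same logic per time-ordered partition (e.g. with `mapPartitions`), and the strategy itself is just one example, not the talk's prescribed answer.

```python
# Toy sketch: fill gaps in a time series by carrying the last
# observation forward. Input must already be sorted by timestamp.
def fill_missing(series):
    """series: list of (timestamp, value_or_None) pairs, sorted by time."""
    filled, last = [], None
    for ts, value in series:
        if value is None:
            value = last          # reuse the last known value
        else:
            last = value
        filled.append((ts, value))
    return filled
```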
How to build Forecasting services using ML and deep learning algorithms - Amazon Web Services
Forecasting is an important process for many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a temporal component and then use an algorithm that, based on the type of data analyzed, produces an accurate forecast.
Big Data for Startups: how to build serverless Big Data applications - Amazon Web Services
The variety and volume of data created every day keeps accelerating and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment affordable only for established companies. But the elasticity of the Cloud, and Serverless services in particular, let us break through these limits.
Let's see, then, how to develop Big Data applications quickly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and show how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing its pace of innovation. Over that period we learned how changing our approach to application development dramatically increased our agility and release velocity and, ultimately, let us build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only application architecture, but also organizational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the one used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances - Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can all take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand Instances. In this session we will explore the characteristics of Spot Instances and how easily they can be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate the adoption of various Open Banking business models. AWS and FinConecta would therefore like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's offering stand out in the market with Machine Learning services - Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and build the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to choose among the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of your EC2 instances - Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years: they often involved manual activities that occasionally caused application downtime and interrupted user operations. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and yielding significant improvements in business continuity.
AWS offers AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances using Chef and Puppet.
Learn how to use AWS OpsWorks to guarantee the reliability of the applications installed on your EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads - Amazon Web Services
Do you want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session we will discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and running Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis based on artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore what AWS services make possible when applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event next Wednesday, October 14th, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in VMware vSphere®-based cloud environments and access a wide range of AWS services, taking full advantage of the AWS cloud while protecting your existing VMware investments.
Build your first serverless ledger-based app with QLDB and NodeJS - Amazon Web Services
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the flow of their products through the supply chain.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but which are complex and costly tools to manage.
Amazon QLDB removes the need to build custom, complex systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for offering end users a great user experience. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dive into several scenarios, understanding how AppSync can help solve these use cases by building modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to the users of its web portal.
Oracle Databases and VMware Cloud™ on AWS: debunking the myths - Amazon Web Services
Many organizations take advantage of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can also be introduced when applications are moved out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips that ease and simplify the migration of Oracle workloads while accelerating the transformation to the cloud; they dive into the architecture and show how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we will present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
8. Bad Old Days of Batch Processing
Cascade
● PHP-based job runner
● Run in screen sessions
● Polled for new jobs
● Fragile and unreliable
● Forced restarts on a regular basis
9. Bad Old Days of Batch Processing II
Saturn
● Scala-based scheduled batch process runner
o Powered by Quartz Scheduler library
● Cron-only jobs
o Cannot run jobs on demand
o Some jobs have to wake up frequently to poll
● All jobs ran on the same instance (and JVM), causing interference
10. What Do We Want?
Reliability
● Saturn / Cascade were flaky
● Developers became frustrated with jobs not running properly
flickr.com/rclarkeimages, cc by-nc 2.0
11. What Do We Want?
Easy Development
● Developing & testing locally was difficult
● Little or no boilerplate should be required
flickr.com/riebart, cc by 2.0
12. What Do We Want?
Easy Deployment
● Deployment was difficult and non-deterministic
● “Other services have one-click tools, why can’t your service have that too?”
flickr.com/derelllicht, cc by-nd 2.0
13. What Do We Want?
High Efficiency
● Cost-conscious
● Most jobs complete < 20 minutes
o EC2 rounds costs up to full hour
● Startup time of individual instances too long for batch processing
flickr.com/koertmichiels, cc by-nc-nd 2.0
14. What Do We Want?
Low Ops Load
● Only one dev-ops engineer -- can’t manage everything
● Developers own their services
● Developers shouldn’t have to actively monitor services
flickr.com/reynermedia, cc by 2.0
15. Alternative Technologies
Home-grown Tech
● Tried, but proved to be unreliable
● Difficult to handle coordination and synchronization
Other options
● Very powerful, but hard to productionize
● Needs an actual DevOps team
● GCE first-class, everything else second-class
16. Amazon ECS
● Low-to-no maintenance solution
● Integrated with AWS infrastructure
● Easy to understand and program for
17. However…
● No scheduled tasks
● No fine-grained monitoring of tasks
● No retries / delays when the cluster is out of resources
● Does not integrate well with our
existing Scala APIs and tooling
18. Iguazú -- Batch Management for ECS
● Named for Iguazú Falls
o World’s largest waterfall
● Batch Task Scheduler
o Immediately
o Deferred (run once at X time)
o Scheduled recurring (cron-like)
● Programmatically accessible via standard APIs / clients
flickr.com/mrpunto, cc by-nc-nd 2.0
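The three submission modes above can be sketched as a small algebraic data type. The names here are illustrative, not Iguazú's actual API:

```scala
import java.time.Instant

// Hypothetical model of Iguazú's three scheduling modes.
sealed trait Schedule
case object Immediately extends Schedule                   // run as soon as possible
final case class Deferred(runAt: Instant) extends Schedule // run once at time X
final case class Recurring(cron: String) extends Schedule  // cron-like recurrence

final case class JobRequest(family: String, jobName: String, schedule: Schedule)

object ScheduleDemo {
  def describe(req: JobRequest): String = req.schedule match {
    case Immediately     => s"${req.jobName}: run now"
    case Deferred(at)    => s"${req.jobName}: run once at $at"
    case Recurring(expr) => s"${req.jobName}: run on schedule '$expr'"
  }
}
```

A caller would then submit something like `JobRequest("mailer", "recommendationsEmail", Recurring("0 9 * * *"))`; making the mode an explicit sum type lets the scheduler exhaustively handle each case.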
19. Iguazú Semantic Guarantees
● At-most-once execution for all jobs
● Jobs will be provided with at least the CPU and RAM they requested
● Scheduler may elect to skip execution of some scheduled jobs under adverse conditions
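One way to realize an at-most-once guarantee is to record each invocation id before running and refuse duplicates. This is a sketch of the idea, not Iguazú's actual implementation:

```scala
import scala.collection.mutable

// At-most-once: a job body runs only the first time its invocation id
// is seen; replays of the same id are dropped rather than re-run.
final class AtMostOnceRunner {
  private val seen = mutable.Set.empty[String]

  /** Returns true if the body ran, false if the id was a duplicate. */
  def runOnce(invocationId: String)(body: => Unit): Boolean =
    if (seen.add(invocationId)) { body; true }
    else false
}
```

Note the guarantee is at-most-once, not exactly-once: if the process crashes after recording the id but before the body completes, the job is simply skipped rather than re-run, which matches the "may elect to skip" bullet above.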
21. Iguazú Design
● Frontend + Scheduler
o Generates requests (either via API calls or internally from the scheduler)
o Puts new requests in SQS queues
o Handles requests for status from other services
● Backend
o Attempts to run tasks via the ECS API; failure (e.g. lack of resources) means the task goes back into the queue to try again later
o Keeps track of task status and updates Cassandra
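The backend's retry behavior can be sketched as a single poll step. `EcsClient`, `TaskRequest`, and the in-memory queue are stand-ins of ours, not the real AWS SDK and SQS:

```scala
import scala.collection.mutable

final case class TaskRequest(id: String)

// Stand-in for the ECS RunTask call; returns false when the cluster
// cannot place the task (e.g. out of CPU or memory).
trait EcsClient {
  def runTask(req: TaskRequest): Boolean
}

final class Backend(ecs: EcsClient) {
  // Stand-in for the SQS queue the frontend writes into.
  val queue = mutable.Queue.empty[TaskRequest]

  /** One poll iteration: start a task if possible, otherwise requeue it. */
  def pollOnce(): Option[TaskRequest] =
    if (queue.isEmpty) None
    else {
      val req = queue.dequeue()
      if (ecs.runTask(req)) Some(req) // started: caller would now track status
      else {
        queue.enqueue(req)            // out of resources: retry later
        None
      }
    }
}
```

In production the requeue would go back to SQS (typically with a delay or visibility timeout) rather than an in-memory queue, but the control flow is the same.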
24. Developing Iguazú Tasks
class Job extends AbstractJob with StrictLogging {
  override val reservedCpu = 1024
  override val reservedMemory = 1024

  def run(parameters: JsValue) = {
    logger.info("I am running my Job!")
    expensiveComputationHere()
  }
}
25. Running Tasks Locally
$ sbt
> project iguazuJobs
[info] Set current project to iguazu-jobs
> run sample sample.json
[info] Running org.coursera.iguazu.internal.IguazuJobsMain sample sample.json
[info] 2015-07-08 13:31:52,368 INFO [o.c.i.j.s.Job] >>> I am running my job!
[success] Total time: 5 s, completed Jul 8, 2015 1:31:58 PM
>
26. Running Tasks from Other Services
// invoking a job with one command
// from another service via Naptime REST framework
val invocationId = IguazuJobInvocationClient
.create(IguazuJobInvocationRequest(
family = "mailer",
jobName = "recommendationsEmail",
parameters = emailParams))
27. Deploying Tasks
Easy Deployment
1. Merge into master. Done!
Jenkins Build Steps:
1. Builds a zip package from master
2. Prepares a Docker image
3. Pushes the Docker image to a Docker registry
4. Registers updated task definitions with the ECS APIs
28. Logs
● Logs are in /var/lib/docker/containers/*
● Uploaded into a log analysis service (Sumologic for us)
● Wrapper prints out the task name and task ID at the start for easy searching
● Good enough for now
29. Metrics
● Using third-party metrics collector (Datadog)
● Metrics for both tasks and container instances
● So long as they can talk to the Internet, things work out pretty well
34. Grading Prog. Assignments
● Special case of batch processing
● Near real-time feedback
o <30 seconds for fast graders
● Compiling and running untrusted code
● Infrastructure security huge concern
o Minimize exfiltration of info (e.g. test cases)
o Avoid being turned into bitcoin miners or a DDoS attack source
36. GRID: Defense in Depth
Network
● Completely separate AWS account
● Network ACLs & Routing Tables
Host / Docker / Other
● Additional mitigation techniques and defenses
● Custom cleaning of container images
37. Modifying the ECS Agent
● Coursera has 2 simple forks of ECS agent
o Allow privileged docker-in-docker access for the
“cleaning” agent
o Disable networking and disk writes for untrusted
code in the “grading” agent
● Check it out at:
github.com/coursera/amazon-ecs-agent
39. Usage
● Iguazú
o 38 tasks written since launch in April
o 24 scheduled tasks
o >1000 invocations per day
● Grid
o Pre-production at this time (launching in weeks)
o Dozens of graders already written by multiple instructional teams!
40. Future Improvements
● True Autoscaling
o Scaling up is easy, scaling down not so much
● Task prioritization (multiple queues)
● Simulate memory and CPU limits in dev modes
41. Lessons Learned / Docker war stories
● Docker instability fun
o Container format changing between 1.0 and 1.5.
● btrfs wedging on older kernels
o Default Ubuntu 14.04 kernel not new enough!
● Disk usage
o Docker-in-docker can’t clean up after itself (yet).
Service-based architecture (dozens of services)
Scala-based backends, with legacy PHP and Python
Cassandra, MySQL and S3 as data storage layer
Completely within the AWS Cloud
Utilize a lot of AWS services
Tooling developed specifically for AWS ops
Our programming assignment infrastructure must be very flexible, powerful, and general purpose. We have courses that cover a huge range of topics. Some courses are an introduction to computing with Python. Others are complete tutorials on functional reactive programming in Scala. Other courses use JavaScript for learning, or MATLAB for Machine Learning. We have a course on full stack web development based on Ruby on Rails. We even have advanced courses in parallel programming that require the use of the GPU!
This is an example programming assignment for a new course we are about to launch.
This is a view of the web upload.
https://www.coursera.org/learn/spcpp-2/programming/ZZOH8/bian-cheng-zuo-ye
Huge thank-yous also to:
Bryan Kane
Colleen Lee
Nick Dellamaggiore
Kam Syed (Amazon)
KD Singh (Amazon)