The document discusses optimizing content processing in the cloud using Amazon EC2 Spot Instances and GPUs. It provides examples of how customers have saved up to 90% on compute costs by using Spot Instances for batch processing workloads like media transcoding, rendering, and scientific simulations. The document demonstrates how to set up a scalable Spot Instance fleet for these types of jobs using tools like Spot Fleet, Spot blocks, and fault tolerance techniques.
This document discusses using big data and machine learning techniques on AWS for content recommendations. It describes three common approaches: search with boosting which adjusts search rankings based on popularity signals; collaborative filtering which identifies similar users and items; and neural networks which use historical user events to create a model that predicts favorites. It also introduces Amazon DSSTNE (Deep Scalable Sparse Tensor Network Engine) for automating GPU-accelerated training and prediction at scale for recommendation systems.
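Of the three approaches, collaborative filtering is the easiest to illustrate in a few lines. Below is a minimal item-item cosine-similarity sketch in pure Python with a made-up ratings matrix; it is an illustration of the idea only, not how DSSTNE works internally (DSSTNE trains sparse neural networks on GPUs at far larger scale).

```python
from math import sqrt

# Hypothetical user -> {item: rating} data, for illustration only.
ratings = {
    "alice": {"doc_a": 5, "doc_b": 3, "doc_c": 4},
    "bob":   {"doc_a": 4, "doc_b": 3, "doc_c": 5},
    "carol": {"doc_a": 1, "doc_b": 5},
}

def item_vector(item):
    """Ratings for one item across all users (0 when unrated)."""
    return [ratings[u].get(item, 0) for u in sorted(ratings)]

def cosine(a, b):
    """Cosine similarity between two rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Items whose rating vectors point the same way are "similar",
# so one can be recommended to users who liked the other.
sim = cosine(item_vector("doc_a"), item_vector("doc_c"))
```

Real systems replace the dense lists with sparse representations, since most users rate almost nothing; that sparsity is exactly what DSSTNE is built to exploit.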
Learn about the new AWS Database Migration Service, which helps you migrate databases with minimal downtime from on-premises and Amazon EC2 environments to Amazon RDS, Amazon Redshift, Amazon Aurora and EC2 databases. We discuss homogeneous (e.g. Oracle-to-Oracle, PostgreSQL-to-PostgreSQL, etc.) and heterogeneous (e.g. Oracle to Aurora, SQL Server to MariaDB) database migrations. We also talk about the new AWS Schema Conversion Tool that saves you development time when migrating your Oracle and SQL Server database schemas, including PL/SQL and T-SQL procedural code, to their MySQL, MariaDB and Aurora equivalents.
Amazon Web Services (AWS) provides on-demand computing resources and services in the cloud, with pay-as-you-go pricing. This session provides an overview and describes why companies are flocking to the cloud so quickly.
Deep dive and best practices on real time streaming applications nyc-loft_oct... - Amazon Web Services
This document provides an overview of real-time streaming data on AWS and best practices for using Amazon Kinesis, Spark Streaming, AWS Lambda, and Amazon EMR. It discusses ingesting streaming data using Kinesis Streams and Firehose, processing data with Kinesis Client Library, Spark Streaming, and AWS Lambda, and integrating with data stores like S3, Redshift and Elasticsearch. Example use cases are also presented from companies like Sonos, publishers and gaming companies.
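To give the ingestion side a concrete flavor: the Kinesis PutRecords API accepts at most 500 records and 5 MB per call, with a 1 MB limit per record, so producers typically batch before sending. A minimal batching sketch (the actual boto3 `put_records` call is stubbed out in a comment; only the limit arithmetic is shown):

```python
MAX_RECORDS_PER_CALL = 500          # Kinesis PutRecords record limit
MAX_BYTES_PER_CALL = 5 * 1024**2    # 5 MB per PutRecords call
MAX_BYTES_PER_RECORD = 1024**2      # 1 MB per record

def batch_records(records):
    """Group encoded records into batches that fit PutRecords limits."""
    batches, current, current_bytes = [], [], 0
    for rec in records:
        if len(rec) > MAX_BYTES_PER_RECORD:
            raise ValueError("record exceeds the 1 MB Kinesis limit")
        if (len(current) == MAX_RECORDS_PER_CALL
                or current_bytes + len(rec) > MAX_BYTES_PER_CALL):
            batches.append(current)
            current, current_bytes = [], 0
        current.append(rec)
        current_bytes += len(rec)
    if current:
        batches.append(current)
    return batches

# Each batch would then go to kinesis.put_records(...) via boto3.
batches = batch_records([b"x" * 1000] * 1200)
```

Firehose removes most of this bookkeeping by buffering for you; the sketch above corresponds to what a Streams producer does itself.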
AWS re:Invent 2016: How to Build a Big Data Analytics Data Lake (LFS303) - Amazon Web Services
This document discusses building a big data analytics data lake. It begins with an overview of what a data lake is and the benefits it provides like quick data ingestion without schemas and storing all data in one centralized location. It then discusses important capabilities like ingestion, storage, cataloging, search, security and access controls. The document provides an example of how biotech company AMGEN built their own data lake on AWS. It concludes with a demonstration of an AWS data lake solution package that can be deployed via CloudFormation to build an initial data lake.
Fully Managed Database Solutions: NoSQL, Relational, and Data Warehouse - Amazon Web Services
This document discusses several Amazon Web Services (AWS) managed database options, including Amazon RDS, DynamoDB, ElastiCache, and Redshift. It provides an overview of each service's dataset size, data model, query semantics, scaling capabilities, and popular use cases. The key benefits highlighted are that these managed DB services eliminate the need for users to manage hardware provisioning, backups, patching, and scaling. This allows users to focus on their applications rather than database infrastructure.
The document discusses strategies for optimizing Amazon EC2 costs, including:
1) Using different EC2 purchasing options like On-Demand, Reserved Instances, and Spot Instances depending on workload needs to balance costs and flexibility.
2) Right-sizing instances, increasing elasticity through automation, and monitoring resources to identify cost-saving opportunities.
3) Applying these strategies together through examples like a three-tier web application optimized across different tiers and workloads using various purchasing options.
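The purchasing-option tradeoff in point 1 can be made concrete with a toy blended-cost calculation for such a three-tier application. All prices here are hypothetical placeholders, not real AWS rates:

```python
# Hypothetical hourly prices for one instance type (illustration only).
ON_DEMAND = 0.10   # $/hr, full flexibility
RESERVED = 0.06    # $/hr effective, steady-state commitment
SPOT = 0.03        # $/hr average, interruptible capacity

HOURS_PER_MONTH = 730

def monthly_cost(fleet):
    """fleet: list of (instance_count, hourly_price) pairs."""
    return sum(n * price * HOURS_PER_MONTH for n, price in fleet)

# Baseline web tier on Reserved, daily peaks on On-Demand,
# fault-tolerant batch tier on Spot.
optimized = monthly_cost([(4, RESERVED), (2, ON_DEMAND), (6, SPOT)])
naive = monthly_cost([(12, ON_DEMAND)])
savings = 1 - optimized / naive
```

Even with these made-up numbers, mixing purchasing options roughly halves the bill versus running everything On-Demand, which is the shape of the result the session walks through.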
APAC Principal Solutions Architect Johnathon Meichtry will run through the highlights of 2015, showcasing the biggest announcements and how customers are using these new features. This session will cover the entire breadth of the AWS platform and is a chance to get a high-level overview of all of the announcements, feature updates, and new services that AWS launched in 2015.
Pebble uses data science and analytics to improve its smartwatch products. Pebble's data team analyzes over 60 million records per day from the watches to measure user engagement, identify issues, and inform new product design. Their first problem was setting an engagement threshold using the accelerometer. Rapid testing of different thresholds against "backlight data" validated the optimal threshold. Pebble has since solved many problems using their analytics infrastructure at Treasure Data to query, explore, and gain insights from massive user data in real-time.
AWS re:Invent 2016: IoT Blueprints: Optimizing Supply for Smart Agriculture f... - Amazon Web Services
30% of global food produce is wasted in the supply chain: storage, movement, and delivery. By using AWS IoT-enabled sensors to manage the supply chain and big data to understand patterns, industrial companies can gain efficiencies in electricity and transportation.
AWS’ serverless architecture components such as S3, SQS, SNS, CloudWatch Logs, DynamoDB, Kinesis, and Lambda can be tightly constrained in their operation; however, it is still possible to use many of them to propagate payloads that may exploit vulnerabilities in consuming endpoints or user-generated code. This session explores mechanisms for enhancing the default security of these services, from applying permissions-tightening in IAM to integrating tools and techniques for inline and out-of-band payload analysis that are more typically applied to traditional server-based architectures.
BDA307 Real-time Streaming Applications on AWS, Patterns and Use Cases - Amazon Web Services
In this session, you will learn best practices for implementing simple to advanced real-time streaming data use cases on AWS. First, we’ll review decision points on near real-time versus real time scenarios. Next, we will take a look at streaming data architecture patterns that include Amazon Kinesis Analytics, Amazon Kinesis Firehose, Amazon Kinesis Streams, Spark Streaming on Amazon EMR, and other open source libraries. Finally, we will dive deep into the most common of these patterns and cover design and implementation considerations.
AWS re:Invent 2016: How Fulfillment by Amazon (FBA) and Scopely Improved Resu... - Amazon Web Services
We’ll share an overview of leveraging serverless architectures to support high performance data intensive applications. Fulfillment by Amazon (FBA) built the Seller Inventory Authority Platform (IAP) using Amazon DynamoDB Streams, AWS Lambda functions, Amazon Elasticsearch Service, and Amazon Redshift to improve results and reduce costs. Scopely will share how they used a flexible logging system built on Kinesis, Lambda, and Amazon Elasticsearch to provide high-fidelity reporting on hotkeys in Memcached and DynamoDB, and drastically reduce the incidence of hotkeys. Both of these customers are using managed services and serverless architecture to build scalable systems that can meet the projected business growth without a corresponding increase in operational costs.
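The hotkey-reporting idea from the Scopely example can be sketched independently of any AWS service: count key accesses over a window of the log stream and flag keys whose share of traffic exceeds a threshold. The function name and the 5% threshold below are illustrative assumptions, not Scopely's actual implementation:

```python
from collections import Counter

def find_hotkeys(accesses, threshold=0.05):
    """Return keys receiving more than `threshold` of all accesses.

    accesses: iterable of key names, e.g. drained from a
    Kinesis-backed access log as in the Scopely setup.
    """
    counts = Counter(accesses)
    total = sum(counts.values())
    return {
        key: n / total
        for key, n in counts.items()
        if n / total > threshold
    }

# One heavily skewed key among otherwise uniform traffic.
stream = ["user:42"] * 30 + [f"user:{i}" for i in range(70)]
hot = find_hotkeys(stream)
```

Once a hotkey is identified, typical mitigations are client-side caching of that key or sharding its value, which is why high-fidelity reporting matters.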
Data-driven companies need to make their data easily accessible to the people who analyze it. Many organizations have adopted Looker on AWS, which pairs the LookML modeling language with a centralized analytical database and a user-friendly interface, allowing employees to ask and answer their own questions and make informed business decisions.
Join our webinar to learn how our customer Casper, an online mattress retailer, made the switch from a transactional database to Looker's data analytics platform on Amazon Redshift. Looker on Amazon Redshift can help you greatly shorten your analytics lifecycle with a simplified infrastructure and rapid cloud scaling.
Join us to learn:
• How to utilize LookML to build reusable definitions and logic for your data
• Best practices for architecting a centralized analytical database
• How Casper leveraged Looker and Amazon Redshift to provide all their employees access to their data and metrics
Who should attend: Heads of Analytics, Heads of BI, Analytics Managers, BI Teams, Senior Analysts
Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
AWS re:Invent 2016: Zillow Group: Developing Classification and Recommendatio... - Amazon Web Services
Customers are adopting Apache Spark ‒ an open-source distributed processing framework ‒ on Amazon EMR for large-scale machine learning workloads, especially for applications that power customer segmentation and content recommendation. By leveraging Spark ML, a set of machine learning algorithms included with Spark, customers can quickly build and execute massively parallel machine learning jobs. Additionally, Spark applications can train models in streaming or batch contexts, and can access data from Amazon S3, Amazon Kinesis, Amazon Redshift, and other services. This session explains how to quickly and easily create scalable Spark clusters with Amazon EMR, build and share models using Apache Zeppelin and Jupyter notebooks, and use the Spark ML pipelines API to manage your training workflow. In addition, Jasjeet Thind, Senior Director of Data Science and Engineering at Zillow Group, will discuss his organization's development of personalization algorithms and platforms at scale using Spark on Amazon EMR.
This document discusses how cloud computing with Amazon Web Services (AWS) can help companies innovate faster by focusing resources on core business initiatives rather than infrastructure maintenance. It notes traditional IT models struggle to keep pace with disruption and security risks, while AWS provides agility, flexibility, and security to help companies deploy applications globally and at "startup speed". The document outlines how AWS computing services like EC2, S3, Lambda, DynamoDB, and Redshift allow businesses to reduce costs while gaining scalability and developer productivity.
Want to get ramped up on how to use Amazon's big data web services and launch your first big data application on AWS? Join us on our journey as we build a big data application in real-time using Amazon EMR, Amazon Redshift, Amazon Kinesis, Amazon DynamoDB, and Amazon S3. We review architecture design patterns for big data solutions on AWS, and give you access to a take-home lab so that you can rebuild and customize the application yourself.
In recent years, Docker containers have become a key component of modern application design. Increasingly, developers are breaking their applications apart into smaller components and distributing them across a pool of compute resources. Using Docker on your local development machine is simple, but running Docker applications at scale in production can be difficult. In this session, we will discuss the difficulties of running Docker in production and how Amazon EC2 Container Service (ECS) can be used to reduce the operational burdens. We will give an overview of the core architectural principles underlying Amazon ECS, and we will walk through a number of patterns used by our customers to run their microservices platforms, to run batch jobs, and for deployments and continuous integration. We will also demonstrate how to define multi-container applications and deploy and scale them seamlessly on a cluster with Amazon ECS.
AWS re:Invent 2016: Visualizing Big Data Insights with Amazon QuickSight (BDM... - Amazon Web Services
This document introduces Amazon QuickSight, a business analytics service from AWS. QuickSight allows users to easily connect to and analyze data from various AWS and third party sources. It provides fast, self-service analytics capabilities at 1/10th the cost of traditional BI solutions. QuickSight also enables collaboration, sharing of analyses and dashboards, and future integration with machine learning capabilities. The document demonstrates QuickSight through an example implementation at Hotelbeds Group to gain insights from their large and growing data sources on AWS.
ENT203 Monitoring and Autoscaling, a Match Made in Heaven - Amazon Web Services
Monitoring your infrastructure is an essential part of running mission critical applications. But what happens when you throw dynamic infrastructure and autoscaling into the mix? How do you define "normal" with servers that scale in and out of existence? In this session, we'll walk through best practices for monitoring an autoscaled application, as well as how to use intelligent alerting to enhance CloudWatch and trigger scaling events. The concepts in this session are not tool-specific, but demos will be based on Datadog. Attendees will learn production-tested lessons and leave with frameworks they can implement to more efficiently autoscale their applications, no matter which platforms and tools they use.
This session is brought to you by AWS Summit Chicago sponsor, Datadog.
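The "what is normal" question in the session above often comes down to smoothing: scale on a rolling average rather than an instantaneous spike. A minimal sketch of that decision logic follows; the window size and CPU thresholds are illustrative choices, not CloudWatch defaults:

```python
from collections import deque

class ScalingDecider:
    """Decide scale out/in from a rolling average of CPU samples."""

    def __init__(self, window=5, high=70.0, low=30.0):
        self.samples = deque(maxlen=window)  # keep only recent samples
        self.high, self.low = high, low

    def observe(self, cpu_percent):
        """Record a sample; return 'out', 'in', or 'hold'."""
        self.samples.append(cpu_percent)
        if len(self.samples) < self.samples.maxlen:
            return "hold"  # not enough history to judge "normal" yet
        avg = sum(self.samples) / len(self.samples)
        if avg > self.high:
            return "out"
        if avg < self.low:
            return "in"
        return "hold"

decider = ScalingDecider()
# A single 95% spike does not trigger scaling; sustained load does.
decisions = [decider.observe(c) for c in [40, 95, 40, 40, 40, 90, 90, 90, 90]]
```

In a real deployment the same role is played by a CloudWatch alarm evaluated over several consecutive periods, with the scaling action handled by an Auto Scaling policy.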
AWS Services Overview and Quarterly Update - April 2017 AWS Online Tech Talks - Amazon Web Services
• Overview of AWS New & Existing Services
• Advice for Getting Started
Join the “AWS Services Overview and Quarterly Update” webinar to take a fast-paced 45-minute tour through our broad range of new and existing services. We will also provide an update so you can review and catch up on the biggest updates from the past quarter. During the webinar, you will have the opportunity to propose questions for the live Q&A session following the presentation.
AWS re:Invent 2016: Wild Rydes Takes Off – The Dawn of a New Unicorn (SVR309) - Amazon Web Services
Wild Rydes (www.wildrydes.com) needs your help! With fresh funding from its seed investors, Wild Rydes is seeking to build the world’s greatest mobile/VR/AR unicorn transportation system. The scrappy startup needs a first-class webpage to begin marketing to new users and to begin its plans for global domination. Join us to help Wild Rydes build a website using a serverless architecture. You’ll build a scalable website using services like AWS Lambda, Amazon API Gateway, Amazon DynamoDB, and Amazon S3. Join this workshop to hop on the rocket ship!
To complete this workshop, you'll need:
• Your laptop
• An AWS account
• The AWS Command Line Interface
• Google Chrome
• git
• A text editor
This document discusses real-time data processing using Amazon Web Services. It describes how to use Amazon Kinesis for real-time data ingestion and processing and Amazon Elastic MapReduce (EMR) for batch processing. It provides examples of using EMR for batch processing large amounts of log data and for interactive querying of data stored in Amazon S3. It also discusses using Kinesis as a data broker to distribute streaming data to multiple applications and using Kinesis with EMR, Spark, and Storm for real-time analytics.
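The "Kinesis as a data broker" pattern hinges on each consuming application tracking its own position in the shared stream. A toy model of that fan-out behavior is below; this is not the real Kinesis API (in practice the Kinesis Client Library checkpoints per-application positions in DynamoDB), just the core idea:

```python
class MiniStream:
    """Toy model of Kinesis fan-out: one shared ordered log,
    where each consuming application keeps its own checkpoint."""

    def __init__(self):
        self.records = []          # the shared, ordered stream
        self.checkpoints = {}      # app name -> next sequence number

    def put(self, record):
        self.records.append(record)

    def read(self, app, limit=10):
        """Return up to `limit` unread records for `app` and advance
        its checkpoint; other apps' positions are unaffected."""
        pos = self.checkpoints.get(app, 0)
        batch = self.records[pos:pos + limit]
        self.checkpoints[app] = pos + len(batch)
        return batch

stream = MiniStream()
for i in range(5):
    stream.put({"event": i})

first = stream.read("emr_batch", limit=3)    # batch-style consumer
second = stream.read("spark_streaming")      # independent consumer
```

Because checkpoints are per-application, an EMR batch job, a Spark Streaming job, and a Storm topology can all read the same stream at their own pace, which is exactly the broker role described above.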
The document summarizes announcements from AWS re:Invent 2016, including:
- Newly announced services such as AWS OpsWorks for Chef Automate, EC2 Systems Manager, CodeBuild, X-Ray, Personal Health Dashboard, Shield, Pinpoint, Glue, Batch, and Step Functions.
- New features for Lambda including C# support, Lambda@Edge, and Step Functions integration.
- Previews for services like X-Ray, Shield Advanced, and Batch.
- Updates to services including CloudFormation, ECS, and improvements to the Well-Architected Framework.
This document summarizes key announcements from re:Invent 2016, AWS's annual user conference. The main themes included artificial intelligence, serverless computing, devops, data, and migration tools. Notable product announcements included AWS Batch for batch processing, Aurora for PostgreSQL, Athena for querying data lakes, and X-Ray for debugging distributed applications. The document also discusses AWS's strategy around machine learning and deep learning using MXNet as its primary framework.
AWS October Webinar Series - Using Spot Instances to Save up to 90% off Your ... - Amazon Web Services
Amazon EC2 allows you to bid for and run spare EC2 capacity, known as Spot Instances, in a dynamically priced market. On average, customers save 80% to 90% compared to On-Demand prices by using Spot Instances. Achieving these savings has historically required time and effort to find the best deals while managing compute capacity as supply and demand fluctuate.
In this webinar, we dive into best practices and new features that will help you realize immediate cost savings, maximize compute capacity within your budget, and maintain application availability and performance with less up-front or ongoing development effort. Attendees leave with practical knowledge of Spot bidding strategies, market trends, instance selection and benchmarking, and fault-tolerant architecture with examples taken from common Spot use cases such as web services, big data/analytics, media processing, and continuous integration workloads.
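One of the fault-tolerance patterns the webinar covers, diversifying across Spot capacity pools, can be sketched as a simple allocation: spread the target capacity across available pools (cheapest first) and rebalance when a pool is reclaimed. Pool names and prices below are hypothetical:

```python
def allocate(target, pools):
    """Spread `target` instances across available Spot pools.

    pools: {pool_name: current_spot_price}. An even spread limits
    the impact of any single pool being reclaimed at once.
    """
    names = sorted(pools, key=pools.get)  # cheapest pools first
    base, extra = divmod(target, len(names))
    # The first `extra` (cheapest) pools take one extra instance each.
    return {name: base + (1 if i < extra else 0)
            for i, name in enumerate(names)}

pools = {"c4.large-us-east-1a": 0.03,
         "c4.large-us-east-1b": 0.04,
         "m4.large-us-east-1a": 0.05}
plan = allocate(10, pools)

# If the cheapest pool is interrupted, reallocate the same target
# across the pools that remain.
del pools["c4.large-us-east-1a"]
replan = allocate(10, pools)
```

Spot Fleet performs this kind of allocation for you when given multiple launch specifications; the sketch just makes the diversification logic visible.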
Cost Effective Rendering in the Cloud with Spot Instances - Amazon Web Services
Usman Shakeel from Amazon Web Services explains how to use AWS Spot Instances to implement low-cost video rendering applications and workflows.
This presentation was delivered during the AWS Toronto Media and Entertainment Symposium
Pebble uses data science and analytics to improve its smartwatch products. Pebble's data team analyzes over 60 million records per day from the watches to measure user engagement, identify issues, and inform new product design. Their first problem was setting an engagement threshold using the accelerometer. Rapid testing of different thresholds against "backlight data" validated the optimal threshold. Pebble has since solved many problems using their analytics infrastructure at Treasure Data to query, explore, and gain insights from massive user data in real-time.
AWS re:Invent 2016: IoT Blueprints: Optimizing Supply for Smart Agriculture f...Amazon Web Services
30% of global food produce is wasted in the supply chain: storage, movement, and delivery. By using AWS IOT to enable sensors to manage the supply chain and big data to understand patterns, industrial companies can gain efficiencies in electricity and transportation.
AWS’ serverless architecture components such as S3, SQS, SNS, CloudWatch Logs, DynamoDB, Kinesis and Lambda can be tightly constrained in their operation, however it is still possible to use many of them to propagate payloads which may be used to exploit vulnerabilities in some consuming endpoints or user-generated code. This session explores mechanisms for enhancing the default security of these services, from applying permissions-tightening in IAM to integrating tools and techniques for inline and out-of-band payload analysis which are more typically applied to traditional server-based architectures.
BDA307 Real-time Streaming Applications on AWS, Patterns and Use CasesAmazon Web Services
In this session, you will learn best practices for implementing simple to advanced real-time streaming data use cases on AWS. First, we’ll review decision points on near real-time versus real time scenarios. Next, we will take a look at streaming data architecture patterns that include Amazon Kinesis Analytics, Amazon Kinesis Firehose, Amazon Kinesis Streams, Spark Streaming on Amazon EMR, and other open source libraries. Finally, we will dive deep into the most common of these patterns and cover design and implementation considerations.
AWS re:Invent 2016: How Fulfillment by Amazon (FBA) and Scopely Improved Resu...Amazon Web Services
We’ll share an overview of leveraging serverless architectures to support high performance data intensive applications. Fulfillment by Amazon (FBA) built the Seller Inventory Authority Platform (IAP) using Amazon DynamoDB Streams, AWS Lambda functions, Amazon Elasticsearch Service, and Amazon Redshift to improve results and reduce costs. Scopely will share how they used a flexible logging system built on Kinesis, Lambda, and Amazon Elasticsearch to provide high-fidelity reporting on hotkeys in Memcached and DynamoDB, and drastically reduce the incidence of hotkeys. Both of these customers are using managed services and serverless architecture to build scalable systems that can meet the projected business growth without a corresponding increase in operational costs.
Data-driven companies have a need to make their data easily accessible to those who analyze it. Many organizations have adopted the Looker application, LookML on AWS, a centralized analytical database with a user-friendly interface that allows employees to ask and answer their own questions to make informed business decisions.
Join our webinar to learn how our customer, Casper, an online mattress retailer, made the switch from a transactional database to Looker’s data analytics program on Amazon Redshift. Looker on Amazon Redshift can help you greatly reduce your analytics lifecycle with a simplified infrastructure and rapid cloud scaling.
Join us to learn:
• How to utilize LookML to build reusable definitions and logic for your data
• Best practices for architecting a centralized analytical database
• How Casper leveraged Looker and Amazon Redshift to provide all their employees access to their data and metrics
Who should attend: Heads of Analytics, Heads of BI, Analytics Managers, BI Teams, Senior Analysts
Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
AWS re:Invent 2016: Zillow Group: Developing Classification and Recommendatio...Amazon Web Services
Customers are adopting Apache Spark ‒ an open-source distributed processing framework ‒ on Amazon EMR for large-scale machine learning workloads, especially for applications that power customer segmentation and content recommendation. By leveraging Spark ML, a set of machine learning algorithms included with Spark, customers can quickly build and execute massively parallel machine learning jobs. Additionally, Spark applications can train models in streaming or batch contexts, and can access data from Amazon S3, Amazon Kinesis, Amazon Redshift, and other services. This session explains how to quickly and easily create scalable Spark clusters with Amazon EMR, build and share models using Apache Zeppelin and Jupyter notebooks, and use the Spark ML pipelines API to manage your training workflow. In addition, Jasjeet Thind, Senior Director of Data Science and Engineering at Zillow Group, will discuss his organization's development of personalization algorithms and platforms at scale using Spark on Amazon EMR.
This document discusses how cloud computing with Amazon Web Services (AWS) can help companies innovate faster by focusing resources on core business initiatives rather than infrastructure maintenance. It notes traditional IT models struggle to keep pace with disruption and security risks, while AWS provides agility, flexibility, and security to help companies deploy applications globally and at "startup speed". The document outlines how AWS computing services like EC2, S3, Lambda, DynamoDB, and Redshift allow businesses to reduce costs while gaining scalability and developer productivity.
Want to get ramped up on how to use Amazon's big data web services and launch your first big data application on AWS? Join us on our journey as we build a big data application in real-time using Amazon EMR, Amazon Redshift, Amazon Kinesis, Amazon DynamoDB, and Amazon S3. We review architecture design patterns for big data solutions on AWS, and give you access to a take-home lab so that you can rebuild and customize the application yourself.
In recent years, Docker containers have become a key component of modern application design. Increasingly, developers are breaking their applications apart into smaller components and distributing them across a pool of compute resources. Using Docker on your local development machine is simple, but running Docker applications at scale in production can be difficult. In this session, we will discuss the difficulties of running Docker in production and how Amazon EC2 Container Service (ECS) can be used to reduce the operational burdens. We will give an overview of the core architectural principles underlying Amazon ECS., and we will walk through a number of patterns used by our customers to run their microservices platforms, to run batch jobs, and for deployments and continuous integration. We will also demonstrate how to define multi-container applications, deploy and scale them seamlessly on a cluster with Amazon ECS.
AWS re:Invent 2016: Visualizing Big Data Insights with Amazon QuickSight (BDM...Amazon Web Services
This document introduces Amazon QuickSight, a business analytics service from AWS. QuickSight allows users to easily connect to and analyze data from various AWS and third party sources. It provides fast, self-service analytics capabilities at 1/10th the cost of traditional BI solutions. QuickSight also enables collaboration, sharing of analyses and dashboards, and future integration with machine learning capabilities. The document demonstrates QuickSight through an example implementation at Hotelbeds Group to gain insights from their large and growing data sources on AWS.
ENT203 Monitoring and Autoscaling, a Match Made in HeavenAmazon Web Services
Monitoring your infrastructure is an essential part of running mission critical applications. But what happens when you throw dynamic infrastructure and autoscaling into the mix? How do you define "normal" with servers that scale in and out of existence? In this session, we'll walk through best practices for monitoring an autoscaled application, as well as how to use intelligent alerting to enhance CloudWatch and trigger scaling events. The concepts in this session are not tool-specific, but demos will be based on Datadog. Attendees will learn production-tested lessons and leave with frameworks they can implement to more efficiently autoscale their applications, no matter which platforms and tools they use.
This session is brought to you by AWS Summit Chicago sponsor, Datadog.
AWS Services Overview and Quarterly Update - April 2017 AWS Online Tech TalksAmazon Web Services
• Overview of AWS New & Existing Services
• Advice for Getting Started
Join the “AWS Services Overview and Quarterly Update” webinar to take a fast-paced 45-minute tour through our broad range of new and existing services. We will also provide an update so you can review and catch up on the biggest updates from the past quarter. During the webinar, you will have the opportunity to propose questions for the live Q&A session following the presentation.
AWS re:Invent 2016: Wild Rydes Takes Off – The Dawn of a New Unicorn (SVR309)Amazon Web Services
Wild Rydes (www.wildrydes.com) needs your help! With fresh funding from its seed investors, Wild Rydes is seeking to build the world’s greatest mobile/VR/AR unicorn transportation system. The scrappy startup needs a first-class webpage to begin marketing to new users and to begin its plans for global domination. Join us to help Wild Rydes build a website using a serverless architecture. You’ll build a scalable website using services like AWS Lambda, Amazon API Gateway, Amazon DynamoDB, and Amazon S3. Join this workshop to hop on the rocket ship!
To complete this workshop, you'll need:
Your laptop
AWS Account
AWS Command Line Interface
Google Chrome
git
Text Editor
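The backend pieces of a site like this are typically small Lambda functions behind API Gateway. A hypothetical ride-request handler in Python (the event/response shape follows the API Gateway proxy integration; the field names such as `RideId` and `PickupLocation` are illustrative, and a real Wild Rydes-style app would persist the record to DynamoDB):

```python
import json
import uuid

def handler(event, context):
    """Hypothetical Lambda handler for an API Gateway proxy integration.

    API Gateway passes the HTTP request as `event`; the handler must
    return a dict with statusCode, headers, and a string body.
    """
    body = json.loads(event.get("body") or "{}")
    ride = {
        "RideId": str(uuid.uuid4()),          # illustrative record shape
        "PickupLocation": body.get("PickupLocation", "unknown"),
    }
    # In a real app: write `ride` to a DynamoDB table here.
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(ride),
    }

resp = handler({"body": json.dumps({"PickupLocation": "Downtown"})}, None)
assert resp["statusCode"] == 201
```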
This document discusses real-time data processing using Amazon Web Services. It describes how to use Amazon Kinesis for real-time data ingestion and processing and Amazon Elastic MapReduce (EMR) for batch processing. It provides examples of using EMR for batch processing large amounts of log data and for interactive querying of data stored in Amazon S3. It also discusses using Kinesis as a data broker to distribute streaming data to multiple applications and using Kinesis with EMR, Spark, and Storm for real-time analytics.
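The fan-out pattern described above usually ends in a small per-shard aggregation step. A self-contained sketch of a tumbling-window count (plain Python standing in for a Kinesis Client Library record processor or a Spark Streaming job; the record fields are illustrative):

```python
from collections import defaultdict

def tumbling_window_counts(records, window_seconds=60):
    """Count events per (window, key), the way a stream processor would.

    Each record is a dict with an epoch `timestamp` and a partition
    `key`; real Kinesis records would arrive per shard via the KCL or
    Spark Streaming rather than as a Python list.
    """
    counts = defaultdict(int)
    for rec in records:
        # Align the timestamp down to the start of its window.
        window_start = rec["timestamp"] - rec["timestamp"] % window_seconds
        counts[(window_start, rec["key"])] += 1
    return dict(counts)

records = [
    {"timestamp": 100, "key": "page_view"},
    {"timestamp": 110, "key": "page_view"},
    {"timestamp": 170, "key": "click"},
]
counts = tumbling_window_counts(records, window_seconds=60)
assert counts[(60, "page_view")] == 2
```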
The document summarizes announcements from AWS re:Invent 2016, including:
- New generally available services such as AWS OpsWorks for Chef Automate, EC2 Systems Manager, CodeBuild, X-Ray, Personal Health Dashboard, Shield, Pinpoint, Glue, Batch, and Step Functions.
- New features for Lambda including C# support, Lambda@Edge, and Step Functions integration.
- Previews for services like X-Ray, Shield Advanced, and Batch.
- Updates to services including CloudFormation, ECS, and improvements to the Well-Architected Framework.
This document summarizes key announcements from re:Invent 2016, AWS's annual user conference. The main themes included artificial intelligence, serverless computing, devops, data, and migration tools. Notable product announcements included AWS Batch for batch processing, Aurora for PostgreSQL, Athena for querying data lakes, and X-Ray for debugging distributed applications. The document also discusses AWS's strategy around machine learning and deep learning using MXNet as its primary framework.
AWS October Webinar Series - Using Spot Instances to Save up to 90% off Your ...Amazon Web Services
Amazon EC2 allows you to bid for and run spare EC2 capacity, known as Spot instances, in a dynamically priced market. On average, customers save 80% to 90% compared to On Demand prices by using Spot instances. Achieving these savings has historically required time and effort to find the best deals while managing compute capacity as supply and demand fluctuate.
In this webinar, we dive into best practices and new features that will help you realize immediate cost savings, maximize compute capacity within your budget, and maintain application availability and performance with less up-front or ongoing development effort. Attendees leave with practical knowledge of Spot bidding strategies, market trends, instance selection and benchmarking, and fault-tolerant architecture with examples taken from common Spot use cases such as web services, big data/analytics, media processing, and continuous integration workloads.
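The savings figures quoted here come from comparing the time-weighted Spot price actually paid against the On-Demand rate. A small illustrative calculator (the prices below are made up, not real AWS rates):

```python
def spot_savings_pct(spot_prices_per_hour, on_demand_price):
    """Percentage saved versus On-Demand, given the hourly Spot prices
    paid over the run. All prices are illustrative.
    """
    hours = len(spot_prices_per_hour)
    spot_cost = sum(spot_prices_per_hour)
    on_demand_cost = on_demand_price * hours
    return 100.0 * (1 - spot_cost / on_demand_cost)

# Ten hours at a fluctuating Spot price vs. a $0.10/hr On-Demand rate:
spot = [0.013, 0.011, 0.012, 0.015, 0.011,
        0.010, 0.012, 0.013, 0.011, 0.012]
pct = spot_savings_pct(spot, on_demand_price=0.10)
assert 85 < pct < 90  # in the "80% to 90%" band the webinar cites
```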
Cost Effective Rendering in the Cloud with Spot InstancesAmazon Web Services
Usman Shakeel from Amazon Web Services, explains to us how to use AWS Spot Instances to implement low cost video rendering applications and workflows.
This presentation was delivered during the AWS Toronto Media and Entertainment Symposium
AWS re:Invent 2016: Save up to 90% and Run Production Workloads on Spot - Fea...Amazon Web Services
Amazon EC2 allows you to bid for and run spare EC2 capacity, known as Spot instances, in a dynamically priced market. On average, customers save 80% to 90% compared to On Demand prices by using Spot instances. Achieving these savings has historically required time and effort to find the best deals while managing compute capacity as supply and demand fluctuate.
In this session, we dive into how customers who have designed scalable, cloud friendly application architectures can leverage new Spot features to realize immediate cost savings while maintaining availability. Attendees will leave with practical knowledge of how, via well architected applications, they can run production services on the Spot instances just like IFTTT and Mapbox.
Amazon EC2 Spot instances provide acceleration, scale, and deep cost savings to run time-critical, hyper-scale workloads for rapid data analysis. In this session, you will learn best practices on how to scale big data workloads as well as process, store, and analyze big data securely and cost effectively. Lunch will be provided.
The document provides an overview of Amazon EC2 Spot Instances. It discusses what Spot Instances are, the simple rules that govern them, best practices for using them including fault tolerance and diversification, tools for managing Spot Instances like Spot Fleet and the Spot console, and examples of how customers like Guttman Lab and Yelp have used Spot Instances.
Getting Started with EC2 Spot - November 2016 Webinar SeriesAmazon Web Services
This document discusses how to save up to 90% on EC2 costs by using Spot Instances. It provides an overview of AWS EC2 pricing models including On-Demand, Reserved, and Spot Instances. It then focuses on best practices for using Spot Instances, such as using the Spot Bid Advisor, diversifying Spot Fleets across instance types and Availability Zones, and leveraging the two minute warning for Spot termination. Examples are given of customers saving 75-87% on their EC2 costs by using Spot Instances for batch processing, continuous integration, and real-time ad delivery workloads.
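Diversification, as described here, means spreading target capacity across several instance-type/Availability-Zone pools so a single Spot market interruption removes only a fraction of the fleet. Spot Fleet performs this allocation for you; the standalone function below is only a sketch of the even-split arithmetic:

```python
def diversify_capacity(target, pools):
    """Split `target` capacity units evenly across Spot capacity pools.

    `pools` is a list of (instance_type, availability_zone) pairs. Any
    remainder goes to the first pools so the total always matches.
    """
    base, extra = divmod(target, len(pools))
    return {pool: base + (1 if i < extra else 0)
            for i, pool in enumerate(pools)}

pools = [("c4.large", "us-east-1a"), ("c4.large", "us-east-1b"),
         ("m4.large", "us-east-1a"), ("m4.large", "us-east-1b")]
plan = diversify_capacity(10, pools)
assert sum(plan.values()) == 10
# Losing any one pool to a price spike costs at most 3 of 10 units:
assert max(plan.values()) == 3
```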
(CMP311) This One Weird API Request Will Save You ThousandsAmazon Web Services
Amazon EC2 allows you to bid for and run spare EC2 capacity, known as Spot instances, in a dynamically priced market. On average, customers save 80% to 90% compared to On Demand prices by using Spot instances. Achieving these savings has historically required time and effort to find the best deals while managing compute capacity as supply and demand fluctuate.
In this session, we dive into best practices and new features that will help you realize immediate cost savings, maximize compute capacity within your budget, and maintain application availability and performance with less up-front or ongoing development effort. Attendees leave with practical knowledge of Spot bidding strategies, market trends, instance selection and benchmarking, and fault-tolerant architecture with examples taken from common Spot use cases such as web services, big data/analytics, media processing, and continuous integration workloads.
Coding Apps in the Cloud to reduce costs up to 90% - September 2016 Webinar S...Amazon Web Services
Think differently when building apps in the cloud: heterogeneous environments, microservices, and disposable nodes are key components of robust deployments and automated server management.
In this session, we will investigate patterns for engineering scalable stateless systems, identify anti-patterns to avoid, and show how you can save 90% on your overall compute costs. When engineered for the cloud, you will never worry about the uptime of a node again.
We will dive into best practices and new features that will help you realize immediate cost savings, maximize compute capacity within your budget, and maintain application availability and performance with less up-front and ongoing development effort. Attendees leave with practical knowledge of Spot bidding strategies, Spot market price trends, instance selection, and fault-tolerant architectures for web services.
Learning Objectives:
• Learn more about EC2 Spot and how you can use it to save 90% off your EC2 bill
• Learn how to effectively use Spot in production workloads by incorporating new Auto Scaling features
• Learn best practices on bidding and choosing Spot instances
Who Should Attend:
• Customers who are familiar with Amazon EC2, or customers who want to understand how they can use Spot to run applications in the cloud.
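One concrete way to make a node "disposable", as the session above advocates, is to checkpoint work to durable storage so an interrupted instance can be replaced mid-job. A minimal simulation (an in-memory dict stands in for S3 or DynamoDB; in production, the two-minute Spot interruption warning would trigger the early exit):

```python
def process_batch(items, checkpoint, interrupt_after=None):
    """Process `items` idempotently, recording progress in `checkpoint`.

    `checkpoint` is a dict standing in for durable storage (S3 or
    DynamoDB). If `interrupt_after` is set, stop early to simulate a
    Spot interruption; a replacement worker simply calls this again
    and skips anything already done.
    """
    done = 0
    for item in items:
        if item in checkpoint:
            continue  # already processed by a previous worker
        checkpoint[item] = f"result-of-{item}"
        done += 1
        if interrupt_after is not None and done >= interrupt_after:
            break  # simulated two-minute-warning shutdown
    return done

checkpoint = {}
process_batch(["a", "b", "c", "d"], checkpoint, interrupt_after=2)  # worker dies
process_batch(["a", "b", "c", "d"], checkpoint)  # replacement finishes the rest
assert sorted(checkpoint) == ["a", "b", "c", "d"]
```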
Risk Management and Particle Accelerators: Innovating with New Compute Platfo...Amazon Web Services
What does risk modeling and analytics in financial services have in common with large-scale computing in high energy physics? Come to this session to hear how financial services customers like Aon are taking advantage of new approaches like predictive analytics and AI/deep learning on AWS to perform risk modeling, and how Brookhaven National Laboratory is using tens of thousands of cores for large-scale grid computing for Monte Carlo simulations in high energy physics. In addition, we will also showcase how the CSIRO eHealth team in Australia is innovating with serverless architectures using AWS Lambda for personalized medicine and genomics.
Speakers: Adrian White, Sr SciCo Technical Manager, Amazon Web Services
Top 5 Ways to Optimize for Cost Efficiency with the CloudAmazon Web Services
The document provides tips and strategies for optimizing costs when using cloud computing services like AWS. It discusses turning off unused instances, using auto scaling to align resources with demand, taking advantage of reserved instances for discounts, leveraging spot instances for significant savings, using different Amazon S3 storage classes, optimizing DynamoDB capacity units, buffering requests with SQS, and offloading architecture to services like CloudFront and ElastiCache. It also shares examples and case studies from customers like Pfizer, Zumba, and Airbnb that achieved cost savings through these approaches.
How to spend up to 90% less with containers and Spot instances Amazon Web Services
Container adoption keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can all take advantage of Spot instances, yielding an average saving of 70% compared to On-Demand instances. In this session we will explore what characterizes Spot instances and how easily they can be used on AWS. We will also learn how Spreaker uses Spot instances to run a variety of production applications at a fraction of the on-demand cost!
This document provides a deep dive on Amazon EC2 instances. It discusses how EC2 instances deliver performance through CPU, memory, and I/O capabilities while providing flexibility. It reviews the capabilities of specific instance types like C4, T2, I2, and the new X1 instances. It also discusses features like auto recovery, lifecycle hooks, and how to leverage other AWS services to optimize performance. The document contains charts showing the history and attributes of different EC2 instance types.
Day 3 - Maintaining Performance & Availability While Lowering Costs with AWSAmazon Web Services
AWS provides you several pricing options that can help you significantly reduce your overall IT cost, including On-Demand Instances, Spot Instances, and Reserved Instances. This session covers high-level architectures and when to use and not to use each of the pricing models for components of those architectures. We walk through several customer examples to illustrate when to use each pricing option. Additionally, we walk through tools that may be useful to determine when to use each pricing model. This session is aimed at technically savvy managers and engineers who need to reduce their cloud spending.
Reasons to attend:
- Learn about Reserved Instances, On-Demand Instances and Spot Instances.
- Discover ways of running more for less in Amazon EC2.
- If you are already running a workload in AWS, attend this webinar to learn how to run the same workload at reduced costs.
This document discusses how AWS services can help startups and developers achieve profitability. It provides an example of a company that was able to reduce costs and improve margins by 54% through optimizing its architecture on AWS. Key strategies discussed include leveraging reserved instances, spot pricing, cost-aware architecting techniques like caching with S3 and CloudFront, database optimizations, and rapid prototyping tools to reduce test/dev costs. The document emphasizes starting with understanding usage patterns, doing an apples-to-apples comparison of total costs, and continuously optimizing resources through pricing models and architectural improvements.
AWS Cloud Kata | Manila - Getting to Profitability on AWSAmazon Web Services
The document discusses how Lenddo, a financial technology company, has used AWS to scale its operations in a cost-effective manner. It provides details on:
1) How Lenddo started in 2011 in the Philippines and has since expanded to other countries, processing over 50k loan applications for 400k members.
2) How Lenddo's usage of AWS grew significantly from 2011 to 2013 as the company expanded.
3) The various AWS services Lenddo utilizes, including EC2, S3, DynamoDB, RDS, and others, to build its infrastructure in a flexible and scalable way.
4) How using AWS has helped Lenddo focus on coding and
AWS APAC Webinar Week - Maintaining Performance & Availability While Lowering...Amazon Web Services
AWS provides you several pricing options that can help you significantly reduce your overall IT cost, including On-Demand Instances, Spot Instances, and Reserved Instances. This session covers high-level architectures and when to use and not to use each of the pricing models for components of those architectures. We walk through several customer examples to illustrate when to use each pricing option. Additionally, we walk through tools that may be useful to determine when to use each pricing model. This session is aimed at technically savvy managers and engineers who need to reduce their cloud spending.
More and more, the scalable on-demand infrastructure provided by AWS is being used by researchers, scientists, and engineers in life sciences, finance, and engineering to solve bigger problems, answer complex questions, and run larger simulations. In this session we start by talking about the supercomputing-class performance and high-performance storage now at scientists' and engineers' fingertips. We will go over examples of how startups are innovating and large enterprises are extending their HPC environments. Finally, we walk through some of the common questions that come up as organizations start leveraging AWS for their high performance computing needs.
Serverless on AWS : Understanding the hard parts at Serverless Meetup Dusseld...Vadym Kazulkin
We will look into challenges such as "cold start" with Lambda or "provisioned throughput" with DynamoDB and show you which strategies and options exist. We will also address topics like tracing of Lambda functions and implementation of aggregation logic, scaling, and the capacity mode options (reserved, provisioned, and on-demand) for DynamoDB. Finally, we'll have a look at the first serverless relational database, Aurora Serverless, and its new Data API.
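For the provisioned capacity mode mentioned above, DynamoDB's published sizing rules can be computed directly: one read capacity unit covers one strongly consistent read per second of an item up to 4 KB, and one write capacity unit covers one write per second of an item up to 1 KB. A sketch of that arithmetic:

```python
import math

def provisioned_capacity(item_size_kb, reads_per_sec, writes_per_sec):
    """Estimate RCUs/WCUs for DynamoDB provisioned mode.

    One RCU = 1 strongly consistent read/sec of an item up to 4 KB;
    one WCU = 1 write/sec of an item up to 1 KB. (Eventually
    consistent reads need half the RCUs.)
    """
    rcu = math.ceil(item_size_kb / 4) * reads_per_sec
    wcu = math.ceil(item_size_kb / 1) * writes_per_sec
    return rcu, wcu

# A 6 KB item read 100x/sec (strongly consistent) and written 10x/sec:
rcu, wcu = provisioned_capacity(6, 100, 10)
assert (rcu, wcu) == (200, 60)
```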
Similar to Optimize Content Processing in the Cloud with GPU and Spot Instances (20)
How to build forecasting services using ML and deep learn...Amazon Web Services
Forecasting is an important process for a great many companies and is used in many contexts to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a time component and then apply an algorithm that, based on the type of data analyzed, produces an accurate forecast.
Big Data for Startups: how to create Big Data applications in Server...Amazon Web Services
The variety and volume of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
Yet managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment only established companies can afford. But the elasticity of the cloud, and serverless services in particular, lets us break through these limits.
Let's see, then, how Big Data applications can be developed quickly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we present the main features of the service and how to deploy your application in a few steps.
Twenty years ago Amazon went through a radical transformation aimed at increasing its pace of innovation. Over this period we learned how changing our approach to application development let us greatly increase agility and release velocity and, ultimately, allowed us to build more reliable and scalable applications. In this session we explain how we define modern applications and how building modern apps affects not only application architecture but also organizational structure, development release pipelines, and even the operating model. We also describe common approaches to modernization, including the approach used by Amazon.com itself.
In recent months, many customers have been asking us the question – how to monetise Open APIs, simplify Fintech integrations and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to Open Finance marketplace presentation on October 20th.
Event Agenda :
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's market offering unique with Machine Lea...Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides services that are ready to use and, at the same time, lets you customize and build the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to choose among the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of...Amazon Web Services
With the traditional approach to IT, adopting DevOps techniques was difficult for many years: they often involved manual activities, occasionally causing application downtime and interrupting users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach at low cost for any kind of workload, guaranteeing greater system reliability and delivering significant improvements in business continuity.
AWS offers AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances using Chef and Puppet workloads.
Learn how to use AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows WorkloadsAmazon Web Services
Want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis powered by artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore what AWS services make possible when applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event next Wednesday, October 14, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a wide range of AWS services, making full use of the AWS cloud while protecting existing VMware investments.
Many organizations take advantage of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can be introduced when moving applications out of on-premises data centers.
Build your first serverless ledger-based app with QLDB and NodeJSAmazon Web Services
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the flow of their products through the supply chain.
Underpinning these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but are complex and costly tools to operate.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservice architectures and rich mobile and web applications, APIs matter more than ever for giving end users a great user experience. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dig into several scenarios, understanding how AppSync can help solve these use cases by building modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle databases and VMware Cloud™ on AWS: debunking the mythsAmazon Web Services
Many organizations take advantage of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads while accelerating the transformation to the cloud; they dive into the architecture and show how to make full use of VMware Cloud™ on AWS.
1) The document discusses building a minimum viable product (MVP) using Amazon Web Services (AWS).
2) It provides an example of an MVP for an omni-channel messenger platform, built starting in 2017, that connects ecommerce stores to customers via web chat, Facebook Messenger, WhatsApp, and other channels.
3) The founder discusses how they started with an MVP in 2017 with 200 ecommerce stores in Hong Kong and Taiwan, and have since expanded to over 5000 clients across Southeast Asia using AWS for scaling.
This document discusses pitch decks and fundraising materials. It explains that venture capitalists will typically spend only 3 minutes and 44 seconds reviewing a pitch deck. Therefore, the deck needs to tell a compelling story to grab their attention. It also provides tips on tailoring different types of decks for different purposes, such as creating a concise 1-2 page teaser, a presentation deck for pitching in-person, and a more detailed read-only or fundraising deck. The document stresses the importance of including key information like the problem, solution, product, traction, market size, plans, team, and ask.
This document discusses building serverless web applications using AWS services like API Gateway, Lambda, DynamoDB, S3 and Amplify. It provides an overview of each service and how they can work together to create a scalable, secure and cost-effective serverless application stack without having to manage servers or infrastructure. Key services covered include API Gateway for hosting APIs, Lambda for backend logic, DynamoDB for database needs, S3 for static content, and Amplify for frontend hosting and continuous deployment.
This document provides tips for fundraising from startup founders Roland Yau and Sze Lok Chan. It discusses generating competition to create urgency for investors, fundraising in parallel rather than sequentially, having a clear fundraising narrative focused on what you do and why it's compelling, and prioritizing relationships with people over firms. It also notes how the pandemic has changed fundraising, with examples of deals done virtually during this time. The tips emphasize being fully prepared before fundraising and cultivating connections with investors in advance.
AWS_HK_StartupDay_Building Interactive websites while automating for efficien...Amazon Web Services
This document discusses Amazon's machine learning services for building conversational interfaces and extracting insights from unstructured text and audio. It describes Amazon Lex for creating chatbots, Amazon Comprehend for natural language processing tasks like entity extraction and sentiment analysis, and how they can be used together for applications like intelligent call centers and content analysis. Pre-trained APIs simplify adding machine learning to apps without requiring ML expertise.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we present the service's main features, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
During the hands-on labs, AWS experts show you which tools help develop serverless applications locally and in the AWS cloud, and help you plan the next steps to start using this technology in your company.
Mastering the Concepts Tested in the Databricks Certified Data Engineer Assoc...SkillCertProExams
• For a full set of 760+ questions, go to
https://skillcertpro.com/product/databricks-certified-data-engineer-associate-exam-questions/
• SkillCertPro offers detailed explanations to each question which helps to understand the concepts better.
• It is recommended to score above 85% in SkillCertPro exams before attempting a real exam.
• SkillCertPro updates exam questions every 2 weeks.
• You will get lifetime access and lifetime free updates.
• SkillCertPro offers a 100% first-attempt pass guarantee.
Carrer goals.pptx and their importance in real lifeartemacademy2
Optimize Content Processing in the Cloud with GPU and Spot Instances
1. Optimize Content Processing in the Cloud
with GPU and Spot Instances
Chad Schmutzer | Solutions Architect – EC2 Spot
Amazon Web Services
2. What are we going to do today?
… build a transcoding pipeline with GPUs
… learn about EC2 Spot
… while saving up to 90% on your EC2 bill
… using AWS CloudFormation, in about 10 minutes
3. AWS EC2 Consumption Models
On-Demand: pay for compute capacity by the hour with no long-term commitments. For spiky workloads, or to define needs.
Reserved: make a low, one-time payment and receive a significant discount on the hourly charge. For committed utilization.
Spot: bid for unused capacity, charged at a Spot price which fluctuates based on supply and demand. For time-insensitive or transient workloads.
4. Spare capacity at scale
AWS has more than a million active customers in 190 countries.
Amazon EC2 instance usage has increased 93% YoY, comparing Q4 2014 and Q4 2013, not including Amazon use.
7. C3 family Spot prices by Availability Zone (On-Demand shown for comparison):

Size  1a     1b     1c     On-Demand
8XL   $0.50  $0.27  $0.29  $1.76
4XL   $0.21  $0.30  $0.16  $0.88
2XL   $0.08  $0.07  $0.08  $0.44
XL    $0.04  $0.05  $0.04  $0.22
L     $0.01  $0.01  $0.04  $0.11
Show me the markets!
Each instance family
Each instance size
Each Availability Zone
In every region
Is a separate Spot Market
9. Amazon EC2 Spot – in the wild
1) We make this easy using the Spot bid advisor.
2) With deliberate pool selection and bidding, you will keep your Spot instance as long as you need to.
3) And with new features like the diversified Spot fleet, we do the heavy lifting for you…
13. Why use Spot – customer examples
39 years of drug research re-processed, using over 80,000 cores, in 9 hours for $4,232
- Approximately 87,000 compute cores at peak
- Estimated 39 years of computational chemistry performed in 9 hours
- Three candidate compounds successfully identified
14. “By using AWS Spot instances, we've been able to save 75% a month simply by changing four lines of code. It makes perfect sense for saving money when you're running continuous integration workloads or pipeline processing.” - Matthew Leventi, Lead Engineer, Lyft
Why use Spot – customer examples
15. The $9 Billion Experiment
Why use Spot – customer examples
16. Why use Spot – customer examples
Scaling up as many as 1000 Spot instances a day to handle real time ad delivery
Petabyte-scale data pipelines with Docker, Luigi and Elastic Spot Instances
17. A large scale POC for animation rendering on AWS:
• Cloud Rendering at Walt Disney Animation Studios (available on SlideShare)
• Automated environment leveraging Spot Fleet
• Launched 40K cores in 20 min at less than $0.02 per core-hour
Why use Spot – customer examples
19. Spot fleet helps you
Launch Thousands of Spot Instances with one RequestSpotFleet call.
Get Best Price: find the lowest priced horsepower that works for you.
or
Get Diversified Resources: diversify your fleet, grow your availability.
and
Apply Custom Weighting: create your own capacity unit based on your application needs.
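A diversified, weighted request along these lines can be sketched with the AWS CLI. This is illustrative only: the AMI ID, IAM role ARN, prices, capacity and weights below are placeholders rather than values from the talk, and the JSON follows the RequestSpotFleet shape of this era.

```shell
# Write an illustrative Spot fleet request config (all IDs and prices are placeholders)
cat > fleet-config.json <<'EOF'
{
  "SpotPrice": "0.10",
  "TargetCapacity": 16,
  "AllocationStrategy": "diversified",
  "IamFleetRole": "arn:aws:iam::123456789012:role/my-spot-fleet-role",
  "LaunchSpecifications": [
    { "ImageId": "ami-12345678", "InstanceType": "c3.large", "WeightedCapacity": 2 },
    { "ImageId": "ami-12345678", "InstanceType": "m3.large", "WeightedCapacity": 2 },
    { "ImageId": "ami-12345678", "InstanceType": "r3.large", "WeightedCapacity": 2 }
  ]
}
EOF
# The actual call would then be a single API request (printed here, not executed):
echo aws ec2 request-spot-fleet --spot-fleet-request-config file://fleet-config.json
```

With "diversified" allocation the fleet spreads capacity across the listed pools instead of chasing only the cheapest one; the weights define your own capacity unit (cores, memory, whatever matters to your application).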
20. Diversification with EC2 Spot fleet
Multiple EC2 Spot instance types selected
Multiple Availability Zones selected
Pick instances with similar performance characteristics, e.g. c3.large, m3.large, m4.large, r3.large, c4.large.
21. Results - Grid
Requested 1000 vCores over 30 days
Minimum 960 vCores
Mode 1024 vCores
Average 1012 vCores
Average price of $0.012 per vCore
Savings of over 80%
22. Walt Disney Animation Studios
Core Count
./aws_spot_fleet_request -p reinvent --cpu 8 --ram 64 -m 4.7 -c 1500
25. EC2 Spot Console
An easy-to-use interface that lets you launch spare EC2 instances in seconds.
Helps you select and bid on the EC2 instances that meet your application's requirements.
A simple dashboard lets you modify and manage your application's compute capacity.
26. EC2 Spot block
Using a single additional parameter, run continuously for up to 6 hours and save up to 50% off On-Demand pricing.
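As a sketch of how small that change is: the snippet below only assembles and prints the request command rather than calling AWS, and the AMI, price and launch-spec values are placeholders. The one extra parameter is `--block-duration-minutes`, which at the time accepted multiples of 60 up to 360 (6 hours).

```shell
# Spot blocks: the same Spot request, plus one extra parameter for a fixed duration.
# Nothing is executed against AWS here; we just build and print the command.
cmd="aws ec2 request-spot-instances \
  --spot-price 0.25 \
  --block-duration-minutes 360 \
  --instance-count 1 \
  --launch-specification file://launch-spec.json"
echo "$cmd"
```

Everything else about the request is unchanged, which is why the slide calls it "a single additional parameter".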
27. What’s in 6 hours?
~21% of instances live less than 1 hour
~35% live less than 2 hours
~40% live less than 3 hours
In total, roughly 50% of all instances live less than 6 hours
28. Capitalizing on the two minute warning
When the Spot price exceeds your bid price, the instance will receive a two-minute warning.
Check for the two-minute Spot instance termination notification every 5 seconds, using a script invoked at instance launch.
29. Sample script – two minutes left!
1) Check for the 2 minute warning
2) If YES, detach instance from ELB
3) OTHERWISE, do nothing
4) Sleep for 5 seconds

#!/bin/sh
# Poll instance metadata every 5 seconds; the termination-time endpoint
# only returns a timestamp (…T…Z) once the two-minute warning is active.
while true; do
  if curl -s http://169.254.169.254/latest/meta-data/spot/termination-time | grep -q '.*T.*Z'; then
    instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    # Deregister from the load balancer so it can drain connections
    aws elb deregister-instances-from-load-balancer \
      --load-balancer-name my-load-balancer \
      --instances "$instance_id"
    /env/bin/flushsessiontoDBonterminationscript.sh
    break
  fi
  sleep 5
done
31. Batch Processing with Amazon EC2 Spot
Batch oriented applications can leverage on-demand processing using EC2 Spot to save up to 90% on cost:
Monte Carlo simulation
Molecular modeling
Media processing
High energy simulations
33. Scaling to 50,000 cores
EC2 Spot fleet to set up a heterogeneous, scalable “grid” of EC2 Spot instances with multiple capacity pools as worker nodes.
EC2 Spot blocks for less flexible jobs that must run continuously.
35. Disney Animation Renderfarm
[Network diagram: renderfarms with Avere FXT clusters at the WDAS data center (Burbank, which also holds the storage and serves the artists) and two remote data centers (San Francisco and Los Angeles), linked by redundant 10Gb connections.]
36. Disney Animation Renderfarm
[Network diagram: the same three-site renderfarm topology, extended into an AWS virtual private cloud in Oregon running Spot Instances, EFS and an Avere vFXT cluster, connected over a 10Gb primary link with a 1Gb backup.]
Live/VOD 360 OTT on EC2 Spot
[Architecture diagram: an S3 ingest bucket fires S3 events into an SQS queue; a source encoder runs on Spot or On-Demand G2 instances; a diversified Spot fleet of G2 and M4 instances handles container-based encoding into an egress bucket; delivery flows from primary and backup origins through an edge cache fleet and an ALB to CloudFront and viewers, with failover. Supporting elements: mezzanine and archival storage, Direct Connect, multi-tenancy, multi-CDN, full OTT CMS/DRM, and mixed GPU/CPU encoding.]
38. Media Pipeline: Ingest, Store, Transform, Process
Ingest: push or pull; mez, live & VOD.
Store: create a centralized content lake on S3.
Transform: scale out on elastic capacity for all processing.
Process: media delivery and/or hands-on post-production.
Content production and post-production companies are leveraging AWS to accelerate and streamline creative, editing, compositing and streaming delivery workloads with highly scalable cloud computing and storage.
Slide: AWS Purchase Models
As shown by the previous slide, it is possible to launch significant amounts of compute power for a low cost. Customers have several models available when using Amazon EC2.
- Cover the three pricing models on the slide
On demand is the easiest way to get started with AWS. No commitment, pay as you go.
Reserved instances provide a significant discount in exchange for a commitment to use the services for some period of time, either 1 or 3 years. Reserved instances also come with an actual capacity reservation, which can be important for large enterprises who need a high level of assurance that computing resources will be available when they are needed.
Spot instances are a unique and powerful pricing model, in particular for HPC. With Spot, customers can bid on unused AWS capacity and are often able to launch instances on the cloud for as little as 10% of the equivalent on-demand rate. The tradeoff with Spot is that if other customers are willing to pay more than you for the same AWS instance type, or capacity of that type becomes constrained, your running jobs may be terminated with only a two-minute warning. Jobs running on Spot therefore need to be fault-tolerant, or able to be restarted again at a later time.
What spare capacity looks like at scale.
AWS has more than a million active customers in 190 countries.
Amazon EC2 instance usage has increased 93% YoY, comparing Q4 2014 and Q4 2013, not including Amazon use.
Amazon S3 holds trillions of objects and regularly peaks at millions of requests per second.
So with EC2 Spot the rules are actually really simple.
Rule 1: The Spot market is where the price of compute fluctuates based on supply and demand.
Rule 2: You’ll never pay more than your bid, in fact you’ll only ever pay the market price. When the market price exceeds your bid you get 2 minutes to wrap up.
Market price is on average 85% lower than On-Demand prices
What is in a market? This is one of the most important, and unfortunately most misunderstood, elements of how the Spot market works. While we say “the Spot market”, there are actually hundreds of Spot markets available to all our customers. AWS has 11 (?) regions around the world; in each region there are multiple availability zones, multiple instance families, and multiple instance sizes per family. (START CLICK THROUGH and READ) E.g. c3; e.g. large, xlarge, 8xlarge; e.g. US-West-2a, US-West-2b; e.g. Dublin Region, Oregon Region, Sydney Region.
Now that we understand what a spot market is and that there are many I’ll explain how we acquire the capacity. I’m going to pick just one market to highlight this. There are two numbers you care about with Spot.
Bid price. Think of this as the cap, the maximum you’re willing to pay for a given instance per hour.
Market price. This is the price you pay. Market price is set by periodic auctions
The r3.4xlarge costs $1.40 per hour under our On-Demand purchasing option.
See it in action via 3 bids. 25%, 50%, 75%. Single Zone.
At 25%, you kept your instance for almost 7 days, being impacted during a few short periods. However, you only paid the market price, which was 86% off: just under 20c per hour during the last week, only 14% of the OD price.
At 50%, you would have been interrupted just once, for a very short period of time during the sixth day. Your average discount during the week was 85%: just 21c per hour, paying just 15% of OD.
At 75%, you would not have been interrupted once, achieving an average discount of 85%: again just 21c an hour, paying just 15% of OD.
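To make that arithmetic concrete, here is a small sketch using the illustrative prices above ($1.40 On-Demand for r3.4xlarge, roughly 21c market price); the exact figures vary by market and over time.

```shell
# You pay the market price, never your bid; the bid is only a cap.
on_demand=1.40   # r3.4xlarge On-Demand rate quoted above
market=0.21      # illustrative Spot market price
awk -v m="$market" -v od="$on_demand" \
  'BEGIN { printf "hourly cost $%.2f, %.0f%% off On-Demand\n", m, (1 - m/od) * 100 }'
# -> hourly cost $0.21, 85% off On-Demand
```

This is why a high bid does not mean a high bill: the bid only determines how likely you are to be interrupted, while the market price determines what you pay.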
1st - Check out the Spot Bid Advisor, which we launched earlier this year to guide customers in finding the resources, discount and instance lifetime they need.
The bid advisor has helped many new customers discover what some already knew: that with deliberate instance pool selection it can be straightforward to begin using Spot.
This is a snap I took from the tool last week, and it shows that even at a 50% max bid there were many different Spot markets that would have gone uninterrupted for over a week, while achieving an average discount of 80-90%!
Now you might realize: wouldn't it be great if I could automate using all the pools that suit my application? Let's not get ahead of ourselves. First we need to understand: what is a Spot market?
We will first run through the ‘best practices’ for EC2 Spot. While these are not necessary, they’re what the most sophisticated customers do to get high performance, high availability and low costs.
Standard practice
Stateless
Fault tolerant
Multi-AZ
SOA/Loosely coupled design
Spot Practice
Be instance flexible
This can mean c3.large, c3.xlarge,..r3.large
Or m3.large, r3.large, c3.large (ELB)
No seriously, your application can work with other instances (use example, drive this message home hard).
You use c3.xlarge and you can’t use c3.2xlarge at all? Really? Really? Even if we give you 70% off for twice the c3.xlarge specs?
Lyft: Savings $15K per month with 4 lines of code. After using Spot in CICD Lyft recognized the stability of the platform, and the opportunity to leverage it as part of their Hadoop stack (run by Qubole) arose. They’ve since been able to shift more than a third of their Qubole managed Hadoop cluster onto EC2 Spot. Saving even further.
Brookhaven Labs, ATLAS experiment needed instances to live for as much as 24 hours in order to add value. Some software simply cannot check point. They needed the equivalent of 50,000 physical cores to meet the 1500 scientific researchers demand for resources. It takes a trillion proton collisions in the collider to produce evidence of a single Higgs boson particle’s decay. Over 5 days, less than 1% of instances were terminated, leaving them with a significant margin of safety. Instead of building a 50,000 core data center they were able to successfully use AWS Spot for 5 days and pay just $45,000.
ATLAS - The experiment is designed to take advantage of the unprecedented energy available at the LHC and observe phenomena that involve highly massive particles which were not observable using earlier lower-energy accelerators. It is hoped that it will shed light on new theories of particle physics beyond the Standard Model.
Spot is a powerful economic reward for fault tolerant, cloud first architectures. How powerful? Examples
Novartis: 39 years of drug research re-processed, using over 80,000 cores, in 9 hours for $4,232.
Lyft: Savings $15K per month with 4 lines of code
Adroll: Petabyte-Scale Data Pipelines with Docker, Luigi and Elastic Spot Instances
Hopefully many of you have come across the EC2 Spot fleet API. This one weird API makes it easy to:
Launch 1,2 or 3000 Spot instances with one API call
You can select whether you’d like to put your capacity into the single cheapest market,
Or opt to diversify to minimize the impact of any individual Spot market
Finally, by introducing Weights you can now scale based on the metric that matter most to you. It might be cores, memory, instances, latency.. It is your call.
Why does it have to be different from a normal ASG? As we’ve discussed, there are multiple independent markets available in Spot. These markets are NOT correlated. Customers have for a long time followed a diversification strategy for time sensitive, mission critical workloads. With fleet you can scale. With Spot fleet we’ve made it easy. E.g. if you can use the 5 instance types above across 2 availability zones, that is 10 separate capacity pools, so any one price fluctuation will only impact 1/10 of our capacity, or 10%. Much like an index fund.
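As a sanity check on the pool arithmetic, the sketch below computes the number of independent capacity pools and the worst-case exposure to any single pool; the counts are illustrative, matching the five instance types and two AZs discussed above.

```shell
# Independent Spot capacity pools = instance types x Availability Zones.
types=5
azs=2
pools=$((types * azs))
echo "pools: $pools"
# If capacity is spread evenly, a price spike in one pool touches only 1/pools of the fleet.
awk -v p="$pools" 'BEGIN { printf "worst-case impact of one pool: %.0f%% of capacity\n", 100/p }'
```

Adding either more instance types or more AZs multiplies the pool count, which is why the diversified allocation strategy behaves like an index fund: no single market movement can take out more than a small slice of your capacity.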
1000 vCores, at an average saving of 80% off On-Demand. While some capacity fluctuated we had our desired capacity of 1000 for over 98% of the time. During the 30 days we were never more than 4%, or 40 cores below our desired capacity while maintaining an average of 1012 cores.
Instances used - c3.2xlarge c3.4xlarge c3.8xlarge cc2.8xlarge cr1.8xlarge r3.2xlarge r3.4xlarge r3.8xlarge in All AZs
Just 5 months ago we launched fleet and have continued the AWS trend of rapid innovation based on customer feedback. We’ve launched 4 major features to spot fleet over the last 5 months and we’re nowhere near finished. We’ve also made it so easy!
We’ve already architected the application to be resilient to instance termination. However, while we might have minimized the impact of an instance termination, we can use the two minute warning to take it a step further. As I mentioned, we can capitalize on the two minute warning by detaching the instance from an ELB set to drain connections. To do that we recommend checking the instance metadata regularly, about every 5 seconds, for the two minute warning.
Here is a simple sample of what some customers will bake into their AMI or bootstrap actions. This small script checks for an instance termination notice (a 404 will be returned if you aren’t in the two minute warning), then detaches the instance from the ELB if the warning is active.
Batch has long been in the wheelhouse for Spot usage. Customers have been using Spot in
Monte Carlo simulations in risk analytics for insurance and finserv (Ufora)
Molecular modeling (Novartis)
Media rendering Animation and FX rendering, and batch image processing pipeline (FinDesign)
High energy simulations (Brookhaven)
They’ve found it valuable to accelerate processing and results, to run simulations that are otherwise cost prohibitive, to train algorithms at the lowest possible price, and to achieve the scale they need. For example, an engineer running electromagnetic simulations could run larger numbers of parametric sweeps than would otherwise be practical, by using very large numbers of Amazon EC2 Spot Instances (and/or OD instances), and using automation to launch independent and parallel simulation jobs.
There are numerous batch oriented applications in place today that can leverage this style of on-demand processing, including claims processing, large scale transformation, media processing and multi-part data processing work. There are many different architectures for batch processing; while the components here are useful as a guide, there are lots of different approaches. At a high level, though, there are some common methods using AWS services.
- these are common processes for content production across pp/p/finishing/etc
Some additional considerations I’ll cover briefly.
Options for shifting state off web/app servers
Load balancing a fault tolerant application with ELB
Capitalizing on the Two Minute Warning
I mentioned Novartis at the beginning, who back in 2013 ran a project that involved virtually screening 10 million compounds against a common cancer target in less than a week. They calculated that it would take 50,000 cores and close to a $40 million investment if they wanted to run the experiment internally. Partnering with Cycle Computing and Amazon Web Services (AWS), Novartis built a platform leveraging Amazon Simple Storage Service (Amazon S3), Amazon Elastic Block Store (Amazon EBS), and four Availability Zones. The project ran across 10,600 Spot Instances (approximately 87,000 compute cores) and allowed Novartis to conduct 39 years of computational chemistry in 9 hours for a cost of $4,232. Out of the 10 million compounds screened, three were successfully identified.
Schrodinger, in their quest to apply computational chemistry to better solar power, stood up a 156,314-core cluster. The estimated computation time to process the 205,000 organic compounds was 264 years, but it was completed in 18 hours. They achieved 1.21 petaFLOPS (Rpeak) for just $33,000, or 16¢ per molecule.