Amazon EC2 changes the economics of computing and provides you with complete control of your computing resources. It is designed to make web-scale cloud computing easier for developers. In this session, we will take you on a journey, starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies. We will also discuss tools and best practices that will help you build failure resilient applications that take advantage of the scale and robustness of AWS regions.
Amazon Elastic Compute Cloud (Amazon EC2) provides a broad selection of instance types to accommodate a diverse mix of workloads. In this technical session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations.
We dive into the current generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and GPU instance families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
Speaker:
Ian Massingham, AWS Technical Evangelist
Deep Dive on Amazon EC2 Instances - January 2017 AWS Online Tech Talks | Amazon Web Services
Amazon EC2 provides a broad selection of instance types to accommodate a diverse mix of workloads. In this session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and GPU instance families. We will also provide an overview of the newest instances announced at re:Invent, including the latest generation of Memory and Compute Optimized Instances R4 and C5 instances, new Storage Optimized High I/O I3 instances, and new larger T2 instances. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
Learning Objectives:
⢠Get an overview of the EC2 instance platform, key platform features, and the concept of instance generations
⢠Learn about the latest generation of Amazon EC2 Instances
⢠Learn best practices around instance selection to optimize performance
Amazon EC2 forms the backbone compute platform for hundreds of thousands of AWS customers, but how do you go beyond starting an instance and manually configuring it? This presentation will take you on a journey starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies.
Access a recorded version of the webinar based on this presentation on YouTube here: http://youtu.be/jLVPqoV4YjU
You can find the rest of the Masterclass webinar series for 2015 here: http://aws.amazon.com/campaigns/emea/masterclass/
If you are interested in learning how to apply a variety of different AWS services to specific challenges, please check out the Journey Through the Cloud series, which you can find here: http://aws.amazon.com/campaigns/emea/journey/
source: http://www.sfbayacm.org/?p=1394
The specifics of a cloud's computing architecture may have an impact on application design. This is particularly important in Infrastructure as a Service (IaaS) cloud environments.
This presentation analyzes aspects of the Amazon EC2 IaaS cloud environment that differ from a traditional datacenter and introduces general best practices for ensuring data privacy, storage persistence, and reliable DBMS backup. Best practices for application robustness and scalability on demand are reviewed and are especially significant in leveraging the full potential of an IaaS cloud. The need for a cloud application management and configuration system is briefly reviewed and two alternate approaches to cloud application management are described (RightScale and Kaavo).
AWS re:Invent 2016: Save up to 90% and Run Production Workloads on Spot - Fea... | Amazon Web Services
Amazon EC2 allows you to bid for and run spare EC2 capacity, known as Spot instances, in a dynamically priced market. On average, customers save 80% to 90% compared to On-Demand prices by using Spot instances. Achieving these savings has historically required time and effort to find the best deals while managing compute capacity as supply and demand fluctuate.
In this session, we dive into how customers who have designed scalable, cloud-friendly application architectures can leverage new Spot features to realize immediate cost savings while maintaining availability. Attendees will leave with practical knowledge of how, via well-architected applications, they can run production services on Spot instances, just like IFTTT and Mapbox.
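The Spot market model this abstract describes can be sketched in a few lines: an instance keeps running while the fluctuating market price stays at or below your maximum bid, and you are billed at the market price, not your bid. All prices below are hypothetical, not real EC2 rates.

```python
ON_DEMAND_PRICE = 0.10   # $/hour, hypothetical On-Demand rate
MAX_BID = 0.05           # hypothetical maximum Spot bid

def spot_hours(market_prices, max_bid):
    """Return (hours_run, total_cost). The instance is interrupted
    the first hour the market price exceeds the bid; until then you
    pay the market price for each hour."""
    hours, cost = 0, 0.0
    for price in market_prices:
        if price > max_bid:
            break            # instance interrupted by the market
        hours += 1
        cost += price        # billed at market price, not the bid
    return hours, cost

# A hypothetical day of hourly Spot market prices
prices = [0.021, 0.023, 0.020, 0.031, 0.028, 0.060]
hours, cost = spot_hours(prices, MAX_BID)
savings = 1 - cost / (hours * ON_DEMAND_PRICE)
print(hours, round(cost, 3), round(savings, 2))
```

With these illustrative numbers the instance runs five hours before interruption and costs roughly 75% less than the same hours On-Demand, in line with the 80-90% typical savings the session cites.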
SRV402 Deep Dive on Amazon EC2 Instances, Featuring Performance Optimization ... | Amazon Web Services
Amazon EC2 provides a broad selection of instance types to accommodate a diverse mix of workloads. In this session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and Accelerated Computing (GPU and FPGA) instance families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
AWS Summit London 2014 | Scaling on AWS for the First 10 Million Users (200) | Amazon Web Services
This mid-level technical session will provide an overview of the techniques that you can use to build high-scalability applications on AWS. Take a journey from 1 user to 10 million users and understand how your application's architecture can evolve and which AWS services can help as you increase the number of users that you serve.
Amazon Elastic Compute Cloud (Amazon EC2) provides a broad selection of instance types to accommodate a diverse mix of workloads. In this technical session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current-generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and GPU instance families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances. Other Compute options such as ECS and Lambda for processing in the cloud will be introduced and explained at a high level.
Amazon Elastic Compute Cloud (Amazon EC2) provides a broad selection of instance types to accommodate a diverse mix of workloads. In this technical session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and GPU instance families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
In this webinar we will take you on a journey, starting with the basics of key creation and security groups and ending with an Auto Scaling application driven by dynamic policies.
Learning Objectives:
⢠Understand how to use Amazon EC2 beyond a simple single instance use case
⢠Learn about instance bootstrapping, AMIs and Elastic IPs
⢠Discover how to create an Elastic Load Balancer and integrate it with Auto Scaling
⢠Learn how to create Auto Scaling configurations and the tools you need to drive Auto Scaling policies
⢠Find out how to create an Amazon RDS database and how to test failover between Availability Zones Who Should Attend:
⢠Existing Amazon EC2 users, Developers, Engineers and Solutions Architects
Learn how the Blue/Green Deployment methodology, combined with AWS tools and services, can help reduce the risks associated with software deployment. We will illustrate common patterns and highlight ways deployment risks are mitigated by each pattern. Topics will include how services like AWS CloudFormation, AWS Elastic Beanstalk, Amazon EC2 Container Service, Amazon Route 53, Auto Scaling, and Elastic Load Balancing can help automate deployment. We will also address how to effectively manage deployments in the context of data model and schema changes. Learn how you can adopt blue/green for your software release processes in a cost-effective and low-risk way.
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora-powered application.
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity Options | Amazon Web Services
In this session, we will walk through the fundamentals of Amazon Virtual Private Cloud (VPC). First, we will cover build-out and design fundamentals for VPC, including picking your IP space, subnetting, routing, security, NAT, and much more. We will then transition into different approaches and use cases for optionally connecting your VPC to your physical data center with VPN or AWS Direct Connect. This mid-level architecture discussion is aimed at architects, network administrators, and technology decision-makers interested in understanding the building blocks AWS makes available with VPC and how you can connect this with your offices and current data center footprint.
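The IP-space and subnetting planning this session covers can be prototyped with Python's standard-library `ipaddress` module: carve a VPC CIDR into one subnet per Availability Zone. The CIDR and AZ names below are illustrative.

```python
import ipaddress

# Carve a hypothetical 10.0.0.0/16 VPC into /24 subnets, one per AZ.
vpc = ipaddress.ip_network("10.0.0.0/16")
azs = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]

subnets = dict(zip(azs, vpc.subnets(new_prefix=24)))
for az, net in subnets.items():
    # AWS reserves 5 addresses per subnet (network address, VPC
    # router, DNS, one reserved for future use, and broadcast),
    # so the usable host count is the subnet size minus 5.
    print(az, net, net.num_addresses - 5)
```

Sketching the layout this way before creating anything makes it easy to check that subnets do not overlap and that each AZ gets enough usable addresses (251 per /24 here).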
AWS re:Invent 2016: Getting the most Bang for your buck with #EC2 #Winning (C... | Amazon Web Services
Amazon EC2 provides you with the flexibility to cost-optimize your computing portfolio through purchasing models that fit your business needs. With the flexibility of mix-and-match purchasing models, you can grow your compute capacity and throughput and enable new types of cloud computing applications with the lowest TCO.
In this session, we will explore combining pay-as-you-go (On-Demand), reserve ahead of time for discounts (Reserved), and high-discount spare capacity (Spot) purchasing models to optimize costs while maintaining high performance and availability for your applications. Common application examples will be used to demonstrate how to best combine EC2's purchasing models. You will leave the session with best practices you can immediately apply to your application portfolio.
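The mix-and-match idea is easy to quantify: price each slice of your workload at the model that fits it, then compare against running everything On-Demand. The hourly rates and instance-hour mix below are hypothetical, chosen only to illustrate the calculation.

```python
# Hypothetical hourly rates: steady-state load on Reserved,
# unpredictable peaks on On-Demand, interruption-tolerant batch
# work on Spot. Not real EC2 prices.
RATES = {"reserved": 0.062, "on_demand": 0.10, "spot": 0.015}

def monthly_cost(hours_by_model, rates=RATES):
    """Total monthly cost for a mix of instance-hours per model."""
    return sum(hours_by_model[m] * rates[m] for m in hours_by_model)

mix = {"reserved": 2000, "on_demand": 300, "spot": 1500}
all_on_demand = {"on_demand": sum(mix.values())}

blended = monthly_cost(mix)
baseline = monthly_cost(all_on_demand)
print(round(blended, 2), round(baseline, 2), round(1 - blended / baseline, 2))
```

Even with modest assumptions, shifting the steady and interruptible portions off On-Demand roughly halves the bill for the same instance-hours, which is the portfolio effect the session explores.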
Automating Management of Amazon EC2 Instances with Auto Scaling - March 2017 ... | Amazon Web Services
Automation is vital to efficient DevOps, and getting your fleets of EC2 instances to launch, provision software, and self-heal automatically is a key challenge. Auto Scaling provides essential features for each of these instance lifecycle automation steps, which are widely applicable to just about any type of application running on EC2. In this tech talk, you will learn how to automate launches with launch configurations, how to configure the software environment before your instance accepts traffic using lifecycle hooks, and how to create a resilient multi-AZ fleet to run your application with minimal effort.
Learning Objectives:
1. Learn how you can improve application availability and operational efficiency by automating fleet management for Amazon EC2 instances
2. Understand how Auto Scaling works and how easy it is to control the lifecycle of your fleet and the applications they run
3. Hear about recent developments in the Auto Scaling service and how they provide an advantage to a wide variety of applications
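The lifecycle-hook flow described above can be sketched as a tiny state machine: a newly launched instance pauses in `Pending:Wait` while bootstrap work runs, then either proceeds into service or is terminated. This is a local simulation of the state sequence, not a call to the Auto Scaling API.

```python
# Simulate the launch lifecycle-hook flow: the instance waits in
# Pending:Wait until provisioning finishes, then the hook completes
# with CONTINUE (proceed to InService) or ABANDON (terminate).

def launch_with_hook(provision):
    states = ["Pending", "Pending:Wait"]
    ok = provision()  # e.g. install software, warm caches, run checks
    states.append("Pending:Proceed" if ok else "Terminating")
    states.append("InService" if ok else "Terminated")
    return states

print(launch_with_hook(lambda: True))
```

The payoff of the hook is that the instance only starts receiving traffic (`InService`) after the provisioning callable succeeds; a failed bootstrap never joins the fleet.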
Join us for a live session based on our popular Masterclass series of online events. Amazon EC2 forms the backbone compute platform for hundreds of thousands of AWS customers, but how do you go beyond starting an instance and manually configuring it? In this session, we will take you on a journey starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies.
AWS re:Invent 2016: T2: From Startups to Enterprise, Performance for a Low Co... | Amazon Web Services
In this session, customers learn more about the T2 instance type and the performance and cost savings it can bring to startups, SMBs, and enterprises. Customers will share best practices and tips for how they use T2 instances across workloads including development and test, production web servers, continuous integration and more.
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances for fault tolerance and load distribution. In this session, we go into detail about Elastic Load Balancing's configuration and day-to-day management, as well as its use in conjunction with Auto Scaling. We explain how to make decisions about the service and share best practices and useful tips for success.
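At its simplest, the distribution behavior described here is round-robin over the registered instances; the real service adds health checks and cross-AZ spreading on top. A toy stdlib sketch with made-up instance IDs:

```python
from itertools import cycle

# Toy round-robin over registered instances. Real Elastic Load
# Balancing also health-checks targets and balances across AZs;
# the instance IDs here are fabricated for illustration.
instances = ["i-0aaa", "i-0bbb", "i-0ccc"]
rr = cycle(instances)

# Assign 7 incoming requests to backends in turn
assignments = [next(rr) for _ in range(7)]
print(assignments)
```

Combined with Auto Scaling, the instance list itself grows and shrinks with demand while the balancer keeps spreading requests over whatever is currently healthy.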
Protecting a small number of VPCs with a next-generation firewall is relatively easy, but what happens when you have hundreds of VPCs and regularly add more as business groups or new apps come online? How can you maintain a prevention architecture without slowing the business? One concept is to build a services VPC that protects your existing and new VPCs. This deep dive session will discuss how to integrate next-generation firewalls in a services VPC with the Palo Alto Networks VM-Series in AWS. Topics will include architectural design considerations, routing recommendations, and dynamic fail-over. Session sponsored by Palo Alto Networks.
Getting Started with the Hybrid Cloud: Enterprise Backup and Recovery | Amazon Web Services
This session is for architects and storage admins seeking simple and non-disruptive ways to adopt cloud platforms in their organizations. You will learn how to deliver lower costs and greater scale with nearly seamless integration into your existing B&R processes. Services mentioned: S3, Glacier, Snowball, third-party partners, Storage Gateway, and ingestion services.
Presentation by Andy Shenkler, Sony's EVP & Chief Solutions & Technology Officer, to the Storage & Archive track at the Media & Entertainment Cloud Symposium on Nov 4, 2016
Expanding Your Data Center with Hybrid Cloud Infrastructure | Amazon Web Services
Cloud is the new normal for hybrid IT strategies. In this session, we will explain what's different between the cloud and your data center, as well as how to shape your hybrid cloud strategy.
Session Sponsored by Trend Micro: 3 Secrets to Becoming a Cloud Security Supe... | Amazon Web Services
While security is a top concern in every organization these days, it often gets a bad rap. In many minds, security has the reputation of the bothersome villain who attempts to hinder performance or restrain agility. In this session we will outline three strategies to protect your valuable workloads, without falling into traditional security traps. We will walk through three stories of EC2 security superheroes who saved the day by overcoming compliance and design challenges, using a (not so) secret arsenal of AWS and Trend Micro security tools.
Key takeaways from this session include how to:
- Design a workload-centric security architecture
- Improve visibility of AWS-only or hybrid environments
- Stop patching live instances but still prevent exploits
Speaker: Sasha Pavlovic, Director, Cloud & Datacentre Security, Asia Pacific, Trend Micro
Come learn about new and existing Amazon S3 features that can help you better protect your data, save on cost, and improve usability, security, and performance. We will cover a wide variety of Amazon S3 features and go into depth on several newer features with configuration and code snippets, so you can apply the learnings on your object storage workloads.
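As a taste of the configuration and code snippets the session covers, here is a minimal sketch (using Python and boto3; the bucket name is a placeholder) that enables two of the data-protection features mentioned, versioning and default encryption:

```python
def default_encryption_config(algorithm="AES256"):
    """Build the default-encryption rule applied to new objects (SSE-S3)."""
    return {
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": algorithm}}
        ]
    }

def protect_bucket(bucket_name):
    """Turn on versioning and default encryption for one bucket (sketch)."""
    import boto3  # requires AWS credentials when actually run
    s3 = boto3.client("s3")
    # Keep every object version so accidental deletes/overwrites are recoverable
    s3.put_bucket_versioning(
        Bucket=bucket_name,
        VersioningConfiguration={"Status": "Enabled"},
    )
    # Encrypt new objects at rest by default
    s3.put_bucket_encryption(
        Bucket=bucket_name,
        ServerSideEncryptionConfiguration=default_encryption_config(),
    )
```

With credentials configured, `protect_bucket("my-example-bucket")` would apply both settings; real workloads would layer lifecycle rules and bucket policies on top.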
Getting Started with the Hybrid Cloud: Enterprise Backup and RecoveryAmazon Web Services
Deep Dive: Developing, Deploying & Operating Mobile Apps with AWS Amazon Web Services
In this session we'll dive deeper into how you can test mobile applications on real devices using AWS Device Farm, how to get business insights with AWS Mobile Analytics and Amazon Redshift, and how to keep your customers engaged using Amazon SNS Mobile Push and the new Worldwide Delivery of Amazon SNS Messages via SMS.
SRV301 Getting the most Bang for your buck with #EC2 #WinningAmazon Web Services
Amazon EC2 provides you with the flexibility to cost optimize your computing portfolio through purchasing models that fit your business needs. With the flexibility of mix-and-match purchasing models, you can grow your compute capacity and throughput and enable new types of cloud computing applications with the lowest TCO. In this session, we will explore combining pay-as-you-go (On-Demand), reserve ahead of time for discounts (Reserved), and high-discount spare capacity (Spot) purchasing models to optimize costs while maintaining high performance and availability for your applications. Common application examples will be used to demonstrate how to best combine EC2's purchasing models. You will leave the session with best practices you can immediately apply to your application portfolio.
Get the Most Out of Amazon EC2: A Deep Dive on Reserved, On-Demand, and Spot ...Amazon Web Services
With Amazon EC2, you have the flexibility to mix-and-match purchasing models to suit your business needs. By combining pay-as-you-go (On-Demand), reserve ahead of time for discounts (Reserved), and high-discount spare capacity (Spot) purchasing models, you can optimize cost, grow your compute capacity and throughput, and enable new types of cloud computing applications. This presentation will guide you on how to achieve high performance and availability at the lowest TCO. We will explore how to best combine EC2's purchasing models across several common applications with immediately actionable takeaways.
In this session, learn how to seamlessly combine Amazon EC2 On-Demand, Spot, and Reserved Instances. Also learn how to use the best practices deployed by customers all over the world for the most common applications and workloads. Discover multiple ways to grow your compute capacity and enable new types of cloud computing applications, without it costing you a lot of money.
Optimize Amazon EC2 for Fun and Profit - SRV203 - Chicago AWS SummitAmazon Web Services
Learn how to seamlessly combine Amazon EC2 On-Demand, Spot, and Reserved Instances to optimize cost, scale, and performance. Understand best practices used by customers all over the world for the most commonly used applications and workloads. Discover multiple ways to grow your compute capacity, and enable new types of cloud computing applications, without it costing a lot of money.
AWS Summit London 2014 | Introduction to Amazon EC2 (100)Amazon Web Services
This session will provide an overview of the Amazon Elastic Compute Cloud (EC2) service capability and help you understand the latest updates to the range of instance types and virtual private cloud (VPC) features. It will also help you understand the broad range of pricing options that EC2 provides and how you can use them to make smart decisions that reduce your costs.
AWS has different pricing models to match your needs. One example is the different instance types available such as On-Demand, Reserved and Spot Instances. Customers can develop cost-saving strategies based upon their usage patterns, models and growth expectations. In some cases, a set of larger instances can be cheaper than multiple small instances. Learn how to size your AWS applications to maximize your use and minimize your spend. Companies such as Pinterest take very active roles to constantly reduce their spend; learn how they do it and develop your own cost-saving approaches.
Optimize EC2 for Fun and Profit - SRV203 - Anaheim AWS SummitAmazon Web Services
In this session, learn how to seamlessly combine Amazon EC2 On-Demand, Spot, and Reserved Instances to optimize cost, scale, and performance. Hear about the best practices used by customers all over the world for the most commonly used applications and workloads. Finally, discover multiple ways to grow your compute capacity and enable new types of cloud computing applications without spending much money.
AWS Summit Auckland 2014 | Moving to the Cloud. What does it Mean to your Bus...Amazon Web Services
AWS launched in 2006, and since then we have released more than 530 services, features, and major announcements. Every year, we outpace the previous year in launches and are continuously accelerating the pace of innovation across the organization. Ever wonder how we formulate customer-centric ideas, turn them into features and services, and get them to market quickly? This session dives deep into how an idea becomes a service at AWS and how we continue to evolve the service after release through innovation at every level. We even spill the beans on how we manage operational excellence across our services to ensure the highest possible availability. Come learn about the rapid pace of innovation at AWS, and the culture that formulates magic behind the scenes.
AWS Summit Sydney 2014 | Moving to the Cloud. What does it Mean to your BusinessAmazon Web Services
You have attended AWS training and gathered all the relevant information about AWS services, but how do you now show the value of the AWS Cloud to your business? This session will run through how to build a business case for the cloud, including TCO and cost comparisons.
The less you spend on infrastructure, the more you can invest in other areas of your business. We'll look at how Spot Instances, Reserved Instances, CloudFront, Billing Alarms, and more can help you lower your spend. Learn how to quickly identify under-utilized resources and what steps to take to remediate those money wasters. You'll build the essential cost savings toolkit that is applicable across all industries and win the admiration of your future finance team.
Weighing the financial considerations of owning and operating a data center facility versus employing a cloud infrastructure requires detailed and careful analysis. In practice, it is not as simple as just measuring potential hardware expense alongside utility pricing for compute and storage resources. The Total Cost of Ownership (TCO) is often the financial metric used to estimate and compare direct and indirect costs of a product or a service. Given the large differences between the two models, it is challenging to perform accurate apples-to-apples cost comparisons between on-premises data centers and cloud infrastructure that is offered as a service. In this session, we explain the economic benefits of deploying applications in AWS over deploying equivalent applications hosted in an on-premises environment.
Similar to Get the Most Bang for Your Buck with #EC2 #WINNING (20)
Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn...Amazon Web Services
Forecasting is an important process for many companies. It is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a time component, and then use an algorithm that produces an accurate forecast from the type of data analyzed.
Big Data per le Startup: come creare applicazioni Big Data in modalitĂ Server...Amazon Web Services
The variety and quantity of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud, and Serverless services in particular, allow us to break through these limits.
Let's see how we can develop Big Data applications quickly, without worrying about the infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago Amazon went through a radical transformation aimed at increasing the pace of innovation. Over this period we learned how changing our approach to application development allowed us to greatly increase agility and release velocity and, ultimately, enabled us to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only the application architecture, but also the organizational structure, the development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
Come spendere fino al 90% in meno con i container e le istanze spot Amazon Web Services
Container usage continues to grow.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand instances. In this session we will look at the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of different kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Rendi unica lâofferta della tua startup sul mercato con i servizi Machine Lea...Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative components built ad hoc.
AWS provides ready-to-use services and, at the same time, lets you customize and create the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automatizza la gestione e i deployment del...Amazon Web Services
With the traditional approach to IT, for many years it was difficult to implement DevOps techniques, which have often involved manual activities, occasionally leading to application downtime and interrupting user operations. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and resulting in significant business continuity improvements.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Discover how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory su AWS per supportare i tuoi Windows WorkloadsAmazon Web Services
Do you want to learn about the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we will discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and running Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis based on artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are organizing a free virtual event on Wednesday, October 14th from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a wide range of AWS services, fully exploiting the potential of the AWS cloud while protecting existing VMware investments.
Crea la tua prima serverless ledger-based app con QLDB e NodeJSAmazon Web Services
Many companies today build applications with ledger-type functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply chain flow of their products.
Underlying these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will discover how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for giving end users a great user experience. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dig into several scenarios, understanding how AppSync can help solve these use cases by building modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to the users of its web portal.
Database Oracle e VMware Cloud™ on AWS: i miti da sfatareAmazon Web Services
Many organizations take advantage of the cloud by migrating their Oracle workloads, securing significant benefits in terms of agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks may be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads, accelerating the transformation toward the cloud; they dive into the architecture and demonstrate how to fully exploit the potential of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies the management of Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we will present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
Have you ever wondered how search works while visiting an e-commerce site, internal website, or searching through other types of online resources? Look no further than this informative session on the ways that taxonomies help end-users navigate the internet! Hear from taxonomists and other information professionals who have first-hand experience creating and working with taxonomies that aid in navigation, search, and discovery across a range of disciplines.
Acorn Recovery: Restore IT infra within minutesIP ServerOne
Introducing Acorn Recovery as a Service, a simple, fast, and secure managed disaster recovery (DRaaS) by IP ServerOne. A DR solution that helps restore your IT infra within minutes.
This presentation by Morris Kleiner (University of Minnesota) was made during the discussion "Competition and Regulation in Professions and Occupations" held at the Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found at oe.cd/crps.
This presentation was uploaded with the author's consent.
Sharpen existing tools or get a new toolbox? Contemporary cluster initiatives...Orkestra
UIIN Conference, Madrid, 27-29 May 2024
James Wilson, Orkestra and Deusto Business School
Emily Wise, Lund University
Madeline Smith, The Glasgow School of Art
0x01 - Newton's Third Law: Static vs. Dynamic AbusersOWASP Beja
If you offer a service on the web, odds are that someone will abuse it. Be it an API, a SaaS, a PaaS, or even a static website, someone somewhere will try to figure out a way to use it to their own needs. In this talk we'll compare measures that are effective against static attackers and how to battle a dynamic attacker who adapts to your counter-measures.
About the Speaker
===============
Diogo Sousa, Engineering Manager @ Canonical
An opinionated individual with an interest in cryptography and its intersection with secure software development.
Get the Most Bang for Your Buck with #EC2 #WINNING
1. © 2016, Amazon Web Services, Inc. or its Affiliates. All rights reserved
Get the Most Bang for Your Buck with #EC2 #Winning
Boyd McGeachie,
Product Manager
2. Amazon EC2 purchasing options
On-Demand: Pay for compute capacity by the hour with no long-term commitments. For spiky workloads, or to define needs.
Reserved: Make a 1- or 3-year commitment and receive a significant discount over On-Demand. For committed or baseline utilization.
Spot: Pay market price for unused compute capacity at a steep discount over On-Demand. For fault-tolerant, time-insensitive, or transient workloads.
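The trade-off between the three options can be sketched with some back-of-the-envelope arithmetic. All prices and discount levels below are made-up illustrations, not real EC2 rates:

```python
# Illustrative hourly cost comparison across the three purchasing options.
ON_DEMAND_HOURLY = 0.100   # pay-as-you-go list price (made-up)
RESERVED_DISCOUNT = 0.60   # e.g. ~60% off for a multi-year commitment
SPOT_DISCOUNT = 0.80       # Spot often trades at a steep discount

def monthly_cost(hours, model):
    """Cost of one instance running `hours` in a month under each model."""
    if model == "on-demand":
        rate = ON_DEMAND_HOURLY
    elif model == "reserved":
        rate = ON_DEMAND_HOURLY * (1 - RESERVED_DISCOUNT)
    elif model == "spot":
        rate = ON_DEMAND_HOURLY * (1 - SPOT_DISCOUNT)
    else:
        raise ValueError(model)
    return round(hours * rate, 2)
```

A spiky dev box running 160 hours a month is fine On-Demand (or Spot if interruptible), while an always-on baseline server (about 730 hours a month) clearly favors Reserved.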
3. Pillars of performance and cost-optimization
• Right sizing
• Purchasing options
• Increase elasticity
• Measure, monitor, & improve
4. Right sizing
• Selecting the cheapest instance available while meeting performance requirements
• Looks at CPU, RAM, storage, and network utilization to identify potential instances that can be downsized
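A right-sizing pass like this can be sketched against CloudWatch's `CPUUtilization` metric. The thresholds below are illustrative assumptions, and a real analysis would also weigh memory, storage, and network:

```python
def is_downsize_candidate(avg_cpu, max_cpu, avg_threshold=10.0, max_threshold=40.0):
    """Flag instances whose CPU never comes close to capacity.
    Thresholds are illustrative, not an AWS recommendation."""
    return avg_cpu < avg_threshold and max_cpu < max_threshold

def cpu_stats(instance_id, days=14):
    """Fetch average/peak CPUUtilization for one instance (sketch)."""
    import datetime
    import boto3  # requires AWS credentials when actually run
    cw = boto3.client("cloudwatch")
    end = datetime.datetime.utcnow()
    resp = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - datetime.timedelta(days=days),
        EndTime=end,
        Period=3600,                      # one datapoint per hour
        Statistics=["Average", "Maximum"],
    )
    points = resp["Datapoints"]
    if not points:
        return 0.0, 0.0
    avg = sum(p["Average"] for p in points) / len(points)
    peak = max(p["Maximum"] for p in points)
    return avg, peak
```

An instance averaging 5% CPU with a 30% peak would be flagged; one averaging 50% with 90% peaks would not.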
5. Increase elasticity
Turn off non-production instances
• Look for dev/test, non-prod instances that are running always-on and turn them off
Automatically scale production
• Use Auto Scaling to scale in and out based on demand and usage (for example, spikes)
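One way to operationalize "turn off non-production" is to tag instances and sweep for running non-prod ones. The tag key and values below are assumptions about your own tagging convention:

```python
def find_stoppable_instances(reservations, env_tag="environment",
                             non_prod=("dev", "test")):
    """Pick running instance IDs whose environment tag marks them non-prod.
    `reservations` is the Reservations list from ec2.describe_instances()."""
    ids = []
    for res in reservations:
        for inst in res.get("Instances", []):
            if inst.get("State", {}).get("Name") != "running":
                continue
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if tags.get(env_tag, "").lower() in non_prod:
                ids.append(inst["InstanceId"])
    return ids

def stop_non_prod():
    """Stop every running dev/test instance (sketch)."""
    import boto3  # requires AWS credentials when actually run
    ec2 = boto3.client("ec2")
    ids = find_stoppable_instances(ec2.describe_instances()["Reservations"])
    if ids:
        ec2.stop_instances(InstanceIds=ids)
```

Run from a nightly scheduler, this is the "turn it off from 5 to 8" idea in the editor's notes made concrete.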
6. Measure, monitor, and improve: Uncover the cost-optimization opportunities
• Auto-tag resources
• Identify always-on non-prod
• Identify instances to downsize
• Recommend Reserved Instances to purchase
• Dashboard our status
• Report on savings
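The "auto-tag resources" step presumes a consistent tag schema. A minimal sketch (the tag keys are illustrative, not an AWS standard):

```python
def cost_allocation_tags(owner, project, environment):
    """Build a minimal cost-allocation tag set (keys are illustrative)."""
    return [
        {"Key": "owner", "Value": owner},
        {"Key": "project", "Value": project},
        {"Key": "environment", "Value": environment},
    ]

def tag_instances(instance_ids, tags):
    """Apply the tags to EC2 resources (sketch)."""
    import boto3  # requires AWS credentials when actually run
    ec2 = boto3.client("ec2")
    ec2.create_tags(Resources=instance_ids, Tags=tags)
```

Once tags like these are activated as cost-allocation tags, billing reports can be sliced by owner, project, and environment, which is what makes the dashboards and savings reports above possible.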
7. AWS pricing principles
• Pay as you go
• Pay less when you reserve
• Pay less when AWS grows
• No up-front investment
8. "We completed the equivalent of thirty-nine years of computational chemistry in just under 9 hours for a cost of around $4,200."
Steve Litster, Global Head of Scientific Computing, Novartis
Novartis: Acceleration of pre-clinical R&D
• Existing infrastructure to screen 10 million compounds in a computational model was not available
• New infrastructure would have cost approximately $40 million to build
• Novartis used AWS for HPC computational chemistry
12. Standard Reserved Instance details
Characteristic | Standard
Payment | No upfront; Partial upfront; All upfront
Commitment | 1 year; 3 year
Sellable on RI Marketplace | Yes
Change Availability Zone, instance size (Linux), networking type | Yes (Console and API: ModifyReservedInstances)
Change instance families, operating system, and tenancy | No
Savings* | Up to 75%
* Dependent on AWS service, size/type, and region
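The `ModifyReservedInstances` API named in the table can be driven from boto3 via `modify_reserved_instances`. A hedged sketch; the RI ID, zone, and instance type below are placeholders:

```python
def modification_target(zone, count, instance_type):
    """One target configuration for ModifyReservedInstances."""
    return {
        "AvailabilityZone": zone,
        "InstanceCount": count,
        "InstanceType": instance_type,
    }

def move_reservation(ri_id, zone, count, instance_type):
    """Shift a Standard RI to a new AZ / size within the same family (sketch)."""
    import boto3  # requires AWS credentials when actually run
    ec2 = boto3.client("ec2")
    ec2.modify_reserved_instances(
        ReservedInstancesIds=[ri_id],
        TargetConfigurations=[modification_target(zone, count, instance_type)],
    )
```

As the table notes, this covers zone, size (Linux), and networking-type changes; moving across instance families requires a Convertible RI instead.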
14. …Convert Your Reserved Instances
The Convertible Reserved Instance is a new type of Reserved Instance that can be exchanged during the 3-year term for new Convertible Reserved Instances of equal or greater value. The new Convertible Reserved Instances can correspond to a different instance family or a new price, instance size, platform, or tenancy.
15. Convertible Reserved Instance details
Characteristic | Standard | Convertible
Payment | No upfront; Partial upfront; All upfront | No upfront; Partial upfront; All upfront
Commitment | 1 year; 3 year | 3 year
Sellable on RI Marketplace | Yes | Coming soon
Change Availability Zone, instance size (Linux), networking type | Yes (Console and API: ModifyReservedInstances) | Yes (Console and API: ExchangeReservedInstances)
Change instance families, operating system, and tenancy | No | Yes
Savings* | Up to 75% | Up to 45%
* Dependent on AWS service, size/type, and region
16. EC2 Spot pricing
• Users with urgent computing needs or large amounts of additional capacity
• Time or instance flexible
• Experiment and/or build cost-sensitive businesses
17. Spot Instance details
Options
• Spot Fleet to maintain instance availability
• Spot block durations (1-6 hours) for workloads that must run continuously
Commitment level
• None
* Compared to On-Demand price based on specific EC2 instance type, region, and Availability Zone
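A Spot block ("defined duration") request can be sketched as below. The AMI ID, instance type, and bid are placeholders, and `BlockDurationMinutes` must be a whole number of hours from 1 to 6:

```python
def spot_block_request(ami_id, instance_type, bid, hours):
    """Parameters for a defined-duration (Spot block) request."""
    if not 1 <= hours <= 6:
        raise ValueError("Spot blocks run for 1-6 whole hours")
    return {
        "SpotPrice": str(bid),
        "InstanceCount": 1,
        "BlockDurationMinutes": hours * 60,
        "LaunchSpecification": {
            "ImageId": ami_id,
            "InstanceType": instance_type,
        },
    }

def request_spot_block(ami_id, instance_type, bid, hours):
    """Submit the request (sketch; values are placeholders)."""
    import boto3  # requires AWS credentials when actually run
    ec2 = boto3.client("ec2")
    return ec2.request_spot_instances(
        **spot_block_request(ami_id, instance_type, bid, hours))
```

For example, a 3-hour batch job that must not be interrupted mid-run would request a 180-minute block rather than a plain Spot instance.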
18. Spot rules
• Markets where the price of compute changes based on supply and demand
• You'll never pay more than your bid
[Chart: bids at 25%, 50%, and 75% of the On-Demand price; each bidder pays the market price, an 87% discount in this example]
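Before bidding, you can inspect the market with `describe_spot_price_history`. The bid-fraction rule of thumb below is an illustration, not AWS guidance:

```python
def max_bid(on_demand_price, fraction=0.5):
    """Rule of thumb: cap the bid at a fraction of On-Demand.
    You pay the market price, never more than this bid."""
    return round(on_demand_price * fraction, 4)

def current_spot_prices(instance_type, product="Linux/UNIX"):
    """Latest Spot market price per Availability Zone (sketch)."""
    import boto3  # requires AWS credentials when actually run
    ec2 = boto3.client("ec2")
    resp = ec2.describe_spot_price_history(
        InstanceTypes=[instance_type],
        ProductDescriptions=[product],
        MaxResults=20,
    )
    # History is newest-first; iterate oldest-first so the newest
    # price for each AZ is the one that sticks.
    prices = {}
    for h in reversed(resp["SpotPriceHistory"]):
        prices[h["AvailabilityZone"]] = float(h["SpotPrice"])
    return prices
```

Comparing the per-AZ market prices against your cap is how you pick where to place the request.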
21. Use a combination of all three!
1. Use Reserved Instances for known/steady-state workloads
2. Set up multiple Auto Scaling groups
3. Scale using Spot, On-Demand, or both
[Chart: capacity over 24 hours, layered from the bottom up as Reserved, On-Demand, and Spot]
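One 2016-era way to implement steps 2 and 3 is a separate Auto Scaling group whose launch configuration carries a `SpotPrice`, so that group scales out on Spot while Reserved Instances cover the base. Group names, AMI, and bid below are placeholders:

```python
def spot_launch_config(name, ami_id, instance_type, bid):
    """Launch configuration parameters; a SpotPrice makes the ASG launch Spot."""
    return {
        "LaunchConfigurationName": name,
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "SpotPrice": str(bid),
    }

def create_spot_scaling_group(ami_id="ami-12345678"):
    """Reserved covers the base; this extra group scales out on Spot (sketch)."""
    import boto3  # requires AWS credentials when actually run
    asg = boto3.client("autoscaling")
    asg.create_launch_configuration(
        **spot_launch_config("web-spot-lc", ami_id, "m4.large", 0.05))
    asg.create_auto_scaling_group(
        AutoScalingGroupName="web-spot-asg",
        LaunchConfigurationName="web-spot-lc",
        MinSize=0,
        MaxSize=10,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )
```

A sibling On-Demand group with the same AMI gives you the fallback layer when Spot capacity is reclaimed.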
29. Summary: Three-tier web app
[Chart: three-tier application servers across the day, mixing Reserved, On-Demand, and Spot]
Summary
Have a balanced meal! Across the three tiers, our meal consists of:
• Spot 13%
• On-Demand 11%
• Reserved 76%
Remember!
"No server is easier to manage than no server" - Werner Vogels, CTO, Amazon.com
30. Ubisoft uses AWS to develop and launch social games quickly
• Ubisoft is a Paris-based gaming company, and creator of popular gaming titles, including Assassin's Creed, Far Cry, and Just Dance
• Moving games to social and mobile platforms required capacity to scale fast; using a traditional environment would be an extensive and costly investment
• Using the AWS Cloud to optimize games at the application, caching, and data layers, improving the user experience
"By using the AWS cloud we were able to launch 10 social games within 18 months."
Lenin Gali, Senior Director, Ubisoft
32. The old way: Low utilization, high costs
Typical server utilization rates are low due to the need to deploy for peak needs. [Chart: utilization over time]
33. The old way: Managing utilization with grids
Higher grid utilization rates result in hidden costs: longer queue wait times and delayed results. [Chart: utilization over time]
38. The new way: In the cloud!
Optimizing for cost and business results
[Chart: 12 months of capacity layered as 3-year Reserved Instances, On-Demand, Spot block, and Spot]
39. The new way: In the cloud! Going a step further with Spot blocks!
[Chart: the same 12 months with Spot blocks layered alongside Spot, On-Demand, and 3-year Reserved Instances]
40. Accelerating transformation
"We constantly understate what our capabilities are to solve problems. The biggest constraint is never the constraint of time or money, it's generally the constraint of thought."
- Jeff Smith, CEO, Suncorp Business Services
Founded: 1996 • Employees: 15,000+ • Headquarters: Brisbane, Australia
45. Different purchasing options in a single company
• Data science
• New app development
• Test and development
• Internal IT
46. Let's recap
✓ Remember the pillars of optimization
✓ Right-sizing
✓ Increase elasticity (turn stuff off!)
✓ Measure, monitor, and improve
✓ Use tags to understand your services
✓ There are 3 core purchasing options: have a balanced meal
✓ Architect your workloads with performance and cost in mind
47. Summary
AWS is more cost-effective in both the short term and the long term than on-premises environments. By leveraging the EC2 purchase models, you gain the…
• Freedom to build unfettered
• Freedom to get real value from data
• Freedom to say yes
48. © 2016, Amazon Web Services, Inc. or its Affiliates. All rights reserved
aws.amazon.com/activate
Everything and Anything Startups Need to Get Started on AWS
Editor's Notes
Today we're here to discuss the AWS EC2 purchasing options and how to leverage them to get the highest availability and performance while minimizing costs. AWS offers three core purchasing options: On-Demand, Reserved, and Spot Instances. [read slide]. Each purchasing model launches the same underlying EC2 instance.
There are four consistent, heavy-hitting drivers of cost optimization:
- Instance right sizing
- Purchasing reserved or dedicated instances, and using Spot
- Increasing instance elasticity
When combined with the ability to measure and monitor, these make up the four pillars of cost optimization.
Today we're focusing just on the purchasing options, but the remaining three are important, so let's do a quick refresher.
AWS offers roughly 60 instance types and sizes (you saw just a few of the available families earlier), which is great for customers because it allows users to select the best-fit instance for their workload. It can also be a bit daunting, knowing where to start and what the best instance is from a cost perspective, not just a technical one.
We define right sizing as choosing the cheapest instance that meets performance requirements.
It's the process of looking at deployed resources for opportunities to downsize when possible.
Testing is cheap: you can easily provision any type and size of instance to test your application on, so use this advantage (you can test using Spot to do it even cheaper!).
Do keep in mind when we talk about scaling weâve largely focused here on scaling down. When running inside your own environment you need to guestimate peak demand then provision for that. Within the cloud you no longer need to guess, and should only run at peak during your peak demand periods. What this results in when we talk about scalability is returning capacity you no longer need and not paying for it.
Elasticity is using an instance when you need it, but turning it off when you donât. Itâs one of the most central tenants of the cloud, but often times, we see customers go through a learning process for figuring out how to operationalize this in order to drive cost savings. So lets start with a very simple example â if your developers only work from 8-5, turn the instances off from 5-8! This can save you over 50% with almost no technical consideration needed!
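The arithmetic behind that "over 50%" claim is easy to sketch. Here's a back-of-the-envelope calculation using an assumed working schedule and a made-up hourly rate (not an actual AWS price):

```python
# Savings from turning dev instances off outside working hours
# (8am-5pm, weekdays). The schedule and the hourly rate are
# assumptions for illustration only.

HOURS_PER_WEEK = 24 * 7   # 168 hours if left running all week
ON_HOURS = 9 * 5          # 8am-5pm, Monday-Friday = 45 hours
hourly_rate = 0.10        # hypothetical On-Demand rate in USD

always_on_cost = HOURS_PER_WEEK * hourly_rate
scheduled_cost = ON_HOURS * hourly_rate
savings = 1 - scheduled_cost / always_on_cost

print(f"Weekly cost always-on: ${always_on_cost:.2f}")
print(f"Weekly cost scheduled: ${scheduled_cost:.2f}")
print(f"Savings: {savings:.0%}")  # ~73%, comfortably "over 50%"
```

Running only 45 of 168 hours per week cuts the bill by roughly 73%, and since the rate cancels out, the percentage holds for any instance price.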
As a slightly more complicated example for production workloads, getting more precise and granular with Auto Scaling is going to help ensure that you're able to take advantage of horizontal scaling to meet peak capacity needs, while not paying for peak capacity all the time.
In almost any environment you must measure, monitor, and seek to improve performance. The beautiful thing about leveraging the cloud is that as you identify opportunities to improve (right-sizing, scaling down, etc.) you can immediately capitalize on them to improve performance, availability, and/or cost.
Tagging is essential to provide information about what resources are being used, by whom, and for what purpose. You can use tags, for example, to identify always-on non-prod instances and scale them down, or move them to Spot! AWS even provides recommendations through Trusted Advisor, and there are a number of partners with good tooling here as well (Cloudability, CloudCheckr, Cloudyn, and CloudHealth being a few of them).
You can react and reap rewards rapidly because of the AWS pricing principles. Our pricing principles are designed around you, the customer; understanding and using them will give you confidence that AWS has your workload and needs covered. Since its inception AWS has believed that to innovate you need to reduce the barrier to entry and the cost of failure. EC2 enables this with no upfront investment and pay-as-you-go pricing, and if an innovation is successful you can reserve to pay less. And as we continue to innovate on behalf of our customers, we pass those savings on in the form of over 50 price reductions since 2006.
Note to speaker:
See case study online:
http://aws.amazon.com/solutions/case-studies/novartis/
Scientists at Novartis had identified a target molecule and needed to screen 10 million compounds against it in a computational model
Existing infrastructure was not available
New infrastructure would have cost approximately $40 million to build
Novartis built a virtual high-performance computing data center in the cloud to run the experiments
Dramatically reduced time to science: able to receive an answer in a fraction of the time at a fraction of the cost
Ok now into the meat of the discussion!
Users that want the low cost and flexibility of Amazon EC2 without any up-front payment or long-term commitment
Applications being developed or tested on Amazon EC2 for the first time
Applications with short term, spiky, or unpredictable workloads that cannot be interrupted
Applications with steady state or predictable usage
Applications that require reserved capacity
Users able to make upfront payments to reduce their total computing costs even further
Sept 2016:
Regional Benefit: Many customers have told us that the reserved pricing discount is more important than the capacity reservation, and that they would be willing to trade it for increased flexibility. You can choose to waive the capacity reservation associated with a Standard RI, run your instance in any AZ in the Region, and have your RI discount automatically applied.
This means that you no longer have to worry about AZ alignment between your RIs and instances. The regional benefit is offered without a capacity reservation, since the selection of an AZ is required to reserve capacity. If you need a capacity reservation, the option to reserve capacity and apply your RI to a specific AZ is still available.
Standard RI, the Reservation Benefit: this benefit only exists for the account that the RI "lives" in, as it is based on the specific account and that account's AZ mapping to a specific physical location.
The Regional Benefit: this benefit allows you to run instances in any AZ in an AWS Region, and AWS will apply the reservation pricing benefit to your instances automatically.
You can save up to 75% off the On-Demand rate. You can choose between three payment options when you purchase a Standard Reserved Instance. With the All Upfront option, you pay for the entire Reserved Instance with one upfront payment. This option provides you with the largest discount compared to On-Demand Instance pricing. With the Partial Upfront option, you make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term. The No Upfront option does not require any upfront payment and provides a discounted hourly rate for the duration of the term. And the longer you commit, the more you can save.
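To make the trade-off concrete, here's a sketch comparing the effective hourly cost of the three payment options against On-Demand over a 1-year term. Every price below is invented for illustration; check the EC2 pricing page for real numbers:

```python
# Effective hourly cost of hypothetical Standard RI payment options
# over a 1-year term. All dollar figures are made-up illustrations,
# not actual AWS prices.

TERM_HOURS = 365 * 24  # 8,760 hours in a 1-year term

on_demand_hourly = 0.100
options = {
    # option name: (upfront payment, discounted hourly rate)
    "All Upfront":     (500.0, 0.000),
    "Partial Upfront": (260.0, 0.030),
    "No Upfront":      (0.0,   0.064),
}

for name, (upfront, hourly) in options.items():
    # amortize the upfront payment across every hour of the term
    effective = upfront / TERM_HOURS + hourly
    discount = 1 - effective / on_demand_hourly
    print(f"{name:>15}: ${effective:.4f}/hr ({discount:.0%} off On-Demand)")
```

With these example numbers the ordering matches the talk: All Upfront is cheapest, Partial Upfront next, No Upfront last, and all three beat On-Demand.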
When you define an Availability Zone, you also get the benefit of capacity reservation. This can provide you additional confidence in your ability to launch the number of instances you have reserved when you need them.
Now if your needs change after you have purchased a Reserved Instance, you can request to move your Reserved Instance to another Availability Zone within the same region, change its network platform, or, for Linux/UNIX RIs, modify the instance type of your reservation to another type in the same instance family, at no additional cost. The other option is to sell your unneeded RIs on the RI Marketplace!
Now with a Standard RI, you've defined the instance family, OS, tenancy, and payment option you want to use to get the discounted price. Deciding to lock in for a 3-year commitment with today's technology can be a hard decision.
As technology improves, we want to ensure customers are able to benefit from improvements in CPU, SSD, and other components. We do this through our current-generation instance types and families. From 2007 to today, we have grown our offering from 3 to over 50 instance types, and moved from 1 to 7 family types. Today's families are:
General-purpose: M1, M3, M4, T2
Compute-optimized: C1, CC2, C3, C4
Memory-optimized: M2, CR1, R3, X1
Dense-storage: HS1, D2
I/O-optimized: HI1, I2
GPU: CG1, G2
Micro: T1, T2
And more importantly, we've had over 50 price changes since 2006, passing the benefits on to our customers.
And customers of RIs can also benefit from a new RI offering… <next slide>
----------------------------- Background ---------------
Here are some highlights of things we've launched in the past few months:
D2 is our newest instance. It is powered by 2.4 GHz Intel Xeon E5-2676 v3 (Haswell) processors and improves on HS1 instances by providing additional compute power, more memory, additional instance sizes, and Enhanced Networking. D2 instances are available in four instance sizes, with 6 TB, 12 TB, 24 TB, and 48 TB storage options.
C4: 2.9 GHz Intel Xeon E5-2666 v3 (Haswell, AVX2) CPUs
T2 is our lowest-cost general purpose instance at $0.013 per hour On-Demand; that's around $9.50 per month, and can drop to $4.20 per month with 3-year RIs
R3 is our current generation memory optimized instance with enhanced networking and memory up to 244 GiB
I2 is the current generation storage optimized instance offering up to 350,000+ random read IOPS (4k)
G2 with NVIDIA GK104 GPUs
Convertible RIs are a new offering that provides the flexibility to trade up for your benefit. <Read Slide>
Let me give you some examples of how you can convert your RIs. With Convertible Reserved Instances you can:
Convert to a new instance family e.g. R3 to C3 to T2 to M4
Convert to a new instance price e.g. if AWS reduces the public rate of your instances
Convert to a new operating system e.g. Windows to Linux
Convert tenancy, e.g. from Dedicated Instances to default (shared) tenancy
Convert to a higher discount tier e.g. No Upfront to Partial Upfront
Leverage the new Regional benefit
Convertible Reserved Instances provide a discount (up to 45% off On-Demand) and the capability to change the attributes of the Reserved Instance as long as the exchange results in the creation of Reserved Instances of equal or greater value. Like Standard Reserved Instances, Convertible Reserved Instances are best suited for steady-state usage.
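The "equal or greater value" rule can be sketched as a simple calculation: how many of the target RIs do you need so the exchange doesn't lose value? The dollar figures below are hypothetical, and real exchanges are handled by AWS with a true-up payment for any difference:

```python
import math

# Sketch of the Convertible RI exchange rule: an exchange must result
# in Reserved Instances of equal or greater value. Values here are
# hypothetical list prices, not real AWS figures.

def exchange_allowed(current_value, target_unit_value):
    """Number of target RIs needed so the exchange is of equal or
    greater value (rounding up; you may pay a true-up for the gap)."""
    return math.ceil(current_value / target_unit_value)

# e.g. trading $1,000 of R3 reservations for target RIs worth $300 each
needed = exchange_allowed(1000.0, 300.0)
print(needed)  # 4 target RIs: 4 * $300 = $1,200 >= $1,000
```

The rounding-up is the whole rule in miniature: you can never exchange down to something worth less than what you hold.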
Applications that have flexible start and end times
Experiments that can only be conducted at very low compute prices (Brookhaven and Fermi, analyzing the origins of our universe), or businesses that need extremely low infrastructure costs to achieve profitability, such as adtech.
Users with urgent computing needs or large amounts of additional capacity
Spot Instances provide the ability for customers to purchase compute capacity with no upfront commitment and at hourly rates usually lower than the On-Demand rate, often as much as 90% cheaper. For those wondering what a 90% discount means: it is about 1c per core-hour. Ask yourself what your best people could do, or how well your application could perform, with a 10,000-core data center that costs just $100 per hour.
So the Spot rules are actually pretty simple. There is a market-determined pricing mechanism that is often as much as 90% off the On-Demand price. You never pay more than your bid; in fact, you'll often pay significantly less than your bid! Should the market price exceed your bid, we give you 2 minutes to wrap up your work. Here is a quick example of the impact of bidding on interruptions and price.
At a bid of 25% of On-Demand, you kept your instance for almost 7 days, being interrupted during a few short periods. You only paid the market price, which averaged 86% off, just under 20c per hour during the last week, only 14% of the OD price.
At 50%, you would have been interrupted just once, for a very short period of time during the sixth day. Your average discount during the week is 85%, just 21c per hour, paying just 15% of OD.
At 75%, you would not have been interrupted once, achieving an average discount of 85%, just 21c an hour, again paying just 15% of OD.
So a simple tip for getting started with Spot is to bid the On-Demand price; you'll still only pay the market rate, often just 10% of the On-Demand price! If you're using Spot Fleet, it will automagically handle the re-provisioning of your capacity!
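The bid mechanics described above can be shown with a toy simulation: you pay the market price, never your bid, and you're interrupted whenever the market moves above your bid. The market prices below are invented for illustration and don't reflect any real Spot market:

```python
# Toy simulation of Spot bidding: you pay the hourly market price
# (never more than your bid) and lose the instance when the market
# exceeds your bid. All prices are invented for illustration.

on_demand = 1.00
market_prices = [0.10, 0.12, 0.15, 0.30, 0.55, 0.14, 0.11]  # per "day"

def run_with_bid(bid):
    paid, interruptions = [], 0
    for price in market_prices:
        if price > bid:
            interruptions += 1   # 2-minute warning, instance reclaimed
        else:
            paid.append(price)   # you pay the market price, not your bid
    return sum(paid) / len(paid), interruptions

for bid_frac in (0.25, 0.50, 0.75):
    avg, hits = run_with_bid(bid_frac * on_demand)
    print(f"bid {bid_frac:.0%} of OD: avg ${avg:.2f}/hr, {hits} interruption(s)")
```

Even in this toy model the talk's pattern falls out: raising the bid reduces interruptions, while the average price you actually pay barely moves, because you only ever pay the market rate.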
Picking just one EC2 purchasing model is the wrong way to ask the question.
[click]
It's like picking a single type of food and eating only that for the rest of your life. You should have a balanced meal!
You might be wondering, well, which is which? Ask yourself which is your favorite; that is Spot! Pause for laugh.
So having a balanced meal means:
Use Reserved Instances for known/steady-state workloads
Set-up multiple Scaling groups
Scale using Spot, On-Demand or both
Most web applications seek to optimize on three angles, often in this order of importance:
High Availability. In the modern world we expect services to be available when we need them. No excuses.
Performance. An old Amazon.com piece of research found a 1 second performance delay could reduce revenue $1.6B per annum!
Finally, having satisfied availability and performance we aim to achieve both at the lowest possible cost.
Just to explain the core concepts of what is going on in this picture: HTTP requests are first handled by the ELB, which automatically distributes incoming application traffic among multiple Amazon EC2 instances across multiple AZs. Web servers and applications are deployed on Amazon EC2 instances. Most organizations will select an AMI and then customize it to their needs, creating a custom AMI or using technology such as Docker, Chef, or Puppet to deploy their software on the instance. These EC2 instances are deployed into an Auto Scaling group that automatically adjusts your capacity up or down according to conditions you define. To provide high availability, the relational database that contains application data is hosted redundantly in a Multi-AZ deployment using Amazon RDS.
To help you manage your instances, images, and other Amazon EC2 resources, you can assign your own metadata to each resource in the form of tags
Tagging accurately enables us to understand the different components of our compute resources and therefore how best to optimize. It also allows us to report costs back to the business. You can have up to 10 tags per EC2 instance!
Tagging instances enables us to turn a random sequence of numbers into actionable information. This is how we get a clear understanding of which are web servers, app servers, and database servers. We can take it a step further and know whether they're test, QA, or prod nodes. Because we can add 10 tags, we might use one of them to define an internal cost center, and one for the individual user launching capacity. There are many options, but tagging is core.
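A tagging convention like that can be sketched in a few lines. The tag names, values, and instance ID below are hypothetical, the 10-tag limit is the one quoted in the talk (the current limit may differ), and the actual boto3 call is left commented out so the sketch runs without AWS credentials:

```python
# Minimal sketch of a tagging convention, validated locally before
# the (commented-out) ec2.create_tags call. Tag names, values, and
# the instance ID are hypothetical; the 10-tag limit is as quoted
# in the talk.

MAX_TAGS = 10

def build_tags(**kv):
    tags = [{"Key": k, "Value": v} for k, v in kv.items()]
    if len(tags) > MAX_TAGS:
        raise ValueError(f"at most {MAX_TAGS} tags per instance")
    return tags

tags = build_tags(
    Tier="web",          # web / app / db
    Environment="prod",  # prod / qa / test
    CostCenter="1234",   # report costs back to the business
    Owner="jane",        # the individual user launching capacity
    Service="website",   # the tag shared across all three tiers
)

# import boto3
# boto3.client("ec2").create_tags(
#     Resources=["i-0123456789abcdef0"], Tags=tags)
print(len(tags))
```

Validating locally keeps a bad convention from ever reaching the API, and the same dict structure is what `create_tags` expects.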
Because we're smart and have used tagging, now we can dive into how to optimize based on the nature of the application, because how we optimize the web tier might be different from the app tier, and is almost certainly different from the database tier! Let's dig in.
In this well-designed application the web tier is entirely stateless, enabling a highly available deployment where we're treating EC2 instances as cattle, not pets: any individual server coming in or out of service does not have any negative impact on customer experience. It enables a fast and powerful response to customer demand; no longer do we need to "guesstimate" peak load and deploy capacity to meet it. It also allows us to make aggressive use of Spot Instances while achieving these goals, delivering an incredibly low TCO.
Here we're assuming the app tier is scalable, but "stateful". Even in some well-designed applications there may be a reason to treat the application tier as stateful, e.g. if you're keeping a local cache of results to improve responsiveness for recurring/common requests. In this example we're going to run a higher percentage of Reserved Instances, and largely use On-Demand instances to deal with peak traffic during the day. However, in a pattern where we have short-term peaks, there is an opportunity to use Spot block-duration instances, which come at a 30-50% discount over On-Demand without the risk of interruption for up to 6 hours.
I'm the Spot guy, so I'm only going to say this once. You do not run your databases on Spot. You run your databases using Reserved Instances.
However, there is still an opportunity to leverage the dynamic nature of Spot. What we often see customers struggling with on-premises is managing the reporting process, whether that be daily, weekly, or monthly. Unfortunately, we often see customers having to run an oversized database just to deal with reporting needs while ensuring it doesn't impact customer performance (meaning having to run reports at odd hours, delaying ad hoc reporting, or accepting degraded customer performance). Because there is no long-term commitment with On-Demand, you can simply take a backup/snapshot of your database, spin it up on independent infrastructure as you need it, and turn it back off when you're done.
While we've treated each tier separately, we are likely using one of our tags common across all 3. This is our "website" tag. As we drill down on the website tag we recognize we're having a balanced meal! Across the three tiers our meal consists of:
Spot 13%
On-Demand 11%
Reserved 76%
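Plugging that mix into some hypothetical discount rates shows why the balanced meal pays off. The On-Demand rate and the per-option discounts below are assumptions for illustration, not actual AWS prices:

```python
# Blended hourly cost of the 13/11/76 "balanced meal" mix above,
# using hypothetical discounts (Spot ~85% off, RI ~60% off) against
# an assumed $1.00/hr On-Demand rate.

on_demand = 1.00
mix = {
    # purchasing option: (share of fleet, effective hourly rate)
    "Spot":      (0.13, on_demand * 0.15),
    "On-Demand": (0.11, on_demand * 1.00),
    "Reserved":  (0.76, on_demand * 0.40),
}

blended = sum(share * rate for share, rate in mix.values())
savings = 1 - blended / on_demand
print(f"Blended rate: ${blended:.3f}/hr "
      f"({savings:.0%} cheaper than all On-Demand)")
```

With these example numbers the blended fleet runs at well under half the all-On-Demand cost, even though only a small slice of it is Spot.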
Remember, there may be an opportunity to not run servers at all. The easiest server to manage is no server at all! Whether that be moving simple functions/cron jobs to Lambda, removing the need for queuing servers via SQS, or using SES instead of running mail servers, the list goes on. Ask yourself: is this something we want to manage and maintain? Is there a service for this?
"Invention requires two things: 1. The ability to try a lot of experiments, and 2. not having to live with the collateral damage of failed experiments." (Andy Jassy)
Zynga chat.
Grid is broad. Grid computing is the collection of computer resources from multiple locations to reach a common goal. You can think of these same principles as applying to many different types of batch processing, from schedulers through to Hadoop and/or Spark.
There are many different patterns around grid, but a few commonalities. You're going to have files, whether it's lots of small files, a few large ones, or some combination, stored either in object storage (such as S3) if the application can use HTTP, or on a service acting as a FUSE layer if the application requires a POSIX-style file system (such as EFS or Lustre). Your grid of computers, EC2, will then process that data using local scratch (many of our instances have locally attached ephemeral storage, or you can attach EBS volumes!) or the file systems I just mentioned for temp files. Finally, when done, ideally you're putting output files into S3, where there are 11 9s of durability and you can easily grant access to internal or external users in a secure and scalable manner.
In order to keep up with the demand for computing, you're going to be buying computers ahead of demand. Naturally this creates low utilization and high costs. So what you see here on the screen is not what most users actually experience. How do you calculate ROI on an experiment with unknown outcomes?
For those of you who've read The Innovator's Dilemma this might hit home: investing in an idea with a completely unknown ROI. Internal IT starts servicing only the large, profitable business, neglecting new opportunities to innovate, improve, or deliver entirely new services. Therefore what we often see is…
This is the reality. The vast majority, if not all, internal users will be constrained: not delivering compute dynamically based on demand, but rather ensuring high levels of resource utilization. Can you imagine being the PhD at Novartis who has a wild idea to cure cancer, but you don't know if it will work, and you need to make the case to invest in a $40M data center before working out if it's viable?
There is a hidden cost here, but first: could we do this in the cloud?
Well, of course you can do the same thing in the cloud. Buy 3-year Reserved Instances and save up to 75% while ensuring you have high utilization. But as I've already mentioned, there are some significant hidden costs here.
Yay, we're crushing it! Using our data center close to 100% of the time!
Create a new grid with the blue line up the top and the red line banging against it.
Conflicting goals
Grid users seek fastest possible time-to-results
Grid workloads are not steady-state
IT support team seeks highest possible utilization
Result:
The job queue becomes the capacity buffer
Job completion times are hard to predict
Users are frustrated and run fewer jobs
Innovation is throttled by IT resources
Innovation is no longer throttled by IT resources. Your base use is covered by Reserved Instances to service the mature (often profitable) parts of the business, leveraging On-Demand for the workloads that are not fault tolerant, then enabling Spot access for all those crying out for resources. You'd be surprised how high the ROI is when an engineer makes their code fault tolerant because it means getting access to a 10,000-core data center for just $100 per hour.
You might take this a step further and identify the jobs using OD that live for <=6 hours. Those can immediately move to Spot block to save even more!
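That last step, finding the On-Demand jobs that fit inside a Spot block, is just a filter on job duration. The job names and run times below are hypothetical:

```python
# Sketch of the tip above: jobs that run for six hours or less can
# move from On-Demand to Spot blocks (defined-duration Spot, no
# interruption risk for up to 6 hours). Job durations are invented.

jobs = {  # job name: typical run time in hours
    "nightly-etl": 3,
    "weekly-report": 5,
    "genome-batch": 14,
    "render": 6,
}

SPOT_BLOCK_MAX_HOURS = 6
spot_block = [name for name, hrs in jobs.items()
              if hrs <= SPOT_BLOCK_MAX_HOURS]
on_demand = [name for name, hrs in jobs.items()
             if hrs > SPOT_BLOCK_MAX_HOURS]

print("Move to Spot block:", sorted(spot_block))
print("Keep on On-Demand:", sorted(on_demand))
```

In a real environment the durations would come from your job scheduler's history rather than a hand-written dict, but the classification rule is the same.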
The flexibility to operate the way your business operates. You're no longer forcing a business to work in a capital-intensive way; you're responding to the demand of your users and again removing the impediment to innovation. As Andy Jassy, the CEO of AWS, has said: "Invention requires two things: 1. The ability to try a lot of experiments, and 2. not having to live with the collateral damage of failed experiments."
I love this quote because it speaks to how the cloud's flexibility enables freedom of thought to transform into action. When leveraging the cloud effectively you're no longer constrained by your infrastructure investments, and you don't need to question your ability to solve the hardest challenges. When using the cloud effectively you're generally only constrained by your thoughts.
Suncorp Group is a multi-billion dollar, diversified Australian financial services company and runs a complex and expensive IT environment to support 14 brands and 4 lines of business in 5 countries. The organization embraces a culture of innovation and takes pride in their talent, what they consider their primary business enabler and competitive advantage.
I call adtech an inherently web-scale business because that's exactly what these companies are doing: real-time bidding, dynamically buying and selling ads per impression on the internet. I'll give you a little background here. A typical transaction begins with a user visiting a website. This triggers a bid request, which can include various pieces of data, to an ad exchange, where multiple advertisers automatically submit bids in real time to place their ads. The impression goes to the highest bidder and their ad is served on the page. Typically this whole process needs to be done within 100 milliseconds from the moment the ad exchange received the request. Many adtech companies are doing this process over 50 billion times a day. Adtech companies scale up and down based on demand and have therefore often designed highly resilient architectures that can seamlessly hide individual instance failure. Because each transaction must be processed within 100 ms, the Spot 2-minute warning is a lifetime for these companies, which is why we see such a high percentage.
The exact inverse is what we might see in an enterprise SaaS company that long ago built a monolithic application. As the user interface and data access are combined into a single program on a single platform, it is not easy to mask instance failure and scale dynamically during the day. This means deployments tend to be very static, which lends the model very well to purchasing Reserved Instances. Even better, a deployment might be directly tied to a customer contract, giving easy predictability to purchase 3-year Reserved Instances and save up to 75%! Of course we still see some On-Demand use, likely internal dev/test or PoCs with new customers!
When we look at a traditional enterprise's journey to the cloud, it often begins as teams/businesses individually begin leveraging the cloud to rapidly prototype and deliver new technologies or revenue streams to their business. For the same reasons as my earlier quote from Andy Jassy (innovation requires lots of experimentation and not having to deal with the fallout of the failed experiments), On-Demand is where they begin. As these enterprises begin thinking beyond a constrained environment, pushing the boundaries of innovation, they will find themselves leveraging compute capacity from the cloud on demand. For those "experiments" that prove themselves successful, the enterprises reserve capacity and get up to 75% discounts. As they recognize the flexibility and agility the cloud delivers, the enterprise will migrate many of those monolithic applications we've just spoken about to the cloud, and similarly, after they reach steady state, reserve.
If we look at a specific type of enterprise, a gaming enterprise for example, the patterns become even clearer. Enterprise game companies are in the business of creating, building, and launching fantastic experiences for gamers. The compute pattern leading up to and through a launch is very predictable; in particular, the public beta and launch are clear here. However, despite knowing when you're going to need capacity, you don't necessarily know how much or for how long. What we can see here is a company with many successful games (and no doubt standard backend company infrastructure) that launches a beta with a small increase, and then a huge launch! But we all know the peaks of launch often don't last long term, so they come off a peak and increase their reserved capacity. As the Spot guy, I have to assume this gaming company doesn't manage a lot of their own analytics, since they're not using Spot a lot!
To round this out we have an entirely different type of company, a company focused on pushing the boundaries of scientific research. This graph is actually a little misleading; one might assume they don't use a lot of compute over a year. However, that is just because when doing computational science, the multiple of base usage can be huge! What is already a lot of base capacity, and running on-premises might be the limiting factor on experimentation or innovation, may be less than 1/20th of what's needed during those weeks you push the boundaries of science to discover.
Finally, what about a large technology company? The type of very large, multi-business-unit organization with tens of thousands of employees. They've been running their common core business applications such as HR, SAP, SharePoint, CRMs, Exchange, etc. for years, leveraging Reserved Instances. They've been testing new versions of the environment using On-Demand servers. However, they're also building new lines of business based on IoT, discovering decision-making information from their data via new big data platforms, and building the next generation of their platform via electronic design automation, leveraging On-Demand and Spot Instances.
And that's exactly what we see within every large-scale organization. A single company can have very different usage patterns across its units. Whether it be the data science or quant team discovering new opportunities to optimize the supply chain using 1c-per-core-hour compute via EC2 Spot Instances; teams building net-new applications trying to build the next million (or billion) dollar business unit, one that may have been costing just $100 a week before a new line takes off rapidly and capacity dynamically scales to meet customer demand; the traditional IT arm delivering internal services like HR systems, making heavy use of Reserved Instances; or finally test and development, as you go through the process of delivering new versions, making use of Spot, On-Demand, and Reserved Instances.
Remember the pillars of optimization
Right Sizing
Increase elasticity (turn stuff off!)
Measure, Monitor and Improve
Use tags to understand your services. You don't need to hire people to do this.
There are 3 core purchasing options: have a balanced meal!
Architect your workloads with performance and cost in mind
As you move away from monolithic applications to service-oriented architectures, you'll find yourself with more opportunities to use Spot.
AWS is more cost-effective than on-premises environments in both the short term and the long term, and leveraging the EC2 purchase models enables:
Freedom to build unfettered
Freedom to get real value from data
Freedom to say Yes