1. Accelerating Time to Science: Transforming Research in the Cloud
Jamie Kinney - @jamiekinney
Director of Scientific Computing, a.k.a. “SciCo” – Amazon Web Services
2. Why Do Researchers Love AWS?
• Time to Science: access research infrastructure in minutes
• Low Cost: pay-as-you-go pricing
• Elastic: easily add or remove capacity
• Globally Accessible: easily collaborate with researchers around the world
• Secure: a collection of tools to protect data and privacy
• Scalable: access to effectively limitless capacity
3. Why does Amazon care about Scientific Computing?
• In order to fundamentally accelerate the pace of scientific discovery
• It is a great application of AWS with a broad customer base
• The scientific community helps us innovate on behalf of all customers:
– Streaming data processing & analytics
– Exabyte-scale data management solutions and exaflop-scale compute
– Collaborative research tools and techniques
– New AWS regions
– Significant advances in low-power compute, storage and data centers
– Identify efficiencies which will lower our costs and therefore reduce pricing for all AWS customers
4. How is AWS Used for Scientific Computing?
• High Throughput Computing (HTC) for Data-Intensive Analytics
• High Performance Computing (HPC) for Engineering and Simulation
• Collaborative Research Environments
• Hybrid Supercomputing Centers
• Science-as-a-Service
• Citizen Science
5. Research Grants
AWS provides free usage credits to help researchers:
• Teach advanced courses
• Explore new projects
• Create resources for the scientific community
aws.amazon.com/grants
9. Public Data Sets
AWS hosts “gold standard” reference data at our expense in order to catalyze rapid innovation and increased AWS adoption.
A few examples:
• 1,000 Genomes ~250 TB
• Common Crawl
• OpenStreetMap
Actively developing:
• Cancer Genomics Data Sets ~2–6 PB
• SKA Precursor Data 1 PB+
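A minimal sketch of how these data sets are consumed: public data sets allow anonymous reads from S3, so no AWS account credentials are needed. The bucket name "1000genomes" is the published home of the 1,000 Genomes data; treat it as an assumption if the data has since moved.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Unsigned (anonymous) client: public data sets permit reads without credentials.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List a handful of objects to confirm access; "1000genomes" is assumed
# to still be the bucket hosting the 1,000 Genomes data set.
resp = s3.list_objects_v2(Bucket="1000genomes", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```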
11. Peering with all global research networks
Image courtesy John Hover - Brookhaven National Lab
12. AWS Egress Waiver for Research & Education
Timeline:
• 2013: Initial trial in Australia for users connecting via AARNET and AAPT
• 2014: Extended the waiver to include ESnet and Internet2
• 2015: Extending support to other major NRENs
Terms:
• AWS waives egress fees up to 15% of the total AWS bill; customers are responsible for anything above this amount
• The majority of traffic must transit via an NREN with no transit costs
• The 15% waiver applies to aggregate usage when consolidated billing is used
• Does not apply to workloads for which egress is the service we are providing (e.g. live video streaming, MOOCs, web hosting, etc.)
• Available regardless of AWS procurement method (i.e. direct purchase or Internet2 Net+)
Contact us if you would like to sign up!
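A small worked example of the 15% cap, using hypothetical bill figures:

```python
# Illustrative arithmetic only; the numbers are made up.
def waived_egress(total_bill: float, egress_charges: float) -> float:
    """Egress fees are waived up to 15% of the total AWS bill."""
    cap = 0.15 * total_bill
    return min(egress_charges, cap)

# A $10,000 monthly bill caps the waiver at $1,500; with $2,000 of egress
# charges, $1,500 is waived and the customer pays the remaining $500.
print(waived_egress(10_000, 2_000))  # -> 1500.0
```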
14. Data-Intensive Computing
The Square Kilometre Array will link 250,000 radio telescopes together, creating the world’s most sensitive telescope. The SKA will generate zettabytes of raw data, publishing exabytes annually over 30–40 years.
Researchers are using AWS to develop and test:
• Data processing pipelines
• Image visualization tools
• Exabyte-scale research data management
• Collaborative research environments
www.skatelescope.org/ska-aws-astrocompute-call-for-proposals/
15. Astrocompute in the Cloud Program
• AWS is adding 1 PB of SKA precursor data to the Amazon Public Data Sets program
• We are also providing $500K in AWS Research Grants for the SKA to direct towards projects focused on:
– High-throughput data analysis
– Image analysis algorithms
– Data mining discoveries (i.e. ML, CV and data compression)
– Exascale data management techniques
– Collaborative research enablement
https://www.skatelescope.org/ska-aws-astrocompute-call-for-proposals/
16. Schrodinger & Cycle Computing:
Computational Chemistry for Better Solar Power
Simulation by Mark Thompson of the
University of Southern California to see
which of 205,000 organic compounds
could be used for photovoltaic cells for
solar panel material.
Estimated computation time 264 years
completed in 18 hours.
• 156,314 core cluster, 8 regions
• 1.21 petaFLOPS (Rpeak)
• $33,000 or 16¢ per molecule
Loosely
Coupled
18. Region
• Geographic area where AWS services are available
• Customers choose region(s) for their AWS resources
• Eleven regions worldwide
[Diagram: a region composed of multiple Availability Zones connected through transit centers]
19. Availability Zone (AZ)
• Each region has multiple, isolated locations known as Availability Zones
• Low-latency links between AZs in a region: <2 ms, usually <1 ms
• When launching an EC2 instance, a customer chooses an AZ
• Private AWS fiber links interconnect all major regions
[Diagram: a region with three Availability Zones, each containing EC2 instances]
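Regions and AZs are discoverable programmatically. A minimal boto3 sketch of listing the regions available to an account and the AZs within one region (the region name is just an example):

```python
import boto3

# Bind the client to one region; "us-east-1" is an arbitrary example.
ec2 = boto3.client("ec2", region_name="us-east-1")

# All regions currently available to the account.
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
print(regions)

# The AZs of the region the client is bound to; every EC2 instance is
# launched into exactly one of these.
for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])
```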
22. Virtual Private Cloud (VPC)
• Logically isolated section of the AWS cloud; a virtual network defined by the customer
• When launching instances and other resources, customers place them in a VPC
• All new customers have a default VPC
[Diagram: a VPC spanning three Availability Zones within a region, containing EC2 instances]
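A sketch of locating the default VPC every new account receives and launching an instance into one of its subnets. The AMI ID and instance type are placeholders, not values from this deck:

```python
import boto3

ec2 = boto3.client("ec2")

# The account's default VPC is flagged with isDefault=true.
default_vpc = ec2.describe_vpcs(
    Filters=[{"Name": "isDefault", "Values": ["true"]}]
)["Vpcs"][0]

# Any subnet of that VPC can host the instance.
subnets = ec2.describe_subnets(
    Filters=[{"Name": "vpc-id", "Values": [default_vpc["VpcId"]]}]
)["Subnets"]

ec2.run_instances(
    ImageId="ami-12345678",            # placeholder AMI
    InstanceType="t2.micro",           # placeholder instance type
    MinCount=1,
    MaxCount=1,
    SubnetId=subnets[0]["SubnetId"],   # places the instance in the VPC
)
```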
24. What is Spot?
• Name your own price for EC2 compute:
– A market where the price of compute changes based upon supply and demand
– When the bid price exceeds the Spot market price, the instance is launched
– The instance is terminated (with a 2-minute warning) if the market price exceeds the bid price
• Where does capacity come from? Unused EC2 instances
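A minimal boto3 sketch of the bidding model described above: a one-time Spot request that launches when the bid exceeds the market price. The bid, AMI ID, and instance type are placeholders (and this classic request API reflects the era of the deck):

```python
import boto3

ec2 = boto3.client("ec2")

resp = ec2.request_spot_instances(
    SpotPrice="0.10",               # maximum bid, USD per instance-hour
    InstanceCount=1,
    Type="one-time",                # terminate (not resubmit) on interruption
    LaunchSpecification={
        "ImageId": "ami-12345678",  # placeholder AMI
        "InstanceType": "c3.large", # placeholder instance type
    },
)
# If the bid exceeds the current Spot market price, EC2 fulfills the request;
# if the market later rises above the bid, the instance gets a 2-minute warning.
print(resp["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```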
29. Spot allows customers to run workloads that they would likely not run anywhere else. But today, being successful on Spot requires a little additional effort.
30. Undifferentiated Heavy Lifting: The Spot Experience today
• Build stateless, distributed, scalable applications
• Choose which instance types fit your workload the best
• Ingest price feed data for AZs and regions
• Make runtime decisions on which Spot pools to launch in, based on price and volatility
• Manage interruptions
• Monitor and manage market prices across AZs and instance types
• Manage the capacity footprint in the fleet
• And all of this while you don’t know where the capacity is
• Serve your customers
31. Making Spot Fleet Requests
• Simply specify:
– Target Capacity: the number of EC2 instances that you want in your fleet
– Maximum Bid Price: the maximum bid price that you are willing to pay
– Launch Specifications: the number and types of instances, AMI ID, VPC, subnets or AZs, etc.
– IAM Fleet Role: the name of an IAM role; it must allow EC2 to terminate instances on your behalf
32. Spot Fleet
• Attempts to reach the desired target capacity given the choices provided
• Manages capacity even as Spot prices change
• Launches instances using the launch specifications provided
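The four fields named on slide 31 map directly onto the EC2 request_spot_fleet call. A minimal sketch; the role ARN, AMI, subnet IDs, and prices are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

resp = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "TargetCapacity": 5,       # desired number of instances in the fleet
        "SpotPrice": "0.10",       # maximum bid, USD per instance-hour
        # Role that lets EC2 terminate instances on your behalf (placeholder ARN).
        "IamFleetRole": "arn:aws:iam::123456789012:role/my-fleet-role",
        # Multiple specifications give the fleet several Spot pools to choose from.
        "LaunchSpecifications": [
            {"ImageId": "ami-12345678", "InstanceType": "c3.8xlarge",
             "SubnetId": "subnet-11111111"},
            {"ImageId": "ami-12345678", "InstanceType": "r3.8xlarge",
             "SubnetId": "subnet-22222222"},
        ],
    },
)
print(resp["SpotFleetRequestId"])
```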
36. The AWS storage portfolio
• Amazon S3 – Object storage: data presented as buckets of objects; data access via APIs over the Internet
• Amazon EFS – File storage (analogous to NAS): data presented as a file system; shared low-latency access from multiple EC2 instances
• Amazon Elastic Block Store – Block storage (analogous to SAN): data presented as disk volumes; lowest-latency access from single Amazon EC2 instances
• Amazon Glacier – Archival storage: data presented as vaults/archives of objects; lowest-cost storage, infrequent access via APIs over the Internet
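A sketch of the S3 model in the first row above: objects stored with metadata in buckets, read and written through API calls over the Internet. The bucket name and key are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Store an object (a piece of data plus metadata) under a key in a bucket.
s3.put_object(
    Bucket="my-research-bucket",           # placeholder bucket name
    Key="results/run-001.csv",
    Body=b"sample,value\nA,1\n",
    Metadata={"experiment": "run-001"},
)

# Retrieve it the same way, from anywhere with access to the API.
obj = s3.get_object(Bucket="my-research-bucket", Key="results/run-001.csv")
print(obj["Body"].read().decode())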
37. Amazon Elastic File System
• Fully managed file system for EC2 instances
• Provides standard file system semantics
• Works with standard operating system APIs
• Sharable across thousands of instances
• Elastically grows to petabyte scale
• Delivers performance for a wide variety of workloads
• Highly available and durable
• NFS v4–based
38. EFS is designed for a broad range of use cases, such as…
• Content repositories
• Development environments
• Home directories
• Big data
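A sketch of standing up EFS for instances: create a file system, then a mount target per AZ, after which instances mount it as an NFSv4 share. The token and subnet ID are placeholders:

```python
import boto3

efs = boto3.client("efs")

# CreationToken makes the call idempotent.
fs = efs.create_file_system(CreationToken="my-shared-fs")

# In practice, poll describe_file_systems until LifeCycleState is
# "available" before creating mount targets.
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-11111111",   # placeholder subnet; one target per AZ
)
# On an instance, the file system is then mounted like any NFSv4 share,
# e.g.: mount -t nfs4 <mount-target-ip>:/ /mnt/efs
```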
http://news.cnet.com/8301-1001_3-57611919-92/supercomputing-simulation-employs-156000-amazon-processor-cores
http://blog.cyclecomputing.com/2013/11/back-to-the-future-121-petaflopsrpeak-156000-core-cyclecloud-hpc-runs-264-years-of-materials-science.html
Computational compound analysis; solar panel material. Estimated computation time: 264 years.
[After first bullet point]: Each region is designed to be completely isolated from the other regions – this achieves the greatest possible fault tolerance and stability.
[After second bullet point]: So when you launch an EC2 instance, you select the region it should be in. Customers can select a region based on, e.g., latency requirements or legal requirements.
[After first bullet point]: Really a logical group of data centers.
[After third bullet point]: Note that some customers distribute their instances across multiple AZs to have extremely high fault tolerance.
We have 28 availability zones worldwide.
[At end]: Can launch resources in the default VPC or another one that you’ve created.
Points to touch upon:
- Different auction pools
- Around 50 instance types
- 28 availability zones
- Linux/Windows operating systems
Lots of capacity to pick from.
Be flexible to be successful with Spot:
Look across instance types. E.g., *.8xlarge may be cheaper than on-demand *.4xlarge.
Look across instance families. E.g., r3.8xlarge has the same vCPUs as c3.8xlarge but with more memory.
Look across AZs – may not be possible for all workloads.
Look across regions – may not be possible for all workloads.
- Why the price is sometimes higher than on-demand
And building upon the last slide: twice the memory of r3.2xlarge, way more cores, still cheaper than the on-demand price of r3.2xlarge, and pretty stable market conditions. Why aren’t customers leveraging this pool instead?
Customers have to ingest this data and make their decisions at launch time and then throughout the time that their workload is running:
- Which region/AZ to launch in
- Which instance(s) to launch
- Manage price throughout the time the workload is running
- Manage interruptions
- Build a lot of code to do all of this
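A minimal sketch of that "ingest the price feed and pick a pool" work: pull recent Spot prices for a few candidate pools and pick the cheapest (instance type, AZ) pair. The instance types are examples from these notes:

```python
import boto3
from datetime import datetime, timedelta

ec2 = boto3.client("ec2")

hist = ec2.describe_spot_price_history(
    InstanceTypes=["r3.2xlarge", "r3.8xlarge", "c3.8xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.utcnow() - timedelta(hours=1),
)["SpotPriceHistory"]

# Keep only the most recent quote per (instance type, AZ) pool.
latest = {}
for rec in sorted(hist, key=lambda r: r["Timestamp"]):
    latest[(rec["InstanceType"], rec["AvailabilityZone"])] = float(rec["SpotPrice"])

pool, price = min(latest.items(), key=lambda kv: kv[1])
print(f"Cheapest pool: {pool} at ${price}/hour")
```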
Launch a fleet of 5 instances.
Let me start by talking about our current storage offerings and how EFS enhances our set of offerings.
S3:
Object storage.
An object is a piece of data, like a document/image/video, that is stored with some metadata in a flat structure.
Provides that data to applications via APIs over the Internet.
Super simple to build, for example, a web application that delivers content to users by making API calls over the Internet.
EBS:
Block storage for EC2 instances.
Data presented to your instance as a disk volume.
Provides low, single-digit-millisecond latency access to single EC2 instances.
Very popular, for example, for boot volumes and DBs.
Glacier:
Archival storage.
Our lowest-cost storage offering, intended for data that’s infrequently accessed.
And now we have EFS:
File storage; data presented via a file system to instances.
When attached to an instance, it acts just like a local file system.
Can provide shared access to data to multiple EC2 instances, with low latencies.
So what are the features of EFS?
[Read each bullet point, then add to each]
Managed service – no hardware to maintain, no file software layer to maintain.
Standard file system semantics – you get what you’d expect from a file system: read-after-write consistency, the ability to have a hierarchical directory structure, file operations like appends, atomic renames, and the ability to write to a particular block in the middle of a file.
Standard OS APIs – EFS appears like any other file system to your operating system, so applications that leverage standard OS APIs to work with files will work with EFS.
Sharable – a common data source for applications and workloads across EC2 instances.
Elastically grows to PB scale – don’t specify a provisioned size upfront. You just create a file system, and it grows and shrinks automatically as you add and remove data.
Performance for a wide variety of workloads – SSD-based. Low latencies, high throughput, high IOPS.
Available/durable – replicates data across AZs within a region. This means that your files are highly available, accessible from multiple AZs, and also well protected from data loss.
NFSv4-based – NFS is a network protocol for interacting with file systems. In the background, when you connect an instance to EFS, the data is sent over the network via NFS. An open standard, widely adopted, and included on virtually all Linux distributions.
EFS is designed for a broad range of use cases – let me walk you through a few examples.
Content management systems store and serve information for a wide range of applications, e.g., online publications and reference libraries.
EFS: a durable, high-throughput backing store for these applications.
Development and build environments often have hundreds of nodes accessing a common set of source files, binaries, and other resources.
EFS: serves as the storage layer in these environments, providing a high level of IOPS to serve demanding development and test needs.
Many organizations provide storage for individual and team use. E.g., research scientists commonly share data sets that each scientist may perform ad hoc analyses on, and these scientists also store files in their own home directories.
EFS: an administrator can create a file system where parts of it are accessible to groups in an organization and parts are accessible to individuals.
Big Data workloads often require file system operations and read-after-write consistency, as well as the ability to scale to large amounts of storage and throughput.
EFS: file storage that EC2 clusters can work with directly for Big Data workloads.
The Docker daemon is a process that starts the Docker container on the instance. It is called by our ECS Agent to start, stop, and describe containers.
A Task Definition describes the components of your application, such as the Docker containers you want to run, what resources they will use, and how they are linked together.
A Container is the actual virtualized process that encapsulates the application you are running.
A Service references how many Task Definitions you want to run and whether you want to use an Elastic Load Balancer to route traffic to the containers.
The Service then instantiates the requested number of Task Definitions into Tasks that consist of one or more Containers that run on Container Instances.
A Cluster is a logical grouping of Container Instances used to run the Tasks.
A Container Instance is an Amazon EC2 instance.
The user has a Docker image that they want to deploy onto an ECS cluster.
The user first creates a Task Definition that specifies one or more containers required for the task, including the Docker repository and image, memory and CPU requirements, shared data volumes, and how the containers are linked to each other.
The user spins up EC2 instances that run the ECS agent and registers them with ECS.
The user runs the Describe Cluster command to get information about the cluster state and the available resources.
The user uses the Task Definition created earlier and runs the Run Task command to deploy the containers to the ECS cluster.
The user calls Describe Cluster again and gets information about the cluster state and the running containers.
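A minimal boto3 sketch of that walkthrough: register a Task Definition, inspect the cluster, then run the task on it. The family name, image, cluster name, and resource sizes are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Task Definition: the containers, their resources, and how they link together.
ecs.register_task_definition(
    family="web-app",                   # placeholder family name
    containerDefinitions=[{
        "name": "web",
        "image": "my-repo/web:latest",  # placeholder Docker repository/image
        "memory": 512,
        "cpu": 256,
        "essential": True,
    }],
)

# Roughly the "Describe Cluster" step: check state and registered resources.
print(ecs.describe_clusters(clusters=["default"])["clusters"])

# Run one copy of the task on the cluster's container instances.
ecs.run_task(cluster="default", taskDefinition="web-app", count=1)
```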