Storage Services
• S3
• EBS
• Storage Gateway
Nagesh Ramamoorthy
S3
• S3 features
• Key Concepts
• Storage classes
• Versioning
• Managing access
S3
Amazon Simple Storage Service is
storage for the Internet.
It is designed to make web-scale
computing easier for developers.
S3 is designed to provide
99.999999999% durability and 99.99%
availability of objects over a given year
S3 features
Storage Classes
Bucket Policies & Access Control Lists
Versioning
Data encryption
Lifecycle Management
Cross Region Replication
S3 Transfer Acceleration
Requester pays
S3 analytics and inventory
Key Concepts : Objects
• Objects are the fundamental entities stored in Amazon S3.
• An object consists of the following:
o Key – The name that you assign to an object. You use the object key to retrieve the object.
o Version ID – Within a bucket, a key and version ID uniquely identify an object. The version ID is a string that Amazon S3 generates when you add an object to a bucket.
o Value – The content that you are storing. An object value can be any sequence of bytes. Objects can range in size from zero to 5 TB.
o Metadata – A set of name-value pairs with which you can store information about the object. Metadata that you assign is referred to as user-defined metadata.
o Access control information – You can control access to the objects you store in Amazon S3.
Key Concepts : Buckets
• A bucket is a container for objects stored in Amazon S3.
• Every object is contained in a bucket.
• Amazon S3 bucket names are globally unique, regardless of the AWS Region in which you create the bucket.
• A bucket is owned by the AWS account that created it.
• Bucket ownership is not transferable.
• There is no limit to the number of objects that can be stored in a bucket, and there is no performance difference between using many buckets or just a few.
• You cannot create a bucket within another bucket.
Key Concepts : Object key
• Every object in Amazon S3 can be uniquely addressed through the combination of the web service endpoint, bucket name, key, and, optionally, a version.
• For example, in the URL http://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl, "doc" is the name of the bucket and "2006-03-01/AmazonS3.wsdl" is the key.
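The addressing scheme above can be sketched in a few lines of Python. The helper name is ours, and it assumes a virtual-hosted-style URL like the example, where the bucket name is the first label of the hostname:

```python
from urllib.parse import urlparse

def parse_virtual_hosted_url(url):
    """Split a virtual-hosted-style S3 URL into (bucket, key).

    In this style, the bucket is the first label of the host
    (e.g. doc.s3.amazonaws.com) and the key is the URL path.
    """
    parts = urlparse(url)
    bucket = parts.netloc.split(".")[0]
    key = parts.path.lstrip("/")
    return bucket, key

bucket, key = parse_virtual_hosted_url(
    "http://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl")
print(bucket)  # doc
print(key)     # 2006-03-01/AmazonS3.wsdl
```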
Storage Class
Each object in Amazon S3 has a
storage class associated with it.
Amazon S3 offers the following
storage classes for the objects that
you store
• STANDARD
• STANDARD_IA
• GLACIER
Standard class
This storage class is ideal for performance-sensitive use cases and frequently
accessed data.
STANDARD is the default storage class; if you don't specify a storage class when you upload an object, Amazon S3 assumes STANDARD.
Designed for Durability : 99.999999999%
Designed for Availability : 99.99%
Standard_IA class
This storage class (IA, for infrequent access) is optimized for long-lived, less frequently accessed data: for example, backups and older data where the frequency of access has diminished but the use case still demands high performance.
There is a retrieval fee associated with STANDARD_IA objects, which makes the class most suitable for infrequently accessed data.
The STANDARD_IA storage class is suitable for larger objects (greater than 128 KB) that you want to keep for at least 30 days.
Designed for durability: 99.999999999%
Designed for availability: 99.9%
Glacier
• The GLACIER storage class is suitable for archiving data where access is infrequent.
• Archived objects are not available for real-time access; you must restore objects before you can access them.
• You cannot specify GLACIER as the storage class at the time that you create an object.
• Instead, you create GLACIER objects by first uploading them using STANDARD, RRS, or STANDARD_IA as the storage class, and then transitioning them to GLACIER using lifecycle management.
• Designed for durability: 99.999999999%
• Designed for availability: 99.99%
Reduced Redundancy Storage (RRS) class
The RRS storage class is designed for noncritical, reproducible data stored at lower levels of redundancy than the STANDARD storage class.
If you store 10,000 objects using the RRS option, you can, on average, expect to lose a single object per year (0.01% of 10,000 objects).
Amazon S3 can send an event notification to alert a user or start a workflow when it detects that an RRS object is lost.
Designed for durability: 99.99%
Designed for availability: 99.99%
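The "one object in 10,000 per year" figure follows directly from the durability number. A quick sketch of the arithmetic (the helper name is ours):

```python
def expected_annual_loss(num_objects, durability):
    """Expected number of objects lost per year, given annual durability."""
    return num_objects * (1 - durability)

# RRS (99.99% durability): about 1 of 10,000 objects lost per year.
print(expected_annual_loss(10_000, 0.9999))         # ~1.0
# STANDARD (99.999999999% durability): about 1e-7 objects per year,
# i.e. one object in 10,000 lost roughly every 10 million years.
print(expected_annual_loss(10_000, 0.99999999999))
```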
Lifecycle Management
• Using lifecycle configuration rules, you can direct S3 to transition objects to lower-cost storage classes, archive them, or delete them over their lifecycle.
• The configuration is a set of one or more rules, where each rule defines an action
for Amazon S3 to apply to a group of objects
• These actions can be classified as follows:
Transition
• In which you define when objects transition to another storage
class.
Expiration
• In which you specify when the objects expire. Then Amazon S3
deletes the expired objects on your behalf.
When Should I Use Lifecycle Configuration?
If you are uploading periodic logs to your bucket, your application might need these logs for a week
or a month after creation, and after that you might want to delete them.
Some documents are frequently accessed for a limited period of time. After that, these documents
are less frequently accessed. Over time, you might not need real-time access to these objects, but
your organization or regulations might require you to archive them for a longer period
You might also upload some types of data to Amazon S3 primarily for archival purposes, for
example digital media archives, financial and healthcare records etc
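The log-file scenario above combines both action types. As an illustration, here is what such a lifecycle configuration could look like, in the dictionary shape that boto3's `put_bucket_lifecycle_configuration` accepts; the rule ID and the "logs/" prefix are hypothetical:

```python
# Hypothetical rule for objects under a "logs/" prefix: transition to
# STANDARD_IA after 30 days, archive to GLACIER after 90 days, and
# expire (delete) after 365 days.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "age-out-logs",           # hypothetical rule name
            "Filter": {"Prefix": "logs/"},  # hypothetical key prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# Sanity check: transitions happen in order, and expiration comes last.
days = [t["Days"] for t in lifecycle_configuration["Rules"][0]["Transitions"]]
print(days == sorted(days))  # True
```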
Versioning
• Versioning enables you to keep multiple versions of an object in one bucket.
• Once versioning is enabled, it can’t be disabled but can be suspended
• Enabling and suspending versioning is done at the bucket level
• You might want to enable versioning to protect yourself from unintended overwrites and
deletions or to archive objects so that you can retrieve previous versions of them
• You must explicitly enable versioning on your bucket. By default, versioning is disabled
• Regardless of whether you have enabled versioning, each object in your bucket has a
version ID
Versioning (contd..)
• If you have not enabled versioning, then Amazon S3 sets the version ID value to null.
• If you have enabled versioning, Amazon S3 assigns a unique version ID value for the
object
• An example version ID is 3/L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo. Only
Amazon S3 generates version IDs. They cannot be edited.
• When you enable versioning on a bucket, existing objects, if any, in the bucket are
unchanged: the version IDs (null), contents, and permissions remain the same
Versioning: PUT Operation
• When you PUT an object in a versioning-enabled
bucket, the noncurrent version is not overwritten.
• The following figure shows that when a new version
of photo.gif is PUT into a bucket that already
contains an object with the same name, S3
generates a new version ID (121212), and adds the
newer version to the bucket.
Versioning: DELETE Operation
• When you DELETE an object, all versions remain in
the bucket and Amazon S3 inserts a delete marker.
• The delete marker becomes the current version of
the object. By default, GET requests retrieve the
most recently stored version. Performing a simple
GET Object request when the current version is a
delete marker returns a 404 Not Found error
• You can, however, GET a noncurrent version of an
object by specifying its version ID
• You can permanently delete an object by specifying
the version you want to delete.
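The PUT and DELETE semantics above can be illustrated with a toy in-memory model. This is a sketch of the observable behavior only, not how S3 is implemented; real version IDs are opaque generated strings, not counters:

```python
import itertools

class VersionedBucket:
    """Toy model of a versioning-enabled bucket (illustration only)."""

    def __init__(self):
        self._versions = {}             # key -> list of (version_id, value)
        self._ids = itertools.count(1)  # stand-in for S3's opaque version IDs

    def put(self, key, value):
        # PUT never overwrites: a new version is appended.
        vid = str(next(self._ids))
        self._versions.setdefault(key, []).append((vid, value))
        return vid

    def delete(self, key):
        # A simple DELETE removes no data; it inserts a delete marker,
        # which becomes the current version.
        return self.put(key, None)

    def get(self, key, version_id=None):
        versions = self._versions.get(key, [])
        if version_id is None:
            if not versions or versions[-1][1] is None:
                raise KeyError("404 Not Found (current version is a delete marker)")
            return versions[-1][1]
        for vid, value in versions:
            if vid == version_id:
                return value
        raise KeyError("404 Not Found")

b = VersionedBucket()
v1 = b.put("photo.gif", b"old bytes")
v2 = b.put("photo.gif", b"new bytes")  # the old version is kept
b.delete("photo.gif")                  # delete marker inserted
# A plain GET now returns 404, but noncurrent versions remain retrievable:
print(b.get("photo.gif", version_id=v1))  # b'old bytes'
```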
Managing access
• By default, all Amazon S3 resources (buckets, objects, and related subresources) are private: only the resource owner, the AWS account that created the resource, can access it.
• The resource owner can optionally grant access permissions to
others by writing an access policy
• Amazon S3 offers access policy options broadly categorized as
resource-based policies and user policies.
• Access policies you attach to your resources are referred to
as resource-based policies. For example, bucket policies and
access control lists (ACLs) are resource-based policies.
• You can also attach access policies to users in your account.
These are called user policies
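As an illustration of a resource-based policy, here is the standard AWS JSON policy shape for a bucket policy that grants anonymous read access to a bucket's objects; the bucket name "example-bucket" is a placeholder:

```python
import json

# Hypothetical bucket policy: allow anyone to GET objects in one bucket.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",                 # anonymous access
            "Action": "s3:GetObject",         # object reads only
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

# Policies are attached as JSON documents, e.g. via the console or an API.
print(json.dumps(bucket_policy, indent=2))
```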
Resource Owner
• The AWS account that you use to create buckets and objects owns those
resources.
• If you create an IAM user in your AWS account, your AWS account is the
parent owner. If the IAM user uploads an object, the parent account, to
which the user belongs, owns the object.
• A bucket owner can grant cross-account permissions to another AWS
account (or users in another account) to upload objects
• In this case, the AWS account that uploads objects owns those objects. The
bucket owner does not have permissions on the objects that other accounts
own, with the following exceptions:
• The bucket owner pays the bills. The bucket owner can deny access to
any objects, or delete any objects in the bucket, regardless of who
owns them
• The bucket owner can archive any objects or restore archived objects
regardless of who owns them
When to Use an ACL-based Access Policy
An object ACL is the only way to manage access to objects
not owned by the bucket owner
Permissions vary by object and you need to manage
permissions at the object level
Object ACLs control only object-level permissions
EBS
An Amazon EBS volume is a durable, block-level storage
device that you can attach to a single EC2 instance.
EBS volumes are particularly well suited for use as the primary storage for file systems and databases, or for any application that requires fine-grained updates and access to raw, unformatted, block-level storage.
EBS volumes are created in a specific Availability Zone, and can then be attached to any instance in that same Availability Zone.
When an EBS volume is created, AWS performs industry-standard disk wiping, so no data from a previous user is exposed.
Benefits of EBS Volume
Data Availability: When you
create an EBS volume in an
Availability Zone, it is
automatically replicated within
that zone to prevent data loss
due to failure of any single
hardware component
Data persistence: An EBS volume
is off-instance storage that can
persist independently from the
life of an instance
Data encryption: For simplified
data encryption, you can create
encrypted EBS volumes with the
Amazon EBS encryption feature.
Snapshots: Amazon EBS provides
the ability to create snapshots
(backups) of any EBS volume and
write a copy of the data in the
volume to Amazon S3, where it is
stored redundantly in multiple
Availability Zones.
Flexibility: EBS volumes support
live configuration changes while
in production. You can modify
volume type, volume size, and
IOPS capacity without service
interruptions.
EBS Volume Types
Amazon EBS provides the following volume
types, which differ in performance
characteristics and price.
The volume types fall into two categories:
• SSD-backed volumes, optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS (gp2, io1)
• HDD-backed volumes, optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS (st1, sc1)
General purpose SSD volumes (gp2)
• Description: General purpose SSD volume that balances price and performance for a wide variety of workloads
• Use cases: Recommended for most workloads; system boot volumes; low-latency interactive apps; development and test environments
• API name: gp2
• Volume size: 1 GiB - 16 TiB
• Max IOPS: 10,000
• Max throughput: 160 MiB/s
• Max IOPS/instance: 80,000
• Minimum IOPS: 100
• Between the minimum of 100 IOPS (at 33.33 GiB and below) and the maximum of 10,000 IOPS (at 3,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size
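The baseline rule above is a simple clamped linear function of volume size; a sketch using the figures quoted on this slide:

```python
def gp2_baseline_iops(size_gib):
    """Baseline IOPS for a gp2 volume: 3 IOPS per GiB, floored at 100
    and capped at the 10,000 per-volume limit quoted on this slide."""
    return min(max(3 * size_gib, 100), 10_000)

print(gp2_baseline_iops(20))    # 100   (the minimum applies up to ~33 GiB)
print(gp2_baseline_iops(1000))  # 3000  (linear region: 3 IOPS/GiB)
print(gp2_baseline_iops(5000))  # 10000 (cap reached at 3,334 GiB)
```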
Gp2 volumes IO credits and Burst
performance
• The performance of gp2 volumes is tied to volume size
• Volume Size determines the baseline performance level of the volume and how quickly it
accumulates I/O credits
• larger volumes have higher baseline performance levels and accumulate I/O credits faster
• I/O credits represent the available bandwidth that your gp2 volume can use to burst large
amounts of I/O when more than the baseline performance is needed
• Each volume receives an initial I/O credit balance of 5.4 million I/O credits, which is enough to
sustain the maximum burst performance of 3,000 IOPS for 30 minutes
• This initial credit balance is designed to provide a fast initial boot cycle for boot volumes and to
provide a good bootstrapping experience for other applications
• If you notice that your volume performance is frequently limited to the baseline level , you should
consider using a larger gp2 volume or switching to an io1 volume
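The 30-minute figure above is just the initial credit balance divided by the burst rate; and since credits refill at the baseline rate while being spent at the burst rate, real bursts last somewhat longer. A sketch (helper names are ours):

```python
INITIAL_CREDITS = 5_400_000  # initial I/O credit balance per gp2 volume
MAX_BURST_IOPS = 3_000       # maximum gp2 burst rate

# Simplest reading of the slide: 5.4M credits / 3,000 IOPS = 1,800 s = 30 min.
print(INITIAL_CREDITS / MAX_BURST_IOPS / 60)  # 30.0 minutes

def burst_duration_seconds(credits, burst_iops, baseline_iops):
    """Credits drain at the net rate (burst minus baseline), because the
    volume keeps earning credits at its baseline rate while bursting."""
    return credits / (burst_iops - baseline_iops)

# A 100-GiB volume (baseline 3 * 100 = 300 IOPS) bursting at 3,000 IOPS:
print(burst_duration_seconds(INITIAL_CREDITS, 3_000, 300))  # 2000.0 seconds
```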
Provisioned IOPS SSD volumes (io1)
• Description: Highest-performance SSD volume for mission-critical low-latency or high-throughput workloads
• Use cases: Critical business applications that require sustained IOPS performance; large database workloads
• API name: io1
• Volume size: 4 GiB - 16 TiB
• Max IOPS: 32,000
• Max throughput: 500 MiB/s
• Max IOPS per instance: 80,000
Throughput Optimized HDD volumes (st1)
• Description: Low-cost HDD volume designed for frequently accessed, throughput-intensive workloads
• Use cases: Streaming workloads requiring consistent, fast throughput at a low price; big data; data warehouses; log processing. Cannot be a boot volume
• API name: st1
• Volume size: 500 GiB - 16 TiB
• Max throughput/volume: 500 MiB/s
• Throughput credits and burst performance:
• Like gp2, st1 uses a burst-bucket model for performance.
• Volume size determines the baseline throughput of your volume, which is the rate at which the volume accumulates throughput credits.
• For a 1-TiB st1 volume, burst throughput is limited to 250 MiB/s, the bucket fills with credits at 40 MiB/s, and it can hold up to 1 TiB worth of credits.
Cold HDD volumes (sc1)
• Description: Lowest-cost HDD volume designed for less frequently accessed workloads
• Use cases: Throughput-oriented storage for large volumes of data that is infrequently accessed; scenarios where the lowest storage cost is important. Cannot be a boot volume
• API name: sc1
• Volume size: 500 GiB - 16 TiB
• Max throughput/volume: 250 MiB/s
• Throughput credits and burst performance:
• Like gp2, sc1 uses a burst-bucket model for performance.
• Volume size determines the baseline throughput of your volume, which is the rate at which the volume accumulates throughput credits.
• For a 1-TiB sc1 volume, burst throughput is limited to 80 MiB/s, the bucket fills with credits at 12 MiB/s, and it can hold up to 1 TiB worth of credits.
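The st1 and sc1 figures above are the 1-TiB points of the same model: both the fill (baseline) rate and the burst ceiling scale linearly with volume size, capped at the type's maximum throughput. A sketch, assuming that linear scaling (the slides quote only the 1-TiB values):

```python
def burst_bucket(size_tib, base_per_tib, burst_per_tib, max_mibs):
    """Baseline and burst throughput (MiB/s) for an HDD volume under the
    burst-bucket model: linear in size, capped at the type's maximum."""
    baseline = min(base_per_tib * size_tib, max_mibs)
    burst = min(burst_per_tib * size_tib, max_mibs)
    return baseline, burst

# Per-TiB figures from the slides: st1 fills at 40 MiB/s and bursts to
# 250 MiB/s (500 MiB/s max); sc1 fills at 12 and bursts to 80 (250 max).
print(burst_bucket(1, 40, 250, 500))  # (40, 250)  st1, 1 TiB
print(burst_bucket(1, 12, 80, 250))   # (12, 80)   sc1, 1 TiB
print(burst_bucket(4, 40, 250, 500))  # (160, 500) st1, 4 TiB: burst capped
```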
EBS Snapshots
• You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots.
• Snapshots are incremental backups, which means that only the blocks on the device that have changed after your
most recent snapshot are saved.
• This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data
• When you delete a snapshot, only the data unique to that snapshot is removed.
• Each snapshot contains all of the information needed to restore your data (from the moment when the snapshot
was taken) to a new EBS volume
• When you create an EBS volume based on a snapshot, the new volume begins as an exact replica of the original
volume that was used to create the snapshot.
• You can share a snapshot across AWS accounts by modifying its access permissions
• You can also copy snapshots across regions, making it possible to use multiple regions for geographical expansion,
data center migration, and disaster recovery
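The incremental-backup idea above can be sketched with a toy block-level model. This is an illustration of the concept only, not EBS internals; the helper names are ours:

```python
def incremental_snapshot(volume_blocks, previous_snapshot):
    """Toy incremental snapshot: keep only blocks that changed since the
    previous snapshot (or every block, if this is the first snapshot)."""
    return {
        block_id: data
        for block_id, data in volume_blocks.items()
        if previous_snapshot is None or previous_snapshot.get(block_id) != data
    }

def restore(snapshot_chain):
    """Restoring replays the chain in order, so the newest copy of each
    block wins; together the chain has everything needed for a full volume."""
    volume = {}
    for snap in snapshot_chain:
        volume.update(snap)
    return volume

vol = {0: "aaa", 1: "bbb", 2: "ccc"}
snap1 = incremental_snapshot(vol, None)   # first snapshot: all 3 blocks
vol[1] = "BBB"                            # one block changes
snap2 = incremental_snapshot(vol, snap1)  # stores only the changed block
print(len(snap2))                # 1
print(restore([snap1, snap2]))   # {0: 'aaa', 1: 'BBB', 2: 'ccc'}
```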
Amazon EBS Optimized instances
• An Amazon EBS–optimized instance uses an optimized configuration stack and provides
additional, dedicated capacity for Amazon EBS I/O
• EBS–optimized instances deliver dedicated bandwidth to Amazon EBS, with options between 425
Mbps and 14,000 Mbps, depending on the instance type you use
• For instance types that are EBS-optimized by default, there is no need to enable EBS optimization, and disabling it has no effect
• For instances that are not EBS-optimized by default, you can enable EBS optimization
• When you enable EBS optimization for an instance that is not EBS-optimized by default, you pay an additional low, hourly fee for the dedicated capacity
• Examples of instance families that are EBS-optimized by default: C4, C5, D3, F1, G3, H1, I3, M4, M5, R4, X1, P2, P3
Amazon EBS Encryption
When you create an encrypted EBS volume and attach it to a
supported instance type, the following types of data are
encrypted:
•Data at rest inside the volume
•All data moving between the volume and the instance
•All snapshots created from the volume
•All volumes created from those snapshots
Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage.
Snapshots of encrypted volumes are automatically encrypted.
Volumes that are created from encrypted snapshots are
automatically encrypted.
Storage Gateway
By using the AWS Storage Gateway software appliance, you can connect your existing on-premises application
infrastructure with scalable, cost-effective AWS cloud storage that provides data security features
AWS Storage Gateway offers file-based, volume-based, and tape-based storage solutions
The gateway is a software appliance installed as a VM in your on-premises virtualization infrastructure (VMware ESXi or Microsoft Hyper-V) or as an EC2 instance in the AWS infrastructure.
To prepare for upload to Amazon S3, your gateway also stores incoming data in a staging area, referred to as an
upload buffer
Your gateway uploads this buffer data over an encrypted Secure Sockets Layer (SSL) connection to AWS, where
it is stored encrypted in Amazon S3
File Gateway
The gateway provides access to objects in S3 as files on an
NFS mount point
Objects are encrypted with server-side encryption with
Amazon S3–managed encryption keys (SSE-S3).
All data transfer is done through HTTPS
The service optimizes data transfer between the gateway and
AWS using multipart parallel uploads or byte-range
downloads
A local cache is maintained to provide low latency access to
the recently accessed data and reduce data egress charges
Volume gateway
A volume gateway provides cloud-backed storage volumes that you can create and mount as Internet Small Computer System Interface (iSCSI) devices from your on-premises application servers.
The gateway supports the
following volume configurations:
Cached volumes
Stored Volumes
Cached volumes
• By using cached volumes, you can use Amazon S3 as your primary data storage, while retaining frequently accessed
data locally in your storage gateway.
• Cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your
applications with low-latency access to their frequently accessed data.
• Cached volumes can range from 1 GiB to 32 TiB in size and must be rounded to the nearest GiB.
• Each gateway configured for cached volumes can support up to 32 volumes for a total maximum storage volume of
1,024 TiB (1 PiB).
• Generally, you should allocate at least 20 percent of your existing file store size as cache storage.
• You can take incremental backups, called snapshots, of your storage volumes in Amazon S3.
• All gateway data and snapshot data for cached volumes is stored in Amazon S3 and encrypted at rest using server-
side encryption (SSE).
• However, you can't access this data with the Amazon S3 API or other tools such as the Amazon S3 Management
Console.
Stored Volumes
By using stored volumes, you can store your primary data locally, while asynchronously backing up that data to Amazon S3 as Amazon EBS snapshots.
This configuration provides durable and
inexpensive offsite backups that you can recover
to your local data center or Amazon EC2
Stored volumes can range from 1 GiB to 16 TiB in
size and must be rounded to the nearest GiB
Each gateway configured for stored volumes can
support up to 32 volumes and a total volume
storage of 512 TiB (0.5 PiB).
Tape Gateway
With a tape gateway, you can cost-effectively and
durably archive backup data in Amazon Glacier.
A tape gateway provides a virtual tape
infrastructure that scales seamlessly with your
business needs and eliminates the operational
burden of provisioning, scaling, and maintaining a
physical tape infrastructure.
With its virtual tape library (VTL) interface, you
use your existing tape-based backup
infrastructure to store data on virtual tape
cartridges that you create on your tape gateway
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksSoftradix Technologies
 

Recently uploaded (20)

DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
Unlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsUnlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power Systems
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDG
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptxVulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
 
costume and set research powerpoint presentation
costume and set research powerpoint presentationcostume and set research powerpoint presentation
costume and set research powerpoint presentation
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other Frameworks
 

Storage Services Overview

  • 1. Storage Services • S3 • EBS • Storage Gateway Nagesh Ramamoorthy
  • 2. S3 • S3 features • Key Concepts • Storage classes • Versioning • Managing access
  • 3. S3 Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale computing easier for developers. S3 is designed to provide 99.999999999% durability and 99.99% availability of objects over a given year
  • 4. S3 features Storage Classes Bucket Policies & Access Control Lists Versioning Data encryption Lifecycle Management Cross-Region Replication S3 Transfer Acceleration Requester Pays S3 analytics and Inventory
  • 5. Key Concepts : Objects  Objects are the fundamental entities stored in Amazon S3  An object consists of the following: o Key – The name that you assign to an object. You use the object key to retrieve the object. o Version ID – Within a bucket, a key and version ID uniquely identify an object. The version ID is a string that Amazon S3 generates when you add an object to a bucket. o Value – The content that you are storing. An object value can be any sequence of bytes. Objects can range in size from zero to 5 TB o Metadata – A set of name-value pairs with which you can store information regarding the object. You can assign metadata, referred to as user-defined metadata o Access Control Information – You can control access to the objects you store in Amazon S3
  • 6. Key Concepts : Buckets  A bucket is a container for objects stored in Amazon S3.  Every object is contained in a bucket.  Amazon S3 bucket names are globally unique, regardless of the AWS Region in which you create the bucket.  A bucket is owned by the AWS account that created it.  Bucket ownership is not transferable;  There is no limit to the number of objects that can be stored in a bucket and no difference in performance whether you use many buckets or just a few  You cannot create a bucket within another bucket.
  • 7. Key Concepts : Object key  Every object in Amazon S3 can be uniquely addressed through the combination of the web service endpoint, bucket name, key, and optionally, a version.  For example, in the URL http://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl, "doc" is the name of the bucket and "2006-03-01/AmazonS3.wsdl" is the key.
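The endpoint + bucket + key addressing scheme above can be sketched as a small parser. This is a hypothetical helper (not an AWS SDK function) that assumes a virtual-hosted-style URL, where the bucket name is the first label of the hostname, as in the slide's example:

```python
from urllib.parse import urlparse

def parse_s3_url(url):
    """Split a virtual-hosted-style S3 URL into (bucket, key).

    Illustration only: assumes the bucket name is the first label of the
    hostname, e.g. http://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl.
    """
    parsed = urlparse(url)
    bucket = parsed.netloc.split(".")[0]   # "doc.s3.amazonaws.com" -> "doc"
    key = parsed.path.lstrip("/")          # "/2006-03-01/AmazonS3.wsdl" -> key
    return bucket, key
```

For the slide's example URL, this returns `("doc", "2006-03-01/AmazonS3.wsdl")`.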
  • 8. Storage Class Each object in Amazon S3 has a storage class associated with it. Amazon S3 offers the following storage classes for the objects that you store • STANDARD • STANDARD_IA • GLACIER
  • 9. Standard class This storage class is ideal for performance-sensitive use cases and frequently accessed data. STANDARD is the default storage class; if you don't specify storage class at the time that you upload an object, Amazon S3 assumes the STANDARD storage class. Designed for Durability : 99.999999999% Designed for Availability : 99.99%
  • 10. Standard_IA class This storage class (IA, for infrequent access) is optimized for long-lived, less frequently accessed data, for example backups and older data whose frequency of access has diminished but whose use case still demands high performance. There is a retrieval fee associated with STANDARD_IA objects, which makes it most suitable for infrequently accessed data. The STANDARD_IA storage class is suitable for larger objects (greater than 128 KB) that you want to keep for at least 30 days. Designed for durability : 99.999999999% Designed for Availability : 99.9%
  • 11. Glacier • The GLACIER storage class is suitable for archiving data where data access is infrequent • Archived objects are not available for real-time access. You must first restore the objects before you can access them. • You cannot specify GLACIER as the storage class at the time that you create an object. • You create GLACIER objects by first uploading objects using STANDARD, RRS, or STANDARD_IA as the storage class. Then, you transition these objects to the GLACIER storage class using lifecycle management. • You must first restore the GLACIER objects before you can access them • Designed for durability : 99.999999999% • Designed for Availability : 99.99%
  • 12. Reduced Redundancy Storage (RRS) class The RRS storage class is designed for noncritical, reproducible data stored at lower levels of redundancy than the STANDARD storage class. If you store 10,000 objects using the RRS option, you can, on average, expect to lose a single object per year (0.01% of 10,000 objects). Amazon S3 can send an event notification to alert a user or start a workflow when it detects that an RRS object is lost. Designed for durability : 99.99% Designed for Availability : 99.99%
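The "one object in 10,000 per year" figure is just the expected-loss arithmetic implied by a 99.99% design durability:

```python
def expected_annual_loss(num_objects, durability):
    """Expected number of objects lost per year at a given design durability.

    E[loss] = N * (1 - durability); with RRS (99.99% durability) and
    10,000 objects this works out to about 1 object per year.
    """
    return num_objects * (1 - durability)
```

The same formula shows why STANDARD's eleven-nines durability is marketed so heavily: at 99.999999999%, the expected annual loss for 10,000 objects is on the order of one ten-millionth of an object.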
  • 13. Lifecycle Management • Using lifecycle configuration rules, you can direct S3 to tier down the storage classes, archive, or delete the objects during their lifecycle. • The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects • These actions can be classified as follows: Transition • In which you define when objects transition to another storage class. Expiration • In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
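The Transition and Expiration actions above map onto a lifecycle configuration like the following sketch. The rule ID and `logs/` prefix are hypothetical; with boto3, a dict of this shape would be passed to `s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=...)`:

```python
# One rule combining both action types from the slide: tier down, then delete.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-then-expire-logs",      # hypothetical rule name
            "Filter": {"Prefix": "logs/"},         # hypothetical key prefix
            "Status": "Enabled",
            # Transition: define when objects move to another storage class.
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            # Expiration: S3 deletes the expired objects on your behalf.
            "Expiration": {"Days": 365},
        }
    ]
}
```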
  • 14. When Should I Use Lifecycle Configuration? If you are uploading periodic logs to your bucket, your application might need these logs for a week or a month after creation, and after that you might want to delete them. Some documents are frequently accessed for a limited period of time. After that, these documents are less frequently accessed. Over time, you might not need real-time access to these objects, but your organization or regulations might require you to archive them for a longer period You might also upload some types of data to Amazon S3 primarily for archival purposes, for example digital media archives, financial and healthcare records etc
  • 15. Versioning • Versioning enables you to keep multiple versions of an object in one bucket. • Once versioning is enabled, it can’t be disabled but can be suspended • Enabling and suspending versioning is done at the bucket level • You might want to enable versioning to protect yourself from unintended overwrites and deletions or to archive objects so that you can retrieve previous versions of them • You must explicitly enable versioning on your bucket. By default, versioning is disabled • Regardless of whether you have enabled versioning, each object in your bucket has a version ID
  • 16. Versioning (contd..) • If you have not enabled versioning, then Amazon S3 sets the version ID value to null. • If you have enabled versioning, Amazon S3 assigns a unique version ID value for the object • An example version ID is 3/L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo. Only Amazon S3 generates version IDs. They cannot be edited. • When you enable versioning on a bucket, existing objects, if any, in the bucket are unchanged: the version IDs (null), contents, and permissions remain the same
  • 17. Versioning : PUT Operation • When you PUT an object in a versioning-enabled bucket, the noncurrent version is not overwritten. • The following figure shows that when a new version of photo.gif is PUT into a bucket that already contains an object with the same name, S3 generates a new version ID (121212), and adds the newer version to the bucket.
  • 18. Versioning : DELETE Operation • When you DELETE an object, all versions remain in the bucket and Amazon S3 inserts a delete marker. • The delete marker becomes the current version of the object. By default, GET requests retrieve the most recently stored version. Performing a simple GET Object request when the current version is a delete marker returns a 404 Not Found error • You can, however, GET a noncurrent version of an object by specifying its version ID • You can permanently delete an object by specifying the version you want to delete.
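The PUT and DELETE semantics on the last two slides can be captured in a toy in-memory model (an illustration of the behavior, not how S3 is implemented; real version IDs are opaque strings generated by S3, not counters):

```python
import itertools

class VersionedBucket:
    """Toy model of S3 versioning: PUT stacks versions, DELETE adds a marker."""

    def __init__(self):
        self._versions = {}              # key -> list of (version_id, value)
        self._ids = itertools.count(1)   # stand-in for S3-generated version IDs

    def put(self, key, value):
        # PUT never overwrites: it appends a new current version.
        vid = str(next(self._ids))
        self._versions.setdefault(key, []).append((vid, value))
        return vid

    def delete(self, key):
        # Simple DELETE keeps all versions; a delete marker becomes current.
        self._versions.setdefault(key, []).append((str(next(self._ids)), None))

    def get(self, key, version_id=None):
        versions = self._versions.get(key, [])
        if version_id is not None:
            # GET with a version ID can still retrieve noncurrent versions.
            for vid, value in versions:
                if vid == version_id and value is not None:
                    return value
            raise KeyError("404 Not Found")
        # A simple GET returns the current version; a delete marker means 404.
        if not versions or versions[-1][1] is None:
            raise KeyError("404 Not Found")
        return versions[-1][1]
```

After `put`, `put`, `delete` on the same key, a plain `get` raises the 404 case while `get(key, version_id=...)` still returns the older data, matching the photo.gif example above.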
  • 19. Managing access • By default, all Amazon S3 resources (buckets, objects, and related subresources) are private: only the resource owner, the AWS account that created it, can access the resource. • The resource owner can optionally grant access permissions to others by writing an access policy • Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies. • Access policies you attach to your resources are referred to as resource-based policies. For example, bucket policies and access control lists (ACLs) are resource-based policies. • You can also attach access policies to users in your account. These are called user policies
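A minimal resource-based policy looks like the following sketch. The bucket name `example-bucket` is hypothetical; attached to that bucket, this statement grants everyone read access to its objects (the standard public-read pattern, shown here only to illustrate the policy document shape):

```python
# Bucket policy document (resource-based policy), expressed as a Python dict.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",                                # anyone
            "Action": "s3:GetObject",                        # read objects only
            "Resource": "arn:aws:s3:::example-bucket/*",     # hypothetical bucket
        }
    ],
}
```

User policies use the same JSON grammar but omit `Principal`, since they are attached to an IAM user rather than to the resource.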
  • 20. Resource Owner • The AWS account that you use to create buckets and objects owns those resources. • If you create an IAM user in your AWS account, your AWS account is the parent owner. If the IAM user uploads an object, the parent account, to which the user belongs, owns the object. • A bucket owner can grant cross-account permissions to another AWS account (or users in another account) to upload objects • In this case, the AWS account that uploads objects owns those objects. The bucket owner does not have permissions on the objects that other accounts own, with the following exceptions: • The bucket owner pays the bills. The bucket owner can deny access to any objects, or delete any objects in the bucket, regardless of who owns them • The bucket owner can archive any objects or restore archived objects regardless of who owns them
  • 21. When to Use an ACL-based Access Policy An object ACL is the only way to manage access to objects not owned by the bucket owner Permissions vary by object and you need to manage permissions at the object level Object ACLs control only object-level permissions
  • 22. EBS An Amazon EBS volume is a durable, block-level storage device that you can attach to a single EC2 instance. EBS volumes are particularly well suited for use as the primary storage for file systems, databases, or any applications that require fine-grained updates and access to raw, unformatted, block-level storage. EBS volumes are created in a specific Availability Zone, and can then be attached to any instance in that same Availability Zone. When an EBS volume is created, AWS performs industry-standard disk wiping so that no data from a previous customer remains.
  • 23. Benefits of EBS Volume Data Availability: When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to failure of any single hardware component Data persistence: An EBS volume is off-instance storage that can persist independently from the life of an instance Data encryption: For simplified data encryption, you can create encrypted EBS volumes with the Amazon EBS encryption feature. Snapshots: Amazon EBS provides the ability to create snapshots (backups) of any EBS volume and write a copy of the data in the volume to Amazon S3, where it is stored redundantly in multiple Availability Zones. Flexibility: EBS volumes support live configuration changes while in production. You can modify volume type, volume size, and IOPS capacity without service interruptions.
  • 24. EBS Volume Types Amazon EBS provides the following volume types, which differ in performance characteristics and price. The volumes types fall into two categories: •SSD-backed volumes optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS ( gp2, io1) •HDD-backed volumes optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS (St1, Sc1)
  • 25. General purpose SSD volumes (gp2) • Description : General purpose SSD volume that balances price and performance for a wide variety of workloads • Use Cases : Recommended for most workloads, system boot volumes, low-latency interactive apps, development and test environments • API Name : gp2 • Volume Size : 1 GiB - 16 TiB • Max IOPS : 10,000 • Max throughput : 160 MiB/s • Max IOPS/Instance : 80,000 • Minimum IOPS : 100 • Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 10,000 IOPS (at 3,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size
  • 26. Gp2 volumes IO credits and Burst performance • The performance of gp2 volumes is tied to volume size • Volume Size determines the baseline performance level of the volume and how quickly it accumulates I/O credits • larger volumes have higher baseline performance levels and accumulate I/O credits faster • I/O credits represent the available bandwidth that your gp2 volume can use to burst large amounts of I/O when more than the baseline performance is needed • Each volume receives an initial I/O credit balance of 5.4 million I/O credits, which is enough to sustain the maximum burst performance of 3,000 IOPS for 30 minutes • This initial credit balance is designed to provide a fast initial boot cycle for boot volumes and to provide a good bootstrapping experience for other applications • If you notice that your volume performance is frequently limited to the baseline level , you should consider using a larger gp2 volume or switching to an io1 volume
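The gp2 numbers on the last two slides reduce to a little arithmetic, sketched here (the burst-duration figure ignores credits earned back at the baseline rate while bursting, which is how the "30 minutes" on the slide is derived):

```python
def gp2_baseline_iops(size_gib):
    """gp2 baseline: 3 IOPS per GiB, floored at 100, capped at 10,000."""
    return min(max(100, 3 * size_gib), 10_000)

def gp2_burst_minutes(burst_iops=3_000, credits=5_400_000):
    """How long the initial credit balance sustains maximum burst,
    ignoring credit refill: 5.4M credits / 3,000 IOPS = 30 minutes."""
    return credits / burst_iops / 60
```

So a 100 GiB volume has a 300 IOPS baseline but can still burst to 3,000 IOPS until its credit bucket drains; once you see performance pinned at the baseline, that is the signal to grow the volume or move to io1, as the slide notes.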
  • 27. Provisioned IOPS SSD volumes (io1) • Description : Highest-performance SSD volume for mission-critical low-latency or high-throughput workloads • Use Cases : Critical business applications that require sustained IOPS performance, large database workloads • API Name : io1 • Volume Size : 4 GiB - 16 TiB • Max IOPS : 32,000 • Max Throughput : 500 MiB/s • Max IOPS per instance : 80,000
  • 28. Throughput Optimized HDD volumes (st1) • Description : Low-cost HDD volume designed for frequently accessed, throughput-intensive workloads • Use Cases : Streaming workloads requiring consistent, fast throughput at a low price, big data, data warehouses, log data; can't be a boot volume • API Name : st1 • Volume Size : 500 GiB - 16 TiB • Max. Throughput/Volume : 500 MiB/s • Throughput Credits and Burst Performance : • Like gp2, st1 uses a burst-bucket model for performance. • Volume size determines the baseline throughput of your volume, which is the rate at which the volume accumulates throughput credits • For a 1-TiB st1 volume, burst throughput is limited to 250 MiB/s, the bucket fills with credits at 40 MiB/s, and it can hold up to 1 TiB worth of credits.
  • 29. Cold HDD volumes (sc1) • Description: Lowest cost HDD volume designed for less frequently accessed workloads • Use Cases: Throughput-oriented storage for large volumes of data that is infrequently accessed , Scenarios where the lowest storage cost is important, Can't be a boot volume • Api Name : sc1 • Volume Size : 500 GiB - 16 TiB • Max. Throughput/Volume : 250 MiB/s • Throughput Credits and Burst Performance: • Like gp2, sc1 uses a burst-bucket model for performance. • Volume size determines the baseline throughput of your volume, which is the rate at which the volume accumulates throughput credits. • For a 1-TiB sc1 volume, burst throughput is limited to 80 MiB/s, the bucket fills with credits at 12 MiB/s, and it can hold up to 1 TiB-worth of credits.
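The st1 and sc1 burst buckets follow the same shape: baseline fill rate and burst ceiling both scale with volume size. The slides only quote the 1-TiB figures; the per-TiB scaling and the caps (500 MiB/s for st1, 250 MiB/s for sc1, matching each type's max throughput) are how AWS documents the model, sketched here:

```python
def hdd_burst_model(size_tib, base_per_tib, burst_per_tib, burst_cap):
    """Return (baseline, burst) throughput in MiB/s for an HDD-backed volume.

    Baseline is the credit fill rate; burst scales per TiB up to a hard cap.
    """
    baseline = base_per_tib * size_tib
    burst = min(burst_per_tib * size_tib, burst_cap)
    return baseline, burst

# st1: fills at 40 MiB/s per TiB, bursts at 250 MiB/s per TiB, capped at 500.
st1 = lambda tib: hdd_burst_model(tib, 40, 250, 500)
# sc1: fills at 12 MiB/s per TiB, bursts at 80 MiB/s per TiB, capped at 250.
sc1 = lambda tib: hdd_burst_model(tib, 12, 80, 250)
```

`st1(1)` reproduces the slide's 1-TiB figures of 40 MiB/s baseline and 250 MiB/s burst; a 4-TiB st1 volume hits the 500 MiB/s cap.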
  • 30. EBS Snapshots • You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. • Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. • This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data • When you delete a snapshot, only the data unique to that snapshot is removed. • Each snapshot contains all of the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume • When you create an EBS volume based on a snapshot, the new volume begins as an exact replica of the original volume that was used to create the snapshot. • You can share a snapshot across AWS accounts by modifying its access permissions • You can also copy snapshots across regions, making it possible to use multiple regions for geographical expansion, data center migration, and disaster recovery
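The incremental behavior can be illustrated with a toy model (not the EBS implementation, which tracks changed blocks at the block-device level; here `prev` stands in for the volume state captured by the last snapshot):

```python
def take_snapshot(volume, prev=None):
    """Incremental snapshot: store only blocks changed since the last snapshot."""
    prev = prev or {}
    return {idx: data for idx, data in volume.items() if prev.get(idx) != data}

def restore(*snapshot_chain):
    """Rebuild a full volume by replaying a chain of incremental snapshots;
    each snapshot's blocks overlay the earlier ones, newest last."""
    volume = {}
    for snapshot in snapshot_chain:
        volume.update(snapshot)
    return volume
```

This is why the second snapshot of a mostly unchanged volume is small, yet restoring from it still yields every block: unchanged data is reached through the earlier snapshots in the chain.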
  • 31. Amazon EBS-Optimized instances • An Amazon EBS-optimized instance uses an optimized configuration stack and provides additional, dedicated capacity for Amazon EBS I/O • EBS-optimized instances deliver dedicated bandwidth to Amazon EBS, with options between 425 Mbps and 14,000 Mbps, depending on the instance type you use • For instance types that are EBS-optimized by default, there is no need to enable EBS optimization, and disabling it has no effect • For instances that are not EBS-optimized by default, you can enable EBS optimization • When you enable EBS optimization for an instance that is not EBS-optimized by default, you pay an additional low, hourly fee for the dedicated capacity. • Examples of instance types that are EBS-optimized by default : C4, C5, D3, F1, G3, H1, I3, M4, M5, R4, X1, P2, P3
  • 32. Amazon EBS Encryption When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted: •Data at rest inside the volume •All data moving between the volume and the instance •All snapshots created from the volume •All volumes created from those snapshots Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. Snapshots of encrypted volumes are automatically encrypted. Volumes that are created from encrypted snapshots are automatically encrypted.
  • 33. Storage Gateway By using the AWS Storage Gateway software appliance, you can connect your existing on-premises application infrastructure with scalable, cost-effective AWS cloud storage that provides data security features. AWS Storage Gateway offers file-based, volume-based, and tape-based storage solutions. The gateway is a software appliance installed as a VM in your on-premises virtualization infrastructure (ESXi/Hyper-V) or as an EC2 instance in the AWS infrastructure. To prepare for upload to Amazon S3, your gateway also stores incoming data in a staging area, referred to as an upload buffer. Your gateway uploads this buffer data over an encrypted Secure Sockets Layer (SSL) connection to AWS, where it is stored encrypted in Amazon S3
  • 34. File Gateway The gateway provides access to objects in S3 as files on an NFS mount point Objects are encrypted with server-side encryption with Amazon S3–managed encryption keys (SSE-S3). All data transfer is done through HTTPS The service optimizes data transfer between the gateway and AWS using multipart parallel uploads or byte-range downloads A local cache is maintained to provide low latency access to the recently accessed data and reduce data egress charges
  • 35. Volume gateway A volume gateway provides cloud-backed storage volumes that you can create and mount as Internet Small Computer System Interface (iSCSI) devices from your on-premises application servers. The gateway supports the following volume configurations: Cached volumes Stored volumes
  • 36. Cached volumes • By using cached volumes, you can use Amazon S3 as your primary data storage, while retaining frequently accessed data locally in your storage gateway. • Cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. • Cached volumes can range from 1 GiB to 32 TiB in size and must be rounded to the nearest GiB. • Each gateway configured for cached volumes can support up to 32 volumes for a total maximum storage volume of 1,024 TiB (1 PiB). • Generally, you should allocate at least 20 percent of your existing file store size as cache storage. • You can take incremental backups, called snapshots, of your storage volumes in Amazon S3. • All gateway data and snapshot data for cached volumes is stored in Amazon S3 and encrypted at rest using server-side encryption (SSE). • However, you can't access this data with the Amazon S3 API or other tools such as the Amazon S3 Management Console.
  • 37. Stored Volumes By using stored volumes, you can store your primary data locally, while asynchronously backing up that data to AWS S3 as EBS snapshots. This configuration provides durable and inexpensive offsite backups that you can recover to your local data center or Amazon EC2 Stored volumes can range from 1 GiB to 16 TiB in size and must be rounded to the nearest GiB Each gateway configured for stored volumes can support up to 32 volumes and a total volume storage of 512 TiB (0.5 PiB).
  • 38. Tape Gateway With a tape gateway, you can cost-effectively and durably archive backup data in Amazon Glacier. A tape gateway provides a virtual tape infrastructure that scales seamlessly with your business needs and eliminates the operational burden of provisioning, scaling, and maintaining a physical tape infrastructure. With its virtual tape library (VTL) interface, you use your existing tape-based backup infrastructure to store data on virtual tape cartridges that you create on your tape gateway