Public sector teams with on-premises applications face many challenges managing storage arrays throughout their lifecycle: guessing at capacity needs, waiting on procurement cycles, and recruiting staff with specialized hardware skills. AWS Storage Gateway helps reduce these pains by providing a way to start using Amazon S3, Amazon Glacier, and Amazon EBS in hybrid architectures for both traditional and cutting-edge workloads, from backup to big data analytics. Storage Gateway connects local applications to AWS storage with familiar block and file protocols, and with local caching for performance. In this session, learn how you can use these AWS storage services with Storage Gateway for backup, content storage, data lakes, disaster recovery, data migration, and more.
Stevan Beara, Solutions Architect, Amazon Web Services
Matt Campbell, Engineering Director, D2L
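File Gateway exposes an S3 bucket over NFS or SMB, storing each file written to the share as an object whose key mirrors the file's path under the share root. A minimal sketch of that mapping, assuming a hypothetical share mounted at /mnt/gateway:

```python
# File Gateway maps each file on the share to an S3 object whose key
# mirrors the file's path relative to the share root. The mount point
# and file names below are hypothetical.
def s3_key_for(share_root: str, file_path: str) -> str:
    """Return the S3 object key a File Gateway share would use."""
    if not file_path.startswith(share_root):
        raise ValueError("file is outside the gateway share")
    return file_path[len(share_root):].lstrip("/")

key = s3_key_for("/mnt/gateway", "/mnt/gateway/backups/db/full.bak")
print(key)  # backups/db/full.bak
```

Because the mapping is one-to-one, anything written through the share is immediately addressable as a normal S3 object by downstream tools.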
Building Hybrid Cloud Storage Architectures with AWS @scale - Amazon Web Services
The document discusses building hybrid cloud storage architectures with AWS. It provides an overview of AWS storage services including Amazon S3, Glacier, EBS, and EFS. It also describes the AWS Storage Gateway family of on-premises appliances that enable hybrid storage between on-premises and AWS cloud storage. Specifically, it covers the File Gateway for accessing S3 storage as files, Volume Gateway for iSCSI volumes, and Tape Gateway for migrating tape backups to S3.
Hybrid Cloud Storage for Recovery & Migration with AWS Storage Gateway (STG30... - Amazon Web Services
In this workshop, we provide hands-on experience using the AWS Storage Gateway service to protect on-premises data in AWS, recover it locally or in the cloud in minutes, and migrate it when the time is right. You work with the File Gateway and Microsoft SQL Server native tools to back up to Amazon S3, and then recover or migrate that database in AWS rapidly. In addition, you use Volume Gateway and Amazon EBS Snapshots to protect and migrate block-based volumes. Use this session to hone your skills with backup and DR, and prepare for application migrations.
Deep Dive: Building Hybrid Cloud Storage Architectures with AWS Storage Gatew... - Amazon Web Services
Are you tired of the treadmill of deploying on-premises storage? Join this session to learn how to use AWS Storage Gateway to shift storage for on-premises apps to the cloud, reducing your infrastructure and management challenges. Storage Gateway connects your apps to AWS storage services, including Amazon S3, using standard block, file and tape storage protocols. You can use Storage Gateway for hybrid cloud use cases for file-based application data storage, backup, analytics with data lakes, machine learning (ML), and migration. Learn about best practices from a customer using Storage Gateway for Microsoft SQL Server data protection.
As the volume and types of data continue to grow, customers often have valuable data that is not easily discoverable and available for analytics. A common challenge for data engineering teams is architecting a data lake that can cater to the needs of diverse users - from developers to business analysts to data scientists. In this session, dive deep into building a data lake using Amazon S3, Amazon Kinesis, Amazon Athena, and AWS Glue. Learn how AWS Glue crawlers can automatically discover your data, extracting and cataloging relevant metadata to reduce the operational work of preparing your data for downstream consumers.
How to Build a Data Lake in Amazon S3 & Amazon Glacier - AWS Online Tech Talks - Amazon Web Services
The document discusses how to build a data lake on Amazon S3 and Amazon Glacier. It defines a data lake as a centralized storage platform capable of handling heterogeneous data sets. It recommends Amazon S3 and Glacier for their scalability, security, and cost effectiveness. It provides examples of how to ingest, catalog, analyze, and query data in the data lake using services like AWS Glue, Athena, and Redshift Spectrum. It also discusses best practices for performance, security, and an example use case of an IoT sensor data pipeline.
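Data lakes on S3 commonly lay objects out under Hive-style partitioned prefixes so that Glue crawlers can register the partitions and query engines like Athena can prune by them. A small sketch of building such a key (the prefix and file name are hypothetical):

```python
from datetime import datetime

def partitioned_key(prefix: str, event_time: datetime, name: str) -> str:
    """Build a Hive-style partitioned S3 key (year=/month=/day=)."""
    return (f"{prefix}/year={event_time.year}"
            f"/month={event_time.month:02d}"
            f"/day={event_time.day:02d}/{name}")

print(partitioned_key("logs", datetime(2019, 3, 7), "events.json"))
# logs/year=2019/month=03/day=07/events.json
```

A query filtered on year/month/day then scans only the matching prefixes instead of the whole bucket, which is where most of the cost and latency savings come from.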
Scalable and Secure Cloud-Based Data Archiving for Digital Libraries, Complia... - Amazon Web Services
The document discusses using AWS for long-term digital archival. It highlights AWS's durability, security, scale and compliance. AWS provides a broad range of storage services like S3, Glacier and Snowball for archival needs. Partners like Preservica offer solutions for active digital preservation on AWS, ensuring long-term accessibility and authenticity of records. Case studies of government customers show how AWS helps meet mandates for records preservation and access at lower costs than on-premises solutions.
by Peter Dalton, Principal Consultant, AWS, and Taz Sayed, Sr. Technical Account Manager, AWS
AWS Data & Analytics Week is an opportunity to learn about Amazon’s family of managed analytics services. These services provide easy, scalable, reliable, and cost-effective ways to manage your data in the cloud. We explain the fundamentals and take a technical deep dive into Amazon Redshift data warehouse; Data Lake services including Amazon EMR, Amazon Athena, & Amazon Redshift Spectrum; Log Analytics with Amazon Elasticsearch Service; and data preparation and placement services with AWS Glue and Amazon Kinesis. You'll learn how to get started, how to support applications, and how to scale.
AWS Data Transfer Services: Deep Dive - SRV302 - Chicago AWS Summit - Amazon Web Services
In this session, we provide IT pros and application owners with an overview of AWS options for building hybrid storage architectures or even migrating an entire data center to the AWS Cloud. AWS Storage Gateway connects existing on-premises block, file, or tape storage systems to AWS Cloud storage over the WAN in a hybrid model. The AWS Snow family of physical devices can capture, pre-process, and migrate data into and out of AWS without any network connection. Join us to learn how you can close down data centers, reduce storage footprints, and build solutions for tiering, data lakes, backup, disaster recovery, and migration.
Building Data Lakes That Cost Less and Deliver Results Faster - AWS Online Te... - Amazon Web Services
Learning Objectives:
- Get an inside look at Amazon S3 Select and how it helps to accelerate application performance
- Learn about how Amazon Glacier Select helps you extend your data lake to archival storage
- Understand how different applications can leverage these features
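S3 Select and Glacier Select push a SQL filter down to the storage service so only the matching bytes travel back to the application. The snippet below reproduces locally, on a tiny in-memory CSV sample, what a query like SELECT s.name FROM S3Object s WHERE s.status = 'active' would return; the column names and data are invented for illustration:

```python
import csv
import io

# Hypothetical CSV object contents; in S3 Select the service would
# apply the WHERE clause server-side and return only matching rows.
sample = "name,status\nalice,active\nbob,inactive\ncarol,active\n"

def select_active(csv_text: str) -> list:
    """Local stand-in for: SELECT s.name FROM S3Object s
    WHERE s.status = 'active'."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["name"] for row in reader if row["status"] == "active"]

print(select_active(sample))  # ['alice', 'carol']
```

The application-side win is that the filter runs next to the data: on a multi-gigabyte object, only the selected rows cross the network instead of the whole file.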
A data lake is an architectural approach that allows you to store massive amounts of data in a central location, so it's readily available to be categorized, processed, analyzed, and consumed by diverse groups within an organization. In this session, we will introduce the data lake concept and its implementation on AWS. We will explain the different roles our services play and how they fit into the data lake picture.
by PD Dutta, Sr. Product Manager, Object Storage, AWS
We will explain how to design and build an IoT cloud platform on top of Amazon S3. You will get to review the best practices for architecting a cost-effective, durable, and secure storage solution to store and analyze your IoT data on Amazon S3. In addition, we’ll cover how to collect, ingest and analyze the data in-place using different AWS Services such as AWS IoT, Amazon Kinesis, Amazon Athena, and Amazon Redshift Spectrum.
by Drew Meyer, Sr. Product Marketing Manager, AWS
This session will provide an overview of the AWS storage portfolio, including block, file, object, and cloud data migration services. We will touch on new offerings, outline some of the most common use cases, and prepare you for the individual deep dive sessions, customer sessions, and new announcements. The session will also address our partner network and what it means for a storage provider to have the APN Storage Competency.
Build Data Engineering Platforms with Amazon EMR (ANT204) - AWS re:Invent 2018 - Amazon Web Services
Amazon EMR provides a flexible range of service customization options, enabling customers to use it as a building block for their data platforms. In this session, AWS customers Salesforce.com and Vanguard discuss in detail how they use Amazon EMR to build a self-service, secure, and auditable data engineering platform. Customers who want to optimize their design and configurations should attend this session to learn best practices from customer experts. Topics include achieving cost-efficient scale, using notebooks, processing streaming data, rapid prototyping of applications and data pipelines, architecting for both transient and persistent clusters, setting up advanced security and authorization controls, and enabling easy self service for users.
This document summarizes and compares several AWS storage options and their key features, durability and availability, scalability and elasticity, security, anti-patterns, and pricing. It covers S3, Glacier, EFS, FSx, EBS, Instance Store Volumes, and Storage Gateway. The options provide a range of capabilities from simple object storage to block and file storage for different use cases and data access needs.
by Robbie Wright, Head of Amazon S3 & Amazon Glacier Product Marketing, AWS
Learn from AWS on how we've designed S3 and Glacier to be durable, available, and massively scalable. Hear how customers are using these services to enhance the accessibility and usability of their data. We will also dive into the benefits of object storage, its applications, and some best practices to follow.
This document summarizes a presentation on data lifecycle and storage management techniques for Amazon S3. It discusses lifecycle management rules for transitioning or expiring objects based on age, S3 inventory for listing objects, object tagging for classification and policy filtering, storage class analysis for monitoring usage and optimizing storage, and monitoring tools like CloudWatch and CloudTrail. The presentation provides an overview and best practices for these S3 management features.
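The lifecycle rules described above are expressed as a structured configuration (the shape boto3's put_bucket_lifecycle_configuration accepts). A minimal sketch of one rule that transitions objects under a prefix to Glacier after 90 days and expires them after a year; the rule ID and prefix are hypothetical:

```python
# Sketch of an S3 lifecycle configuration: age-based transition to
# Glacier at 90 days, expiration at 365. Rule ID and prefix invented.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }
    ]
}

rule = lifecycle["Rules"][0]
print(rule["Transitions"][0]["StorageClass"])  # GLACIER
```

Applied to a bucket, S3 evaluates the rule daily against object age, so tiering and cleanup happen without any application code.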
by Everett Dolgner, Business Development Manager, AWS
In the event of a disaster, you need to be able to recover lost data quickly to ensure business continuity. For critical applications, keeping your time to recover and data loss to a minimum and optimizing your overall capital expense can be challenging. This session presents AWS features and services along with disaster recovery architectures that you can leverage when building highly available and disaster-resilient strategies.
Build Data Lakes & Analytics on AWS: Patterns & Best Practices - Amazon Web Services
This document discusses building data lakes and analytics on AWS. It covers challenges with big data like volume, velocity, and variety. An AWS data lake can quickly ingest and store any type of data. The data lake includes analytics, machine learning, real-time data movement, and traditional data movement. Metadata management is important for data lakes. AWS Glue crawlers can discover data in various formats and populate the data catalog. Different tools like Amazon Athena, Amazon EMR, and Amazon Redshift can be used for analytics depending on the user and use case. Machine learning benefits from big data, and a data lake supports agility in machine learning.
The document discusses building data lakes with AWS. It recommends using Amazon S3 as the storage layer for the data lake due to its scalability, durability and integration with other AWS analytics services. It also recommends using AWS Glue to catalog and ingest data into the data lake through automated crawlers. This allows for easy discovery, querying and analysis of data in the lake.
Today organizations find themselves in a data rich world with a growing need for increased agility and accessibility of all this data for analysis and deriving keen insights to drive strategic decisions. Creating a data lake helps you to manage all the disparate sources of data you are collecting (in its original format) and extract value. In this session, learn how to architect and implement a data lake in the AWS Cloud. Learn about best practices as we walk through architectural blueprints.
This document discusses best practices for building a data lake architecture on AWS. It recommends using Amazon S3 as the centralized data lake storage and decoupling storage from compute. This allows for cheaper, more efficient operation and the ability to evolve to clusterless analytics tools like Amazon Athena. The document provides guidance on security, ingestion, cataloging, cost optimization, analytics tools and building a sample pipeline to analyze data in the lake.
by Darin Briskman, Database, Analytics, and Machine Learning, AWS
AWS Data & Analytics Week is an opportunity to learn about Amazon’s family of managed analytics services. These services provide easy, scalable, reliable, and cost-effective ways to manage your data in the cloud. We explain the fundamentals and take a technical deep dive into Amazon Redshift data warehouse; Data Lake services including Amazon EMR, Amazon Athena, & Amazon Redshift Spectrum; Log Analytics with Amazon Elasticsearch Service; and data preparation and placement services with AWS Glue and Amazon Kinesis. You'll learn how to get started, how to support applications, and how to scale.
by Androski Spicer, Solutions Architect, AWS
AWS Data & Analytics Week is an opportunity to learn about Amazon’s family of managed analytics services. These services provide easy, scalable, reliable, and cost-effective ways to manage your data in the cloud. We explain the fundamentals and take a technical deep dive into Amazon Redshift data warehouse; Data Lake services including Amazon EMR, Amazon Athena, & Amazon Redshift Spectrum; Log Analytics with Amazon Elasticsearch Service; and data preparation and placement services with AWS Glue and Amazon Kinesis. You'll learn how to get started, how to support applications, and how to scale.
Build Your First Big Data Application on AWS (ANT213-R1) - AWS re:Invent 2018 - Amazon Web Services
Do you want to increase your knowledge of AWS big data web services and launch your first big data application on the cloud? In this session, we walk you through simplifying big data processing as a data bus comprising ingest, store, process, and visualize. You will build a big data application using AWS managed services, including Amazon Athena, Amazon Kinesis, Amazon DynamoDB, and Amazon S3. Along the way, we review architecture design patterns for big data applications and give you access to a take-home lab so you can rebuild and customize the application yourself. To get the most from this session, bring your own laptop and have some familiarity with AWS services.
One Data Lake, Many Uses: Enabling Multi-Tenant Analytics with Amazon EMR (AN... - Amazon Web Services
One of the benefits of having a data lake is that same data can be consumed by multi-tenant groups—an efficient way to share a persistent Amazon EMR cluster. The same business data can be safely used for many different analytics and data processing needs. In this session, we discuss steps to make an Amazon EMR cluster multi-tenant for analytics, best practices for a multi-tenant cluster, and solutions to common challenges. We also address the security and governance aspects of a multi-tenant Amazon EMR cluster.
by Jon Handler, Principal Solutions Architect and Sanjay Dhar, Solutions Architect, AWS
Nearly everything in IT - servers, applications, websites, connected devices, and other things - generates discrete, time-stamped records of events called logs. Processing and analyzing these logs to gain actionable insights is log analytics. We'll look at how to use centralized log analytics across multiple sources with Amazon Elasticsearch Service.
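As a sketch of the first step in any log analytics pipeline, the parser below turns one hypothetical time-stamped log line into the kind of structured document you might index into Amazon Elasticsearch Service. The log format and field names are assumptions, not any particular product's schema:

```python
import re
from datetime import datetime

# Hypothetical application log line and format.
LINE = "2019-05-01T12:30:45Z ERROR payment timeout after 3000ms"
PATTERN = re.compile(r"(?P<ts>\S+) (?P<level>\w+) (?P<msg>.*)")

def parse(line: str) -> dict:
    """Turn a raw log line into a structured, index-ready document."""
    m = PATTERN.match(line)
    return {
        "@timestamp": datetime.strptime(m.group("ts"), "%Y-%m-%dT%H:%M:%SZ"),
        "level": m.group("level"),
        "message": m.group("msg"),
    }

doc = parse(LINE)
print(doc["level"])  # ERROR
```

Once every source emits documents in this shape, centralized search and aggregation across all of them becomes a single query against one index.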
Backup & Recovery - Optimize Your Backup and Restore Architectures in the Cloud - Amazon Web Services
This document discusses optimizing backup and restore architectures in the cloud. It begins by noting the rapid growth of digital data and importance of backup and recovery. Common terms like RPO and RTO are defined. Traditional on-premises backup is compared to approaches using cloud connectors, gateways, and services like S3, Glacier, and EBS. Benefits of cloud backup include cost savings, automation, and analytics. A variety of AWS storage services and partners are presented as solutions for different backup use cases.
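RPO (recovery point objective) bounds acceptable data loss, while RTO (recovery time objective) bounds acceptable downtime. A worked sketch of the RPO side: with periodic backups, worst-case data loss equals the backup interval, so a schedule meets an RPO only if the interval fits within it.

```python
# With periodic backups, a failure just before the next backup loses
# everything since the last one, so worst-case loss = backup interval.
def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """True if a backup schedule satisfies the stated RPO."""
    return backup_interval_hours <= rpo_hours

print(meets_rpo(24, 4))  # False: daily backups cannot meet a 4-hour RPO
print(meets_rpo(1, 4))   # True: hourly backups can
```

The same reasoning drives architecture choices: meeting a tight RPO pushes you from nightly tape jobs toward frequent incremental snapshots or continuous replication to S3/EBS.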
Backup and Recovery with Cloud-Native Deduplication and Use Cases from the Fi... - Amazon Web Services
by Hugh Emberson, CTO, StorReduce
Designing and deploying cloud-enabled backup & recovery solutions often leads to opportunities for reducing storage requirements and increasing efficiencies. Effective cloud-native deduplication capabilities in your backup & recovery strategy can optimize migration, decrease the need for purpose-built backup appliances (such as Data Domain) and large tape archives, and enable cost reductions of up to 95%. In this session, StorReduce will provide best practices around data deduplication for designing and deploying solutions for backup, archive, and general unstructured file data. They will also demonstrate how a cloud-native interface with scale-out deduplication enables generic cloud services, like search inside all backups moved to the cloud. They will guide the audience through two customer use cases from the financial services and healthcare industries.
Optimizing Storage for Enterprise Workloads and Migrations (STG202) - AWS re:... - Amazon Web Services
In this session, we focus on best practices for AWS block and file storage when supporting enterprise workloads (like SAP, Oracle, Microsoft applications, and home directories). We discuss migrating mission-critical workload data, selecting volumes or file systems, optimizing performance, and designing for durability and availability. We also review optimizing for cost to ensure that your lift-and-shift project is a success.
Building Hybrid Cloud Storage Architectures - Amazon Web Services
Companies of all sizes face constant challenges around data storage, growth, and protection. Acquiring more storage prolongs the challenge of managing its lifecycle, which includes purchasing, ongoing operation, hardware failures, upgrades, and migrations. In this session, learn how to use AWS Storage Gateway to connect your on-premises applications to AWS storage services using standard storage protocols. Storage Gateway enables hybrid cloud storage solutions for file sharing, data lakes, big data analytics, backup and disaster recovery, and migration. We will discuss best practices and new implementation approaches.
Speaker: Melissa Ravanini
Building Data Lakes That Cost Less and Deliver Results Faster - AWS Online Te...Amazon Web Services
Learning Objectives:
- Get an inside look at Amazon S3 Select and how it helps to accelerate application performance
- Learn about how Amazon Glacier Select helps you extend your data lake to archival storage
- Understand how different applications can leverage these features
A data lake is an architectural approach that allows you to store massive amounts of data into a central location, so it's readily available to be categorized, processed, analyzed and consumed by diverse groups within an organization.In this session, we will introduce the Data Lake concept and its implementation on AWS.We will explain the different roles our services play and how they fit into the Data Lake picture.
by PD Dutta, Sr. Product Manager, Object Storage, AWS
We will explain how to design and build an IoT cloud platform on top of Amazon S3. You will get to review the best practices for architecting a cost-effective, durable, and secure storage solution to store and analyze your IoT data on Amazon S3. In addition, we’ll cover how to collect, ingest and analyze the data in-place using different AWS Services such as AWS IoT, Amazon Kinesis, Amazon Athena, and Amazon Redshift Spectrum.
by Drew Meyer, Sr. Product Marketing Manager, AWS
This session will provide an overview of the AWS storage portfolio, including block, file, object, and cloud data migration services. We will touch on new offerings, outline some of the most common use cases, and prepare you for the individual deep dive sessions, customer sessions, and new announcements. The session will also address our partner network and what it means for a storage provider to have the APN Storage Competency.
Build Data Engineering Platforms with Amazon EMR (ANT204) - AWS re:Invent 2018Amazon Web Services
Amazon EMR provides a flexible range of service customization options, enabling customers to use it as a building block for their data platforms. In this session, AWS customers Salesforce.com and Vanguard discuss in detail how they use Amazon EMR to build a self-service, secure, and auditable data engineering platform. Customers who want to optimize their design and configurations should attend this session to learn best practices from customer experts. Topics include achieving cost-efficient scale, using notebooks, processing streaming data, rapid prototyping of applications and data pipelines, architecting for both transient and persistent clusters, setting up advanced security and authorization controls, and enabling easy self service for users.
This document summarizes and compares several AWS storage options and their key features, durability and availability, scalability and elasticity, security, anti-patterns, and pricing. It covers S3, Glacier, EFS, FSx, EBS, Instance Store Volumes, and Storage Gateway. The options provide a range of capabilities from simple object storage to block and file storage for different use cases and data access needs.
by Robbie Wright, HEad of Amazon S3 & Amazon Glacier Product Marketing, AWS
Learn from AWS on how we've designed S3 and Glacier to be durable, available, and massively scalable. Hear how customers are using these services to enhance the accessibility and usability of their data. We will also dive into the benefits of object storage, its applications, and some best practices to follow.
This document summarizes a presentation on data lifecycle and storage management techniques for Amazon S3. It discusses lifecycle management rules for transitioning or expiring objects based on age, S3 inventory for listing objects, object tagging for classification and policy filtering, storage class analysis for monitoring usage and optimizing storage, and monitoring tools like CloudWatch and CloudTrail. The presentation provides an overview and best practices for these S3 management features.
by Everett Dolgner, Business Development Manager, AWS
In the event of a disaster, you need to be able to recover lost data quickly to ensure business continuity. For critical applications, keeping your time to recover and data loss to a minimum and optimizing your overall capital expense can be challenging. This session presents AWS features and services along with disaster recovery architectures that you can leverage when building highly available and disaster-resilient strategies.
Build Data Lakes & Analytics on AWS: Patterns & Best PracticesAmazon Web Services
This document discusses building data lakes and analytics on AWS. It covers challenges with big data like volume, velocity, and variety. An AWS data lake can quickly ingest and store any type of data. The data lake includes analytics, machine learning, real-time data movement, and traditional data movement. Metadata management is important for data lakes. AWS Glue crawlers can discover data in various formats and populate the data catalog. Different tools like Amazon Athena, Amazon EMR, and Amazon Redshift can be used for analytics depending on the user and use case. Machine learning benefits from big data, and a data lake supports agility in machine learning.
The document discusses building data lakes with AWS. It recommends using Amazon S3 as the storage layer for the data lake due to its scalability, durability and integration with other AWS analytics services. It also recommends using AWS Glue to catalog and ingest data into the data lake through automated crawlers. This allows for easy discovery, querying and analysis of data in the lake.
Today organizations find themselves in a data rich world with a growing need for increased agility and accessibility of all this data for analysis and deriving keen insights to drive strategic decisions. Creating a data lake helps you to manage all the disparate sources of data you are collecting (in its original format) and extract value. In this session, learn how to architect and implement a data lake in the AWS Cloud. Learn about best practices as we walk through architectural blueprints.
This document discusses best practices for building a data lake architecture on AWS. It recommends using Amazon S3 as the centralized data lake storage and decoupling storage from compute. This allows for cheaper, more efficient operation and the ability to evolve to clusterless analytics tools like Amazon Athena. The document provides guidance on security, ingestion, cataloging, cost optimization, analytics tools and building a sample pipeline to analyze data in the lake.
by Darin Briskman, Database, Analytics, and Machine Learning AWS
AWS Data & Analytics Week is an opportunity to learn about Amazon’s family of managed analytics services. These services provide easy, scalable, reliable, and cost-effective ways to manage your data in the cloud. We explain the fundamentals and take a technical deep dive into Amazon Redshift data warehouse; Data Lake services including Amazon EMR, Amazon Athena, & Amazon Redshift Spectrum; Log Analytics with Amazon Elasticsearch Service; and data preparation and placement services with AWS Glue and Amazon Kinesis. You'll will learn how to get started, how to support applications, and how to scale.
by Androski Spicer, Solutions Architect AWS
AWS Data & Analytics Week is an opportunity to learn about Amazon’s family of managed analytics services. These services provide easy, scalable, reliable, and cost-effective ways to manage your data in the cloud. We explain the fundamentals and take a technical deep dive into Amazon Redshift data warehouse; Data Lake services including Amazon EMR, Amazon Athena, & Amazon Redshift Spectrum; Log Analytics with Amazon Elasticsearch Service; and data preparation and placement services with AWS Glue and Amazon Kinesis. You'll will learn how to get started, how to support applications, and how to scale.
Build Your First Big Data Application on AWS (ANT213-R1) - AWS re:Invent 2018Amazon Web Services
Do you want to increase your knowledge of AWS big data web services and launch your first big data application on the cloud? In this session, we walk you through simplifying big data processing as a data bus comprising ingest, store, process, and visualize. You will build a big data application using AWS managed services, including Amazon Athena, Amazon Kinesis, Amazon DynamoDB, and Amazon S3. Along the way, we review architecture design patterns for big data applications and give you access to a take-home lab so you can rebuild and customize the application yourself. To get the most from this session, bring your own laptop and have some familiarity with AWS services.
One Data Lake, Many Uses: Enabling Multi-Tenant Analytics with Amazon EMR (AN... - Amazon Web Services
One of the benefits of having a data lake is that the same data can be consumed by multiple tenant groups, an efficient way to share a persistent Amazon EMR cluster. The same business data can be safely used for many different analytics and data processing needs. In this session, we discuss steps to make an Amazon EMR cluster multi-tenant for analytics, best practices for a multi-tenant cluster, and solutions to common challenges. We also address the security and governance aspects of a multi-tenant Amazon EMR cluster.
by Jon Handler, Principal Solutions Architect and Sanjay Dhar, Solutions Architect, AWS
Nearly everything in IT - servers, applications, websites, connected devices, and other things - generates discrete, time-stamped records of events called logs. Processing and analyzing these logs to gain actionable insights is log analytics. We'll look at how to perform centralized log analytics across multiple sources with Amazon Elasticsearch Service.
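The first step in centralized log analytics is turning raw log lines into structured, timestamped documents that a search service can index. The sketch below parses a common-format web server log line into a JSON document; the log format and field names are illustrative, not taken from the session.

```python
# Minimal sketch: parse a raw access-log line into a JSON document ready
# for indexing into a search service such as Amazon Elasticsearch Service.
# The sample line and field names are hypothetical.

import json
import re

LINE = '198.51.100.7 - - [22/Jan/2019:10:17:31 +0000] "GET /index.html HTTP/1.1" 200 5213'

PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
)

def to_document(line: str) -> dict:
    """Turn one access-log line into a flat, typed document."""
    m = PATTERN.match(line)
    return {
        "client_ip": m.group("ip"),
        "timestamp": m.group("ts"),
        "request": m.group("req"),
        "status": int(m.group("status")),
        "bytes": int(m.group("bytes")),
    }

doc = to_document(LINE)
print(json.dumps(doc))
```

Once logs are in this shape, each document can be shipped to an index and queried by field (for example, all responses with status 500 in the last hour).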
Backup & Recovery - Optimize Your Backup and Restore Architectures in the Cloud - Amazon Web Services
This document discusses optimizing backup and restore architectures in the cloud. It begins by noting the rapid growth of digital data and the importance of backup and recovery. Common terms like RPO and RTO are defined. Traditional on-premises backup is compared to approaches using cloud connectors, gateways, and services like S3, Glacier, and EBS. Benefits of cloud backup include cost savings, automation, and analytics. A variety of AWS storage services and partners are presented as solutions for different backup use cases.
Backup and Recovery with Cloud-Native Deduplication and Use Cases from the Fi... - Amazon Web Services
by Hugh Emberson, CTO, StorReduce
Designing and deploying cloud-enabled backup & recovery solutions often creates opportunities to reduce storage requirements and increase efficiency. Having effective cloud-native deduplication capabilities as part of your backup & recovery strategy can optimize migration, decrease the need for purpose-built backup appliances (such as Data Domain) and large tape archives, and enable cost reductions of up to 95%. In this session, StorReduce provides best practices for data deduplication when designing and deploying solutions for backup, archive, and general unstructured file data. They also demonstrate how a cloud-native interface with scale-out deduplication enables generic cloud services, like search, inside all backups moved to the cloud. They guide the audience through two customer use cases from the financial services and healthcare industries.
Optimizing Storage for Enterprise Workloads and Migrations (STG202) - AWS re:... - Amazon Web Services
In this session, we focus on best practices for AWS block and file storage when supporting enterprise workloads (like SAP, Oracle, Microsoft applications, and home directories). We discuss migrating mission-critical workload data, selecting volumes or file systems, optimizing performance, and designing for durability and availability. We also review optimizing for cost to ensure that your lift-and-shift project is a success.
Construindo Arquiteturas Híbridas de Armazenamento em Nuvem (Building Hybrid Cloud Storage Architectures) - Amazon Web Services
Companies of all sizes face constant challenges in storing, growing, and protecting data. Acquiring more storage prolongs the challenge of managing its lifecycle, which includes purchasing, ongoing operation, hardware failures, upgrades, and migrations. In this session, learn how to use AWS Storage Gateway to connect your on-premises applications to AWS storage services using standard storage protocols. Storage Gateway enables hybrid cloud storage solutions for file sharing, data lakes, big data analytics, backup and disaster recovery, and migration. We discuss best practices and new deployment approaches.
Speaker: Melissa Ravanini
SRV302 Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway - Amazon Web Services
Enterprises of all sizes have the persistent storage challenges of data access, growth, and protection. Buying more storage stacks prolongs the pain of managing the storage lifecycle, which includes purchasing, ongoing operation, hardware failure, system retirement, and migration, yet it keeps on-premises datasets siloed from cloud workloads. In this session, learn how to use AWS Storage Gateway to connect your on-premises applications to AWS storage services by using standard storage protocols. Storage Gateway enables hybrid cloud storage solutions for file sharing, data lakes, big data analytics, backup and disaster recovery, and migration. We discuss best practices and new deployment approaches.
With a hybrid architecture approach to managing data on-premises and in the cloud, organizations can be more agile and responsive than ever before. Find out what your peers are doing with cloud and how data backup, recovery, management, and e-discovery capabilities can help maximize your use of AWS. We will also cover dynamic data indexing across on-premises and cloud storage and holistic data protection.
This session provides IT pros and application owners an overview of AWS options for building hybrid storage architectures or even entirely migrating datacenter storage to the AWS cloud. The AWS Storage Gateway connects existing on-premises block, file or tape storage systems to AWS cloud storage over the WAN in a hybrid model. The AWS Snow family of physical devices can capture, pre-process and migrate data into and out of AWS without any network connection at all. Join us to learn how you can close down datacenters, reduce storage footprints, and build solutions for tiering, data lakes, backup, disaster recovery, and migration.
AWS re:Invent 2018: Deep Dive: Hybrid Cloud Storage Arch. w/Storage Gateway, ... - Amazon Web Services
The document discusses AWS Storage Gateway, which allows for hybrid cloud storage architectures by enabling on-premises access and transfer of data to AWS cloud storage services. It provides an overview of Storage Gateway's file, volume, and tape gateway types. It also discusses how Kellogg's uses Storage Gateway for backup and disaster recovery, including migrating from traditional backups to using Storage Gateway to backup to S3 and EBS. Best practices for using Storage Gateway are also covered.
(BDT322) How Redfin & Twitter Leverage Amazon S3 For Big Data - Amazon Web Services
Analyzing large data sets requires significant compute and storage capacity that can vary in size based on the amount of input data and the analysis required. This characteristic of big data workloads is ideally suited to the pay-as-you-go cloud model, where applications can easily scale up and down based on demand. Learn how Amazon S3 can help scale your big data platform. Hear from Redfin and Twitter about how they build their big data platforms on AWS and how they use S3 as an integral piece of their big data platforms.
Cloud Data Migration with Amazon EBS (CMP406-R2) - AWS re:Invent 2018 - Amazon Web Services
In this session, we focus on the fundamentals for beginning your SAN migration strategy to the cloud. Get hands-on experience setting up Amazon EBS volumes, and learn how to optimize your volumes for performance, availability, and durability.
Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Learn how the AWS Storage Gateway works with your on-premises applications and infrastructure
- Understand which of your workloads the AWS Storage Gateway could support
- Learn how you can automate file-based workflows between your sites or teams and AWS resources
by Everett Dolgner, Business Development Manager, AWS
An Overview of AWS Services for Data Storage and Migration - SRV205 - Atlanta... - Amazon Web Services
In this session, we explore the features and functions of AWS storage services. We provide context on the AWS storage portfolio, and we cover the most common use cases for AWS offerings for object, file, block, and migration technologies, including the AWS Partner Network (APN) ecosystem. Then we examine each service, using customer case studies as examples. You gain an understanding of how to select storage and start moving workloads or building new ones.
AWS Portfolio: highlights of AWS product categories with examples - Amazon Web Services
The document discusses Amazon Web Services (AWS) and its various cloud computing products and services. It provides information on AWS' global infrastructure including 21 regions, 64 availability zones, and 158 edge locations. It also describes compute services such as EC2 instances, containers, and serverless functions. Additional sections cover database services, storage options, data transfer mechanisms, analytics and machine learning tools, and specific AI services for image and text recognition.
AWS offers numerous services to migrate data at a petabyte scale. You can easily move large volumes of data from onsite to the cloud and utilize the cloud as a backup target using data transfer services, such as AWS Snowball, AWS Snowball Edge, or AWS Storage Gateway. Learn about available data migration options and which one is the right fit for your requirements.
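A common way to decide between network-based transfer (AWS Storage Gateway, direct upload) and physical devices (AWS Snowball) is to estimate how long the transfer would take over your WAN link. The sketch below is a back-of-the-envelope calculation; the link speed, utilization factor, and data size are hypothetical inputs, not figures from the session.

```python
# Back-of-the-envelope estimate of WAN transfer time, to compare against
# shipping a physical device. All inputs are illustrative assumptions.

def wan_transfer_days(data_tb: float, link_mbps: float, utilization: float = 0.8) -> float:
    """Days needed to push data_tb terabytes over a link of link_mbps
    megabits per second, at the given sustained utilization."""
    bits = data_tb * 1e12 * 8                      # terabytes -> bits
    seconds = bits / (link_mbps * 1e6 * utilization)
    return seconds / 86_400                        # seconds -> days

# Example: 100 TB over a 500 Mbps link at 80% sustained utilization.
days = wan_transfer_days(100, 500)
print(f"{days:.1f} days")  # roughly 23 days
```

When the estimate runs to weeks or months, a device-based service such as AWS Snowball is usually the better fit; when it is hours or days, a gateway or direct network transfer is often simpler.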
What's new with Amazon S3, Amazon EFS, and other AWS storage services - STG20... - Amazon Web Services
AWS provides storage services that fit any workload or application. In this session, we discuss each of the latest developments in Amazon S3, Amazon S3 Glacier, Amazon EBS, Amazon EFS, Amazon FSx for Windows File Server, Amazon FSx for Lustre, AWS Storage Gateway, the AWS Snow family, AWS Transfer for SFTP, and AWS DataSync. We also examine which workloads and applications are best suited for each storage service.
by Everett Dolgner, Business Development Manager, AWS
AWS re:Invent 2018: Deep Dive: Hybrid Cloud Storage Arch. w/Storage Gateway, ... - Amazon Web Services
IT infrastructure teams with on-premises applications have to manage storage arrays throughout their never-ending lifecycle, including capacity planning guesswork, hardware failures, system migrations, and more. There are cloud-enabled alternatives to buying more and more storage arrays. With AWS Storage Gateway, you can start using Amazon S3, Amazon Glacier, and Amazon EBS in hybrid architectures with on-premises applications for storage, backup, disaster recovery, tiered storage, hybrid data lakes, and ML. In this session, learn how to use AWS Storage Gateway to seamlessly connect your applications to AWS storage services with familiar block-and-file storage protocols and a local cache for fast access to hot data. We demonstrate our latest capabilities and share best practices from experienced customers.
Migrating Large Scale Data Sets to the Cloud - STG204 - re:Invent 2017 - Amazon Web Services
AWS offers simple services for migrating data at petabyte scale. You can easily move large volumes of data from onsite to the cloud, and you can quickly get started with the cloud as a backup target using data transfer services such as AWS Snowball, AWS Snowball Edge, or AWS Storage Gateway. Learn about the available data migration options and which one is the right fit for your requirements. We discuss customer use cases and review the different applications customers used with our data migration services to cut their IT expenditures and the management time spent on hardware and backup solutions.
AWS offers storage, networking, and data transfer services so you can build and deploy solutions to extend backup and archive targets to the AWS Cloud, increasing scalability, durability, security, and compliance.
6. Object Storage Classes

Amazon S3 offers a range of object storage classes, with automated lifecycle policies to move data between them:

S3 Standard: active data; millisecond access; min 3 AZs; $0.025
S3 Standard - Infrequent Access: millisecond access; min 3 AZs; 30-day minimum duration; $0.0138
S3 One Zone - Infrequent Access: millisecond access; 1 AZ; 30-day minimum duration; $0.01104
Amazon Glacier: archive data; minutes-to-hours access; min 3 AZs; $0.0045

Pricing is per GB per month in the Canada Central region
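The per-GB rates above make it easy to compare monthly storage bills across classes. The sketch below does that arithmetic; the 10 TB workload size is an illustrative assumption, and the prices are the Canada Central figures from the slide (prices change over time).

```python
# Monthly storage cost comparison using the per-GB-per-month prices from
# the slide (Canada Central region; illustrative only, prices change).

PRICE_PER_GB = {
    "S3 Standard": 0.025,
    "S3 Standard-IA": 0.0138,
    "S3 One Zone-IA": 0.01104,
    "Amazon Glacier": 0.0045,
}

def monthly_cost(gb: float, storage_class: str) -> float:
    """Monthly cost in dollars for gb gigabytes in the given class."""
    return gb * PRICE_PER_GB[storage_class]

# Example: 10 TB (10,240 GB) stored in each class for one month.
for cls in PRICE_PER_GB:
    print(f"{cls}: ${monthly_cost(10_240, cls):,.2f}")
```

The spread (roughly $256 versus $46 per month for 10 TB at these rates) is why lifecycle policies that tier aging data toward Glacier matter at scale.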
33. Stemcell's backup & restore with Volume Gateway

Canada's largest biotech firm
• Data sovereignty required local hot files & tape archives in 10 global offices
• Volume Gateway eliminated 50-hour backup windows and tape archive systems
• Cut on-premises storage capex 40%; reduced RTO from 48 hours to 10 minutes
• Meets cloud strategy while retaining local ownership and data sovereignty
• Enabled data center exit in next 6–12 months

“It made no sense to keep buying big disk siloes, especially as we opened up new global offices, and now we can recover in the cloud from a snapshot if we ever had to.”
Adam Leggett, IT Manager
So, now you have a comprehensive portfolio of S3 storage classes to fit a vast number of use cases.
It’s important to note that S3 One Zone-IA is an INFREQUENT ACCESS storage class that, like S3 Standard-IA, bills a minimum storage duration of 30 days, which makes it a good fit for data that you want to keep around but don’t need to access very often.
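The 30-day minimum means an object deleted early is still billed as if it had been stored for the full 30 days. A minimal sketch of that billing rule, assuming only the minimum-duration detail described above (other pricing dimensions are ignored):

```python
# Billed storage duration for S3 classes with a 30-day minimum duration.
# Simplified sketch: only the minimum-duration rule is modeled.

MIN_DURATION_DAYS = {
    "S3 Standard-IA": 30,
    "S3 One Zone-IA": 30,
}

def billed_days(actual_days: int, storage_class: str) -> int:
    """Days of storage billed for an object kept for actual_days."""
    return max(actual_days, MIN_DURATION_DAYS.get(storage_class, 0))

print(billed_days(7, "S3 Standard-IA"))   # early delete still pays 30 days
print(billed_days(7, "S3 Standard"))      # no minimum on S3 Standard
print(billed_days(90, "S3 One Zone-IA"))  # past the minimum, billed as stored
```

This is why the IA classes suit long-lived, rarely read data rather than short-lived scratch files.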
The AWS Storage Gateway (SGW) is typically deployed in your existing storage environment as a VM.
You connect your existing applications, storage systems, or devices to the SGW. The SGW provides standard storage protocol interfaces so apps can connect to it without changes.
The gateway in turn connects to AWS so you can store data securely and durably in Amazon S3 and Amazon Glacier.
The gateway optimizes data transfer from on-premises to AWS. It also provides low-latency access through a local cache so your apps can access frequently used data locally. The service is also integrated with CloudWatch, CloudTrail, IAM, and other services, so you get an extension of AWS management services locally.
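The local-cache behavior described above can be illustrated with a toy model: hot data is served locally, while misses fall through to a simulated backend standing in for S3. This is a hypothetical sketch of the caching idea, not the actual Storage Gateway implementation.

```python
# Toy illustration of a gateway-style local read cache: hits are served
# locally, misses fall through to a simulated S3 backend, and the least
# recently used entry is evicted when the cache is full.

from collections import OrderedDict

class LocalCache:
    def __init__(self, backend: dict, capacity: int = 3):
        self.backend = backend        # stands in for Amazon S3
        self.cache = OrderedDict()    # LRU order: most recent at the end
        self.capacity = capacity
        self.hits = 0
        self.misses = 0

    def read(self, key: str) -> bytes:
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)         # mark as recently used
            return self.cache[key]
        self.misses += 1                        # simulated fetch over the WAN
        data = self.backend[key]
        self.cache[key] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict least recently used
        return data

s3 = {"a": b"1", "b": b"2", "c": b"3"}
gw = LocalCache(s3)
gw.read("a"); gw.read("a"); gw.read("b")
print(gw.hits, gw.misses)  # 1 2
```

The point of the sketch is the access pattern: repeated reads of hot data never cross the WAN, which is what gives the gateway its low-latency local access.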
---
Primary talking points:
Industry standard protocols for file, block, and tape
Secure and durable storage in Amazon S3 and Glacier
Optimized data transfer from on-premises to AWS
Low-latency access to frequently used data
Integrated with AWS security and management services
The file gateway enables you to store and retrieve objects in Amazon S3 using industry-standard file protocols. Files are stored as objects in your S3 buckets, accessed through a Network File System (NFS) mount point. Ownership, permissions, and timestamps are durably stored in S3 in the user-metadata of the object associated with the file. Once objects are transferred to S3, they can be managed as native S3 objects, and bucket policies such as versioning, lifecycle management, and cross-region replication apply directly to objects stored in your bucket.
Customers use the file interface to migrate file data into S3 for use by object-based workloads, as a cost-effective storage target for traditional backup applications, and as a tier in the cloud for on-premises file storage.
There is a one-to-one mapping from files to objects: the files written by your NFS client appear as objects in your S3 bucket.
This is powerful because it enables native access to core S3 features such as lifecycle policies, versioning, or cross-region replication, while maintaining a consistent namespace (the paths in your file system are the same as the keys of your objects).
The gateway caches an inventory of the metadata and objects in the Amazon S3 buckets associated with your file share.
Your gateway uses this inventory to reduce the latency and frequency of S3 requests in response to file operations from the NFS client.
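The one-to-one file-to-object mapping can be sketched as follows: the file's path relative to the mount point becomes the S3 object key, and ownership, permissions, and timestamps ride along as user metadata. The metadata key names here are illustrative, not the actual keys the file gateway writes.

```python
# Sketch of the file gateway's file-to-object mapping: the NFS path
# becomes the S3 key, and ownership/permissions/timestamps are carried
# as user metadata. Metadata key names below are hypothetical.

import posixpath

def file_to_object(mount_root: str, file_path: str,
                   uid: int, gid: int, mode: int, mtime: int):
    """Map an NFS file path plus its attributes to (s3_key, user_metadata)."""
    key = posixpath.relpath(file_path, mount_root)   # path == object key
    metadata = {
        "file-owner": str(uid),
        "file-group": str(gid),
        "file-permissions": oct(mode),
        "file-mtime": str(mtime),
    }
    return key, metadata

key, md = file_to_object("/mnt/share", "/mnt/share/projects/report.csv",
                         1000, 1000, 0o644, 1700000000)
print(key)  # projects/report.csv
```

Because the key equals the file path, S3 features like lifecycle policies and cross-region replication can be targeted at familiar directory-style prefixes.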
Who is D2L? What is it that we do?
Image Copyright 2016 D2L Inc.
We’re global
https://upload.wikimedia.org/wikipedia/commons/0/09/BlankMap-World-v2.png
We’re data intensive
<See if you can get stats on logins, amount of data, size of DBs>