The document summarizes a presentation about cost-effective data management using Amazon S3 storage classes and batch operations. It discusses different storage classes and their use cases, how to optimize data placement using storage class analysis and lifecycle policies, and introduces S3 Intelligent-Tiering for automated cost savings. It also outlines how S3 Batch Operations lets you manage billions of objects with a single request, saving operations and development time.
Amazon S3 adds new S3 Event Notifications for S3 Lifecycle, S3 Intelligent Ti...Dhaval Soni
You can now build event-driven applications using Amazon S3 Event Notifications that trigger when objects are transitioned or expired (deleted) with S3 Lifecycle, or moved within the S3 Intelligent-Tiering storage class to its Archive Access or Deep Archive Access tiers. You can also trigger S3 Event Notifications for any changes to object tags or access control lists (ACLs). You can generate these new notifications for your entire bucket, or for a subset of your objects using prefixes or suffixes, and choose to deliver them to Amazon EventBridge, Amazon SNS, Amazon SQS, or an AWS Lambda function.
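As a concrete sketch of what enabling these notifications looks like, the payload below builds an S3 bucket notification configuration that opts the bucket into EventBridge delivery and adds an SQS rule scoped by prefix to the new lifecycle, Intelligent-Tiering, tagging, and ACL event types. The bucket prefix and queue ARN are hypothetical; you would apply this payload with the AWS SDK (e.g. boto3's `put_bucket_notification_configuration`).

```python
# Sketch: S3 notification configuration covering the new event types.
# Queue ARN and "logs/" prefix are made-up examples; apply via the AWS SDK.
import json

notification_config = {
    # Opting in to EventBridge forwards all supported S3 events for the
    # bucket, including lifecycle and Intelligent-Tiering notifications.
    "EventBridgeConfiguration": {},
    # A direct-to-SQS rule can be narrowed by event type and key prefix/suffix.
    "QueueConfigurations": [
        {
            "Id": "lifecycle-events-to-sqs",
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:lifecycle-events",  # hypothetical
            "Events": [
                "s3:LifecycleExpiration:*",
                "s3:LifecycleTransition",
                "s3:IntelligentTiering",
                "s3:ObjectTagging:*",
                "s3:ObjectAcl:Put",
            ],
            "Filter": {
                "Key": {"FilterRules": [{"Name": "prefix", "Value": "logs/"}]}
            },
        }
    ],
}

print(json.dumps(notification_config, indent=2))
```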
Create Advanced Text Analytics Solutions with NLP - BDA310 - Chicago AWS SummitAmazon Web Services
About 80% of the data an organization holds is unstructured, which makes it difficult to analyze and use. Examples of unstructured data include emails, social media feeds, news articles, and customer feedback. NLP and ML can help. Amazon Comprehend is an NLP service that uses ML to find insights and relationships in text. In this session, learn how to easily process, analyze, and visualize data by combining Amazon Comprehend with Amazon RDS, Amazon Elasticsearch Service, and Amazon Neptune. Also see real-world examples of how customers have built advanced text analytics solutions with Amazon Comprehend.
Migrate Workloads with Large Storage and I/O Demands (GPSTEC311) - AWS re:Inv...Amazon Web Services
When you consider migrating your on-premises storage workloads to AWS, it's important to consider both performance and features. In this session, you learn how to use I/O profiling before you move your workload to AWS in order to understand your performance needs. Learn to translate your performance and feature requirements into solutions that might include AWS services and partner solutions. In addition, we show you how to keep monitoring your storage workload once you're running on AWS.
Amazon S3 Storage Lens metrics now available in Amazon CloudWatchDhaval Soni
Amazon S3 Storage Lens, a cloud storage analytics feature for organization-wide visibility into object storage usage and activity, now includes support for Amazon CloudWatch. You can now create a unified view of your operational health to monitor any of your S3 Storage Lens metrics alongside other application metrics using CloudWatch dashboards.
Building Data Lakes and Analytics on AWS: Patterns and Best Practices - BDA30...Amazon Web Services
In this session, we show you how to understand what data you have, how to drive insights, and how to make predictions using purpose-built AWS services. Learn about the common pitfalls of building data lakes and discover how to successfully drive analytics and insights from your data. Also learn how services such as Amazon S3, AWS Glue, Amazon Redshift, Amazon Athena, Amazon EMR, Amazon Kinesis, and Amazon ML services work together to build a successful data lake for various roles, including data scientists and business users.
Build Data Lakes & Analytics on AWS: Patterns & Best PracticesAmazon Web Services
With over 90% of today's data generated in the last two years, the rate of data growth shows no sign of slowing down. In this session, we step through the challenges and best practices for capturing data, understanding what data you own, driving insights, and predicting the future using AWS services. We frame the session and demonstrations around common pitfalls of building data lakes and how to successfully drive analytics and insights from data. We also discuss architecture patterns that bring together key AWS services, including Amazon S3, AWS Glue, Amazon Athena, Amazon Kinesis, and Amazon Machine Learning. Discover the real-world application of data lakes for roles including data scientists and business users.
Stephen Moon, Sr. Solutions Architect, Amazon Web Services
James Juniper, Solution Architect for the Geo-Community Cloud, Natural Resources Canada
by Rajeev Srinivasan, Sr. Solutions Architect and Gautam Srinivasan, Solutions Architect, AWS
While a Data Lake can support completely unstructured data, getting performant analytics at scale requires some data preparation. We'll look at how to use Amazon Kinesis, AWS Glue, and Amazon EMR to make raw data ready for high-performance analytics.
by Andre Hass, Specialist Technical Account Manager, AWS
Organizations use reports, dashboards, and analytics tools to extract insights from their data, monitor performance, and support decision making. To support these tools, data must be collected and prepared for use. We'll look at two approaches: the structured, centralized repository of a Data Warehouse and the less-structured repository of a Data Lake. We'll compare these approaches, examine the services that support each, and explore how they work together.
NoSQL is an important part of many big data strategies. Attend this session to learn how Amazon DynamoDB helps you create fast ingest and response data sets. We demonstrate how to use DynamoDB for batch-based query processing and ETL operations (using a SQL-like language) through integration with Amazon EMR and Hive. Then, we show you how to reduce costs and achieve scalability by connecting data to Amazon ElastiCache for handling massive read volumes. We'll also discuss how to add indexes on DynamoDB data for free-text searching by integrating with Elasticsearch using AWS Lambda and DynamoDB streams. Finally, you'll find out how you can take your high-velocity, high-volume data (such as IoT data) in DynamoDB and connect it to a data warehouse (Amazon Redshift) to enable BI analysis.
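The Lambda-and-Streams integration above boils down to flattening DynamoDB's typed attribute values into plain JSON before indexing into Elasticsearch. The sketch below shows that transform as a pure function; the record shape matches the standard DynamoDB Streams event format, while field names like `deviceId` are illustrative.

```python
# Sketch: flattening a DynamoDB Streams NewImage (typed attribute values)
# into a plain document suitable for indexing into Elasticsearch.

def flatten_attribute(attr):
    """Convert one DynamoDB typed value ({'S': ...}, {'N': ...}, ...) to a plain value."""
    (type_code, value), = attr.items()
    if type_code == "S":
        return value
    if type_code == "N":
        # DynamoDB transmits numbers as strings; restore a numeric type.
        return float(value) if "." in value else int(value)
    if type_code == "BOOL":
        return value
    if type_code == "L":
        return [flatten_attribute(v) for v in value]
    if type_code == "M":
        return {k: flatten_attribute(v) for k, v in value.items()}
    return value  # types not needed in this sketch (B, SS, NS, NULL, ...)

def stream_record_to_document(record):
    """Pick out the NewImage from one stream record and flatten it."""
    image = record["dynamodb"]["NewImage"]
    return {k: flatten_attribute(v) for k, v in image.items()}

# A trimmed-down INSERT event, as a Lambda triggered by the stream would see it:
record = {
    "eventName": "INSERT",
    "dynamodb": {
        "NewImage": {
            "deviceId": {"S": "sensor-42"},
            "temperature": {"N": "21.5"},
            "active": {"BOOL": True},
        }
    },
}
doc = stream_record_to_document(record)
print(doc)  # {'deviceId': 'sensor-42', 'temperature': 21.5, 'active': True}
```

In the full pipeline, `doc` would be the body of an Elasticsearch index request issued from the Lambda handler.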
On-premises compliance archival systems are expensive to maintain, are isolated IT silos, have very inefficient utilization, and are poorly protected from disaster. In AWS, we provide better infrastructure durability, better physical security, lower cost, and richer features for data access. Consider that many data lakes contain medical records, trading records, and other regulated content. The industry now has the opportunity to execute rich analytics against their data while retaining regulatory compliance.
by Sid Chauhan, Solutions architect, AWS
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
by Rajeev Srinivasan, Sr. Solutions Architect and Gautam Srinivasan, Solutions Architect, AWS
Amazon Kinesis Data Analytics gives us the tools to run SQL queries against active data streams. We'll look at how we can perform real-time log analytics and build entire streaming applications using SQL, so that you can gain actionable insights promptly.
Big Data Governance in a Post-GDPR World (GPSCT310) - AWS re:Invent 2018Amazon Web Services
Come to this session to discuss the recent release of the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act. We review why the AWS Big Data Competency now requests information on your strategy for supporting data governance within your software (ISV) or in your architectures (SI). We also review the ISVs in our new Data Governance category and discuss how you might want to partner with them for success.
by Ben Willett, Solutions Architect, AWS
Organizations use reports, dashboards, and analytics tools to extract insights from their data, monitor performance, and support decision making. To support these tools, data must be collected and prepared for use. We'll look at two approaches: the structured, centralized repository of a Data Warehouse and the less-structured repository of a Data Lake. We'll compare these approaches, examine the services that support each, and explore how they work together.
Perform Social Media Sentiment Analysis with Amazon Pinpoint & Amazon Compreh...Amazon Web Services
In this workshop, we show you how to easily deploy an AWS solution that ingests all Tweets from any Twitter handle, uses Amazon Comprehend to generate a sentiment score, and then automatically engages customers with a dynamic, personalized message. The intended audience is developers and marketers who want to leverage AWS to create powerful user engagement scenarios. We highlight how quickly you can deploy a machine learning marketing solution. We cover Amazon Pinpoint, the AWS user engagement service, and Amazon Comprehend, the AWS natural language processing service that uses artificial intelligence and machine learning to find insights and relationships in text.
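The engagement step of this pipeline can be sketched as a pure function: Comprehend's sentiment result (a `Sentiment` label plus per-label `SentimentScore` values, matching the shape of its DetectSentiment response) selects a message template for Pinpoint to send. The thresholds and message templates below are invented for illustration.

```python
# Sketch: choosing a personalized message from a Comprehend-style sentiment
# result. Response shape matches DetectSentiment; templates and the 0.6
# confidence threshold are illustrative, not from the workshop itself.

MESSAGES = {
    "NEGATIVE": "We're sorry to hear that. A support agent will reach out shortly.",
    "POSITIVE": "Thanks for the kind words! Here's a promo code for your next order.",
    "NEUTRAL": "Thanks for the feedback. We'd love to hear more.",
    "MIXED": "Thanks for the feedback. We'd love to hear more.",
}

def pick_message(sentiment_result, min_confidence=0.6):
    """Return an engagement message, falling back to NEUTRAL when the model is unsure."""
    label = sentiment_result["Sentiment"]
    # SentimentScore keys are capitalized ("Negative"), labels are uppercase.
    score = sentiment_result["SentimentScore"].get(label.capitalize(), 0.0)
    if score < min_confidence:
        label = "NEUTRAL"
    return MESSAGES[label]

result = {
    "Sentiment": "NEGATIVE",
    "SentimentScore": {"Positive": 0.02, "Negative": 0.91, "Neutral": 0.05, "Mixed": 0.02},
}
print(pick_message(result))
```

In the deployed solution, the chosen message would feed a Pinpoint campaign or direct message rather than a `print`.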
by Brian Mitchell, Principal Data Architect, AWS
An inside look at how a global e-commerce firm uses AWS technologies to build a scalable environment for data and analytics. We'll look at how Amazon is evolving the world of data warehousing with a combination of a data lake and parallel scalable compute engines including Amazon EMR and Amazon Redshift.
AWS Public Datasets: Learnings from Staging Petabytes of Data for Analysis in...Amazon Web Services
AWS hosts a variety of public data sets that anyone can access for free. Previously, large datasets such as satellite imagery or genomic data have required hours or days to locate, download, customize, and analyze. When data is made publicly available on AWS, anyone can analyze any volume of data without needing to download or store it themselves. The AWS Open Data Team will share tips and tricks, patterns and anti-patterns, and tools to help you most effectively stage your data for analysis in the cloud.
Customers are migrating their analytics, data processing (ETL), and data science workloads running on Apache Hadoop, Spark, and data warehouse appliances from on-premises deployments to AWS in order to save costs, increase availability, and improve performance. AWS offers a broad set of analytics services, including solutions for batch processing, stream processing, machine learning, data workflow orchestration, and data warehousing. This session will focus on identifying the components and workflows in your current environment and on providing best practices to migrate these workloads to the right AWS data analytics product. We will cover services such as Amazon EMR, Amazon Athena, Amazon Redshift, Amazon Kinesis, and more. We will also feature Vanguard, an American investment management company based in Malvern, Pennsylvania with over $4.4 trillion in assets under management. Ritesh Shah, Sr. Program Manager for Cloud Analytics Program at Vanguard, will describe how they orchestrated their migration to AWS analytics services, including Hadoop and Spark workloads to Amazon EMR. Ritesh will highlight the technical challenges they faced and overcame along the way, as well as share common recommendations and tuning tips to accelerate the time to production.
by Manish Mohite, Solutions Architect, AWS
How do you get data from your sources into your Redshift data warehouse? We'll show how to use AWS Glue and Amazon Kinesis Firehose to make it easy to automate the work to get data loaded.
The AWS cloud computing platform has disrupted big data. Managing big data applications used to be for only well-funded research organizations and large corporations, but not any longer. Hear from Ben Butler, Big Data Solutions Marketing Manager for AWS, to learn how our customers are using big data services in the AWS cloud to innovate faster than ever before. Not only is AWS technology available to everyone, but it is self-service, on-demand, and featuring innovative technology and flexible pricing models at low cost with no commitments. Learn from customer success stories, as Ben shares real-world case studies describing the specific big data challenges being solved on AWS. We will conclude with a discussion around the tutorials, public datasets, test drives, and our grants program - all of the resources needed to get you started quickly.
Data Warehousing and Data Lake Analytics, Together - AWS Online Tech TalksAmazon Web Services
Learning Objectives:
- Learn how to discover and prepare your data lake for analytics
- See how you can query across your data warehouse and data lake without moving data
- Understand use cases that give you freedom to store data where you want and analyze it when you need it
Building Serverless Analytics Pipelines with AWS Glue (ANT308) - AWS re:Inven...Amazon Web Services
Organizations need to gain insight and knowledge from a growing number of IoT, APIs, clickstreams, and unstructured and log data sources. However, organizations are also often limited by legacy data warehouses and ETL processes that were designed for transactional data. In this session, we introduce key ETL features of AWS Glue and cover common use cases ranging from scheduled nightly data warehouse loads to near-real-time, event-driven ETL pipelines for your data lake. We also discuss how to build scalable, efficient, and serverless ETL pipelines using AWS Glue. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
Speaker: Yian Han, Senior Business Development Manager, AWS
AWS offers a family of intelligent services that provide cloud-native machine learning and deep learning technologies to address your different use cases and needs. For developers looking to add managed AI services to their applications, AWS brings natural language understanding (NLU) with Amazon Lex, text-to-speech (TTS) with Amazon Polly, visual search and image recognition with Amazon Rekognition, and developer-focused machine learning with Amazon Machine Learning. In this talk, you will learn about these services and see demos of their capabilities.
Business Intelligence in Minutes with Amazon Athena and Amazon QuickSightAmazon Web Services
Leverage Amazon S3, Amazon Athena, and Amazon QuickSight to explore and visualise data without having to manage a database or spin up a server. We will show you how to upload a dataset to a Data Lake in the AWS Cloud, optimise it in a format that enables high speed queries using SQL, and create rich web-based visualisations from those results.
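The "optimise it in a format that enables high speed queries" step is typically done with an Athena CTAS statement that rewrites raw CSV into partitioned Parquet. The sketch below assembles such a statement locally; the database, table, and bucket names are hypothetical, and the finished query would be submitted through the Athena API (e.g. boto3's `start_query_execution`).

```python
# Sketch: building an Athena CTAS statement that converts a raw CSV table
# into partitioned Parquet. All names are hypothetical examples.

def ctas_to_parquet(source_table, target_table, output_location, partition_cols=None):
    """Return a CTAS query string; partition columns must come last in the SELECT."""
    with_props = ["format = 'PARQUET'", f"external_location = '{output_location}'"]
    if partition_cols:
        cols = ", ".join(f"'{c}'" for c in partition_cols)
        with_props.append(f"partitioned_by = ARRAY[{cols}]")
    return (
        f"CREATE TABLE {target_table}\n"
        f"WITH ({', '.join(with_props)})\n"
        f"AS SELECT * FROM {source_table}"
    )

query = ctas_to_parquet(
    source_table="raw.trip_data_csv",
    target_table="curated.trip_data",
    output_location="s3://my-data-lake/curated/trip_data/",  # hypothetical bucket
    partition_cols=["year", "month"],
)
print(query)
```

The resulting Parquet table is what QuickSight would then query through Athena at interactive speed.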
Speaker: Aun Iftikhar, Solutions Architect, AWS
This overview presentation discusses big data challenges and provides an overview of the AWS Big Data Platform by covering:
- How AWS customers leverage the platform to manage massive volumes of data from a variety of sources while containing costs.
- Reference architectures for popular use cases, including connected devices (IoT), log streaming, real-time intelligence, and analytics.
- The AWS big data portfolio of services, including Amazon S3, Kinesis, DynamoDB, Elastic MapReduce (EMR), and Redshift.
- Amazon Aurora, the latest relational database engine: MySQL-compatible and highly available, providing up to five times better performance than MySQL at one-tenth the cost of a commercial database.
Created by: Rahul Pathak,
Sr. Manager of Software Development
Working with Scalable Machine Learning Algorithms in Amazon SageMaker - AWS O...Amazon Web Services
Learning Objectives:
- Become acquainted with the popular algorithms provided with Amazon SageMaker
- Learn how to use algorithms for training in Amazon SageMaker
- Learn how the algorithms in Amazon SageMaker were architected to be faster and more efficient by design
Cost-effective-Data-Management-with-S3-Batch-Operations-and-the-S3-Storage-Cl...Amazon Web Services
Cost-effective Data Management with S3 Batch Operations and the S3 Storage Classes Abstract: As your data lake grows, it becomes increasingly important to manage objects at scale and optimize storage costs and resources. In this session, AWS experts provide an overview of S3's capabilities that let you manage data at the object, bucket, and account levels. Learn about and watch demos of S3 Batch Operations, a new feature that lets you take action across thousands, millions, and even billions of objects with a single API request or a few clicks in the S3 Management Console. Also learn some cost-optimization best practices for storing objects across the S3 Storage Classes. This includes an overview of our newest storage class, S3 Intelligent-Tiering, the first cloud storage class to automatically deliver cost savings by moving objects with unknown or changing access patterns between two storage tiers: one optimized for frequent access and a lower-cost one optimized for infrequent access.
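One of the cost-optimization patterns this session covers, moving objects into S3 Intelligent-Tiering via a lifecycle rule, can be sketched as the configuration payload below. The `datalake/` prefix and the day counts are illustrative; the payload would be applied with the S3 API (e.g. boto3's `put_bucket_lifecycle_configuration`).

```python
# Sketch: a lifecycle configuration that hands new objects under "datalake/"
# to S3 Intelligent-Tiering immediately and expires stale ones. Prefix and
# retention period are made-up examples; apply via the AWS SDK.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-and-expire-datalake",
            "Status": "Enabled",
            "Filter": {"Prefix": "datalake/"},
            # Day 0: let Intelligent-Tiering manage unknown access patterns.
            "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
            # Clean up after a 7-year example retention period.
            "Expiration": {"Days": 2555},
        }
    ]
}
```

For one-off changes to existing objects at scale (e.g. re-tiering billions of objects already in the bucket), S3 Batch Operations with a manifest is the complementary tool, since lifecycle rules only act on objects as they age.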
Deep Dive on Amazon S3 Storage Classes: Creating Cost Efficiencies across You...Amazon Web Services
Amazon S3 supports a range of storage classes that can help you cost-effectively store data without impacting performance or availability. Each storage class offers different data-access levels, retrieval times, and costs to support various use cases. In this session, Amazon S3 experts dive deep into the different Amazon S3 storage classes, their respective attributes, and when you should use them.
NoSQL is an important part of many big data strategies. Attend this session to learn how Amazon DynamoDB helps you create fast ingest and response data sets. We demonstrate how to use DynamoDB for batch-based query processing and ETL operations (using a SQL-like language) through integration with Amazon EMR and Hive. Then, we show you how to reduce costs and achieve scalability by connecting data to Amazon ElasticCache for handling massive read volumes. Weâll also discuss how to add indexes on DynamoDB data for free-text searching by integrating with Elasticsearch using AWS Lambda and DynamoDB streams. Finally, youâll find out how you can take your high-velocity, high-volume data (such as IoT data) in DynamoDB and connect it to a data warehouse (Amazon Redshift) to enable BI analysis.
On premises compliance archival systems are expensive to maintain, are isolated IT silos, have very inefficient utilization, and are poorly protected from disaster. In AWS, we provide better infrastructure durability, better physical security, lower cost, and richer features for data access. Consider that many data lakes contain medical records, trading records, and other regulated content. The industry now has the opportunity to execute rich analytics against their data while retaining regulatory compliance.
by Sid Chauhan, Solutions architect, AWS
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
by Rajeev Srinivasan, Sr. Solutions Architect and Gautam Srinivasan, Solutions Architect, AWS
Amazon Kinesis Data Analytics gives us to tools to run SQL queries against active data streams. We'll look at how we can performance real-time log analytics and q build entire streaming applications using SQL, so that you can gain actionable insights promptly.
Big Data Governance in a Post-GDPR World (GPSCT310) - AWS re:Invent 2018Amazon Web Services
Â
Come to this session to discuss the recent release of the General Data Protection Regulation (GDPR) and the California Consumer Protection Act. We review why the AWS Big Data Competency now requests information on your strategy for supporting data governance within your software (ISV) or in your architectures (SI). We also review the ISVs in our new Data Governance category and discuss how you might want to partner with them for success.
by Ben Willett, Solutions Architect, AWS
Organizations use reports, dashboards, and analytics tools to extract insights from their data, monitor performance, and support decision making. To support these tools, data must be collected and prepared for use. We'll look at two approaches: a structured centralized data repository as a Data Warehouse the less-structured repository of a Data Lake. We'll compare these approaches, examine the services that support each, and explore how they work together.
Perform Social Media Sentiment Analysis with Amazon Pinpoint & Amazon Compreh...Amazon Web Services
Â
In this workshop, we show you how to easily deploy an AWS solution that ingests all Tweets from any Twitter handle, uses Amazon Comprehend to generate a sentiment score, and then automatically engages customers with a dynamic, personalized message. The intended audience is developers and marketers who want to leverage AWS to create powerful user engagement scenarios. We highlight how quickly you can deploy a machine learning marketing solution. We cover Amazon Pinpoint, the AWS user engagement service, and Amazon Comprehend, the AWS natural language processing service that uses artificail intelligence and machine learning to find insights and relationships in text.
by Brian Mitchell, Principal Data Architect, AWS
An inside look at how a global e-commerce firm uses AWS technologies to build a scalable environment for data and analytics. We'll look at how Amazon is evolving the world of data warehousing with a combination of a data lake and parallel scalable compute engines including Amazon EMR and Amazon Redshift.
AWS Public Datasets: Learnings from Staging Petabytes of Data for Analysis in...Amazon Web Services
Â
AWS hosts a variety of public data sets that anyone can access for free. Previously, large datasets such as satellite imagery or genomic data have required hours or days to locate, download, customize, and analyze. When data is made publicly available on AWS, anyone can analyze any volume of data without needing to download or store it themselves. The AWS Open Data Team will share tips and tricks, patterns and anti-patterns and tools to help you most effectively stage your data for analysis in the cloud.
Customers are migrating their analytics, data processing (ETL), and data science workloads running on Apache Hadoop, Spark, and data warehouse appliances from on-premise deployments to AWS in order to save costs, increase availability, and improve performance. AWS offers a broad set of analytics services, including solutions for batch processing, stream processing, machine learning, data workflow orchestration, and data warehousing. This session will focus on identifying the components and workflows in your current environment; and providing the best practices to migrate these workloads to the right AWS data analytics product. We will cover services such as Amazon EMR, Amazon Athena, Amazon Redshift, Amazon Kinesis, and more. We will also feature Vanguard, an American investment management company based in Malvern, Pennsylvania with over $4.4 trillion in assets under management. Ritesh Shah, Sr. Program Manager for Cloud Analytics Program at Vanguard, will describe how they orchestrated their migration to AWS analytics services, including Hadoop and Spark workloads to Amazon EMR. Ritesh will highlight the technical challenges they faced and overcame along the way, as well as share common recommendations and tuning tips to accelerate the time to production.
by Manish Mohite, Solutions Architect, AWS
How do you get data from your sources into your Redshift data warehouse? We'll show how to use AWS Glue and Amazon Kinesis Firehose to make it easy to automate the work to get data loaded.
The AWS cloud computing platform has disrupted big data. Managing big data applications used to be for only well-funded research organizations and large corporations, but not any longer. Hear from Ben Butler, Big Data Solutions Marketing Manager for AWS, to learn how our customers are using big data services in the AWS cloud to innovate faster than ever before. Not only is AWS technology available to everyone, but it is self-service, on-demand, and featuring innovative technology and flexible pricing models at low cost with no commitments. Learn from customer success stories, as Ben shares real-world case studies describing the specific big data challenges being solved on AWS. We will conclude with a discussion around the tutorials, public datasets, test drives, and our grants program - all of the resources needed to get you started quickly.
Data Warehousing and Data Lake Analytics, Together - AWS Online Tech TalksAmazon Web Services
Â
Learning Objectives:
- Learn how to discover and prepare your data lake for analytics
- See how you can query across your data warehouse and data lake without moving data
- Understand use cases that give you freedom to store data where you want and analyze it when you need it
Building Serverless Analytics Pipelines with AWS Glue (ANT308) - AWS re:Inven...Amazon Web Services
Â
Organizations need to gain insight and knowledge from a growing number of IoT, APIs, clickstreams, and unstructured and log data sources. However, organizations are also often limited by legacy data warehouses and ETL processes that were designed for transactional data. In this session, we introduce key ETL features of AWS Glue, we cover common use cases ranging from scheduled nightly data warehouse loads to near real-time, event-driven ETL pipelines for your data lake. We also discuss how to build scalable, efficient, and serverless ETL pipelines using AWS Glue. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
čŹĺ¸Ť: Yian Han, Senior Business Development Manager, AWS
AWS offers a family of intelligent services that provide cloud-native machine learning and deep learning technologies to address your different use cases and needs. For developers looking to add managed AI services to their applications, AWS brings natural language understanding (NLU) and text-to-speech (TTS) with Amazon Polly, visual search and image recognition with Amazon Rekognition, and developer-focused machine learning with Amazon Machine Learning. In this talk you will learn about these services and see demos of their capabilities
Business Intelligence in Minutes with Amazon Athena and Amazon QuickSightAmazon Web Services
Â
Leverage Amazon S3, Amazon Athena, and Amazon QuickSight to explore and visualise data without having to manage a database or spin up a server. We will show you how to upload a dataset to a Data Lake in the AWS Cloud, optimise it in a format that enables high speed queries using SQL, and create rich web-based visualisations from those results.
Speaker: Aun Iftikhar, Solutions Architect, AWS
This overview presentation discusses big data challenges and provides an overview of the AWS Big Data Platform by covering:
- How AWS customers leverage the platform to manage massive volumes of data from a variety of sources while containing costs.
- Reference architectures for popular use cases, including connected devices (IoT), log streaming, real-time intelligence, and analytics.
- The AWS big data portfolio of services, including Amazon S3, Kinesis, DynamoDB, Elastic MapReduce (EMR), and Redshift.
- The latest relational database engine, Amazon Aurora: a MySQL-compatible, highly available relational database engine that provides up to five times better performance than MySQL at one-tenth the cost of a commercial database.
Created by: Rahul Pathak, Sr. Manager of Software Development
Working with Scalable Machine Learning Algorithms in Amazon SageMaker - AWS O...Amazon Web Services
Learning Objectives:
- Become acquainted with the popular algorithms provided with Amazon SageMaker
- Learn how to use algorithms for training in Amazon SageMaker
- Learn how the algorithms in Amazon SageMaker were architected to be faster and more efficient by design
Cost-effective-Data-Management-with-S3-Batch-Operations-and-the-S3-Storage-Cl...Amazon Web Services
Cost-effective Data Management with S3 Batch Operations and the S3 Storage Classes Abstract: As your data lake grows, it becomes increasingly important to manage objects at scale and optimize storage costs and resources. In this session, AWS experts provide an overview of S3's capabilities that let you manage data at the object, bucket, and account levels. Learn about and watch demos of S3 Batch Operations, a new feature that lets you take action across thousands, millions, and even billions of objects with a single API request or a few clicks in the S3 Management Console. Also learn cost-optimization best practices for storing objects across the S3 Storage Classes, including an overview of our newest storage class, S3 Intelligent-Tiering: the first cloud storage class to automatically deliver cost savings by moving objects with unknown or changing access patterns between two storage tiers, one optimized for frequent access and a lower-cost one optimized for infrequent access.
Deep Dive on Amazon S3 Storage Classes: Creating Cost Efficiencies across You...Amazon Web Services
Amazon S3 supports a range of storage classes that can help you cost-effectively store data without impacting performance or availability. Each of our storage classes offers different data-access levels, retrieval times, and costs to support various use cases. In this session, Amazon S3 experts dive deep into the different Amazon S3 storage classes, their respective attributes, and when you should use them.
Drive Down the Cost of your Data Lake by Using the Right Data TieringBoaz Ziniman
Amazon S3 supports a wide range of storage classes to help you cost-effectively store your data. Each of the S3 Storage Classes is designed to support different use cases while reliably protecting your data. In this session, we look into the different S3 Storage Classes, their respective key features, and the use cases they support, while focusing on the newest storage class, S3 Intelligent-Tiering: the first cloud storage class that automatically optimizes storage costs for data with changing access patterns.
Optimizing Costs in Amazon S3 Creating Cost Efficiencies w/ Amazon S3 Storage...Amazon Web Services
Amazon S3 supports a wide range of storage classes to help you cost-effectively store your data. Each of the S3 Storage Classes is designed to support different use cases while reliably protecting your data. In this session, Amazon S3 experts will discuss the different S3 Storage Classes, their respective key features, and the unique use cases they support. We then dive deep into S3's newest storage class, S3 Intelligent-Tiering: the first cloud storage class that automatically optimizes storage costs for data with changing access patterns. S3 Intelligent-Tiering moves data between two storage tiers based on the changing access patterns of your objects and is ideal for data where customers don't know, or have a hard time learning, how a data set is accessed over time. Attend this session to learn more about creating cost efficiencies with Amazon S3, when to use which storage class, and how S3 Intelligent-Tiering automates cost savings for you.
Best Practices for Amazon S3 and Amazon Glacier (STG203-R2) - AWS re:Invent 2018Amazon Web Services
Learn best practices for Amazon S3 performance optimization, security, data protection, storage management, and much more. In this session, we look at common Amazon S3 use cases and ways to manage large volumes of data within Amazon S3. We discuss the latest performance improvements and how they impact previous guidance. We also talk about the Amazon S3 data resilience model and how the architecture of AWS Regions and Availability Zones affects architecting for fault tolerance.
Cost efficiencies and security best practices with Amazon S3 storage - STG301...Amazon Web Services
Join us to learn best practices for Amazon S3 cost optimization and security. Amazon S3 supports various storage classes to help you cost-effectively store data. In this session, Amazon S3 experts discuss these storage classes, their key features, and the use cases that they support. We examine the newest storage classes, S3 Intelligent-Tiering and S3 Glacier Deep Archive. Learn about Amazon S3 access control policies, encryption, and security monitoring. Also, learn how to use S3 Block Public Access, a feature that helps you enforce a no public access policy for an individual bucket, a group of buckets, or an entire account.
Amazon S3 Glacier Deep Archive is a new storage class that provides secure, durable object storage for long-term data retention and digital preservation. S3 Glacier Deep Archive is designed for customers that retain data sets for 7-10 years or longer to meet business or regulatory compliance requirements, such as organizations in media and entertainment, financial services, healthcare, and public sectors. At just $0.00099 per GB-month (less than one-tenth of one cent, or $1 per TB-month), S3 Glacier Deep Archive offers the lowest cost storage class in the cloud, at prices significantly less expensive than storing and maintaining data in on-premises magnetic tape libraries and/or archiving data offsite.
Deep dive on Amazon S3 Glacier Deep Archive - STG301 - Santa Clara AWS SummitAmazon Web Services
Many organizations need to retain multiple PBs of data to meet business and regulatory compliance requirements, and many choose on-premises magnetic tape libraries or off-premises tape archival services, which are expensive and onerous to maintain. In this session, we dive into Amazon S3 Glacier Deep Archive, which enables customers with large datasets to eliminate the cost and management of tape infrastructure while ensuring that data is preserved for future use and analysis. S3 Glacier Deep Archive is Amazon S3's lowest-cost storage class. Learn how it supports long-term retention and digital preservation of data that won't be regularly accessed, if ever.
Protect & Manage Amazon S3 & Amazon Glacier Objects at Scale (STG316-R1) - AW...Amazon Web Services
As your data repository grows on AWS using the object storage services Amazon S3 and Amazon Glacier, it becomes increasingly helpful to use particular features to help protect and manage your objects. In this chalk talk, you have the opportunity to speak directly with the AWS engineering team that builds and maintains features like Cross-Region Replication, S3 Storage Class Analysis, S3 Inventory, S3 Lifecycle, Amazon Glacier Vault Lock, and others. Bring your feedback, questions, and expertise to discuss innovative ways to protect data from corruption or malicious and accidental deletion, managing the data lifecycle to reduce costs, identifying wasted storage, and much more.
Adding to the existing AI services, AWS continues to bridge the gap for developers to build ML solutions without the hurdle of having data science expertise. In this session, learn about the new services announced at re:Invent (Forecast, Textract and Personalize) and get a preview of what to expect when building time series models, OCR and recommendation engines with little to no data science experience.
In this session, we'll discuss the new S3 storage class, S3 One Zone-IA, with detailed demos of how to implement and use single-AZ storage to reduce the cost of storing recreatable data like backups, disaster recovery copies, or derived analysis. We'll also talk about the new S3 Select feature, now generally available, which helps accelerate applications and analytics by filtering data in place in Amazon S3.
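As a sketch of what S3 Select looks like in practice, the following builds a request that filters a CSV object in place with SQL before anything is downloaded. The bucket, key, and column names here are placeholders, not values from the session:

```python
# Minimal S3 Select sketch: push a SQL filter down to S3 so only matching
# rows leave the bucket. Bucket, key, and column names are placeholders.
select_request = {
    "Bucket": "example-data-bucket",
    "Key": "backups/orders.csv",
    "ExpressionType": "SQL",
    "Expression": (
        "SELECT s.order_id, s.total FROM s3object s WHERE s.status = 'FAILED'"
    ),
    "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
    "OutputSerialization": {"CSV": {}},
}

# With boto3 and credentials configured, run it with:
#   import boto3
#   response = boto3.client("s3").select_object_content(**select_request)
#   for event in response["Payload"]:        # the response is an event stream
#       if "Records" in event:
#           print(event["Records"]["Payload"].decode())
```

Because only the filtered rows are returned, applications scanning large CSV or JSON objects can avoid transferring the full object at all.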
Get the latest on what we've been developing in Amazon S3. In this session, learn about new advances in S3 performance, security, data protection, storage management, and much more. We'll discuss how to apply the appropriate bucket policies and encryption configurations to enhance security, use S3 Select to accelerate queries, and take advantage of object tagging for data classification.
Learn from our engineering experts how we've designed Amazon S3 and Amazon Glacier to be durable, available, and massively scalable. Hear how Sprinklr architected their environment for the ultimate in high availability for their mission-critical applications. In this session, we'll discuss AWS Region and Availability Zone architecture, storage classes, built-in and on-demand data replication, and much more.
Learn best practices for Amazon Simple Storage Service (Amazon S3) performance optimization, security, data protection, storage management, and much more. Learn how to optimize key naming to increase throughput, apply the appropriate AWS Identity and Access Management (IAM) and encryption configurations, and leverage object tagging and other features to enhance security.
Storing data long term with Amazon S3 Glacier Deep Archive - STG302 - Chicago...Amazon Web Services
Many organizations need to retain multiple petabytes of data to satisfy business and regulatory compliance requirements. Among these organizations, many choose on-premises magnetic tape libraries or offsite tape archival services, which are expensive and onerous to maintain. In this session, we look closely at Amazon Simple Storage Service (Amazon S3) Glacier Deep Archive, which enables customers with large datasets to eliminate the cost and management of tape infrastructure while ensuring that data is preserved for future use and analysis. Amazon S3 Glacier Deep Archive is the lowest-cost storage class for Amazon S3, and we examine how it supports long-term retention and digital preservation of data that is seldom, if ever, accessed.
Analysing streaming data in real time (AWS)javier ramirez
We all want to analyse and visualise streaming data for real-time operational insights into our applications and infrastructure, and to make more informed decisions. But streaming analytics is hard at scale: before you realize it, you can end up with a very sophisticated architecture with many moving parts that you need to secure, monitor, and scale independently. In this session, you will learn how to work with real-time data, from ingestion to visualisation and monitoring, at any scale, by leveraging the managed services provided by AWS.
Storing data long term with Amazon S3 Glacier Deep Archive - STG301 - New Yor...Amazon Web Services
Many organizations need to retain multiple petabytes of data to satisfy business and regulatory compliance requirements. Among these organizations, several choose on-premises magnetic tape libraries or offsite tape archival services, which are expensive and onerous to maintain. In this session, get a closer look at Amazon Simple Storage Service (Amazon S3) Glacier Deep Archive, which lets customers with large datasets eliminate the cost and management of tape infrastructure while ensuring that data is preserved for future use and analysis. We also examine how this service supports long-term retention and digital preservation of data that is seldom, if ever, accessed.
How to build Forecasting services using ML and deep learning algorithms...Amazon Web Services
Forecasting is an important process for a great many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session, we show how to pre-process data that contains a time component and then use an algorithm that, starting from the type of data analyzed, produces an accurate forecast.
Big Data for Startups: how to create Big Data applications in Serverless mode...Amazon Web Services
The variety and volume of data created every day is growing ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud, and Serverless services in particular, let us break through these limits.
We therefore look at how to develop Big Data applications quickly, without worrying about the infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session, we present the main features of the service and how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing the pace of innovation. Over this period, we learned how changing our approach to application development allowed us to greatly increase agility and release velocity and, ultimately, to build more reliable and scalable applications. In this session, we explain how we define modern applications and how building modern apps affects not only the application architecture but also the organizational structure, the development release pipelines, and even the operating model. We also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot instances Amazon Web Services
The use of containers continues to grow.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot instances, leading to average savings of 70% compared with On-Demand instances. In this session, we explore the characteristics of Spot instances and how they can easily be used on AWS. We also learn how Spreaker uses Spot instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. AWS and FinConecta would therefore like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a strategic approach
Q&A
Make your startup's market offering unique with Machine Learning services...Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative components built ad hoc.
AWS provides ready-to-use services and, at the same time, lets you customize and create the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we look at how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of...Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years; they have often involved manual activities, occasionally leading to application downtime and interrupting users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and resulting in significant improvements to business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances using Chef and Puppet.
Discover how to use AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads Amazon Web Services
Want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis using artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar, we explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are holding a free virtual event on Wednesday, October 14th from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a broad range of AWS services, fully exploiting the potential of the AWS cloud while protecting your existing VMware investments.
Build your first serverless ledger-based app with QLDB and NodeJS Amazon Web Services
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the flow of their products through the supply chain.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session, we discover how to build a complete serverless application that uses the capabilities of QLDB.
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for offering end users a great user experience. In this session, we learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We dig into several scenarios, understanding how AppSync can help solve these use cases by creating modern APIs with real-time and offline data-update capabilities.
We also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle databases and VMware Cloud™ on AWS: debunking the myths Amazon Web Services
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity when modernizing and refactoring applications, and performance risks can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads while accelerating the transformation to the cloud; they take a closer look at the architecture and demonstrate how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies the management of Docker containers through an orchestration layer controlling deployment and lifecycle. In this session, we present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
Cost-Effective Data Management with S3 Batch Operations and the S3 Storage Classes
1. PUBLIC SECTOR SUMMIT
WASHINGTON, D.C.
2. © 2019, Amazon Web Services, Inc. or its affiliates. All rights reserved.
Cost-effective Data Management with S3 Batch Operations and the S3 Storage Classes
Rob Wilson, Product Manager, Amazon Web Services
3. Agenda
• Pillars of cost optimization
• Data placement
• Introducing Amazon S3 Intelligent-Tiering
• Optimizing for specific workloads
• Amazon S3 Batch Operations overview and demo
4. Pillars of cost optimization
5. Pillars of Cost Optimization
• Application Requirements
• Data Organization
• Right Sizing
• Monitor, Optimize, Repeat
6. Benefits of Amazon Simple Storage Service (Amazon S3)
• Enterprise applications
• Analytics
• Archiving
• Backup & restore
• Origin storage for CDN
• Website hosting
• Mobile sync and storage
7. Your choice of Amazon S3 storage classes
Access frequency: frequent → infrequent
• S3 Standard: active, frequently accessed data
• S3 Intelligent-Tiering: data with changing access patterns
• S3 Standard-IA: infrequently accessed data
• S3 One Zone-IA: re-creatable, less accessed data
• S3 Glacier: archive data
• S3 Glacier Deep Archive: archive data
8. Your choice of Amazon S3 storage classes
Access frequency: frequent → infrequent
• S3 Standard: active, frequently accessed data; milliseconds access
• S3 Intelligent-Tiering: data with changing access patterns; milliseconds access
• S3 Standard-IA: infrequently accessed data; milliseconds access
• S3 One Zone-IA: re-creatable, less accessed data; milliseconds access
• S3 Glacier: archive data; minutes or hours access
• S3 Glacier Deep Archive: archive data; hours to access
9. Your choice of Amazon S3 storage classes
Access frequency: frequent → infrequent
• S3 Standard: active, frequently accessed data; milliseconds access; ≥3 AZs; $0.0210/GB
• S3 Intelligent-Tiering: data with changing access patterns; milliseconds access; ≥3 AZs; $0.0210 to $0.0125/GB
• S3 Standard-IA: infrequently accessed data; milliseconds access; ≥3 AZs; $0.0125/GB
• S3 One Zone-IA: re-creatable, less accessed data; milliseconds access; 1 AZ; $0.0100/GB
• S3 Glacier: archive data; minutes or hours access; ≥3 AZs; $0.0040/GB
• S3 Glacier Deep Archive: archive data; hours to access; ≥3 AZs; $0.00099/GB
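To make the spread concrete, here is a small back-of-the-envelope script using the per-GB prices quoted on this slide (2019 figures; actual prices vary by region and change over time) to compare the monthly cost of storing 100 TB in each class:

```python
# Monthly storage cost for 100 TB at the per-GB-month prices quoted on the
# slide. These are 2019 list prices; check current regional pricing.
PRICES_PER_GB_MONTH = {
    "S3 Standard": 0.0210,
    "S3 Standard-IA": 0.0125,
    "S3 One Zone-IA": 0.0100,
    "S3 Glacier": 0.0040,
    "S3 Glacier Deep Archive": 0.00099,
}

def monthly_cost(tb: float, price_per_gb: float) -> float:
    """Storage cost per month, ignoring request and retrieval fees."""
    return tb * 1024 * price_per_gb

for storage_class, price in PRICES_PER_GB_MONTH.items():
    print(f"{storage_class:25s} ${monthly_cost(100, price):>10,.2f}/month")
```

At these rates, 100 TB costs about $2,150 per month in S3 Standard versus roughly $101 in S3 Glacier Deep Archive, before any retrieval fees.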
10. Your choice of Amazon S3 storage classes
Access frequency: frequent → infrequent
• S3 Standard: active, frequently accessed data; milliseconds access; ≥3 AZs; $0.0210/GB
• S3 Intelligent-Tiering: data with changing access patterns; milliseconds access; ≥3 AZs; $0.0210 to $0.0125/GB; monitoring fee; min storage duration
• S3 Standard-IA: infrequently accessed data; milliseconds access; ≥3 AZs; $0.0125/GB; retrieval fee; min storage duration; min object size
• S3 One Zone-IA: re-creatable, less accessed data; milliseconds access; 1 AZ; $0.0100/GB; retrieval fee; min storage duration; min object size
• S3 Glacier: archive data; minutes or hours access; ≥3 AZs; $0.0040/GB; retrieval fee; min storage duration; min object size
• S3 Glacier Deep Archive: archive data; hours to access; ≥3 AZs; $0.00099/GB; retrieval fee; min storage duration; min object size
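The retrieval fees above change the math: a colder class only wins if you read the data back rarely. A quick sketch of the break-even point for S3 Standard-IA, assuming the slide's storage prices plus a $0.01/GB retrieval fee (the published Standard-IA fee, which is not shown on the slide):

```python
# When does S3 Standard-IA beat S3 Standard once the retrieval fee is
# included? Storage prices are the slide's 2019 figures; the $0.01/GB
# retrieval fee is an assumption taken from published pricing.
STANDARD = 0.0210      # $/GB-month storage
STANDARD_IA = 0.0125   # $/GB-month storage
IA_RETRIEVAL = 0.01    # $/GB retrieved

def ia_monthly_cost_per_gb(fraction_retrieved: float) -> float:
    """Standard-IA cost per GB-month when `fraction_retrieved` of the
    stored data is read back each month."""
    return STANDARD_IA + IA_RETRIEVAL * fraction_retrieved

# Break-even retrieval fraction: Standard-IA is cheaper below this.
break_even = (STANDARD - STANDARD_IA) / IA_RETRIEVAL
print(f"Standard-IA wins while < {break_even:.0%} of data is retrieved/month")
```

In other words, at these prices Standard-IA stays cheaper as long as you retrieve less than about 85% of the stored bytes per month, which is why it suits data that is kept but rarely read.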
11. Data placement
12. Customers save millions of dollars annually with Storage Class Analysis and Lifecycle Management
13. Amazon S3 Storage Class Analysis provides lifecycle recommendations
• Monitors access patterns
• Classifies data as frequently or infrequently accessed
• Great for predictable workloads
• Can be filtered by bucket, prefix, or object tag
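As a sketch of what enabling this looks like, the following builds a Storage Class Analysis configuration for one prefix with a daily CSV export of the results. The bucket names, configuration ID, and prefixes are placeholder values, not ones from the talk:

```python
# Sketch: enable Storage Class Analysis on a prefix, exporting results as
# CSV to a second bucket. All names and prefixes are placeholders.
analytics_configuration = {
    "Id": "logs-access-pattern-analysis",
    "Filter": {"Prefix": "logs/"},  # analyze only objects under this prefix
    "StorageClassAnalysis": {
        "DataExport": {
            "OutputSchemaVersion": "V_1",
            "Destination": {
                "S3BucketDestination": {
                    "Format": "CSV",
                    "Bucket": "arn:aws:s3:::example-analytics-results",
                    "Prefix": "sca-exports/",
                }
            },
        }
    },
}

# With boto3 and credentials configured, apply it with:
#   import boto3
#   boto3.client("s3").put_bucket_analytics_configuration(
#       Bucket="example-data-bucket",
#       Id=analytics_configuration["Id"],
#       AnalyticsConfiguration=analytics_configuration,
#   )
```

The exported CSV is what feeds the "frequently vs. infrequently accessed" classification the slide describes, and it is a natural input for choosing lifecycle transition ages.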
14. Set Amazon S3 Lifecycle Policies to tier and expire storage
• Amazon S3 lifecycle policies tier objects down to lower-cost storage classes and expire storage
• Amazon S3 Storage Class Analysis results help you set lifecycle policies
• Policies are based on the object's age and can be set by bucket, prefix, or object tag
S3 Standard → S3 Standard-IA → S3 Glacier
15. Lifecycle Management Example Policies
Lifecycle rules take action based on object age:
1. Move all objects older than 60 days to S3 Standard-IA
2. Move all objects older than 180 days to S3 Glacier
S3 Standard → S3 Standard-IA → S3 Glacier
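Expressed as an S3 lifecycle configuration, the two example rules above might look like the following sketch; the bucket name and rule ID are placeholders:

```python
# The two example rules above as one S3 lifecycle configuration:
# transition to Standard-IA at 60 days, then to Glacier at 180 days.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "tier-down-with-age",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = whole bucket
            "Transitions": [
                {"Days": 60, "StorageClass": "STANDARD_IA"},
                {"Days": 180, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# With boto3 and credentials configured, apply it with:
#   import boto3
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="example-data-bucket",
#       LifecycleConfiguration=lifecycle_configuration,
#   )
```

An `Expiration` element could be added to the same rule to delete objects after a final age, which is the "expire storage" half of the previous slide.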
16. Lifecycle Management Example Policies
Lifecycle rules take action based on object age:
1. Move all objects older than 180 days to Amazon S3 Glacier
S3 Intelligent-Tiering → S3 Glacier
17. Lifecycle Management Example Policies
Lifecycle rules take action based on object age:
1. Move all objects older than 180 days to Amazon S3 Glacier
2. Move all objects older than 365 days to Amazon S3 Glacier Deep Archive
S3 Standard → S3 Glacier → S3 Glacier Deep Archive
18. Introducing Amazon S3 Intelligent-Tiering
19. Amazon S3 Intelligent-Tiering automates cost savings
20. Amazon S3 Intelligent-Tiering storage class
• Automatically optimizes storage costs for data with changing access patterns
• Monitors access patterns and auto-tiers at a granular, per-object level
• No performance impact, no operational overhead
• Milliseconds access, ≥3 AZs
• Monitoring fee per object, minimum storage duration
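There is no special API for S3 Intelligent-Tiering; opting in is just a storage-class setting on the upload. A minimal sketch, where the bucket, key, and body are placeholders:

```python
# Sketch: write a new object directly into S3 Intelligent-Tiering by
# setting StorageClass on the upload. Bucket, key, and body are
# placeholders.
put_request = {
    "Bucket": "example-data-bucket",
    "Key": "datasets/2019/sensor-readings.parquet",
    "Body": b"...object bytes...",
    "StorageClass": "INTELLIGENT_TIERING",
}

# With boto3 and credentials configured:
#   import boto3
#   boto3.client("s3").put_object(**put_request)
#
# Existing objects can be moved the same way with copy_object
# (StorageClass="INTELLIGENT_TIERING"), with a lifecycle transition,
# or at scale with an S3 Batch Operations copy job.
```

Once the object is in the class, the tiering between the frequent- and infrequent-access tiers happens automatically; no further API calls are needed.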
21. Ideal use cases for Amazon S3 Intelligent-Tiering
Dynamic cost optimization with no performance impact and no operational overhead
• Big data, data lakes: storage with changing access patterns used by multiple applications
• Enterprises: storage accessed by fragmented applications from various organizations
• Startups: constrained in the resources and experience to optimize storage themselves
22. Workload pattern 1: frequently accessed data
Workload characteristics:
• Frequently accessed storage (>100% of storage retrieved)
Common use cases:
• Big data analytics, dynamic website hosting, IoT sensor data, DNA sequencing, financial simulations, origin storage for CDN
Storage classes:
• Amazon S3 Standard, maybe S3 Intelligent-Tiering
23. Workload pattern 2: infrequently accessed data
Workload characteristics:
• Over time, infrequently accessed storage (<100% of storage retrieved after 90 days)
Common use cases:
• Mobile sync & backup, data logs, media assets for gaming, customer-generated content, data stored for disaster recovery
Amazon S3 storage classes:
• Lifecycle from Amazon S3 Standard to S3 Standard-IA, or to S3 One Zone-IA for re-creatable data
• Use Amazon S3 Intelligent-Tiering for automated tiering
• Use Amazon S3 Glacier for archive
24. Workload pattern 3: data with changing access
Workload characteristics:
• Data with changing or unpredictable access patterns
• Mix of object sizes (avg. object size ~MB)
Common use cases:
• Machine learning training data, satellite and geospatial imagery, financial transaction records, autonomous vehicle data, data lakes
Storage classes:
• S3 Intelligent-Tiering
25.
Workload pattern 4 – unknown access patterns
Workload characteristics:
• Unknown workload
• You only know that objects are large (~MB) and storage duration is long (~months)
→ S3 Intelligent-Tiering
Workload characteristics:
• Unknown workload
• Unknown object size and short-lived objects (< months)
→ Start with Amazon S3 Standard and, after some time, lifecycle large objects into S3 Intelligent-Tiering
26.
Amazon S3 Batch Operations
27.
Save on operations and development time by managing
billions of objects with a single request
28.
S3 Batch Operations: How it works
1. Choose objects
2. Select an operation
3. View progress
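The three steps above map onto the arguments of the S3 Control `CreateJob` API. A sketch of the request payload boto3's `s3control` client would accept; the account ID, ARNs, ETag, and tag values are placeholders:

```python
# S3 Batch Operations job request (sketch). Account ID, bucket ARNs,
# ETag, and role ARN are hypothetical placeholders.
batch_job_request = {
    "AccountId": "111122223333",
    # 1. Choose objects: a manifest (S3 Inventory report or CSV list)
    "Manifest": {
        "Spec": {
            "Format": "S3BatchOperations_CSV_20180820",
            "Fields": ["Bucket", "Key"],
        },
        "Location": {
            "ObjectArn": "arn:aws:s3:::my-example-bucket/manifest.csv",
            "ETag": "example-etag",
        },
    },
    # 2. Select an operation: here, replace each object's tag set
    "Operation": {
        "S3PutObjectTagging": {
            "TagSet": [{"Key": "project", "Value": "demo"}]
        }
    },
    # 3. View progress: a completion report written back to S3
    "Report": {
        "Bucket": "arn:aws:s3:::my-example-bucket",
        "Format": "Report_CSV_20180820",
        "Enabled": True,
        "Prefix": "batch-reports",
        "ReportScope": "AllTasks",
    },
    "Priority": 10,
    "RoleArn": "arn:aws:iam::111122223333:role/batch-operations-role",
}

# With boto3 (not executed here):
# boto3.client("s3control").create_job(**batch_job_request)
```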
31.
Amazon S3 Batch Operations
Choose objects:
• Amazon S3 Inventory report
• CSV list
Select an operation:
• Copy
• Restore from Amazon S3 Glacier
• Put Access Control List (ACL)
• Replace object tag sets
• Run AWS Lambda functions
View progress:
• Object-level progress
• Job notifications
• Completion report
32.
Amazon S3 Inventory
Regularly generates a list of objects for analytics and auditing:
• Storage class
• Creation date
• Encryption status
• Replication status
• Object size, and more
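An inventory that emits the fields listed above is configured per bucket. A sketch of the payload boto3's `put_bucket_inventory_configuration` accepts; the configuration ID, schedule, and destination bucket are placeholders (note that inventory reports expose last-modified date rather than a literal creation date):

```python
# S3 Inventory configuration (sketch). Id, schedule, and destination
# bucket are hypothetical.
inventory_configuration = {
    "Id": "weekly-inventory",
    "IsEnabled": True,
    "IncludedObjectVersions": "Current",
    "Schedule": {"Frequency": "Weekly"},
    # Optional fields matching the list above
    "OptionalFields": [
        "StorageClass",
        "LastModifiedDate",
        "EncryptionStatus",
        "ReplicationStatus",
        "Size",
    ],
    "Destination": {
        "S3BucketDestination": {
            "Bucket": "arn:aws:s3:::my-inventory-bucket",
            "Format": "CSV",
            "Prefix": "inventory",
        }
    },
}

# With boto3 (not executed here):
# boto3.client("s3").put_bucket_inventory_configuration(
#     Bucket="my-example-bucket",
#     Id=inventory_configuration["Id"],
#     InventoryConfiguration=inventory_configuration,
# )
```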
33.
Console view
34.
Amazon S3 Batch Operations is a managed solution
• Automatic retries
• Progress visibility
• Management controls
• Notifications
• Auditing
No need to build and maintain an application to call APIs in bulk
35.
36.
Demo Overview
• Images stored in an Amazon S3 bucket
• Amazon S3 Batch Operations invokes an AWS Lambda function that calls Amazon Rekognition
• Results stored in Amazon Elasticsearch Service
• A Kibana dashboard shows the output
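The Lambda function in this demo can be sketched as follows. S3 Batch Operations invokes the handler with a task describing one object, and expects a structured response it uses for per-object progress and the completion report. The Rekognition and Elasticsearch calls are stubbed out here, and all names are hypothetical:

```python
# Sketch of the demo's Lambda function for S3 Batch Operations.
# The real Rekognition/Elasticsearch calls are replaced by a stub.

def detect_labels(bucket, key):
    # Placeholder for the real call, e.g.:
    # rekognition.detect_labels(
    #     Image={"S3Object": {"Bucket": bucket, "Name": key}})
    return ["Example"]

def handler(event, context):
    # S3 Batch Operations sends one task per invocation
    task = event["tasks"][0]
    bucket = task["s3BucketArn"].split(":::")[-1]
    key = task["s3Key"]

    labels = detect_labels(bucket, key)
    # In the demo, the labels would be indexed into Elasticsearch here.

    # Batch Operations expects this response shape to track progress
    # and build the completion report.
    return {
        "invocationSchemaVersion": "1.0",
        "treatMissingKeysAs": "PermanentFailure",
        "invocationId": event["invocationId"],
        "results": [{
            "taskId": task["taskId"],
            "resultCode": "Succeeded",
            "resultString": ",".join(labels),
        }],
    }
```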
37.
Demo Overview
[Architecture diagram: Create job → S3 Batch Operations → Lambda function over the images → labels → data visualization]
38.
Common uses for Amazon S3 Batch Operations
39.
Encrypt your existing objects
Use Amazon S3 Select or Amazon Athena to filter the bucket's contents and identify only the unencrypted objects
Copy objects to the same bucket and specify the desired type of encryption
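This flow can be sketched in two pieces: a query to build the manifest of unencrypted objects, and a Batch Operations copy operation that overwrites each object with SSE-KMS encryption. The table name, bucket ARN, and KMS key ID below are hypothetical placeholders:

```python
# Hypothetical Athena query over an S3 Inventory table to find the
# unencrypted objects (the table name is a placeholder; NOT-SSE is
# the inventory value for unencrypted objects).
find_unencrypted_sql = """
SELECT bucket, key
FROM my_inventory_table
WHERE encryption_status = 'NOT-SSE'
"""

# Batch Operations copy operation (sketch): copy back into the same
# bucket, specifying the desired encryption. ARNs are placeholders.
encrypt_copy_operation = {
    "S3PutObjectCopy": {
        # Copy in place so each object is rewritten with encryption
        "TargetResource": "arn:aws:s3:::my-example-bucket",
        "SSEAwsKmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
        "MetadataDirective": "COPY",
    }
}
```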
40.
Copy objects to a new location
Works within regions, across regions, and across accounts
41.
Restore objects from Amazon S3 Glacier
Easily restore and then copy objects by using the same manifest for both operations
Use Amazon S3 Glacier restore notifications to identify when objects are available in Amazon S3
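The two jobs described above can be sketched as a pair of Batch Operations operation payloads run against the same manifest: first a restore, then, once restore notifications show the objects are available, a copy. The expiration window, tier, and bucket ARN are placeholders:

```python
# Job 1 (sketch): restore archived objects from S3 Glacier.
restore_operation = {
    "S3InitiateRestoreObject": {
        # Keep the temporary restored copy around for 7 days
        "ExpirationInDays": 7,
        # Bulk retrieval is the lowest-cost tier
        "GlacierJobTier": "BULK",
    }
}

# Job 2 (sketch): once restored, copy the objects with the same
# manifest. The target bucket ARN is a placeholder.
copy_operation = {
    "S3PutObjectCopy": {
        "TargetResource": "arn:aws:s3:::my-restored-bucket",
    }
}
```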
42.
Putting it all together
43.
Putting it all together
Understand your application requirements
Use tags and prefixes to organize your data
Optimize across all storage classes
Use Amazon S3 Batch Operations to simplify common tasks
Amazon S3 Intelligent-Tiering for automated cost savings
The AWS Cloud enables you to be more innovative, agile, and cost-effective
44.
45. Thank you!
Rob Wilson
Product Manager, Amazon S3