Compared to storing long-term datasets on-premises, archiving in the cloud is a smart alternative, whether you need an active archive, a tape replacement, or a way to fulfill a compliance requirement. Learn how AWS customers are simplifying their archiving strategy and meeting compliance needs using Amazon Glacier. Hear how customers have evolved their backup and disaster recovery architectures and replaced tape solutions by turning to AWS for a more cost-efficient, durable, and agile solution. We will showcase Sony DADC's active archive deployment on Glacier and demo how some of our financial services customers have set up compliant archives to meet their regulatory objectives.
ENT313 Deploying a Disaster Recovery Site on AWS: Minimal Cost with Maximum E... - Amazon Web Services
In the event of a disaster, you need to be able to recover lost data quickly to ensure business continuity. For critical applications, keeping your time to recover and data loss to a minimum as well as optimizing your overall capital expense can be challenging. This session presents AWS features and services along with Disaster Recovery architectures that you can leverage when building highly available and disaster resilient applications. We will provide recommendations on how to improve your Disaster Recovery plan and discuss example scenarios showing how to recover from a disaster.
Learn how Amazon Redshift, our fully managed, petabyte-scale data warehouse, can help you quickly and cost-effectively analyze all of your data using your existing business intelligence tools. Get an introduction to how Amazon Redshift uses massively parallel processing, scale-out architecture, and columnar direct-attached storage to minimize I/O time and maximize performance. Learn how you can gain deeper business insights and save money and time by migrating to Amazon Redshift. Take away strategies for migrating from on-premises data warehousing solutions, tuning schema and queries, and utilizing third party solutions.
Deep Dive on Elastic File System - February 2017 AWS Online Tech Talks - Amazon Web Services
Organizations face significant challenges moving their applications to the cloud when they require a standard file system interface for accessing their cloud data. In this technical session, we will explore the world’s first cloud-scale file system and its targeted use cases. Attendees will learn about the Amazon Elastic File System (EFS) features and benefits, how to identify applications that are appropriate for use with Amazon EFS, and details about its performance and security models. We will highlight and demonstrate how to deploy Amazon EFS in one of our most common use cases and will share tips for success throughout.
Learning Objectives:
• Recognize why and when to use Amazon EFS
• Understand key technical/security concepts
• Learn how to leverage EFS’s performance
• See a demo of EFS in action
• Review EFS’s economics
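Sessions like this one typically demonstrate the standard NFSv4.1 mount step for attaching an EFS file system to an EC2 instance. As a minimal sketch of what that looks like, the following Python helper assembles the mount command; the file system ID, region, and mount point are hypothetical placeholders, and the command is only built here, not executed:

```python
# Sketch: assemble the NFSv4.1 mount command typically used for an EFS
# file system. The file system ID, region, and mount point are hypothetical.

def efs_mount_command(fs_id: str, region: str, mount_point: str) -> list[str]:
    """Build an NFSv4.1 mount command for an EFS file system DNS name."""
    dns_name = f"{fs_id}.efs.{region}.amazonaws.com"
    # Large read/write sizes and hard mounts are the commonly recommended
    # options for throughput and durability of in-flight writes.
    options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
    return ["sudo", "mount", "-t", "nfs4", "-o", options,
            f"{dns_name}:/", mount_point]

cmd = efs_mount_command("fs-12345678", "us-east-1", "/mnt/efs")
print(" ".join(cmd))
```

Because the file system is addressed by a DNS name, the same command works from any instance in a VPC with a mount target in its Availability Zone.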
AWS re:Invent 2016: Introduction to Managed Database Services on AWS (DAT307) - Amazon Web Services
Which database is best suited for your use case? Should you choose a relational database or NoSQL or a data warehouse for your workload? Would a managed service like Amazon RDS, Amazon DynamoDB, or Amazon Redshift work better for you, or would it be better to run your own database on Amazon EC2? FanDuel has been running its fantasy sports service on Amazon Web Services (AWS) since 2012. You will learn best practices and insights from FanDuel’s successful migrations from self-managed databases on EC2 to fully-managed database services.
Data migration at petabyte scale is now a simple service from AWS. You can easily migrate large volumes of data from on-premises environments to the cloud, quickly get started with the cloud as a backup target, or burst workloads between your on-premises environments and the AWS Cloud. Learn about AWS Snowball, AWS Snowball Edge, AWS Snowmobile and AWS Storage Gateway, and understand which one is the right fit for your requirements. We will go through customer use cases, review the different applications used, and help you cut IT spend and management time on hardware and backup solutions.
Amazon Aurora is a cloud-optimized relational database that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. The recently announced PostgreSQL compatibility, together with the original MySQL compatibility, is perfect for new application development and for migrations from overpriced, restrictive commercial databases. In this session, we'll do a deep dive into the new architectural model and distributed systems techniques behind Amazon Aurora, discuss best practices and configurations, look at migration options, and share customer experience from the field.
AWS re:Invent 2016: Case Study: How Monsanto Uses Amazon EFS with Their Large... - Amazon Web Services
At Monsanto, we build and use technologies that support our data and BI efforts and facilitate intelligent, data-driven decisions. In the past year, we've embarked on large-scale efforts to modernize our geospatial platform and improve our analytic processing capabilities by building out new cloud-based and open-source-based services. We found that using Amazon Elastic File System (Amazon EFS) gave us the flexibility and performance we were seeking while saving us significant time, effort, and cost. In this session, we discuss how Monsanto uses Amazon EFS to run our large-scale geospatial data sets, such as raster, and to perform highly parallelized analytics for data scientists and business users. Topics include the technical architecture, how and why we chose EFS for handling data sets that are terabytes in size, our recommendations, and the lessons learned along the way.
BDA 302 Deep Dive on Migrating Big Data Workloads to Amazon EMR - Amazon Web Services
Customers are migrating their analytics, data processing (ETL), and data science workloads running on Apache Hadoop, Spark, and data warehouse appliances from on-premises deployments to Amazon EMR in order to save costs, increase availability, and improve performance. Amazon EMR is a managed service that lets you process and analyze extremely large data sets using the latest versions of over 15 open-source frameworks in the Apache Hadoop and Spark ecosystems. This session will focus on identifying the components and workflows in your current environment and providing best practices to migrate these workloads to Amazon EMR. We will explain how to move from HDFS to Amazon S3 as a durable storage layer, and how to lower costs with Amazon EC2 Spot Instances and Auto Scaling. Additionally, we will go over common security recommendations and tuning tips to accelerate the time to production.
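The two cost practices named above, S3 instead of HDFS for durable storage and Spot capacity for compute, show up directly in the shape of an EMR cluster request. The sketch below builds such a request as a plain dictionary; the release label, instance types, counts, and bucket name are illustrative assumptions, and no API call is made:

```python
# Sketch: the shape of an EMR cluster request following the practices the
# session describes: S3 (via EMRFS) for durable storage and logs, and Spot
# capacity for cost savings. All names and sizes are illustrative; this
# builds the request dictionary only and makes no API call.

def emr_cluster_request(name: str, log_bucket: str) -> dict:
    return {
        "Name": name,
        "ReleaseLabel": "emr-5.x",             # hypothetical release label
        "LogUri": f"s3://{log_bucket}/logs/",  # durable logs in S3, not HDFS
        "Applications": [{"Name": "Spark"}, {"Name": "Hive"}],
        "Instances": {
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceCount": 1,
                 "Market": "ON_DEMAND", "InstanceType": "m4.xlarge"},
                {"InstanceRole": "CORE", "InstanceCount": 2,
                 "Market": "ON_DEMAND", "InstanceType": "m4.xlarge"},
                # Task nodes on Spot: cheaper, and safe to lose because the
                # data of record lives in S3 rather than on cluster HDFS.
                {"InstanceRole": "TASK", "InstanceCount": 4,
                 "Market": "SPOT", "InstanceType": "m4.xlarge",
                 "BidPrice": "0.10"},
            ],
        },
    }

request = emr_cluster_request("migration-poc", "my-data-bucket")
```

Keeping only task nodes on Spot is the common pattern: the master and core nodes stay on On-Demand so cluster state and any remaining HDFS blocks survive Spot interruptions.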
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We’ll cover how each service might help support your application, how much each service costs, and how to get started.
AWS re:Invent 2016: Bringing Deep Learning to the Cloud with Amazon EC2 (CMP314) - Amazon Web Services
Algorithmia is a startup with a mission to make state-of-the-art machine learning discoverable by everyone: they offer the largest algorithm marketplace in the world, with over 2,500 algorithms supporting tens of thousands of application developers. Algorithmia is the first company to make deep learning, one of the most conceptually difficult areas of computing, accessible to any company via microservices. In this session, you learn how this startup has selected and optimized Amazon EC2 instances for various algorithms (including the latest generation of GPU-optimized instances) to create a flexible and scalable platform. They also share their architecture and best practices for getting any computationally intensive application started quickly.
Consolidate MySQL Shards Into Amazon Aurora Using AWS Database Migration Serv... - Amazon Web Services
If you’re running a MySQL database at scale, there’s a good chance you’re sharding your database deployment. Sharding is a useful way to increase the scale of your deployment, but it has drawbacks such as higher costs, high administration overhead, and lower elasticity: it’s harder to grow or shrink a sharded database deployment to match your traffic patterns. In this session, we will discuss and demonstrate how to use AWS Database Migration Service to consolidate multiple MySQL shards into an Amazon Aurora cluster to reduce cost, improve elasticity, and make it easier to manage your database.
Learning Objectives:
Learn how to scale your MySQL database at reduced cost and higher elasticity, by consolidating multiple shards into one Amazon Aurora cluster.
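The core idea behind consolidation is that rows from several shards flow into one target, which only works if primary keys never collide across shards. The toy sketch below (plain Python, not the DMS API) illustrates that constraint; the shard names and rows are invented for the example:

```python
# Toy sketch of shard consolidation (not the DMS API itself): rows from
# several MySQL shards are merged into one target table. Keys must not
# collide across shards, which is why sharded schemas typically embed a
# shard ID in the key or use UUIDs. All data here is invented.

shards = {
    "shard1": [{"id": "s1-001", "user": "alice"}],
    "shard2": [{"id": "s2-001", "user": "bob"}],
    "shard3": [{"id": "s3-001", "user": "carol"}],
}

def consolidate(source_shards: dict) -> list[dict]:
    merged, seen = [], set()
    for shard_name, rows in source_shards.items():
        for row in rows:
            if row["id"] in seen:
                # A real migration would have to remap or fail here.
                raise ValueError(f"key collision on {row['id']}")
            seen.add(row["id"])
            # Tagging the source shard aids validation after cutover.
            merged.append({**row, "source_shard": shard_name})
    return merged

target = consolidate(shards)
```

DMS automates the heavy lifting (full load plus ongoing change data capture per shard), but this key-uniqueness check is the property you must verify before pointing every shard's migration task at the same Aurora target.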
With distributed frameworks like Hadoop and Kafka, it is essential to deploy the right environment to successfully support these workloads. Learn about the different block storage options from AWS and walk through with our experts how to select the best option for your big data analytics workloads. We will demonstrate how to set up, select, and modify volume types to right-size your environment.
Amazon RDS enables you to launch an optimally configured, secure, and highly available relational database with just a few clicks. It provides cost-efficient and resizable capacity while managing time consuming administration tasks, freeing you to focus on your applications and business. In this session, we take a closer look at how Amazon RDS works, and we review best practices to achieve performance, flexibility, and cost savings for your MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server databases on Amazon RDS. We also discuss AWS Database Migration Service, a quick and secure means for migrating your existing relational database management system investments to Amazon RDS.
AWS re:Invent 2016: Deep Dive: Amazon EMR Best Practices & Design Patterns (B... - Amazon Web Services
Amazon EMR is one of the largest Hadoop operators in the world. In this session, we introduce you to Amazon EMR design patterns such as using Amazon S3 instead of HDFS, taking advantage of both long and short-lived clusters, and other Amazon EMR architectural best practices. We talk about how to scale your cluster up or down dynamically and introduce you to ways you can fine-tune your cluster. We also share best practices to keep your Amazon EMR cluster cost-efficient. Finally, we dive into some of our recent launches to keep you current on our latest features. This session will feature Asurion, a provider of device protection and support services for over 280 million smartphones and other consumer electronics devices. Asurion will share how they architected their petabyte-scale data platform using Apache Hive, Apache Spark, and Presto on Amazon EMR.
Day 4 - Big Data on AWS - Redshift, EMR & the Internet of Things - Amazon Web Services
Big Data is everywhere these days. But what is it and how can you use it to fuel your business? Data is as important to organizations as labour and capital, and if organizations can effectively capture, analyze, visualize and apply big data insights to their business goals, they can differentiate themselves from their competitors and outperform them in terms of operational efficiency and the bottom line.
Join this session to understand the different AWS Big Data and Analytics services such as Amazon Elastic MapReduce (Hadoop), Amazon Redshift (Data Warehouse) and Amazon Kinesis (Streaming), when to use them and how they work together.
Reasons to attend:
- Learn how AWS can help you process and make better use of your data with meaningful insights.
- Learn about Amazon Elastic MapReduce for large-scale data processing and Amazon Redshift, a fully managed, petabyte-scale data warehouse solution.
- Learn about real time data processing with Amazon Kinesis.
Amazon Relational Database Service – How is it different from what you do today? - Amazon Web Services
Whether you are new to Amazon Relational Database Service (RDS) or just need a refresher on the latest features, this session is for you. We will take you through RDS from the perspective of how you manage your databases on-premises today. That is, we will describe the full lifecycle of a database, from provisioning to decommissioning, while taking into consideration your performance, scalability, availability, security, and maintenance requirements.
by Joyjeet Banerjee, Enterprise Solution Architect, AWS
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you to focus on your applications and business. We’ll discuss Amazon RDS fundamentals, learn about the seven available database engines, and examine customer success stories. Level 100
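The "few clicks" in the console map to a small set of parameters in a programmatic launch. The sketch below shows the shape of such a request; the identifier, instance class, and storage size are illustrative, the engine list is an assumption about the engines named in sessions like this, and no AWS call is made:

```python
# Sketch: the parameters an RDS database launch takes, mirroring the "few
# clicks" in the console. Identifier, class, and sizes are illustrative
# assumptions; this builds the parameter dictionary only, with no API call.

def rds_launch_params(engine: str) -> dict:
    # Hypothetical set of engine names for illustration.
    supported = {"mysql", "postgres", "mariadb", "oracle-se2",
                 "sqlserver-ex", "aurora", "aurora-postgresql"}
    if engine not in supported:
        raise ValueError(f"unknown engine: {engine}")
    return {
        "DBInstanceIdentifier": "app-db",   # hypothetical name
        "Engine": engine,
        "DBInstanceClass": "db.t2.medium",  # capacity is resizable later
        "AllocatedStorage": 100,            # GiB
        "MultiAZ": True,                    # synchronous standby for HA
        "BackupRetentionPeriod": 7,         # automated backups, in days
        "StorageEncrypted": True,
    }

params = rds_launch_params("postgres")
```

Notice that high availability, backups, and encryption are each a single parameter; this is the administration work the abstract says the service takes off your hands.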
AWS re:Invent 2016: Design, Deploy, and Optimize Microsoft SharePoint on AWS ... - Amazon Web Services
AWS can help you rapidly deploy and scale your Microsoft SharePoint environment to help you collaborate more efficiently and cost-effectively. This session reviews architectural considerations for building a SharePoint deployment on AWS, best practices to ensure optimal performance, how to leverage multiple Availability Zones for high availability and disaster recovery, and how to integrate with Microsoft Active Directory. We will also look at new Quick Start guides, AWS CloudFormation templates, and other tools that dramatically reduce the time to deployment.
Accelerate your Business with SAP on AWS - AWS Summit Cape Town 2017 - Amazon Web Services
From dev and test to large-scale HANA production deployments, enterprise usage of SAP on AWS is rapidly growing across all verticals and geographies. Gain insights into the AWS SAP offering and partnership, and why a cloud-first approach makes business, technical, and financial sense for the numerous SAP solutions that are certified and ready to be deployed today.
AWS Speaker: Michael Needham, Sr. Manager, Solutions Architecture - Amazon Web Services
Organizations need to perform increasingly complex analysis on data — streaming analytics, ad-hoc querying, and predictive analytics — in order to get better customer insights and actionable business intelligence. Apache Spark has recently emerged as the framework of choice to address many of these challenges. In this session, we show you how to use Apache Spark on AWS to implement and scale common big data use cases such as real-time data processing, interactive data science, predictive analytics, and more. We will talk about common architectures, best practices to quickly create Spark clusters using Amazon EMR, and ways to integrate Spark with other big data services in AWS.
Learning Objectives:
• Learn why Spark is great for ad-hoc interactive analysis and real-time stream processing.
• How to deploy and tune scalable clusters running Spark on Amazon EMR.
• How to use EMR File System (EMRFS) with Spark to query data directly in Amazon S3.
• Common architectures to leverage Spark with Amazon DynamoDB, Amazon Redshift, Amazon Kinesis, and more.
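The EMRFS point above means a Spark job on EMR can address S3 paths directly, with no copy into HDFS first. As a small sketch, the helper below assembles a spark-submit invocation whose script, input, and output all live in S3; the bucket and file names are hypothetical, and the command is built rather than executed:

```python
# Sketch: a spark-submit invocation reading and writing S3 directly through
# EMRFS. Bucket and script names are hypothetical; the command list is
# built here, not executed.

def spark_submit_command(script_s3_path: str, input_s3_path: str,
                         output_s3_path: str) -> list[str]:
    return [
        "spark-submit",
        "--deploy-mode", "cluster",  # run the driver on the EMR cluster
        script_s3_path,              # the Spark application, stored in S3
        input_s3_path,               # EMRFS resolves s3:// URIs for Spark
        output_s3_path,
    ]

cmd = spark_submit_command("s3://my-bucket/jobs/etl.py",
                           "s3://my-bucket/raw/",
                           "s3://my-bucket/curated/")
print(" ".join(cmd))
```

Because the durable data stays in S3, the cluster itself becomes disposable: it can be short-lived, resized, or replaced without any data migration.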
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you to focus on your applications and business.
Deep Dive on the AWS Storage Gateway - April 2017 AWS Online Tech Talks - Amazon Web Services
- Learn about the benefits and capabilities of AWS Storage Gateway
- Learn how to get started with AWS Storage Gateway
AWS Storage Gateway provides file, volume, and tape storage in AWS through standard protocols that integrate seamlessly into your on-premises and in-cloud environments. This tech talk takes a deep dive into the main features and capabilities of AWS Storage Gateway and patterns for using the service to accelerate your adoption of hybrid storage. With stories from real-life customer deployments, we’ll show the benefit of Storage Gateway for workloads such as backup and archive, disaster recovery, tiered storage, and cloud bursting.
Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
Redis is popular among developers for its incredible performance, versatility, and simplicity. The powerful combination of low-cost memory and high-performance Redis brings to life next-generation analytic uses, such as simultaneous real-time transaction and analytics processing. With Redis Labs' RLEC Flash on AWS SSD instances, you can get fantastic performance at up to 70% lower cost. Join this session to learn how next-generation flash from leading memory provider Intel has made significant strides in performance while retaining its cost advantage over memory. Using a combination of AWS's powerful SSD instances and Redis Labs' RLEC Flash, you can achieve up to 3M ops/sec at sub-millisecond latencies with a combination of RAM and flash. The session will also feature customer use cases from a large university, a large customer-engagement company, and a pioneer of online flash sales. Session sponsored by Redis Labs.
Simple, Scalable and Highly Durable NAS in the Cloud – Amazon EFS - Amazon Web Services
Come learn how to get started with a scalable file system with a simple interface for use with Amazon EC2 instances in the AWS Cloud. We’ll cover the basics and go through customer use cases to illustrate key features. We will walk through active demos that show you how EFS supports application workflows.
Data migration at a petabyte scale is now a simple service from AWS. You can easily move large volumes of data from onsite environments to the cloud, or quickly get started with the cloud as a backup target using data transfer services, like AWS Snowball or AWS Storage Gateway. Learn about the various data migration options available to you and understand which one is the right fit for your requirements.
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We’ll cover how each service might help support your application, how much each service costs, and how to get started.
AWS re:Invent 2016: Bringing Deep Learning to the Cloud with Amazon EC2 (CMP314)Amazon Web Services
Algorithmia is a startup with a mission to make state of the art machine learning discoverable by everyone&emdash;they offer the largest algorithm marketplace in the world, with over 2500 algorithms supporting tens of thousands of application developers. Algorithma is the first company to make deep learning, one of the most conceptually difficult areas of computing, accessible to any company via microservices. In this session, you learn how this startup has selected and optimized Amazon EC2 instances for various algorithms (including the latest generation of GPU optimized instances), to create a flexible and scalable platform. They also share their architecture and best practices for getting any computationally-intensive application started quickly.
Consolidate MySQL Shards Into Amazon Aurora Using AWS Database Migration Serv...Amazon Web Services
If you’re running a MySQL database at scale, there’s a good chance you’re sharding your database deployment. Sharding is a useful way to increase the scale of your deployment, but it has drawbacks like higher costs, high administration overheard and lower elasticity. It’s harder to grow or shrink a sharded database deployment to match your traffic patterns. In this session, we will discuss and demonstrate how to use AWS Database Migration Service to consolidate multiple MySQL shards into an Amazon Aurora cluster to reduce cost, improve elasticity and make it easier to manage your database.
Learning Objectives:
Learn how to scale your MySQL database at reduced cost and higher elasticity, by consolidating multiple shards into one Amazon Aurora cluster.
With distributed frameworks like Hadoop and Kafka, it is essential to deploy the right environment to successfully support these workloads. Learn about the different block storage options from AWS and walk through with our experts on how to select the best option for your big data analytic workloads. We will demonstrate how to setup, select, and modify volume types to right size your environment needs.
Amazon RDS enables you to launch an optimally configured, secure, and highly available relational database with just a few clicks. It provides cost-efficient and resizable capacity while managing time consuming administration tasks, freeing you to focus on your applications and business. In this session, we take a closer look at how Amazon RDS works, and we review best practices to achieve performance, flexibility, and cost savings for your MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server databases on Amazon RDS. We also discuss AWS Database Migration Service, a quick and secure means for migrating your existing relational database management system investments to Amazon RDS.
AWS re:Invent 2016: Deep Dive: Amazon EMR Best Practices & Design Patterns (B...Amazon Web Services
Amazon EMR is one of the largest Hadoop operators in the world. In this session, we introduce you to Amazon EMR design patterns such as using Amazon S3 instead of HDFS, taking advantage of both long and short-lived clusters, and other Amazon EMR architectural best practices. We talk about how to scale your cluster up or down dynamically and introduce you to ways you can fine-tune your cluster. We also share best practices to keep your Amazon EMR cluster cost-efficient. Finally, we dive into some of our recent launches to keep you current on our latest features. This session will feature Asurion, a provider of device protection and support services for over 280 million smartphones and other consumer electronics devices. Asurion will share how they architected their petabyte-scale data platform using Apache Hive, Apache Spark, and Presto on Amazon EMR.
Day 4 - Big Data on AWS - RedShift, EMR & the Internet of ThingsAmazon Web Services
Big Data is everywhere these days. But what is it and how can you use it to fuel your business? Data is as important to organizations as labour and capital, and if organizations can effectively capture, analyze, visualize and apply big data insights to their business goals, they can differentiate themselves from their competitors and outperform them in terms of operational efficiency and the bottom line.
Join this session to understand the different AWS Big Data and Analytics services such as Amazon Elastic MapReduce (Hadoop), Amazon Redshift (Data Warehouse) and Amazon Kinesis (Streaming), when to use them and how they work together.
Reasons to attend:
- Learn how AWS can help you process and make better use of your data with meaningful insights.
- Learn about Amazon Elastic MapReduce and Amazon Redshift, fully managed petabyte-scale data warehouse solutions.
- Learn about real time data processing with Amazon Kinesis.
Amazon Relational Database Service – How is it different to what you do today ?Amazon Web Services
Whether you are new to Amazon Relational Database Service (RDS) or just need a refresher on the latest features - this session is for you. We will take you through RDS from the perspective of how you manage your databases on-premise today. That is, we will describe the full lifecycle of a database from provisioning to decommissioning while taking into consideration your performance, scalability, availability, security and maintenance requirements.
by Joyjeet Banerjee, Enterprise Solution Architect, AWS
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you to focus on your applications and business. We’ll discuss Amazon RDS fundamentals, learn about the seven available database engines, and examine customer success stories. Level 100
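A minimal sketch of what "a few clicks" corresponds to at the API level: illustrative parameters for boto3's `rds.create_db_instance()`, with made-up identifiers and sizes.

```python
# Sketch of a Multi-AZ RDS launch: the parameters you would pass to
# boto3's rds.create_db_instance(). All values here are examples.
def rds_launch_params():
    return {
        "DBInstanceIdentifier": "app-db",
        "Engine": "mysql",                  # one of the available engines
        "DBInstanceClass": "db.m5.large",
        "AllocatedStorage": 100,            # GiB
        "MultiAZ": True,                    # synchronous standby for high availability
        "StorageEncrypted": True,
        "BackupRetentionPeriod": 7,         # automated backups, in days
        "MasterUsername": "admin",
        "MasterUserPassword": "change-me",  # fetch from a secrets store in practice
    }
```

Backups, failover, and patching for an instance launched this way are handled by the service, which is the administration work the abstract says RDS takes off your plate.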
AWS re:Invent 2016: Design, Deploy, and Optimize Microsoft SharePoint on AWS ... - Amazon Web Services
AWS can help you rapidly deploy and scale your Microsoft SharePoint environment to help you collaborate more efficiently and cost-effectively. This session reviews architectural considerations for building a SharePoint deployment on AWS, best practices to ensure optimal performance, how to leverage multiple Availability Zones for high availability and disaster recovery, and how to integrate with Microsoft Active Directory. We will also look at new Quick Start guides, AWS CloudFormation templates, and other tools that dramatically reduce the time to deployment.
Accelerate your Business with SAP on AWS - AWS Summit Cape Town 2017 - Amazon Web Services
From dev and test to large-scale HANA production deployments, enterprise usage of SAP on AWS is rapidly growing across all verticals and geographies. Gain insights on the AWS SAP offering and partnership and why a cloud first approach makes business, technical and financial sense for the numerous SAP solutions that are certified and ready to be deployed today.
AWS Speaker: Michael Needham, Sr Mgr, Solutions Architecture - Amazon Web Services
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We will cover how each service might help support your application, how much each service costs, and how to get started.
Organizations need to perform increasingly complex analysis on data — streaming analytics, ad-hoc querying, and predictive analytics — in order to get better customer insights and actionable business intelligence. Apache Spark has recently emerged as the framework of choice to address many of these challenges. In this session, we show you how to use Apache Spark on AWS to implement and scale common big data use cases such as real-time data processing, interactive data science, predictive analytics, and more. We will talk about common architectures, best practices to quickly create Spark clusters using Amazon EMR, and ways to integrate Spark with other big data services in AWS.
Learning Objectives:
• Learn why Spark is great for ad-hoc interactive analysis and real-time stream processing.
• How to deploy and tune scalable clusters running Spark on Amazon EMR.
• How to use EMR File System (EMRFS) with Spark to query data directly in Amazon S3.
• Common architectures to leverage Spark with Amazon DynamoDB, Amazon Redshift, Amazon Kinesis, and more.
Deep Dive on the AWS Storage Gateway - April 2017 AWS Online Tech Talks - Amazon Web Services
- Learn about the benefits and capabilities of AWS Storage Gateway
- Learn how to get started with AWS Storage Gateway
AWS Storage Gateway provides file, volume, and tape storage in AWS through standard protocols which integrate seamlessly into your on-premises and in-cloud environments. This tech talk takes a deep dive into the main features and capabilities of AWS Storage Gateway, and patterns for using the service to accelerate your adoption of hybrid storage. With stories from real-life customer deployments, we’ll show the benefit of Storage Gateway for workloads such as backup and archive, disaster recovery, tiered storage, and cloud bursting.
Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
Redis is popular among developers for its incredible performance, versatility, and simplicity. The powerful combination of low-cost memory and high-performance Redis brings next-generation analytic uses to life, such as simultaneous real-time transaction and analytics processing. With Redis Labs' RLEC Flash on AWS SSD instances, you can get fantastic performance at up to 70% lower cost. Join this session to learn how next-generation Flash from leading memory provider Intel has made significant strides in performance while retaining its cost advantage over memory. Using a combination of AWS's powerful SSD instances and Redis Labs' RLEC Flash, you can achieve up to 3M ops/sec at sub-millisecond latencies with a combination of RAM and Flash. The session will also feature customer use cases from a large university, a large customer engagement company, and a pioneer of online flash sales. Session sponsored by Redis Labs.
Simple, Scalable and Highly Durable NAS in the Cloud – Amazon EFS - Amazon Web Services
Come learn how to get started with a scalable file system with a simple interface for use with Amazon EC2 instances in the AWS Cloud. We’ll cover the basics and go through customer use cases to illustrate key features. We will walk through active demos that show you how EFS supports application workflows.
Data migration at a petabyte scale is now a simple service from AWS. You can easily move large volumes of data from onsite environments to the cloud, or quickly get started with the cloud as a backup target using data transfer services, like AWS Snowball or AWS Storage Gateway. Learn about the various data migration options available to you and understand which one is the right fit for your requirements.
Today organizations find themselves in a data-rich world with a growing need for agility and accessibility across all this data, in order to derive keen insights and drive strategic decisions. Creating a data lake helps you manage all the disparate sources of data you are collecting in their original format and extract value from them. In this session, learn how to architect and implement an analytics data lake. Hear customer examples of best practices and learn from their architectural blueprints.
In this session, storage experts will walk you through the object storage offering, Amazon S3, a bulk data repository that can deliver 99.999999999% durability and scale past trillions of objects worldwide. Learn about the different ways you can accelerate data transfer to S3 and get a close look at some of the new tools available for you to secure and manage your data more efficiently. Announced at re:Invent 2016, see how you can use Amazon Athena with S3 to run serverless analytics on your data and as a bonus, walk away with some code snippets to use with S3. Hear AWS customers talk about the solutions they have built with S3 to turn their data into a strategic asset, instead of just a cost center. And bring your toughest questions to our experts on hand and walk away that much smarter on how to use object storage from AWS.
Amazon Web Services (AWS) provides on-demand computing resources and services in the cloud, with pay-as-you-go pricing. This session provides an overview and describes why companies are flocking to the cloud so quickly.
It has never been easier to use AWS to design and build your data architecture, gain insights, and uncover new opportunities to scale and grow your business. Join this workshop to learn how you can gain insights at scale with the right big data applications.
Amazon Web Services gives you fast access to flexible and low-cost IT resources, so you can rapidly scale and build virtually any big data and analytics application, including data warehousing, clickstream analytics, fraud detection, recommendation engines, event-driven ETL, serverless computing, and internet-of-things processing, regardless of the volume, velocity, and variety of your data.
In this one-hour webinar, we will look at the portfolio of AWS Big Data services and how they can be used to build a modern data architecture.
We will cover:
- Using different SQL engines to analyze large amounts of structured data
- Analysing streaming data in near real time
- Architectures for batch processing
- Best practices for Data Lake architectures
This session is suited for:
- Solution and enterprise architects
- Data architects / data warehouse owners
- IT & Innovation team members
Cloud storage is a critical component of cloud computing, holding the information that applications use. Big data analytics, data warehouses, the Internet of Things, databases, and backup and archive applications all depend on some form of data storage architecture. Cloud storage is typically more reliable, scalable, and secure than traditional on-premises storage systems.
AWS offers a complete range of cloud storage services to support both application and archival compliance requirements. Choose from object, file, and block storage services, as well as options for migrating data to the cloud, to start designing the foundation of your cloud IT environment.
Amazon EC2 provides a broad selection of instance types to accommodate a diverse mix of workloads. In this session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and GPU instance families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
Amazon EC2 Spot instances provide acceleration, scale, and deep cost savings to run time-critical, hyper-scale workloads for rapid data analysis. In this session, you will learn best practices on how to scale big data workloads as well as process, store, and analyze big data securely and cost effectively.
AWS Batch is a fully-managed service that enables developers, scientists, and engineers to easily and efficiently run batch computing workloads of any scale on AWS. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads. With AWS Batch, there is no need to install or manage batch computing software, allowing you to focus on analyzing results and solving problems. AWS Batch plans, schedules, and executes your batch computing workloads across the full range of AWS compute services and features, such as Amazon EC2, Spot Instances, and AWS Lambda. AWS Batch reduces operational complexities, saving time and reducing costs. In this session, you will learn core concepts behind AWS Batch and details of how the service functions.
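As a rough sketch of those core concepts, the two request bodies below show how a job definition and a job submission might look with boto3's AWS Batch client. The container image, queue name, and parameter names are placeholders.

```python
# Sketch of the AWS Batch workflow with boto3: register a job definition
# once (batch.register_job_definition), then submit jobs against a queue
# (batch.submit_job). All names are invented examples.
def batch_job_definition():
    return {
        "jobDefinitionName": "render-frame",
        "type": "container",
        "containerProperties": {
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/render:latest",
            "vcpus": 2,
            "memory": 4096,  # MiB
            # Ref:: placeholders are filled in from each job's parameters.
            "command": ["python", "render.py", "Ref::frame"],
        },
    }

def batch_submit_request(frame_number):
    return {
        "jobName": f"frame-{frame_number}",
        "jobQueue": "default-queue",
        "jobDefinition": "render-frame",
        "parameters": {"frame": str(frame_number)},
    }
```

Batch then provisions compute and schedules each submitted job, which is the provisioning and scheduling work the paragraph above describes.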
Amazon EC2 changes the economics of computing and provides you with complete control of your computing resources. It is designed to make web-scale cloud computing easier for developers. In this session, we will take you on a journey, starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies. We will also discuss tools and best practices that will help you build failure resilient applications that take advantage of the scale and robustness of AWS regions.
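The dynamic-policy idea can be sketched with a target tracking policy: an assumed request body for boto3's `autoscaling.put_scaling_policy()`, with a placeholder group name.

```python
# Sketch of a dynamic scaling policy: target tracking adds or removes
# instances to keep average CPU near a set point. This dict is the request
# body for boto3's autoscaling.put_scaling_policy().
def cpu_target_tracking_policy(target_cpu=50.0):
    return {
        "AutoScalingGroupName": "web-asg",
        "PolicyName": "keep-cpu-near-target",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": target_cpu,  # percent
        },
    }
```

With a policy like this attached, capacity follows demand automatically, which is how Auto Scaling matches capacity and cost to load.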
As customers build and run production microservices architectures based on containers, having powerful tools to manage the placement and scheduling of these workloads is critical. Amazon ECS lets customers focus on building their applications and entirely removes the need to run cluster management software.
Introducing Amazon Lex – A Service for Building Voice or Text Chatbots - Marc... - Amazon Web Services
Amazon Lex is a service for building conversational interfaces into any application using voice and text. Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions.
Learning Objectives:
• Learn about the capabilities and features of Amazon Lex
• Learn about the benefits of Amazon Lex
• Learn about the different use cases
• Learn how to get started using Amazon Lex
Amazon EC2 provides you with the flexibility to cost optimize your computing portfolio through purchasing models that fit your business needs. With the flexibility of mix-and-match purchasing models, you can grow your compute capacity and throughput and enable new types of cloud computing applications with the lowest TCO. In this session, we will explore combining pay-as-you-go (On-Demand), reserve ahead of time for discounts (Reserved), and high-discount spare capacity (Spot) purchasing models to optimize costs while maintaining high performance and availability for your applications. Common application examples will be used to demonstrate how to best combine EC2’s purchasing models. You will leave the session with best practices you can immediately apply to your application portfolio.
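A toy cost model, with made-up hourly rates rather than AWS prices, illustrates why blending the three purchasing models lowers the bill: cover the steady-state floor with Reserved Instances, run fault-tolerant burst on Spot, and fall back to On-Demand for the rest.

```python
# Toy blended-cost calculation. Rates are invented for illustration only.
RATES = {"on_demand": 0.10, "reserved": 0.06, "spot": 0.03}  # $/instance-hour

def hourly_cost(total_instances, reserved, spot):
    """Cost of running total_instances, split across purchasing models."""
    on_demand = total_instances - reserved - spot
    assert on_demand >= 0, "mix exceeds total capacity"
    return (reserved * RATES["reserved"]
            + spot * RATES["spot"]
            + on_demand * RATES["on_demand"])

all_on_demand = hourly_cost(100, 0, 0)  # 10.0 $/h
blended = hourly_cost(100, 60, 30)      # 60 RI + 30 Spot + 10 On-Demand = 5.5 $/h
```

Under these assumed rates, the blended mix runs at roughly half the all-On-Demand cost while keeping the same capacity.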
Compared to storing long-term datasets on-premises, archiving in the cloud is a smart alternative whether you’re looking for an active archive solution, tape replacement, or to fulfill a compliance requirement. Learn how AWS customers are simplifying their archiving strategies and meeting compliance needs using Amazon Glacier.
AWS re:Invent 2016: Strategic Planning for Long-Term Data Archiving with Amaz... - Amazon Web Services
Without careful planning, data management can quickly turn complex, with a runaway cost structure. Enterprise customers are turning to the cloud to meet long-term data archive needs such as reliability, compliance, and agility while optimizing overall cost. Come to this session and hear how AWS customers are using Amazon Glacier to simplify their archiving strategy. Learn how customers architect their cloud archiving applications and share integration approaches that streamline their organization's data management and establish successful IT best practices.
Active Archiving with Amazon S3 and Tiering to Amazon Glacier - March 2017 AW... - Amazon Web Services
Most organizations have data that they need to retain but access infrequently, if ever. In cases where this data needs to be accessible at a moment’s notice, it’s hard to save money by moving to archival storage because access times on those platforms are slower. Now, customers are using Amazon S3 and Glacier for “Active Archiving” to reduce storage costs while maintaining the flexibility of instant access. In this tech talk, we’ll show you how to implement Active Archiving with AWS object storage services, and we’ll provide real-world examples of how AWS customers are saving money with these capabilities today.
Learning Outcomes:
• Define Active Archiving, and understand how it is different from traditional cold archiving
• Review the cost modeling tools available to determine if Active Archiving is a good fit for your organization
• Learn about best practices for using AWS Object Storage features & functionality to enable Active Archiving
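One way Active Archiving is typically wired up is with an S3 lifecycle rule that transitions objects to Glacier after a cutoff. A minimal sketch follows; the prefix and day counts are illustrative choices, and the dict is what you would pass to boto3's `s3.put_bucket_lifecycle_configuration()`.

```python
# Sketch of an S3 lifecycle configuration for active archiving: keep new
# objects in S3 Standard for fast access, then transition them to Glacier.
def archive_lifecycle(days_to_glacier=90, days_to_expire=2555):
    return {
        "Rules": [{
            "ID": "archive-old-media",
            "Filter": {"Prefix": "media/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": days_to_glacier, "StorageClass": "GLACIER"},
            ],
            # Optional: delete after the retention period (~7 years here).
            "Expiration": {"Days": days_to_expire},
        }]
    }
```

The transition runs automatically per object, so recent data stays instantly accessible while older data moves to the cheaper tier without operator effort.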
This session is for IT pros working with compliance managers to deliver solutions that lower costs and still meet compliance demands. You will learn how to move large scale data stores to the cloud, while remaining compliant with existing regulations. Services mentioned: S3, Glacier and the Vault Lock feature, Snowball, ingestion services.
Not just for archiving or compliance use cases, Amazon Glacier accommodates customers simply looking to replace their on-premises long-term storage with a cost-efficient, durable cloud option from which they can easily and quickly access their data when they need to. This session will introduce newly launched features for Amazon Glacier, review the current service feature set, and share the global data center shutdown and storage strategy for Sony DADC New Media Solutions (NMS). NMS is Sony’s digital servicing division providing global digital distribution, linear playout and white label OTT/Commerce solutions for clients such as BBC Worldwide, NBCUniversal, Sony Playstation, and Funimation Entertainment.
Hear from Andy Shenkler, NMS’s Chief Technology and Solutions Officer as he talks about the key factors that drove the organization’s decision to move away from tape and go towards the cloud and out of the infrastructure business overall. Learn more about the impact and operational practices inside a world class digital supply chain as they were able to move over 20 petabytes of data, over 1M hours of video, to the cloud and never looked back.
An Overview of AWS Services for Data Storage and Migration - SRV205 - Atlanta... - Amazon Web Services
In this session, we explore the features and functions of AWS storage services. We provide context on the AWS storage portfolio, and we cover the most common use cases for AWS offerings for object, file, block, and migration technologies, including the AWS Partner Network (APN) ecosystem. Then we examine each service, using customer case studies as examples. You gain an understanding of how to select storage and start moving workloads or building new ones.
Darry Osborne takes us on a journey across AWS cloud-based storage solutions. He explains S3, Glacier, and Snowball, and ends with Snowmobile for petabyte-scale data migration. He also talks about use cases and customer stories. Presented in Montreal at the AWS Innovate show.
Eric Durand once again takes us on a journey through storage solutions for digital media using the AWS Cloud.
This presentation was delivered at AWS Toronto, during the Media and Entertainment Symposium.
Learn from our engineering experts how we've designed Amazon S3 and Amazon Glacier to be durable, available, and massively scalable. Hear how Sprinklr architected their environment for the ultimate in high availability for their mission-critical applications. In this session, we'll discuss AWS Region and Availability Zone architecture, storage classes, built-in and on-demand data replication, and much more.
Data has gravity: your workloads and processing depend on where your data is and how it is stored. With AWS, you have a host of storage options, and the key to leveraging them successfully is knowing when to use which option. This session will explain each of the AWS storage offerings in detail, along with data ingestion options into the cloud using Snowball and Snowmobile.
Marc Trimuschat,
Head - Business Development, AWS Storage, AWS APAC
An Overview of AWS Services for Data Storage and Migration - SRV205 - Toronto... - Amazon Web Services
In this session, we explore features and functions of AWS storage services. We give context on the portfolio, cover the most common use cases for AWS offerings for object, file, block and migration technologies, including the partner ecosystem, and then go into each service with customer case study examples. Leave this session with an understanding of how to select storage and start moving workloads or building new ones.
AWS offers storage, networking, and data transfer services so you can build and deploy solutions to extend backup and archive targets to the AWS Cloud, increasing scalability, durability, security, and compliance.
Moving Data into the Cloud with AWS Transfer Services - May 2017 AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Learn about the benefits and features to help you get the most out of your DynamoDB database
- Learn how customers have successfully used these features to deploy their applications
Developers who build applications for ad tech, finance, gaming, IoT, and other performance-sensitive use cases require response times in the range of 1-2 milliseconds. They are also constantly pushing to reduce this latency further to achieve a competitive advantage. Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement, from milliseconds to microseconds, even at millions of requests per second. In this tech talk, we go over best practices for multiple use cases, including gaming, ad tech, and IoT, and we'll explore new features to help you get the most out of your DynamoDB database, including DynamoDB Accelerator (DAX), TTL, tagging, and VPC endpoints for DynamoDB. Learn how customers have successfully used these features to deploy their applications.
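To make the milliseconds-to-microseconds claim concrete, here is the read-through caching pattern that DAX automates, illustrated with a plain in-memory stub rather than the real DAX client; the table and keys are invented. With DAX itself you would simply point the DAX client at your cluster endpoint and call `get_item` as usual.

```python
import time

TABLE = {"user#1": {"name": "Ada"}}  # stand-in for a DynamoDB table

def slow_get_item(key):
    time.sleep(0.001)                # simulate network + storage latency
    return TABLE.get(key)

class ReadThroughCache:
    """In-memory read-through cache with a TTL, the pattern DAX automates."""
    def __init__(self, ttl_seconds=5.0):
        self.ttl = ttl_seconds
        self.store = {}              # key -> (expiry, value)

    def get_item(self, key):
        hit = self.store.get(key)
        if hit and hit[0] > time.monotonic():
            return hit[1]            # cache hit: in-memory, no I/O
        value = slow_get_item(key)   # cache miss: fetch from the table
        self.store[key] = (time.monotonic() + self.ttl, value)
        return value

cache = ReadThroughCache()
cache.get_item("user#1")             # first read pays the table latency
cache.get_item("user#1")             # repeat read is served from memory
```

The benefit of the managed service is that this cache lives in a cluster shared by all your application instances, with invalidation and write-through handled for you.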
Learn how AWS customers save money, time and effort by using AWS's backup and archive services. Organizations of all sizes rely on AWS services to durably safeguard their data off-premises at a surprisingly low cost. This session will illustrate backup and archive architectures that AWS customers are benefitting from today.
SRV403 Deep Dive on Object Storage: Amazon S3 and Amazon Glacier - Amazon Web Services
In this session, storage experts will walk you through Amazon S3 and Amazon Glacier, bulk data repositories that can deliver 99.999999999% durability and scale past trillions of objects worldwide – with cost points competitive against tape archives. Learn about the different ways you can accelerate data transfer into S3 and get a close look at new tools to secure and manage your data more efficiently. See how Amazon Athena runs serverless analytics on your data and hear about expedited and bulk retrievals from Amazon Glacier. Learn how AWS customers have built solutions that turn their data from a cost into a strategic asset, and bring your toughest questions straight to our experts.
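The expedited and bulk retrievals mentioned above map to the `Tier` field when you start a Glacier retrieval job. Here is a sketch of the request body for boto3's `glacier.initiate_job()`, with a placeholder vault name and archive ID.

```python
# Sketch of a Glacier archive retrieval request: the body you would pass
# to boto3's glacier.initiate_job(). Vault name and archive ID are examples.
def retrieval_job(archive_id, tier="Standard"):
    assert tier in ("Expedited", "Standard", "Bulk")  # Glacier's three tiers
    return {
        "vaultName": "media-archive",
        "jobParameters": {
            "Type": "archive-retrieval",
            "ArchiveId": archive_id,
            # Expedited: minutes; Standard: hours; Bulk: cheapest, longest.
            "Tier": tier,
        },
    }
```

Picking the tier per retrieval is what lets an archive stay cheap at rest while still supporting urgent restores when needed.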
by Henry Zhang, Sr. Product Manager, AWS
Data migration at petabyte scale is now a simple service from AWS. You can easily migrate large volumes of data from on-premises environments to the cloud, quickly get started with the cloud as a backup target, or burst workloads between your on-premises environments and the AWS Cloud. Learn about AWS Snowball, AWS Snowball Edge, AWS Snowmobile and AWS Storage Gateway, and understand which one is the right fit for your requirements. We will go through customer use cases, review the different applications used, and help you cut IT spend and management time on hardware and backup solutions.
How to Build Forecasting Services Using ML and Deep Learn... - Amazon Web Services
Forecasting is an important process for a great many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial presentations, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a temporal component and then use an algorithm that, starting from the type of data analyzed, produces an accurate forecast.
Big Data for Startups: How to Build Big Data Applications in Server... Mode - Amazon Web Services
The variety and quantity of data created every day is accelerating faster and faster, and it represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale big data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud, and serverless services in particular, allow us to break through these limits.
Let's see, then, how big data applications can be developed quickly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing the pace of innovation. Over this period we have learned how changing our approach to application development allowed us to dramatically increase agility and release velocity and, ultimately, to create more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only the application architecture, but also the organizational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to Spend Up to 90% Less with Containers and Spot Instances - Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can all take advantage of Spot instances, leading to average savings of 70% compared to On-Demand instances. In this session we will discover the characteristics of Spot instances and how they can be used easily on AWS. We will also learn how Spreaker uses Spot instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of the various Open Banking business models. AWS and FinConecta would therefore like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make Your Startup's Offering Unique in the Market with Machine Lea... - Amazon Web Services
To create value and build a differentiated, recognizable offering of their own, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides services that are ready to use and, at the same time, makes it possible to customize and build the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: Automate the Management and Deployment of... - Amazon Web Services
With the traditional approach to IT, it was difficult for many years to implement DevOps techniques, which until now have often involved manual activities, occasionally causing application downtime and interrupting users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and resulting in significant improvements in business continuity.
AWS offers AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Discover how to use AWS OpsWorks to guarantee the reliability of your application installed on EC2 instances.
Microsoft Active Directory on AWS to Support Your Windows Workloads - Amazon Web Services
Want to know the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session we will discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and running Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis using artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities that AWS services open up for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services e VMware organizzano un evento virtuale gratuito il prossimo mercoledì 14 Ottobre dalle 12:00 alle 13:00 dedicato a VMware Cloud ™ on AWS, il servizio on demand che consente di eseguire applicazioni in ambienti cloud basati su VMware vSphere® e di accedere ad una vasta gamma di servizi AWS, sfruttando a pieno le potenzialità del cloud AWS e tutelando gli investimenti VMware esistenti.
Build your first serverless ledger-based app with QLDB and NodeJS - Amazon Web Services
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track their products through the supply chain.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB eliminates the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for delivering an exceptional user experience to end users. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dive into several scenarios, showing how AppSync can address these use cases by building modern APIs with real-time and offline data-update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to the users of its web portal.
Oracle databases and VMware Cloud™ on AWS: debunking the myths - Amazon Web Services
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can introduce complexity during application modernization and refactoring, along with performance risks when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips that ease and simplify the migration of Oracle workloads, accelerating the transformation to the cloud; they dive into the architecture and show how to exploit the full potential of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies running Docker containers through an orchestration layer controlling deployment and container lifecycle. In this session we will present the service's main features, reference architectures for different workloads, and the few simple steps needed to quickly migrate one or more of your containers.
This presentation by Morris Kleiner (University of Minnesota) was made during the discussion “Competition and Regulation in Professions and Occupations” held at the Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found at oe.cd/crps.
This presentation was uploaded with the author’s consent.
Acorn Recovery: Restore IT infra within minutes - IP ServerOne
Introducing Acorn Recovery as a Service, a simple, fast, and secure managed disaster recovery (DRaaS) offering by IP ServerOne: a DR solution that helps restore your IT infrastructure within minutes.
Have you ever wondered how search works while visiting an e-commerce site, internal website, or searching through other types of online resources? Look no further than this informative session on the ways that taxonomies help end-users navigate the internet! Hear from taxonomists and other information professionals who have first-hand experience creating and working with taxonomies that aid in navigation, search, and discovery across a range of disciplines.
This presentation, created by Syed Faiz ul Hassan, explores the profound influence of media on public perception and behavior. It delves into the evolution of media from oral traditions to modern digital and social media platforms. Key topics include the role of media in information propagation, socialization, crisis awareness, globalization, and education. The presentation also examines media influence through agenda setting, propaganda, and manipulative techniques used by advertisers and marketers. Furthermore, it highlights the impact of surveillance enabled by media technologies on personal behavior and preferences. Through this comprehensive overview, the presentation aims to shed light on how media shapes collective consciousness and public opinion.
0x01 - Newton's Third Law: Static vs. Dynamic Abusers - OWASP Beja
If you offer a service on the web, odds are that someone will abuse it. Be it an API, a SaaS, a PaaS, or even a static website, someone somewhere will try to figure out a way to use it for their own needs. In this talk we'll compare measures that are effective against static attackers and how to battle a dynamic attacker who adapts to your countermeasures.
About the Speaker
===============
Diogo Sousa, Engineering Manager @ Canonical
An opinionated individual with an interest in cryptography and its intersection with secure software development.
Sharpen existing tools or get a new toolbox? Contemporary cluster initiatives... - Orkestra
UIIN Conference, Madrid, 27-29 May 2024
James Wilson, Orkestra and Deusto Business School
Emily Wise, Lund University
Madeline Smith, The Glasgow School of Art
2. Cloud Data Migration
• Direct Connect
• Snow* data transport family
• 3rd Party Connectors
• Transfer Acceleration
• Storage Gateway
• Kinesis Firehose
The AWS Storage Portfolio
• Object: Amazon S3, Amazon Glacier
• Block: Amazon EBS (persistent), Amazon EC2 Instance Store (ephemeral)
• File: Amazon EFS
3. Satellite Image Archive
• DigitalGlobe captures satellite imagery of the Earth
• 100 PB image library = 6 billion square kilometers
• 1 PB of new imagery every year
• Images to be archived and retained for decades
4. Patient data–Philips Healthcare
• HealthSuite digital platform powered by AWS
• 15 petabytes of patient data
• Archived for decades (beyond the lifetime of patients)
• Uses AWS HIPAA-eligible services covered under a BAA
5. Public sector–King County
• Most populous county in Washington state
• Replaced tape solution for backup from 17 agencies
• Meets compliance requirement
• Saved $1MM in the first year; no more tape refresh or management churn
6. Archive: data retained for the long term, for compliance or potential future reference
Data archiving needs are growing everywhere
• Media assets, 4K, 8K
• Health care/life sciences
• Financial services
• Regulated industries
• Oil and gas/geospatial
• Digital preservation
• Long-term backups
• Logs
8. Traditional archiving approaches
• Tape libraries, robots, drives, media
• Onsite (online and offline)
• Offsite tape out/vaulting
• Specialized software and personnel
• Tape refresh every 3-5 years
9. How can AWS help with your archival?
• Metered usage: pay as you go, no capital investment, no commitment, no risky capacity planning
• Avoid the risks of physical media handling
• Control your geographic locality for performance and compliance
10. Storage pricing - pay only for what you use
On-premises provisioning: 1 PB raw storage → 800 TB usable storage → 600 TB allocated storage → 400 TB application data
In AWS cloud storage you pay only for the data you actually store: Amazon Glacier starts at $0.004/GB/month (price dropped by 43% on 11/21/2016)
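As a quick illustration of the metered model above, a minimal Python sketch: the 400 TB application-data figure and the $0.004/GB-month Glacier price come from the slide; the decimal TB-to-GB conversion is an assumption.

```python
def monthly_cost_usd(data_gb: float, price_per_gb_month: float) -> float:
    """Metered cost: you pay only for the data actually stored."""
    return round(data_gb * price_per_gb_month, 2)

GLACIER_PRICE = 0.004              # $/GB-month, per the slide
application_data_gb = 400 * 1000   # 400 TB of application data (decimal units)

# On-premises you would provision the full 1 PB raw stack up front;
# in Glacier you pay each month only for the 400 TB actually stored.
print(monthly_cost_usd(application_data_gb, GLACIER_PRICE))  # 1600.0
```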
14. Accessing Amazon Glacier
1. Direct Amazon Glacier API/SDK
2. Amazon S3 lifecycle integration
3. Third-party tools and gateways (e.g., FastGlacier)
15. Amazon Glacier – Direct access/APIs
Data upload: Create Vault → Configure Access → Upload Archives → Register Archive ID
Data retrieval: Initiate Retrieval → Async Retrieval Completion → Completion Notification → Download Data
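The upload/retrieval flow maps onto the Glacier SDK calls; below is a minimal sketch assuming the boto3 SDK, with a hypothetical vault name. The calls that hit AWS are kept inside a function (with the import) so the file stays importable without credentials.

```python
def retrieval_job_params(archive_id: str) -> dict:
    """Job parameters for an asynchronous archive-retrieval request."""
    return {"Type": "archive-retrieval", "ArchiveId": archive_id}

def archive_and_restore(payload: bytes) -> bytes:
    # Requires AWS credentials; "media-archive" is a placeholder vault name.
    import boto3
    glacier = boto3.client("glacier")
    glacier.create_vault(vaultName="media-archive")
    # Upload returns an archive ID; Glacier keeps no other index, so you
    # must register this ID yourself (e.g., in your own database).
    archive_id = glacier.upload_archive(
        vaultName="media-archive", body=payload)["archiveId"]
    # Retrieval is asynchronous: initiate a job, wait for the completion
    # notification (typically via SNS), then download the job output.
    job = glacier.initiate_job(
        vaultName="media-archive",
        jobParameters=retrieval_job_params(archive_id))
    out = glacier.get_job_output(vaultName="media-archive", jobId=job["jobId"])
    return out["body"].read()

print(retrieval_job_params("abc123")["Type"])  # archive-retrieval
```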
16. Use Glacier via S3 Object Lifecycle
• S3 Standard: active data, synchronous access, $0.023/GB/mo.
• S3 Standard-IA (Infrequent Access): infrequently accessed data, synchronous access, $0.0125/GB/mo.
• Amazon Glacier: archive data, asynchronous access, $0.004/GB/mo.
17. Data lifecycle management
- Transition Standard to Standard-IA
- Transition Standard-IA to Amazon Glacier
- Transition based on object tags
- Expiration and versioning
[Chart: data access frequency declining over time, from day T through T+365 days]
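These transitions can be expressed as an S3 lifecycle configuration. A sketch follows: the day thresholds and bucket name are illustrative assumptions, and the boto3 call that would apply it is shown commented.

```python
# Illustrative thresholds; tune them to your own access pattern.
lifecycle_config = {
    "Rules": [{
        "ID": "archive-tiering",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},       # apply to every object in the bucket
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 90, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": 3650},   # optionally expire after ~10 years
    }]
}

# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-archive-bucket", LifecycleConfiguration=lifecycle_config)

print([t["StorageClass"] for t in lifecycle_config["Rules"][0]["Transitions"]])
# ['STANDARD_IA', 'GLACIER']
```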
19. Save money on storage
• S3 Standard-IA: ~45% saving over S3 Standard
• Amazon Glacier: 68% saving over S3 Standard-IA
* Assumes the highest public pricing tier
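The percentages follow directly from the per-GB prices quoted on the earlier slide; a quick check (the exact Standard-IA figure is 45.7%, which the slide rounds down):

```python
def saving_pct(old_price: float, new_price: float) -> float:
    """Percentage saved by moving from old_price to new_price ($/GB-month)."""
    return round(100 * (old_price - new_price) / old_price, 1)

S3_STANDARD, S3_IA, GLACIER = 0.023, 0.0125, 0.004  # $/GB-month

print(saving_pct(S3_STANDARD, S3_IA))  # 45.7 - Standard-IA vs. Standard
print(saving_pct(S3_IA, GLACIER))      # 68.0 - Glacier vs. Standard-IA
```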
20. Amazon Glacier – Third-party tools and gateways
• Consumer grade (less than $50): Cloudberry, FastGlacier, Arq (Haystack Software)
• Small/medium business ($500-$1,000): Synology, Veeam, QNap
• Enterprise gateway and data management software: NetApp AltaVault, CommVault, StorNext, StoreReduce, Vidispine
21. Which option should I choose?
• Use S3 lifecycle-managed Amazon Glacier if the S3 object keys are sufficient for index/search capability
• Use Amazon Glacier directly if you already plan to store more metadata/indices in a database
• Use third-party tools to minimize coding
22. Amazon Glacier – Data Retrieval Tiers
• Expedited Retrieval: emergency access, 1-5 minutes, $0.03/GB (e.g., a last-minute play-out schedule swap)
• Standard Retrieval: the current model, 3-5 hours, $0.01/GB (disaster recovery; on-site tape replacement)
• Bulk Retrieval: batch/bulk access, 5-12 hours, $0.0025/GB (PB-scale re-transcoding or video/image analysis; off-site tape replacement)
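The three tiers correspond to the `Tier` field of a Glacier retrieval job; a hedged sketch of choosing one and estimating the fee, using the per-GB retrieval prices quoted on the slide:

```python
# Tier names match the Glacier API's "Tier" job parameter;
# fees are the per-GB retrieval prices quoted on the slide.
RETRIEVAL_TIERS = {
    "Expedited": {"typical_time": "1-5 minutes", "fee_per_gb": 0.03},
    "Standard":  {"typical_time": "3-5 hours",   "fee_per_gb": 0.01},
    "Bulk":      {"typical_time": "5-12 hours",  "fee_per_gb": 0.0025},
}

def retrieval_fee(tier: str, gb: float) -> float:
    """Estimated retrieval fee in USD for `gb` gigabytes at the given tier."""
    return round(RETRIEVAL_TIERS[tier]["fee_per_gb"] * gb, 2)

# Retrieving 1 PB for bulk re-transcoding vs. one expedited 50 GB asset:
print(retrieval_fee("Bulk", 1_000_000))  # 2500.0
print(retrieval_fee("Expedited", 50))    # 1.5
```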
23. Video archives (Sony DADC)
• Media distribution backbone (Ve.nue platform)
• Over-The-Top (OTT) broadcast service
• 20 PB of media assets, 1MM+ hours of high-res content
• Assets to be archived and retained for decades
25. “If physical deliveries can happen within one hour based on unpredictable requests, surely we are able to exceed such expectations digitally” – @SonyDADCNMS
26. Our migration
The Challenge:
• Seamlessly migrate a platform that enables content delivery across all devices and more than 1,200 distribution points worldwide
• Store 20 petabytes of motion picture and television content, equating to 1MM+ hours of content
• At a growth curve of ~1 petabyte every quarter
Desired Goals:
• One-hour delivery turnaround time
• Agile, scalable, predictable cost model and infrastructure
• Investing in innovation vs. hardware
@SonyDADCNMS
31. Compliance storage with Vault Lock
Amazon Glacier Vault Lock allows you to easily set compliance controls on individual vaults and enforce them via a lockable policy:
• Time-based retention
• MFA authentication
• Controls govern all records in a vault
• Immutable policy
• Two-step locking
32. Vault Lock for compliance storage
• Non-overwrite, non-erasable records
• Time-based retention with the “ArchiveAgeInDays” control
• Policy lockdown (strong governance)
• Legal hold with vault-level tags
• Optionally configure designated third-party access and grant temporary access
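A Vault Lock policy is an IAM-style JSON document. A minimal sketch that denies archive deletion until a record is 365 days old follows; the account ID, region, vault name, and retention length are placeholders.

```python
import json

# Placeholder ARN: region, account ID, and vault name are illustrative.
vault_arn = "arn:aws:glacier:us-east-1:123456789012:vaults/compliance-vault"

vault_lock_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "deny-delete-before-retention",
        "Principal": "*",
        "Effect": "Deny",
        "Action": "glacier:DeleteArchive",
        "Resource": vault_arn,
        # Deny deletion while the archive is younger than 365 days.
        "Condition": {"NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}},
    }],
}

# Locking is two-step: initiate-vault-lock attaches the policy and starts a
# 24-hour test window; complete-vault-lock then makes it immutable.
print(vault_lock_policy["Statement"][0]["Effect"])  # Deny
```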
33. Amazon Glacier received a third-party assessment
from Cohasset Associates on how Amazon Glacier
with Vault Lock can be used to meet the requirements
of SEC Rule 17a-4(f) and CFTC 1.31(b)-(c).
34. Proofpoint
• Cloud-based security and compliance for the enterprise:
threat research, email, mobile, social, digital risk
• Founded 2002, public in 2012
• $350M annual revenue, $3B market cap
• Big AWS user
35. Proofpoint SocialPatrol
Policy controls and enforcement for social
• Combats fraudulent brand impersonation
• Moderates content at scale
• Ensures compliance in publishing
• Integrates with social APIs
• 150+ classifiers using NLP and ML
• Text, links, images, metadata
• Ingesting >1M social posts per day
• Built in AWS
36. Proofpoint SocialPatrol Archive with Glacier
SEC Rule 17a-4(f)-compliant archive, purpose-built for social, enabled by Amazon Glacier and Vault Lock.
Data flow (PFPT in AWS): Social → Policy engine → MySQL/C*/Solr → Amazon Glacier & Vault Lock
40. Proofpoint SocialPatrol Archive
As social content flows in, we record its purge date and surface that to the user. Each piece of social content is an archive in the vault.
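A sketch of computing the purge date surfaced to the user, assuming a seven-year (SEC 17a-4 style) retention period; the retention length and function names are illustrative, not Proofpoint's actual implementation.

```python
from datetime import date, timedelta

RETENTION = timedelta(days=7 * 365)  # illustrative 7-year retention

def purge_date(archived_on: date) -> date:
    """Date after which an archived social post may be purged."""
    return archived_on + RETENTION

print(purge_date(date(2017, 1, 15)))  # 2024-01-14
```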
42. Managing Legacy Tape Data with AWS
Migrate Long Term Retention Data from Backup Tapes to AWS
43. Introducing Index Engines
▪ New Amazon partner – announced January 31st
▪ Software company delivering enterprise indexing technology
▪ Direct indexing, reporting and access to backup data
▪ Supports data backed up by IBM, Dell EMC, Veritas, HP, etc.
▪ Cost effective migration from legacy tape to AWS S3
▪ Index Engines Overview
▪ Partners include: Amazon, Dell EMC, EY, FTI
▪ Clients include: JPMC, Citi, DB, Barclays, TIAA-CREF, Rabo AgriFinance
▪ Patented technology
Copyright Index Engines Inc. 2017 All rights reserved.
44. Product Offering
▪ Native S3 support: currently supports S3; development in process to support S3-IA and Glacier
▪ Index Engines: index, search, and report on tape data; determine the data of value, or a unique data set, for migration; migrate and archive data in AWS
45. Transforming Clients with IE + AWS
▪ Data Center: reduce tape infrastructure, eliminate offsite storage, reclaim resources
▪ Business Users: faster time to data, more intelligence, leverage IP
▪ Governance: manage risk, support eDiscovery, proactive insights
46. True Cost Associated with Tape
▪ Hardware: servers (NDMP), libraries, floor space
▪ Resources: manpower, data center costs
▪ Backup Software: maintenance, infrastructure, management
▪ SLAs & Restores: time to restore data, 3rd-party restore services
▪ Tape Storage: offsite storage costs, tape management, tape purchases
▪ Risk & Liability: eDiscovery, regulatory, long-term risk
47. Sample Environment
▪ 50,000 legacy tapes at Iron Mountain (Veritas NetBackup generated)
▪ ~1 TB per tape (highly redundant); ~50 PB total
▪ Unique data set: 17 PB of files, email, and databases
▪ Data for migration: 10 PB (all files/email and 50% of databases), or 2.5 PB (all files/email)
Annual cost for 2.5 PB: S3 $717,672; S3-IA $424,332; Glacier $138,228
48. Next Steps
▪ Learn more: www.indexengines.com/aws
▪ info@indexengines.com