This session will highlight the most impactful announcements made at AWS re:Invent 2017 while giving you ideas on key use cases for new services and features. We’ll cover major themes of the conference, the new services and features within those themes and how they work together to make it faster and easier to build functionality into your app.
SRV307 Applying AWS Purpose-Built Database Strategy: Match Your Workload to ... - Amazon Web Services
In this session, Tony Petrossian, director of engineering, AWS Database Services, dives deep into what databases to use for which components of your application. Learn how to evaluate a new workload for the best managed database option based on specific application needs related to data shape, data size at limit, computational requirements, programmability, throughput and latency needs, etc. This session explains the ideal use cases for relational and non-relational database services, including Amazon Aurora, Amazon DynamoDB, Amazon ElastiCache for Redis, Amazon Neptune, and Amazon Redshift.
AWS DeepLens Workshop: Build Computer Vision Applications - Amazon Web Services
In this workshop, developers have the opportunity to learn how to build and deploy computer vision models using the AWS DeepLens deep-learning-enabled video camera. By working hands on, developers of all skill levels can explore and build their own deep-learning-powered computer vision applications using Amazon SageMaker and AWS DeepLens devices. Attendees can experiment with different sample projects for face detection, object detection, artistic style transfer, and other machine learning use cases using Apache MXNet. Attendees also learn about use cases that integrate other AWS services that extend the functionality of AWS DeepLens, such as AWS Lambda, Amazon Polly, and Amazon Rekognition.
SRV302 Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway - Amazon Web Services
Enterprises of all sizes have the persistent storage challenges of data access, growth, and protection. Buying more storage stacks prolongs the pain of managing the storage lifecycle, which includes purchasing, ongoing operation, hardware failure, system retirement, and migration, yet it keeps on-premises datasets siloed from cloud workloads. In this session, learn how to use AWS Storage Gateway to connect your on-premises applications to AWS storage services by using standard storage protocols. Storage Gateway enables hybrid cloud storage solutions for file sharing, data lakes, big data analytics, backup and disaster recovery, and migration. We discuss best practices and new deployment approaches.
Big Data Analytics Architectural Patterns and Best Practices (ANT201-R1) - AW... - Amazon Web Services
In this session, we discuss architectural principles that help simplify big data analytics.
We'll apply these principles to various stages of big data processing: collect, store, process, analyze, and visualize. We'll discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on.
Finally, we provide reference architectures, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
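The collect stage mentioned above often begins with a streaming ingest service such as Amazon Kinesis. As a rough sketch (the stream name, field names, and partitioning choice are illustrative assumptions, not from the session), raw events can be shaped into the record format the PutRecords API expects, keyed on a stable field so one entity's events stay ordered within a shard:

```python
import json


def make_kinesis_records(events, key_field="user_id"):
    """Shape raw event dicts into the record format Kinesis PutRecords expects.

    Partitioning on a stable field (the hypothetical 'user_id' here) spreads
    load across shards while keeping one user's events in order.
    """
    return [
        {
            "Data": json.dumps(e).encode("utf-8"),
            "PartitionKey": str(e[key_field]),
        }
        for e in events
    ]


def put_events(stream_name, events):
    # Deferred import: the helper above stays usable without AWS access.
    import boto3

    kinesis = boto3.client("kinesis")
    # PutRecords accepts up to 500 records per call; batching beyond that
    # would need to be handled by the caller.
    return kinesis.put_records(
        StreamName=stream_name, Records=make_kinesis_records(events)
    )
```

The record-shaping helper is pure, so it can be unit-tested without touching the streaming service itself.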
Citrix Moves Data to Amazon Redshift Fast with Matillion ETL - Amazon Web Services
Matillion ETL, easily deployable from Amazon Web Services (AWS) Marketplace, helps Citrix collate and summarize data and augment it with more traditional business data from Microsoft SQL Server for additional context. Join our webinar to learn how organizations of any size can move data to the cloud quickly, accurately, and affordably with Matillion ETL.
Join our webinar to learn:
How Citrix moved data to Amazon Redshift with speed and accuracy.
How to make informed, business-critical decisions by analyzing data with Amazon Redshift.
How to speed time-to-value for your analytics initiatives using Matillion’s push-down ELT architecture.
Over 90% of today’s data was generated in the last two years, and the rate of data growth isn’t slowing down. In this session, we’ll step through the challenges and best practices of capturing all the data being generated, understanding what data you have, and driving insights, and even predicting the future, using purpose-built AWS services. We’ll frame the session and demonstrations around common pitfalls of building data lakes and how to successfully drive analytics and insights from the data. This session focuses on architecture patterns that bring together key AWS services rather than a deep dive on any single service. We’ll show how services such as Amazon S3, AWS Glue, Amazon Redshift, Amazon Athena, Amazon EMR, Amazon Kinesis, and Amazon machine learning services are put together to build a successful data lake for various roles, including both data scientists and business users.
Humans and Data Don't Mix: Best Practices to Secure Your Cloud - Amazon Web Services
When it comes to security, human error far outpaces other causes of failures. The risk of humans touching sensitive data is clear, so how do you get them away from your data while also speeding up time to detection and remediation? Stephen Schmidt, AWS CISO, will share hard-earned lessons around potential opportunities in your security program, along with practical steps to improve the agility of your organization.
Analyze Your Data Lake, Fast @ Any Scale - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
-Learn how to automatically discover, catalog, and prepare your data for analytics
-Understand how to query data in your data lake without having to transform or load the data into your data warehouse
-See how to analyze data in both your data lake and data warehouse
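As an illustration of the second objective, querying data in place, a query can run directly against files in your data lake through Amazon Athena without loading anything into a warehouse first. A minimal boto3 sketch, with the database and results bucket names as placeholders:

```python
import time


def athena_request(sql, database, results_bucket):
    """Build the StartQueryExecution parameters for a query-in-place call.

    The database and bucket names are placeholders, not real resources.
    """
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {
            "OutputLocation": f"s3://{results_bucket}/athena/"
        },
    }


def run_query(sql, database="datalake", results_bucket="my-results-bucket"):
    # Deferred import so the parameter builder above is testable offline.
    import boto3

    athena = boto3.client("athena")
    qid = athena.start_query_execution(
        **athena_request(sql, database, results_bucket)
    )["QueryExecutionId"]
    while True:  # poll until the query reaches a terminal state
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state, qid
        time.sleep(1)
```

Results land as CSV under the configured S3 output location, so the warehouse is never involved.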
BDA308 Deep Dive: Log Analytics with Amazon Elasticsearch Service - Amazon Web Services
Amazon Elasticsearch Service makes it easy to deploy, secure, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more. In this session you learn how to configure a secure, petabyte-scale Amazon Elasticsearch Service cluster and build Kibana dashboards to analyze your data. In addition, we discuss best practices to make your cluster reliable, take backups, and debug slow-running queries and indexing operations.
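One building block behind Kibana dashboards like those described is an Elasticsearch aggregation query. The sketch below builds a date-histogram body that counts ERROR log lines per hour; the field names and the `fixed_interval` parameter (Elasticsearch 7+; older versions use `interval`) are assumptions to adapt to your own index mapping:

```python
def error_histogram_query(hours=24, interval="1h", level_field="level"):
    """Aggregation body counting ERROR log lines per time bucket.

    Assumes logs carry an '@timestamp' field and a keyword level field;
    both names are illustrative and depend on your index mapping.
    """
    return {
        "size": 0,  # we only want the aggregation, not the matching docs
        "query": {
            "bool": {
                "filter": [
                    {"term": {level_field: "ERROR"}},
                    {"range": {"@timestamp": {"gte": f"now-{hours}h"}}},
                ]
            }
        },
        "aggs": {
            "per_bucket": {
                "date_histogram": {
                    "field": "@timestamp",
                    "fixed_interval": interval,
                }
            }
        },
    }
```

The body would be POSTed to `/<index>/_search` on your domain endpoint (with request signing or an access policy, per the security discussion in the session).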
Building Data Lakes That Cost Less and Deliver Results Faster - AWS Online Te... - Amazon Web Services
Learning Objectives:
- Get an inside look at Amazon S3 Select and how it helps to accelerate application performance
- Learn about how Amazon Glacier Select helps you extend your data lake to archival storage
- Understand how different applications can leverage these features
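Amazon S3 Select, from the first objective above, returns its results as an event stream rather than a single response body, so the records have to be reassembled client-side. A minimal boto3 sketch; the bucket, key, and CSV layout are assumptions:

```python
def assemble_select_payload(event_stream):
    """Concatenate the Records events from an S3 Select response stream.

    The stream interleaves Records, Progress, Stats, and End events; only
    Records events carry result bytes.
    """
    return b"".join(
        ev["Records"]["Payload"] for ev in event_stream if "Records" in ev
    )


def select_csv(bucket, key, sql="SELECT s.* FROM s3object s LIMIT 10"):
    # Deferred import: the stream parser above runs without AWS access.
    import boto3

    s3 = boto3.client("s3")
    resp = s3.select_object_content(
        Bucket=bucket,
        Key=key,
        ExpressionType="SQL",
        Expression=sql,
        InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
        OutputSerialization={"CSV": {}},
    )
    return assemble_select_payload(resp["Payload"])
```

Because only the selected bytes leave S3, the application downloads a fraction of the object, which is where the performance gain comes from.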
Amazon Aurora is a relational database built for the cloud and is compatible with MySQL and PostgreSQL. It combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. We'll cover some of the key innovations in the Aurora database engine and storage layers, explain recently announced features, such as Aurora Serverless, Aurora Multi-Master, and Aurora Parallel Query, and discuss best practices and optimal configurations. See why Aurora is a great fit for new application development and for migrations from overpriced, restrictive commercial databases.
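Aurora Serverless, one of the features mentioned, can also be reached over the HTTP-based Data API (service name `rds-data` in boto3) instead of a persistent database connection; note that the Data API is a separate, later addition and not necessarily part of this session. A sketch, with the cluster and secret ARNs as placeholders:

```python
def decode_row(fields):
    """Flatten one Data API row ([{'stringValue': ...}, ...]) to plain values.

    Each field dict carries exactly one typed entry such as 'stringValue'
    or 'longValue', or an 'isNull' marker.
    """
    out = []
    for f in fields:
        if f.get("isNull"):
            out.append(None)
        else:
            out.append(next(iter(f.values())))
    return out


def query(sql, cluster_arn, secret_arn, database="mydb"):
    # Deferred import: decode_row above is testable without AWS access.
    import boto3

    rds = boto3.client("rds-data")
    resp = rds.execute_statement(
        resourceArn=cluster_arn,
        secretArn=secret_arn,
        database=database,
        sql=sql,
    )
    return [decode_row(r) for r in resp.get("records", [])]
```

Credentials come from AWS Secrets Manager via the secret ARN, so no connection pool or driver is needed in the application.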
How to Build a Data Lake in Amazon S3 & Amazon Glacier - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Understand the options for building an analytics platform that leverages Amazon S3 & Amazon Glacier
- Learn about the key considerations for ETL and other core analytics functions
- Determine if query-in-place capabilities like Amazon S3 Select, Amazon Glacier Select, Amazon Athena, and Amazon Redshift Spectrum are a good fit for your use case
FINRA's Managed Data Lake: Next-Gen Analytics in the Cloud - ENT328 - re:Inve... - Amazon Web Services
Financial Industry Regulatory Authority (FINRA)'s Technology Group has changed its customers' relationship with data by creating a managed data lake that enables discovery on petabytes of capital markets' data, while saving time and money over traditional analytics solutions. FINRA's managed data lake unlocks the value in its data to accelerate analytics and machine learning at scale. The data lake includes a centralized data catalog and separates storage from compute, allowing users to query from petabytes of data in seconds. Learn how FINRA uses Spot Instances and services such as Amazon S3, Amazon EMR, Amazon Redshift, and AWS Lambda to provide the right tool for the right job at each step in the data processing pipeline. All of this is done while meeting FINRA's security and compliance responsibilities as a financial regulator.
SRV309 AWS Purpose-Built Database Strategy: The Right Tool for the Right Job - Amazon Web Services
In this session, Shawn Bice, VP of NoSQL and QuickSight, covers the AWS purpose-built strategy for databases and explains why your application should drive the requirements of a database, not the other way around. We introduce AWS databases that are purpose-built for your application use cases. Learn why you should select different data services to solve different aspects of an application, and watch a demonstration on which application use cases lend themselves well to which data services. If you’re a developer building modern applications that require flexibility and consistent millisecond performance, and you’re trying to determine what relational and non-relational data services to use, this session is for you.
One Data Lake, Many Uses: Enabling Multi-Tenant Analytics with Amazon EMR (AN... - Amazon Web Services
One of the benefits of having a data lake is that same data can be consumed by multi-tenant groups—an efficient way to share a persistent Amazon EMR cluster. The same business data can be safely used for many different analytics and data processing needs. In this session, we discuss steps to make an Amazon EMR cluster multi-tenant for analytics, best practices for a multi-tenant cluster, and solutions to common challenges. We also address the security and governance aspects of a multi-tenant Amazon EMR cluster.
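A shared, persistent cluster like the one discussed is typically launched once with authentication enabled and kept alive between jobs. The parameter builder below sketches what such a RunJobFlow request might look like; the release label, instance types, and the named security configuration (which would hold the Kerberos or encryption settings and be created separately) are all illustrative assumptions:

```python
def multitenant_cluster_config(name, log_bucket, core_count=4):
    """RunJobFlow parameters for a shared, persistent analytics cluster.

    All names are placeholders. 'emr-kerberos-config' stands in for a
    security configuration created beforehand that enables per-user
    authentication on the shared cluster. The dict would be passed to
    boto3's emr.run_job_flow(**config).
    """
    return {
        "Name": name,
        "ReleaseLabel": "emr-5.20.0",  # illustrative release
        "LogUri": f"s3://{log_bucket}/emr-logs/",
        "Applications": [{"Name": "Spark"}, {"Name": "Hive"}, {"Name": "Livy"}],
        "SecurityConfiguration": "emr-kerberos-config",
        "Instances": {
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge",
                 "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge",
                 "InstanceCount": core_count},
            ],
            # Persistent cluster: stay up after each job so tenants share it.
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
    }
```

Keeping the cluster alive between steps is what makes it shareable; per-tenant isolation then rests on the security configuration and IAM roles rather than on separate clusters.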
Migrating Your Traditional Data Warehouse to a Modern Data Lake - Amazon Web Services
In this session, we discuss the latest features of Amazon Redshift and Redshift Spectrum, and take a deep dive into its architecture and inner workings. We share many of the recent availability, performance, and management enhancements and how they improve your end user experience. You also hear from 21st Century Fox, who presents a case study of their fast migration from an on-premises data warehouse to Amazon Redshift. Learn how they are expanding their data warehouse to a data lake that encompasses multiple data sources and data formats. This architecture helps them tie together siloed business units and get actionable 360-degree insights across their consumer base.
A Deep Dive into What's New with Amazon EMR (ANT340-R1) - AWS re:Invent 2018 - Amazon Web Services
Amazon EMR is one of the largest Spark and Hadoop service providers in the world, enabling customers to run ETL, machine learning, real-time processing, data science, and low-latency SQL at petabyte scale. In this session, we introduce design patterns such as using Amazon S3 instead of HDFS, taking advantage of both long- and short-lived clusters, using notebooks, and other architectural best practices. We discuss lowering cost with Auto Scaling and Spot Instances, and security best practices for encryption and fine-grained access control. We cover improvements in using the Amazon EMR API, best practices for utilizing Spot Instances, including Spot Instances with Auto Scaling, improvements to Amazon S3 performance on Amazon EMR, and security, authorization, and authentication. We couple each of these with a demo or customer use case to illustrate the benefits. If you are an existing Amazon EMR user, you walk away with a thorough understanding of improvements made in 2018, and how they benefit you. If you are a new Amazon EMR user, you get an understanding of common use cases and how other customers are using Amazon EMR.
Building Serverless Analytics Solutions with Amazon QuickSight (ANT391) - AWS... - Amazon Web Services
Querying and analyzing big data can be complicated and expensive. It requires you to set up and manage databases, data warehouses, and business intelligence (BI) applications, all of which require time, effort, and resources. Using Amazon Athena and Amazon QuickSight, you can avoid the cost and complexity by creating a fast, scalable, and serverless cloud analytics solution without the need to invest in databases, data warehouses, complex ETL solutions, and BI applications. In this session, we demonstrate how you can build a serverless big data analytics solution using Amazon Athena and Amazon QuickSight.
Build Data Engineering Platforms with Amazon EMR (ANT204) - AWS re:Invent 2018 - Amazon Web Services
Amazon EMR provides a flexible range of service customization options, enabling customers to use it as a building block for their data platforms. In this session, AWS customers Salesforce.com and Vanguard discuss in detail how they use Amazon EMR to build a self-service, secure, and auditable data engineering platform. Customers who want to optimize their design and configurations should attend this session to learn best practices from customer experts. Topics include achieving cost-efficient scale, using notebooks, processing streaming data, rapid prototyping of applications and data pipelines, architecting for both transient and persistent clusters, setting up advanced security and authorization controls, and enabling easy self service for users.
Build Your Own Log Analytics Solutions on AWS (ANT323-R) - AWS re:Invent 2018 - Amazon Web Services
Amazon Elasticsearch Service's simplicity creates a multitude of opportunities to use it as a back end for real-time application and infrastructure monitoring. With this wealth of opportunities comes sprawl: developers in your organization are deploying Amazon Elasticsearch Service for many different workloads and purposes. Should you centralize into one Amazon Elasticsearch Service domain? What are the tradeoffs in scale and cost? How do you control access to the data and dashboards? How do you structure your indexes: single-tenant or multi-tenant? In this session, we explore whether, when, and how to centralize logging across your organization to minimize cost and maximize value, and we learn how Autodesk has built a unified log analytics solution using Amazon Elasticsearch Service.
Tape Is a Four Letter Word: Back Up to the Cloud in Under an Hour (STG201) - ... - Amazon Web Services
Tape backups. Yes, they're still a thing. If you want to stop using tapes but need to store immutable backups for compliance or operational reasons, attend this session to learn how to make an easy switch to a cloud-based virtual tape library (VTL). AWS Storage Gateway provides a seamless drop-in replacement for tape backups with its Tape Gateway. It works with the major backup software products, so you simply change the target for your backups, and they go to a VTL that stores virtual tapes on Amazon S3 and Amazon Glacier. Come see how it works.
ABD206: Building Visualizations and Dashboards with Amazon QuickSight - Amazon Web Services
Just as a picture is worth a thousand words, a visual is worth a thousand data points. A key aspect of our ability to gain insights from our data is to look for patterns, and these patterns are often not evident when we simply look at data in tables. The right visualization will help you gain a deeper understanding in a much quicker timeframe. In this session, we will show you how to quickly and easily visualize your data using Amazon QuickSight. We will show you how you can connect to data sources, generate custom metrics and calculations, create comprehensive business dashboards with various chart types, and set up filters and drill-downs to slice and dice the data.
How TrueCar Gains Actionable Insights with Splunk Cloud (PPT) - Amazon Web Services
The vast amount of big data that today’s companies generate makes it difficult to separate the signal from the noise. Organizations need to derive meaningful insights into operations and business to take action. TrueCar needed a better way to manage, search, and analyze their hybrid environment. In this webinar, you’ll learn how TrueCar centralized all of their data in one place using Amazon Kinesis and Splunk Cloud, gaining deep visibility, scalability, and the ability to monitor and troubleshoot operational issues – all while migrating to AWS.
This session provides IT pros and application owners an overview of AWS options for building hybrid storage architectures or even entirely migrating datacenter storage to the AWS Cloud. AWS Storage Gateway connects existing on-premises block, file, or tape storage systems to AWS cloud storage over the WAN in a hybrid model. The AWS Snow family of physical devices can capture, pre-process, and migrate data into and out of AWS without any network connection at all. Join us to learn how you can close down datacenters, reduce storage footprints, and build solutions for tiering, data lakes, backup, disaster recovery, and migration.
Developing with .NET Core on AWS: What's New (DEV318-R1) - AWS re:Invent 2018 - Amazon Web Services
In this demonstration-heavy session, we illustrate our latest techniques, tools, and libraries for developing end-to-end applications with .NET Core. We focus on serverless applications, but the techniques are broadly relevant. We start by showing you some useful features and best practices for authoring your serverless application, including debugging locally from the IDE and in production. From there, we demonstrate some helpful tools that make it easy to set up your CI/CD workflow from the start. Finally, we deploy our application with AWS Lambda.
Building Your First Serverless Data Lake (ANT356-R1) - AWS re:Invent 2018 - Amazon Web Services
In this session, you have the opportunity to learn the fundamental building blocks of a data lake on AWS. You design and build a serverless pipeline to ingest, process, optimize, and query data in your very own data lake. We discuss different optimizations and best practices to tune your architecture for future growth.
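One common optimization in such a pipeline is writing processed output into Hive-style `dt=` partitions, so query engines like Amazon Athena can prune by date instead of scanning the whole table. A small sketch; the table, bucket, and file names are placeholders, not from the session:

```python
from datetime import date


def partitioned_key(table, event_date, filename):
    """Build a Hive-style partitioned S3 key (table/dt=YYYY-MM-DD/file).

    Athena, AWS Glue, and Amazon EMR all understand this layout, and date
    filters then touch only the matching prefixes.
    """
    return f"{table}/dt={event_date.isoformat()}/{filename}"


def add_partition_ddl(database, table, event_date, bucket):
    """DDL registering one day's partition with the Athena/Glue catalog.

    A pipeline would submit this string via Athena after writing the
    day's files (or rely on a Glue crawler instead).
    """
    dt = event_date.isoformat()
    return (
        f"ALTER TABLE {database}.{table} ADD IF NOT EXISTS "
        f"PARTITION (dt='{dt}') "
        f"LOCATION 's3://{bucket}/{table}/dt={dt}/'"
    )
```

Pairing this layout with a columnar format such as Parquet is the usual second half of the optimization, since it cuts both the bytes scanned and the per-query cost.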
Not having to worry about servers can save you time and effort. Today, with a serverless platform, you can globally distribute your web application to run on dozens of data centers across the planet, with your customers being served from the one nearest to them. Join this session to learn how you can combine forces: Drupal as a powerful headless CMS, AWS Lambda@Edge providing serverless compute functionality, and Amazon CloudFront accelerating content through its global network.
In this presentation, we look at architecture, integration examples, and best practices for some of the most popular use-cases from across different channels such as web, mobile, and social media. Learn more: https://aws.amazon.com/cloudfront/
Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us to understand best practices for scaling your resources from one to millions of users. We’ll show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
BDA308 Deep Dive: Log Analytics with Amazon Elasticsearch ServiceAmazon Web Services
Amazon Elasticsearch Service makes it easy to deploy, secure, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more. In this session you learn how to configure a secure, petabyte-scale Amazon Elasticsearch Service cluster and build Kibana dashboards to analyze your data. In addition, we discuss best practices to make your cluster reliable, take backups, and debug slow-running queries and indexing operations.
Building Data Lakes That Cost Less and Deliver Results Faster - AWS Online Te...Amazon Web Services
Learning Objectives:
- Get an inside look at Amazon S3 Select and how it helps to accelerate application performance
- Learn about how Amazon Glacier Select helps you extend your data lake to archival storage
- Understand how different applications can leverage these features
Amazon Aurora is a relational database built for the cloud and is compatible with MySQL and PostgreSQL. It combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. We'll cover some of the key innovations in the Aurora database engine and storage layers, explain recently announced features, such as Aurora Serverless, Aurora Multi-Master, and Aurora Parallel Query, and discuss best practices and optimal configurations. See why Aurora is a great fit for new application development and for migrations from overpriced, restrictive commercial databases.
How to Build a Data Lake in Amazon S3 & Amazon Glacier - AWS Online Tech TalksAmazon Web Services
Learning Objectives:
- Understand the options for building an analytics platform that leverages Amazon S3 & Amazon Glacier
- Learn about the key considerations for ETL and other core analytics functions
- Determine if query-in-place capabilities like Amazon S3 Select, Amazon Glacier Select, Amazon Athena, and Amazon Redshift Spectrum are a good fit for your use case
FINRA's Managed Data Lake: Next-Gen Analytics in the Cloud - ENT328 - re:Inve...Amazon Web Services
Financial Impact Regulatory Authority (FINRA)'s Technology Group has changed its customers' relationship with data by creating a managed data lake that enables discovery on petabytes of capital markets' data, while saving time and money over traditional analytics solutions. FINRA's managed data lake unlocks the value in its data to accelerate analytics and machine learning at scale. The data lake includes a centralized data catalog and separates storage from compute, allowing users to query from petabytes of data in seconds. Learn how FINRA uses Spot Instances and services such as Amazon S3, Amazon EMR, Amazon Redshift, and AWS Lambda to provide the right tool for the right job at each step in the data processing pipeline. All of this is done while meeting FINRA's security and compliance responsibilities as a financial regulator.
SRV309 AWS Purpose-Built Database Strategy: The Right Tool for the Right JobAmazon Web Services
In this session, Shawn Bice, VP of NoSQL and QuickSight, covers the AWS purpose-built strategy for databases and explains why your application should drive the requirements of a database, not the other way around. We introduce AWS databases that are purpose-built for your application use cases. Learn why you should select different data services to solve different aspects of an application, and watch a demonstration on which application use cases lend themselves well to which data services. If you’re a developer building modern applications that require flexibility and consistent millisecond performance, and you’re trying to determine what relational and non-relational data services to use, this session is for you.
One Data Lake, Many Uses: Enabling Multi-Tenant Analytics with Amazon EMR (AN...Amazon Web Services
One of the benefits of having a data lake is that same data can be consumed by multi-tenant groups—an efficient way to share a persistent Amazon EMR cluster. The same business data can be safely used for many different analytics and data processing needs. In this session, we discuss steps to make an Amazon EMR cluster multi-tenant for analytics, best practices for a multi-tenant cluster, and solutions to common challenges. We also address the security and governance aspects of a multi-tenant Amazon EMR cluster.
Migrating your traditional Data Warehouse to a Modern Data LakeAmazon Web Services
In this session, we discuss the latest features of Amazon Redshift and Redshift Spectrum, and take a deep dive into its architecture and inner workings. We share many of the recent availability, performance, and management enhancements and how they improve your end user experience. You also hear from 21st Century Fox, who presents a case study of their fast migration from an on-premises data warehouse to Amazon Redshift. Learn how they are expanding their data warehouse to a data lake that encompasses multiple data sources and data formats. This architecture helps them tie together siloed business units and get actionable 360-degree insights across their consumer base.
A Deep Dive into What's New with Amazon EMR (ANT340-R1) - AWS re:Invent 2018Amazon Web Services
Amazon EMR is one of the largest Spark and Hadoop service providers in the world, enabling customers to run ETL, machine learning, real-time processing, data science, and low-latency SQL at petabyte scale. In this session, we introduce design patterns such as using Amazon S3 instead of HDFS, taking advantage of both long- and short-lived clusters, using notebooks, and other architectural best practices. We discuss lowering cost with Auto Scaling and Spot Instances, and security best practices for encryption and fine-grained access control. We showcase key improvements made to the service in 2017. We cover improvements in using the Amazon EMR API, best practices utilizing Spot instances and Spot Instances with Auto Scaling, improvements toward Amazon S3 performance on Amazon EMR, and security/authorization and authentication. We couple each of these with a demo or customer use case to illustrate the benefits. If you are an existing Amazon EMR user, you walk away with a thorough understanding of improvements made in 2018, and how they benefit you. If you are a new Amazon EMR user, get an understanding of common use cases and how other customers are using Amazon EMR.
Building Serverless Analytics Solutions with Amazon QuickSight (ANT391) - AWS...Amazon Web Services
Querying and analyzing big data can be complicated and expensive. It requires you to setup and manage databases, data warehouses, and business intelligence (BI) applications—all of which require time, effort, and resources. Using Amazon Athena and Amazon QuickSight, you can avoid the cost and complexity by creating a fast, scalable, and serverless cloud analytics solution without the need to invest in databases, data warehouses, complex ETL solutions, and BI applications. In this session, we demonstrate how you can build a serverless big data analytics solution using Amazon Athena and Amazon QuickSight.
Build Data Engineering Platforms with Amazon EMR (ANT204) - AWS re:Invent 2018
Amazon EMR provides a flexible range of service customization options, enabling customers to use it as a building block for their data platforms. In this session, AWS customers Salesforce.com and Vanguard discuss in detail how they use Amazon EMR to build a self-service, secure, and auditable data engineering platform. Customers who want to optimize their design and configurations should attend this session to learn best practices from customer experts. Topics include achieving cost-efficient scale, using notebooks, processing streaming data, rapid prototyping of applications and data pipelines, architecting for both transient and persistent clusters, setting up advanced security and authorization controls, and enabling easy self service for users.
Build Your Own Log Analytics Solutions on AWS (ANT323-R) - AWS re:Invent 2018
With Amazon Elasticsearch Service's simplicity comes a multitude of opportunities to use it as a back end for real-time application and infrastructure monitoring. With this wealth of opportunities comes sprawl: developers across your organization deploy Amazon Elasticsearch Service for many different workloads and purposes. Should you centralize into one Amazon Elasticsearch Service domain? What are the tradeoffs in scale and cost? How do you control access to the data and dashboards? How do you structure your indexes: single-tenant or multi-tenant? In this session, we explore whether, when, and how to centralize logging across your organization to minimize cost and maximize value, and we learn how Autodesk built a unified log analytics solution using Amazon Elasticsearch Service.
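The single-tenant versus multi-tenant index question above can be made concrete with a small naming helper. This is only an illustrative sketch: the `logs-` prefix and daily rotation scheme are assumptions for the example, not anything Autodesk or the service prescribes.

```python
from datetime import date

def index_name(tenant, day=None, multi_tenant=False):
    """Build a daily-rotated index name.

    In a multi-tenant layout all tenants share one index series (isolation
    then comes from a tenant field and filtered aliases); in a single-tenant
    layout each tenant gets its own index series.
    """
    day = day or date.today()
    suffix = day.strftime("%Y.%m.%d")  # daily rotation keeps shards bounded
    return f"logs-{suffix}" if multi_tenant else f"logs-{tenant}-{suffix}"
```

Daily rotation also makes retention cheap: expiring old data is a whole-index delete rather than a document-by-document purge.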
Tape Is a Four Letter Word: Back Up to the Cloud in Under an Hour (STG201)
Tape backups. Yes, they're still a thing. If you want to stop using tapes but need to store immutable backups for compliance or operational reasons, attend this session to learn how to make an easy switch to a cloud-based virtual tape library (VTL). AWS Storage Gateway provides a seamless drop-in replacement for tape backups with its Tape Gateway. It works with the major backup software products, so you simply change the target for your backups, and they go to a VTL that stores virtual tapes on Amazon S3 and Amazon Glacier. Come see how it works.
ABD206 Building Visualizations and Dashboards with Amazon QuickSight
Just as a picture is worth a thousand words, a visual is worth a thousand data points. A key aspect of our ability to gain insights from data is to look for patterns, and these patterns are often not evident when we simply look at data in tables. The right visualization helps you gain a deeper understanding in a much shorter timeframe. In this session, we show you how to quickly and easily visualize your data using Amazon QuickSight. We show you how to connect to data sources, generate custom metrics and calculations, create comprehensive business dashboards with various chart types, and set up filters and drill-downs to slice and dice the data.
How TrueCar Gains Actionable Insights with Splunk Cloud
The vast amount of big data that today’s companies generate makes it difficult to separate the signal from the noise. Organizations need to derive meaningful insights into operations and business to take action. TrueCar needed a better way to manage, search, and analyze their hybrid environment. In this webinar, you’ll learn how TrueCar centralized all of their data in one place using Amazon Kinesis and Splunk Cloud, gaining deep visibility, scalability, and the ability to monitor and troubleshoot operational issues – all while migrating to AWS.
This session provides IT pros and application owners an overview of AWS options for building hybrid storage architectures or even migrating datacenter storage entirely to the AWS Cloud. AWS Storage Gateway connects existing on-premises block, file, or tape storage systems to AWS cloud storage over the WAN in a hybrid model. The AWS Snow family of physical devices can capture, pre-process, and migrate data into and out of AWS without any network connection at all. Join us to learn how you can close down datacenters, reduce storage footprints, and build solutions for tiering, data lakes, backup, disaster recovery, and migration.
Developing with .NET Core on AWS: What's New (DEV318-R1) - AWS re:Invent 2018
In this demonstration-heavy session, we illustrate our latest techniques, tools, and libraries for developing end-to-end applications with .NET Core. We focus on serverless applications, but the techniques are broadly relevant. We start by showing you some useful features and best practices for authoring your serverless application, including debugging locally from the IDE and in production. From there, we demonstrate some helpful tools that make it easy to set up your CI/CD workflow from the start. Finally, we deploy our application with AWS Lambda.
Building Your First Serverless Data Lake (ANT356-R1) - AWS re:Invent 2018
In this session, you have the opportunity to learn the fundamental building blocks of a data lake on AWS. You design and build a serverless pipeline to ingest, process, optimize and query data in your very own data lake. We discuss different optimizations and best practices to tune your architecture for future growth.
Not having to worry about servers can save you time and effort. With a serverless platform, you can globally distribute your web application to run on dozens of data centers across the planet, with your customers served from the one nearest to them. Join this session to learn how you can combine forces: Drupal as a powerful headless CMS, AWS Lambda@Edge providing serverless compute functionality, and Amazon CloudFront accelerating content through its global network.
In this presentation, we look at architecture, integration examples, and best practices for some of the most popular use-cases from across different channels such as web, mobile, and social media. Learn more: https://aws.amazon.com/cloudfront/
Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us to understand best practices for scaling your resources from one to millions of users. We’ll show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
Lessons Learned from a Large-Scale Legacy Migration with Sysco (STG311) - AWS re:Invent 2018
Migrating enterprise applications to the cloud requires thorough planning and consideration for a number of variables. Should you move your application to a similar infrastructure in the cloud (in a lift-and-shift scenario)? Or should you refactor your application to take advantage of cloud-native services for object storage, serverless, auto-scaling, and so on? In this session, an AWS expert walks through the ten commandments that enterprises should follow when moving applications to the cloud and refactoring them for optimal performance. Then, a representative of Sysco Corporation, a Fortune 50 company, shares how the company migrated mission-critical legacy business systems and modernized them to take advantage of the AWS Cloud. Learn how the company moved its enterprise purchasing system, which processes millions of dollars in sales daily, to the AWS Cloud while achieving a 60% decrease in run costs. Also discover the lessons learned and highlights of the migration, which resulted in 30% increase in performance, 3x improvement in user accessibility, and a significant decrease in order backlogs and outages.
Modernizing Media Supply Chains with AWS Serverless (API301) - AWS re:Invent 2018
Learn how Fox and Discovery modernized their media processing workflows to positively impact operations and business results. In this session, we examine each company's production architecture and learn how they utilize AWS services such as AWS Elemental Media Services, AWS Lambda, AWS Step Functions, Amazon API Gateway, and container toolsets. You also get insights into new business capabilities enabled by their AWS serverless architecture, including automation of content assembly and quality control as well as increased customer engagement with personalization and improved processing performance.
by Anupam Mishra, AWS Solutions Architect
For startup tech leaders, it's a balancing act: aiming to accelerate product development, while also being mindful of how rushed technology choices can introduce unnecessary business risk.
Come to this session to learn how to start releasing features faster with an entire continuous delivery toolchain deployed in minutes with AWS CodeStar. See how you can easily track progress across your product backlog through to actual deployment in production. We show you which AWS services to use to future-proof your architecture and avoid over-engineering. Prepare for success by deploying your app on a scalable platform like AWS Elastic Beanstalk, without a steep learning curve or complex infrastructure configuration work.
Finally, leverage one of the turnkey AWS Solutions you can launch with a few clicks: a reference implementation for making data-driven decisions about your product roadmap using real-time analytics.
Deep Dive into AWS X-Ray: Monitor Modern Applications (DEV324) - AWS re:Invent 2018
Are you spending hours trying to understand how customers are impacted by performance issues and faults in your service-oriented applications? In this session, we show you how customers are using AWS X-Ray to reduce mean time to resolution, get to the root cause faster, and determine customer impact. In addition, one of our X-Ray customers, ConnectWise, presents a case study and best practices on how it is leveraging X-Ray in its production environment. We also show you how to use X-Ray with applications built on AWS services such as Amazon Elastic Container Service for Kubernetes (Amazon EKS), AWS Fargate, Amazon Elastic Container Service (Amazon ECS), and AWS Lambda.
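To make the idea of trace segments concrete, here is a toy stand-in: a decorator that records a named duration per call, loosely the way an X-Ray subsegment wraps downstream work. It is purely illustrative; real instrumentation would use the aws-xray-sdk, and none of the names below come from that SDK.

```python
import functools
import time

def traced(segments):
    """Append a (function_name, duration_seconds) tuple to `segments`
    for every call to the decorated function, even when it raises."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                # Record the segment whether the call succeeded or failed,
                # since failed calls are exactly what you want to see.
                segments.append((fn.__name__, time.perf_counter() - start))
        return wrapper
    return decorator
```

A real tracer would also propagate a trace ID across service boundaries; this sketch only shows the per-call timing idea.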
Scaling Up to Your First 10 Million Users (ARC205-R1) - AWS re:Invent 2018
Cloud computing provides a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session for best practices on scaling your resources from one to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
In this session, learn from market leader Vonage how and why they re-architected their QoS-sensitive, highly available, and highly performant legacy real-time communications systems to take advantage of Amazon EC2, enhanced networking, Amazon S3, Auto Scaling groups, Amazon RDS, Amazon ElastiCache, AWS Lambda, AWS Step Functions, Amazon SNS, Amazon SQS, Amazon Kinesis, Amazon EFS, and more. We also learn how Aspect, a multinational leader in call center solutions, used AWS Lambda, Amazon API Gateway, Amazon Kinesis, Amazon ElastiCache, Amazon Cognito, and Application Load Balancer, with open-source API development tooling from Swagger, to build a comprehensive, microservices-based solution. Vonage and Aspect share their journeys to TCO optimization, global outreach, and agility, with best practices and insights.
Scaling from Zero to Your First 10 Million Users
AWS Summit Milano 2018
Speaker: Giorgio Bonfiglio, AWS Technical Account Manager - Enterprise Support
When Fujirebio Diagnostics, a leading producer of in vitro diagnostics, shifted to virtualization and the cloud, it wanted to replace its costly, unreliable, and cumbersome backup solution. Fujirebio turned to Amazon Web Services (AWS) and Rubrik for a more modern solution. The company used Rubrik Cloud Data Management to eliminate complex tape backup, archive mission-critical production systems on AWS, and extend on-site storage capacity. The solution automates backup, recovery, and archival on AWS, helping the company drive operational efficiency and resilience. In this webinar, you will learn how Fujirebio Diagnostics used AWS and Rubrik to simplify data protection, achieve fast recovery, reduce management time, and lower total cost of ownership by 75 percent.
AWS Speaker: Mike Ruiz, Partner Solutions Architect
Rubrik Speakers: Kenneth Hui, Technical Marketing Engineer & Mark Haus, Sales Engineer
Resiliency and Availability Design Patterns for the Cloud
We have traditionally built robust software systems by trying to avoid mistakes, by dodging failures when they occur in production, and by testing parts of the system in isolation from one another. Modern methods and techniques take a very different approach based on resiliency, which embraces failure instead of trying to avoid it. Resilient architectures enhance observability and leverage well-known patterns such as graceful degradation, timeouts, and circuit breakers, as well as newer patterns like cell-based architecture and shuffle sharding. In this session, we review the most useful patterns for building resilient software systems and show how you can benefit from them.
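One of the patterns named above, the circuit breaker, can be sketched in a few lines. This is a minimal illustration rather than a production implementation: the failure threshold, the reset timeout, and the single half-open trial call are simplifying assumptions.

```python
import time

class CircuitBreaker:
    """Open the circuit after `max_failures` consecutive failures,
    fail fast while open, then allow one trial call after `reset_timeout`."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Fail fast instead of hammering an unhealthy dependency.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: permit one trial call

        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

The value of the pattern is the fast failure: callers get an immediate error they can degrade gracefully from, while the unhealthy dependency gets time to recover.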
Building Microservices with the 12 Factor App Pattern on AWS
Small monolithic apps are quick to build and fast to implement, but tightly coupled applications can quickly become difficult to operate, maintain, and scale as they grow. In this presentation, we cover how to properly construct services and distributed microservices systems. We explain how to build twelve-factor applications and discuss the right tools and architectures to implement them on AWS.
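Factor III of the twelve-factor methodology, storing config in the environment, is easy to show. A minimal sketch, where the variable names and defaults are hypothetical:

```python
import os

def load_config(env=os.environ):
    """Read configuration from the environment so the same build
    runs unchanged in every deploy stage (dev, staging, production)."""
    return {
        # Hypothetical variable names; each has a safe development default.
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "port": int(env.get("PORT", "8080")),
        "debug": env.get("DEBUG", "false").lower() == "true",
    }
```

Because config lives outside the artifact, promoting a release between stages changes only environment variables, never the code.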
Ripping off the Bandage: Re-Architecting Traditional Three-Tier Monoliths to Serverless Microservices
The world is powered by many monolithic applications that were written many years ago. These applications have complicated code bases. They are also difficult to maintain, deploy, and operate. The cloud, microservices, and serverless provide agility, efficiency, and resiliency. In this chalk talk, we highlight various approaches for rearchitecting three-tier monoliths to serverless microservices for your customers.
How to Build Forecasting Services Using ML and Deep Learning Algorithms
Forecasting is an important process for a great many companies, used in many areas to try to accurately predict the growth and distribution of a product, the resource usage of production lines, financial projections, and much more. Amazon uses advanced forecasting techniques internally, and some of these services have been made available to all AWS customers.
In this session, we illustrate how to preprocess data that contains a time component and then apply an algorithm that produces an accurate forecast from the analyzed data.
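Before reaching for an ML or deep-learning forecaster, a naive moving-average forecast is a common baseline for sanity-checking a preprocessed time series. A minimal sketch; the window and horizon values are arbitrary choices for illustration:

```python
def moving_average_forecast(series, window=3, horizon=2):
    """Predict each future point as the mean of the last `window`
    observations, feeding predictions back in for multi-step horizons."""
    history = list(series)
    forecasts = []
    for _ in range(horizon):
        window_vals = history[-window:]
        pred = sum(window_vals) / len(window_vals)
        forecasts.append(pred)
        history.append(pred)  # roll the window forward over predictions
    return forecasts
```

If a sophisticated model cannot beat this baseline on held-out data, the extra complexity is not yet earning its keep.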
Big Data for Startups: How to Build Big Data Applications the Serverless Way
The variety and volume of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale big data clusters looks like an investment only established companies can afford. But the elasticity of the cloud, and serverless services in particular, lets us break through these limits.
We therefore look at how to develop big data applications quickly, without worrying about infrastructure, dedicating all our resources instead to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS Cloud. In this session, we present the main features of the service and how to deploy your application in just a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing the pace of innovation. Over this period, we learned how changing our approach to application development greatly increased our agility and release velocity and, ultimately, allowed us to build more reliable and scalable applications. In this session, we explain how we define modern applications and how building modern apps affects not only application architecture but also organizational structure, development release pipelines, and even the operating model. We also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to Spend Up to 90% Less with Containers and Spot Instances
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
Amazon ECS, Amazon EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand Instances. In this session, we explore the characteristics of Spot Instances and how they can easily be used on AWS. We also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
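The savings claim above is simple arithmetic over a mixed fleet. A small sketch, using hypothetical prices and a 70% discount rather than actual AWS rates, which vary by instance type and market:

```python
def blended_hourly_cost(on_demand_price, spot_discount, spot_fraction, instances):
    """Estimate hourly fleet cost when `spot_fraction` of `instances`
    run on Spot at a `spot_discount` off the On-Demand price."""
    spot_price = on_demand_price * (1 - spot_discount)
    n_spot = instances * spot_fraction
    n_on_demand = instances - n_spot
    return n_on_demand * on_demand_price + n_spot * spot_price
```

For example, keeping 20% of a fleet On-Demand as a stable baseline while 80% rides Spot already captures most of the discount, while limiting exposure to Spot interruptions.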
In recent months, many customers have been asking us how to monetise Open APIs, simplify fintech integrations, and accelerate adoption of various Open Banking business models. AWS and FinConecta would therefore like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make Your Startup's Market Offering Unique with Machine Learning Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and create the differentiating elements of your own offering.
Focusing on machine learning technologies, we look at how to select the artificial intelligence services offered by AWS and, including through a demo, how to build custom machine learning models using SageMaker Studio.
OpsWorks Configuration Management: Automate the Management and Deployment of Your EC2 Instances
With the traditional approach to IT, implementing DevOps techniques was difficult for many years: they often involved manual activities, occasionally leading to application downtime that interrupted users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and yielding significant improvements in business continuity.
AWS offers AWS OpsWorks as a configuration management tool that aims to automate and simplify the management and deployment of EC2 instances using Chef and Puppet.
Learn how to use AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to Support Your Windows Workloads
Want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis powered by artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar, we explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event next Wednesday, October 14th, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a broad range of AWS services, taking full advantage of the AWS Cloud while protecting existing VMware investments.
Build Your First Serverless Ledger-Based App with QLDB and NodeJS
Many companies today build applications with ledger-like functionality, for example to verify the history of credits and debits in banking transactions, or to track the flow of their products through the supply chain.
At the core of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB eliminates the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session, we see how to build a complete serverless application that uses the capabilities of QLDB.
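The "cryptographically verifiable log" idea at the heart of a ledger database can be illustrated with a toy hash chain. This sketch is not how QLDB is implemented, just a way to see why tampering with a chained journal is detectable:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

class MiniLedger:
    """Append-only log where each entry embeds the hash of the previous
    entry, so modifying any record breaks the chain from that point on."""

    def __init__(self):
        self.entries = []

    def append(self, record):
        prev_hash = self.entries[-1]["hash"] if self.entries else GENESIS
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev_hash, "hash": digest})

    def verify(self):
        """Recompute every hash; return False if any link is broken."""
        prev = GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

A managed ledger database maintains a journal with this verifiability property for you, along with SQL-like querying and history views, which is exactly the custom machinery the abstract says you no longer have to build.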
With the rise of microservices architectures and rich mobile and web applications, APIs are more important than ever for delivering a great experience to end users. In this session, we learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We dig into several scenarios, understanding how AppSync can help solve these use cases by creating modern APIs with real-time and offline data update capabilities.
We also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle Databases and VMware Cloud™ on AWS: Debunking the Myths
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can be introduced when moving applications out of local data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads while accelerating cloud transformation; they dive into the architecture and demonstrate how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies the management of Docker containers through an orchestration layer controlling deployment and lifecycle. In this session, we present the service's main features, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.