In this session, learn how Cox Automotive uses Splunk Cloud for real-time visibility into its AWS and hybrid environments to achieve near-instantaneous MTTI, reduce auction incidents by 90%, and proactively predict outages. We also introduce a highly anticipated capability that lets you ingest, transform, and analyze data in real time using Splunk and Amazon Kinesis Firehose to gain valuable insights from your cloud resources. It's now quicker and easier than ever to get analytics-driven infrastructure monitoring with Splunk Enterprise and Splunk Cloud.
Session sponsored by Splunk
ABD317_Building Your First Big Data Application on AWS
Want to ramp up your knowledge of AWS big data web services and launch your first big data application on the cloud? We walk you through simplifying big data processing as a data bus comprising ingest, store, process, and visualize. You build a big data application using AWS managed services, including Amazon Athena, Amazon Kinesis, Amazon DynamoDB, and Amazon S3. Along the way, we review architecture design patterns for big data applications and give you access to a take-home lab so that you can rebuild and customize the application yourself. You should bring your own laptop and have some familiarity with AWS services to get the most from this session.
ABD302_Real-Time Data Exploration and Analytics with Amazon Elasticsearch Ser...
In this session, we use Apache web logs as an example and show you how to build an end-to-end analytics solution. First, we cover how to configure an Amazon ES cluster and ingest data using Amazon Kinesis Firehose. We look at best practices for choosing instance types, storage options, shard counts, and index rotations based on the throughput of incoming data. Then we demonstrate how to set up a Kibana dashboard and build custom dashboard widgets. Finally, we review approaches for generating custom, ad hoc reports.
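The ingestion step described here can be sketched as a small parsing routine: Firehose delivers newline-delimited JSON to Amazon ES, so each Apache log line must first be turned into a JSON document. This is a minimal sketch assuming the Apache common log format; the delivery-stream name in the closing comment is hypothetical.

```python
import json
import re

# Apache common log format: host ident user [time] "request" status bytes
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d+) (?P<bytes>\d+)'
)

def log_line_to_document(line):
    """Parse one access-log line into a JSON document suitable for a
    Firehose record destined for Amazon Elasticsearch Service."""
    match = LOG_PATTERN.match(line)
    if match is None:
        raise ValueError("unparseable log line: %r" % line)
    doc = match.groupdict()
    doc["status"] = int(doc["status"])
    doc["bytes"] = int(doc["bytes"])
    # Firehose delivery to Amazon ES expects newline-delimited JSON.
    return json.dumps(doc) + "\n"

record = log_line_to_document(
    '203.0.113.7 - - [10/Oct/2017:13:55:36 +0000] '
    '"GET /index.html HTTP/1.1" 200 2326'
)
# The actual ingestion call would be along the lines of:
# boto3.client("firehose").put_record(
#     DeliveryStreamName="apache-logs",  # hypothetical stream name
#     Record={"Data": record.encode()})
```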
ABD307_Deep Analytics for Global AWS Marketing Organization
To meet the needs of the global marketing organization, the AWS marketing analytics team built a scalable platform that allows the data science team to deliver custom econometric and machine learning models for end user self-service. To meet data security standards, we use end-to-end data encryption and different AWS services such as Amazon Redshift, Amazon RDS, Amazon S3, Amazon EMR with Apache Spark and Auto Scaling. In this session, you see real examples of how we have scaled and automated critical analysis, such as calculating the impact of marketing programs like re:Invent and prioritizing leads for our sales teams.
How Nextdoor Built a Scalable, Serverless Data Pipeline for Billions of Event...
In this session, learn how Nextdoor replaced their home-grown data pipeline based on a topology of Flume nodes with a completely serverless architecture based on Kinesis and Lambda. By making these changes, they improved both the reliability of their data and the delivery times of billions of records of data to their Amazon S3–based data lake and Amazon Redshift cluster. Nextdoor is a private social networking service for neighborhoods.
IOT313_AWS IoT and Machine Learning for Building Predictive Applications with...
In this session, we present AWS IoT and Amazon Machine Learning (Amazon ML) to demonstrate how you can use these services together to build smart applications. Customer SKF presents their use case around AWS IoT and Amazon ML in their wind turbines.
Organizations need to gain insight and knowledge from a growing number of data sources: Internet of Things (IoT) devices, APIs, clickstreams, and unstructured and log data. However, organizations are often limited by legacy data warehouses and ETL processes that were designed for transactional data. In this session, we introduce key ETL features of AWS Glue and cover common use cases, ranging from scheduled nightly data warehouse loads to near real-time, event-driven ETL flows for your data lake. We discuss how to build scalable, efficient, and serverless ETL pipelines using AWS Glue. Additionally, Merck shares how they built an end-to-end ETL pipeline for their application release management system and launched it in production in less than a week using AWS Glue.
Easy and Scalable Log Analytics with Amazon Elasticsearch Service - ABD326
Applications generate logs. Infrastructure generates logs. Even humans generate logs (though we usually call that “medical data”). By ingesting and analyzing logs, you can gain understanding of how complex systems operate and quickly discover and diagnose when they don’t work as they should. In this workshop, we ingest and analyze log streams using Amazon Kinesis Firehose and Amazon Elasticsearch Service. You should come with an understanding of AWS fundamentals (Amazon EC2, Amazon S3, and security groups). You need a laptop with a Chrome or Firefox browser.
GAM310_Build a Telemetry and Analytics Pipeline for Game Balancing
In this workshop, we build telemetry and analytics data processing pipelines to assist game developers, architects, designers, and producers. We use a fictitious RPG, ingest data from in-game events, and then analyze that data to help with game balancing, troubleshooting, and other recommendations for game developers and designers. As a participant, you use Amazon Kinesis, Amazon Kinesis Firehose, Amazon Kinesis Analytics, Amazon EMR, Amazon Redshift, Amazon S3, Amazon Athena, and Amazon QuickSight. Prerequisites include your own laptop and an interest in big data services and game data processing and analytics.
In order to make your time in the workshop as productive as possible, please make sure to check out the additional information below.
AWS account: A fully functional AWS account with administrative access. Participants should be able to create and destroy resources in the us-west-2 and eu-west-1 regions via the API, CLI, and AWS Console.
Device/OS: A laptop running Mac OS X, a Linux flavor, or Windows, with a functional SSH/Remote Desktop client.
AWS service familiarity/experience: Familiarity with EC2, S3, and the AWS Console is helpful. The remaining services are introduced during the workshop.
Audience: Game developers (server programmers), architects, game producers/designers, and hands-on members of game marketing/analytics teams.
DVC303-Technological Accelerants for Organizational Transformation
Developers and management can seem at cross purposes when one group looks at technologies and the other looks at organizational issues. Both groups are looking for ways to deliver value faster, leaner, and at less cost. There are technological avenues for accomplishing these goals, including DevOps and serverless architectures. However, these approaches also have organizational implications, as they change the nature and content of communication between teams. In this session, we cover the technology benefits and organizational transformations involved in DevOps and serverless architectures.
This session is part of the re:Invent Developer Community Day, six community-led sessions where AWS enthusiasts share technical insights on trending topics based on first-hand experiences and knowledge shared within local AWS communities.
ABD327_Migrating Your Traditional Data Warehouse to a Modern Data Lake
In this session, we discuss the latest features of Amazon Redshift and Redshift Spectrum and take a deep dive into their architecture and inner workings. We share many of the recent availability, performance, and management enhancements and how they improve your end user experience. You also hear from 21st Century Fox, who presents a case study of their fast migration from an on-premises data warehouse to Amazon Redshift. Learn how they are expanding their data warehouse to a data lake that encompasses multiple data sources and data formats. This architecture helps them tie together siloed business units and get actionable 360-degree insights across their consumer base.
Reducing the time to get actionable insights from data is important to all businesses, and customers who employ batch data analytics tools are exploring the benefits of streaming analytics. Learn best practices to extend your architecture from data warehouses and databases to real-time solutions. Learn how to use Amazon Kinesis to get real-time data insights and integrate them with Amazon Aurora, Amazon RDS, Amazon Redshift, and Amazon S3. The Amazon Flex team describes how they used streaming analytics in their Amazon Flex mobile app, used by Amazon delivery drivers to deliver millions of packages each month on time. They discuss the architecture that enabled the move from a batch processing system to a real-time one, the challenges of migrating existing batch data to streaming data, and how to benefit from real-time analytics.
RET301-Build Single Customer View across Multiple Retail Channels using AWS S...
A challenge many retailers face is forming an integrated, single view of the customer across multiple retail channels to better understand purchasing behaviors and patterns. In this session, we present a solution that merges web analytics data with customer purchase history, built on Amazon API Gateway, AWS Lambda, and Amazon S3. Learn how to track customer purchase behavior across different selling channels to better predict future needs and make relevant, intelligent recommendations.
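As a rough sketch of how such a solution can hang together (the bucket layout and function names here are illustrative, not the presenters' actual implementation): an API Gateway proxy integration invokes a Lambda function that stores each web-analytics event in S3, keyed by customer so it can later be joined with purchase history.

```python
import hashlib
import json
from datetime import datetime, timezone

def event_key(customer_id, ts):
    """Build an S3 object key that groups clickstream events by customer,
    with a short hash prefix to spread keys across the keyspace."""
    prefix = hashlib.md5(customer_id.encode()).hexdigest()[:4]
    return f"{prefix}/{customer_id}/{ts:%Y/%m/%d}/{ts:%H%M%S}.json"

def handler(event, context=None):
    """API Gateway proxy handler (sketch): store a web-analytics event
    next to the customer's purchase history in S3."""
    body = json.loads(event["body"])
    key = event_key(body["customer_id"], datetime.now(timezone.utc))
    # In a real deployment this would write the object with boto3:
    # boto3.client("s3").put_object(Bucket="analytics-bucket",  # hypothetical
    #                               Key=key, Body=json.dumps(body))
    return {"statusCode": 200, "body": json.dumps({"key": key})}
```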
This is your chance to learn directly from top CTOs and Cloud Architects from some of the most innovative AWS customers. In this lightning round session, we'll have an action-packed hour, jumping straight to the architecture and technical detail for some of the most innovative data storage solutions of 2017. Hear how Insitu collects and analyzes data from drone flights in the field with AWS Snowball Edge. See how iRobot collects and analyzes IoT data from their robotic vacuums, mops, and pool cleaners. Learn how Viber maintains a petabyte-scale data lake on Amazon S3. Understand how Alert Logic scales their massive SaaS cloud security solution on Amazon S3 & Amazon Glacier.
ABD202_Best Practices for Building Serverless Big Data Applications
Serverless technologies let you build and scale applications and services rapidly without the need to provision or manage servers. In this session, we show you how to incorporate serverless concepts into your big data architectures. We explore the concepts behind and benefits of serverless architectures for big data, looking at design patterns to ingest, store, process, and visualize your data. Along the way, we explain when and how you can use serverless technologies to streamline data processing, minimize infrastructure management, and improve agility and robustness. We also share a reference architecture that uses a combination of cloud and open-source technologies to solve your big data problems. Topics include: use cases and best practices for serverless big data applications; leveraging AWS technologies such as Amazon DynamoDB, Amazon S3, Amazon Kinesis, AWS Lambda, Amazon Athena, and Amazon EMR; and serverless ETL, event processing, ad hoc analysis, and real-time analytics.
In this session, you learn how to set up a crawler to automatically discover your data and build your AWS Glue Data Catalog. You then auto-generate an AWS Glue ETL script, download it, and interactively edit it using a Zeppelin notebook, connected to an AWS Glue development endpoint. After that, you upload this script to Amazon S3, reuse it across multiple jobs, and add trigger conditions to run the jobs. The resulting datasets automatically get registered in the AWS Glue Data Catalog and you can then query these new datasets from Amazon EMR and Amazon Athena. Prerequisites: Knowledge of Python and familiarity with big data applications is preferred but not required. Attendees must bring their own laptops.
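A minimal sketch of the crawler-setup step, using the AWS Glue API via boto3. The database name, S3 path, role ARN, and schedule below are placeholders; the actual API calls are left as comments because they require real IAM and S3 resources.

```python
def crawler_config(name, database, s3_path, role_arn, schedule=None):
    """Assemble the parameters for glue.create_crawler(): a crawler
    scans the S3 path and registers the discovered schema in the
    AWS Glue Data Catalog under the given database."""
    cfg = {
        "Name": name,
        "Role": role_arn,
        "DatabaseName": database,
        "Targets": {"S3Targets": [{"Path": s3_path}]},
    }
    if schedule:  # cron expression, e.g. a nightly run
        cfg["Schedule"] = schedule
    return cfg

cfg = crawler_config(
    "sales-crawler",                                   # hypothetical names
    "sales_db",
    "s3://example-bucket/raw/sales/",
    "arn:aws:iam::123456789012:role/GlueCrawlerRole",
    schedule="cron(0 2 * * ? *)",
)
# glue = boto3.client("glue")
# glue.create_crawler(**cfg)
# glue.start_crawler(Name=cfg["Name"])
```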
ABD301-Analyzing Streaming Data in Real Time with Amazon Kinesis
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. In this session, we present an end-to-end streaming data solution using Kinesis Streams for data ingestion, Kinesis Analytics for real-time processing, and Kinesis Firehose for persistence. We review in detail how to write SQL queries using streaming data and discuss best practices to optimize and monitor your Kinesis Analytics applications. Lastly, we discuss how to estimate the cost of the entire system.
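The ingestion side of such a pipeline comes down to shaping events as Kinesis records; the partition key determines which shard a record lands on, so a high-cardinality field keeps load spread evenly. A small sketch (the stream name and event fields are hypothetical):

```python
import json

def make_record(event, partition_key):
    """Shape one event as a Kinesis Streams record. The partition key
    determines the shard, so pick a field with many distinct values."""
    return {
        "Data": (json.dumps(event) + "\n").encode(),
        "PartitionKey": partition_key,
    }

rec = make_record({"ticker": "AMZN", "price": 1100.95},
                  partition_key="AMZN")
# The actual write would be, for example:
# boto3.client("kinesis").put_record(StreamName="prices", **rec)
```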
GAM301-Migrating the League of Legends Platform into AWS Cloud
For years, Riot Games deployed to their own private data centers across the globe to meet the growing demands of their game, League of Legends. The last seven years saw explosive growth in new data centers worldwide, along with a great deal of technical debt. This is Riot Games’ story of how they overcame their technical debt by taking the League of Legends platform into the AWS Cloud. AWS services gave Riot Games the infrastructure agility they were lacking within their data centers and empowered them to focus more on new player features and less on legacy infrastructure. In this session, you learn why and how Riot Games migrated their existing platform to the AWS Cloud and the advantages gained by the move using services such as Amazon EC2, Elastic Load Balancing (with Application Load Balancers), Amazon EBS, and Auto Scaling. Riot Games also shares how they created new automation toolsets to enable the existing tools and replace a few legacy ones.
GPSTEC313_GPS Real-Time Data Processing with AWS Lambda Quickly, at Scale, an...
Real-time data processing is a powerful technique that allows businesses to make agile automated decisions. This process is particularly powerful when applied to workloads like security, analyzing access logs, parsing audit logs, and monitoring API activity to detect behavior anomalies. Combined with automation, businesses can quickly take action to remediate security concerns, or even train a machine learning (ML) model. We explore different techniques for analyzing real-time streams on AWS using Lambda, Amazon Kinesis, Spark with Amazon EMR, and Amazon DynamoDB. We also cover best practices around short- and long-term storage and analysis of data and, briefly, the possibility of leveraging ML.
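A Lambda function wired to a Kinesis event source receives its records base64-encoded. Below is a minimal handler sketch that decodes them and applies a toy anomaly rule; the allow-list of API calls is purely illustrative, not part of the session's material.

```python
import base64
import json

EXPECTED_CALLS = {"GetObject", "PutObject"}  # illustrative allow-list

def handler(event, context=None):
    """Sketch of a Lambda consumer for a Kinesis stream of API-activity
    records: decode each record and flag calls outside the allow-list."""
    anomalies = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if payload.get("eventName") not in EXPECTED_CALLS:
            anomalies.append(payload)
    return {"anomalies": anomalies}

def _encode(doc):
    # Kinesis event sources deliver record data base64-encoded.
    return base64.b64encode(json.dumps(doc).encode()).decode()

sample_event = {"Records": [
    {"kinesis": {"data": _encode({"eventName": "GetObject"})}},
    {"kinesis": {"data": _encode({"eventName": "DeleteBucket"})}},
]}
result = handler(sample_event)
```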
Deploying Business Analytics at Enterprise Scale - AWS Online Tech Talks
Learning Objectives:
- Deploy business analytics to thousands of users using Active Directory and Federated SSO
- Securely access data sources in Amazon VPCs or on-premises and build data marts with SPICE
- Control access to your data sources, implement row-level security, and audit access to your data
FSV302_An Architecture for Trade Capture and Regulatory Reporting
For many securities organizations, post-trade processing is expensive, cumbersome, and time-consuming. This is in part due to the massive volumes of data required for processing a trade and the limited agility of the technology on which many organizations rely today. To create efficiencies and move faster, many financial services organizations are working with AWS to implement post-trade solutions built with AWS storage services (Amazon S3 and Amazon Glacier) and big data capabilities (Amazon Athena, Amazon EMR, Amazon Redshift, and Amazon QuickSight). In this session, we walk through a trade capture and regulatory reporting solution that uses the aforementioned AWS services. We also provide guidance around obtaining data-driven insights (from pixels to pictures); bolstering encryption with AWS KMS; and maintaining transparency and control with Amazon CloudWatch and Amazon CloudTrail (which also helps meet SEC Rule 613, which requires the creation of comprehensive consolidated audit trails).
Big Data Breakthroughs: Process and Query Data In Place with Amazon S3 Select...
Amazon S3 and Amazon Glacier provide the durable, scalable, secure, and cost-effective storage you need for your data lake. But as your data lake grows, the resources needed to analyze all the data can become expensive, or queries can take longer than desired. AWS provides query-in-place services, such as Amazon Athena and Amazon Redshift Spectrum, to help you analyze this data more easily and cost-effectively than ever before. In this session, we talk about how AWS query-in-place services and other tools work with Amazon S3 and Amazon Glacier, and the optimizations you can use to analyze and process this data cheaply and effectively.
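S3 Select, named in the session title, is exposed through the `SelectObjectContent` API: the SQL runs server-side against a single object, so only matching rows leave S3. A sketch of assembling the request (the bucket, key, and query are hypothetical, and the call itself is left as a comment):

```python
def select_params(bucket, key, sql):
    """Parameters for s3.select_object_content() against a CSV object
    whose first row is a header; results come back as JSON lines."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Expression": sql,
        "ExpressionType": "SQL",
        "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
        "OutputSerialization": {"JSON": {}},
    }

params = select_params(
    "example-lake",                      # hypothetical bucket and key
    "logs/2017/10/events.csv",
    "SELECT s.user_id FROM s3object s WHERE s.status = '500'",
)
# for event in boto3.client("s3").select_object_content(**params)["Payload"]:
#     if "Records" in event:
#         print(event["Records"]["Payload"].decode())
```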
A Look Under the Hood – How Amazon.com Uses AWS Services for Analytics at Mas...
Amazon’s consumer business continues to grow, and so does the volume of data and the number and complexity of the analytics done in support of the business. In this session, we talk about how Amazon.com uses AWS technologies to build a scalable environment for data and analytics. We look at how Amazon is evolving the world of data warehousing with a combination of a data lake and parallel, scalable compute engines such as Amazon EMR and Amazon Redshift.
AWS offers customers multiple solutions for federating identities on the AWS Cloud. In this session, we will embark on a tour of these solutions and the use cases they support. Along the way, we will dive deep with demonstrations and best practices to help you be successful managing identities on the AWS Cloud. We will cover how and when to use Security Assertion Markup Language 2.0 (SAML), OpenID Connect (OIDC), and other AWS native federation mechanisms. You will learn how these solutions enable federated access to the AWS Management Console, APIs, and CLI, AWS infrastructure and managed services, your web and mobile applications running on the AWS Cloud, and much more.
How Netflix Monitors Applications in Near Real-Time with Amazon Kinesis - ABD401
Thousands of services work in concert to deliver millions of hours of video streams to Netflix customers every day. These applications vary in size, function, and technology, but they all make use of the Netflix network to communicate. Understanding the interactions between these services is a daunting challenge both because of the sheer volume of traffic and the dynamic nature of deployments. In this session, we first discuss why Netflix chose Kinesis Streams to address these challenges at scale. We then dive deep into how Netflix uses Kinesis Streams to enrich network traffic logs and identify usage patterns in real time. Lastly, we cover how Netflix uses this system to build comprehensive dependency maps, increase network efficiency, and improve failure resiliency. From this session, you'll learn how to build a real-time application monitoring system using network traffic logs and get real-time, actionable insights.
AMF305_Autonomous Driving Algorithm Development on Amazon AI
Over the next decade, advances in autonomous driving technology, including artificial intelligence, sensors, cameras, radar, and data analytics, are set to transform how we commute. In this session, you learn how to use Amazon AI for a highly productive, on-demand, and scalable autonomous driving development environment. We compare the most popular AI frameworks, including TensorFlow and MXNet, for use in autonomous driving workloads. You learn about the AWS optimizations on MXNet that yield near-linear scalability for training deep neural networks and convolutional neural networks. We also demonstrate how easy it is to get started with AI on AWS by building an object detection model from a sample training dataset. This session is intended for audiences who have some exposure to the underlying concepts of AI-based autonomous driving development.
STG311_Deep Dive on Amazon S3 & Amazon Glacier Storage Management
Learn best practices for Amazon Simple Storage Service (Amazon S3) performance optimization, security, data protection, storage management, and much more. Learn how to optimize key naming to increase throughput, apply the appropriate AWS Identity and Access Management (IAM) and encryption configurations, and leverage object tagging and other features to enhance security.
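Two of the practices listed, encryption configuration and object tagging, meet in the `PutObject` request. A sketch of assembling one (the bucket name and tag values are illustrative; the call itself is left as a comment):

```python
from urllib.parse import urlencode

def put_params(bucket, key, body, team, classification):
    """Parameters for s3.put_object() combining SSE-KMS server-side
    encryption with object tags; tags can then drive IAM policy
    conditions and lifecycle rules. Tagging is a URL-encoded string."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",
        "Tagging": urlencode({"team": team,
                              "classification": classification}),
    }

p = put_params(
    "example-bucket",                  # hypothetical bucket and key
    "reports/2017/q4.csv",
    b"region,revenue\n",
    team="analytics",
    classification="internal",
)
# boto3.client("s3").put_object(**p)
```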
Given the complexity and scale of contemporary games, it’s been a dream of game creators to algorithmically generate game content. Nexon wanted to create a large-scale, open-world MMORPG called Durango, where algorithmic generation is desperately needed to minimize development costs and maintain game longevity. In-game objects such as trees and plants are placed based on complex rules, with the intention of mimicking a realistic ecosystem that evolves continuously. However, a large amount of computation and a careful orchestration of various computing resources are required due to the immense size of the in-game lands. Nexon achieved this goal by leveraging AWS services to take advantage of the massive parallelism supported by the infrastructure. In this talk, Nexon discusses the architecture they settled on for algorithmic generation of game content at large scale, and the AWS services involved, such as Amazon SQS and Amazon ECS with automatic scaling and Spot Instances.
Real-time Analytics using Data from IoT Devices - AWS Online Tech Talks
Learning Objectives:
- Learn the different options available to stream data from IoT sensors to AWS
- Understand how to architect an analytics solution using AWS services to ingest and process IoT data
- Take away best practices for building IoT applications with scalability, cost-effectiveness, and security
DVC303-Technological Accelerants for Organizational TransformationAmazon Web Services
"Developers and management can seem at cross purposes when one group looks at technologies and the other looks at organizational issues. Both groups are looking for ways to deliver value faster, leaner, and at less cost. There are technological avenues for accomplishing these goals, including DevOps and serverless architectures. However, these approaches also have organizational implications, as they change the nature and content of communication between teams. In this session, we cover the technology benefits and organizational transformations involved in DevOps and serverless architectures.
This session is part of the re:Invent Developer Community Day, six community-led sessions where AWS enthusiasts share technical insights on trending topics based on first-hand experiences and knowledge shared within local AWS communities."
ABD327_Migrating Your Traditional Data Warehouse to a Modern Data LakeAmazon Web Services
In this session, we discuss the latest features of Amazon Redshift and Redshift Spectrum, and take a deep dive into its architecture and inner workings. We share many of the recent availability, performance, and management enhancements and how they improve your end user experience. You also hear from 21st Century Fox, who presents a case study of their fast migration from an on-premises data warehouse to Amazon Redshift. Learn how they are expanding their data warehouse to a data lake that encompasses multiple data sources and data formats. This architecture helps them tie together siloed business units and get actionable 360-degree insights across their consumer base.
Reducing the time to get actionable insights from data is important to all businesses, and customers who employ batch data analytics tools are exploring the benefits of streaming analytics. Learn best practices to extend your architecture from data warehouses and databases to real-time solutions. Learn how to use Amazon Kinesis to get real-time data insights and integrate them with Amazon Aurora, Amazon RDS, Amazon Redshift, and Amazon S3. The Amazon Flex team describes how they used streaming analytics in their Amazon Flex mobile app, used by Amazon delivery drivers to deliver millions of packages each month on time. They discuss the architecture that enabled the move from a batch processing system to a real-time system, overcoming the challenges of migrating existing batch data to streaming data, and how to benefit from real-time analytics.
RET301-Build Single Customer View across Multiple Retail Channels using AWS S...Amazon Web Services
A challenge faced by many retailers is how to form an integrated single view of the customer across multiple retail channels to help you better understand purchasing behavior & patterns. In this session, we will present a solution that merges web analytics data with customer purchase history based on AWS API Gateway, Lambda and S3. Learn how to track customer purchase behaviors across different selling channels to better predict future needs and make relevant, intelligent recommendations.
This is your chance to learn directly from top CTOs and Cloud Architects from some of the most innovative AWS customers. In this lightning round session, we'll have an action-packed hour, jumping straight to the architecture and technical detail for some of the most innovative data storage solutions of 2017. Hear how Insitu collects and analyzes data from drone flights in the field with AWS Snowball Edge. See how iRobot collects and analyzes IoT data from their robotic vacuums, mops, and pool cleaners. Learn how Viber maintains a petabyte-scale data lake on Amazon S3. Understand how Alert Logic scales their massive SaaS cloud security solution on Amazon S3 & Amazon Glacier.
ABD202_Best Practices for Building Serverless Big Data ApplicationsAmazon Web Services
Serverless technologies let you build and scale applications and services rapidly without the need to provision or manage servers. In this session, we show you how to incorporate serverless concepts into your big data architectures. We explore the concepts behind and benefits of serverless architectures for big data, looking at design patterns to ingest, store, process, and visualize your data. Along the way, we explain when and how you can use serverless technologies to streamline data processing, minimize infrastructure management, and improve agility and robustness, and we share a reference architecture using a combination of cloud and open source technologies to solve your big data problems. Topics include: use cases and best practices for serverless big data applications; leveraging AWS technologies such as Amazon DynamoDB, Amazon S3, Amazon Kinesis, AWS Lambda, Amazon Athena, and Amazon EMR; and serverless ETL, event processing, ad hoc analysis, and real-time analytics.
In this session, you learn how to set up a crawler to automatically discover your data and build your AWS Glue Data Catalog. You then auto-generate an AWS Glue ETL script, download it, and interactively edit it using a Zeppelin notebook, connected to an AWS Glue development endpoint. After that, you upload this script to Amazon S3, reuse it across multiple jobs, and add trigger conditions to run the jobs. The resulting datasets automatically get registered in the AWS Glue Data Catalog and you can then query these new datasets from Amazon EMR and Amazon Athena. Prerequisites: Knowledge of Python and familiarity with big data applications is preferred but not required. Attendees must bring their own laptops.
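As a hedged sketch of the crawler setup described above, this helper builds the arguments that the AWS Glue `create_crawler` API accepts (the crawler name, role ARN, database, path, and schedule are all placeholders):

```python
def crawler_config(name, role_arn, database, s3_path):
    # Arguments for glue.create_crawler(): the crawler scans the S3
    # path, infers schemas, and registers tables in the Data Catalog.
    return {
        "Name": name,
        "Role": role_arn,
        "DatabaseName": database,
        "Targets": {"S3Targets": [{"Path": s3_path}]},
        "Schedule": "cron(0 * * * ? *)",  # re-crawl hourly (example)
    }

cfg = crawler_config("sales-crawler",
                     "arn:aws:iam::123456789012:role/GlueRole",
                     "sales_db", "s3://my-bucket/raw/")
# boto3.client("glue").create_crawler(**cfg) would create it.
```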
ABD301-Analyzing Streaming Data in Real Time with Amazon KinesisAmazon Web Services
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. In this session, we present an end-to-end streaming data solution using Kinesis Streams for data ingestion, Kinesis Analytics for real-time processing, and Kinesis Firehose for persistence. We review in detail how to write SQL queries using streaming data and discuss best practices to optimize and monitor your Kinesis Analytics applications. Lastly, we discuss how to estimate the cost of the entire system.
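The windowed SQL aggregations that Kinesis Analytics runs can be illustrated locally; this toy function mimics a tumbling-window `COUNT(*) ... GROUP BY` over timestamped events (timestamps in seconds, keys as strings, both assumptions for the demo):

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds=60):
    # Mimics a Kinesis Analytics tumbling-window COUNT(*) GROUP BY:
    # each (timestamp, key) event is bucketed by its window start.
    counts = Counter()
    for ts, key in events:
        counts[(ts - ts % window_seconds, key)] += 1
    return dict(counts)

events = [(3, "err"), (45, "err"), (61, "ok"), (70, "err")]
tumbling_window_counts(events)
# -> {(0, 'err'): 2, (60, 'ok'): 1, (60, 'err'): 1}
```

In the managed service, the equivalent logic is a streaming SQL statement with `GROUP BY ... ROWTIME` rather than application code.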
GAM301-Migrating the League of Legends Platform into AWS Cloud.pdfAmazon Web Services
For years, Riot Games deployed to their own private data centers across the globe to meet the growing demands of their game, League of Legends. The last seven years saw explosive growth in new data centers worldwide, along with a great deal of technical debt. This is Riot Games’ story of how they overcame their technical debt by taking the League of Legends platform into the AWS Cloud. AWS services gave Riot Games the infrastructure agility they were lacking within their data centers and empowered them to focus more on new player features and less on legacy infrastructure. In this session, you learn why and how Riot Games migrated their existing platform to the AWS Cloud and the advantages gained by the move using services such as Amazon EC2, Elastic Load Balancing (with Application Load Balancers), Amazon EBS, and Auto Scaling. Riot Games also shares how they created new automation toolsets to enable the existing tools and replace a few legacy ones.
GPSTEC313_GPS Real-Time Data Processing with AWS Lambda Quickly, at Scale, an...Amazon Web Services
Real-time data processing is a powerful technique that allows businesses to make agile automated decisions. This process is particularly powerful when applied to workloads like security, analyzing access logs, parsing audit logs, and monitoring API activity to detect behavior anomalies. Combined with automation, businesses can quickly take action to remediate security concerns, or even train a machine learning (ML) model. We explore different techniques for analyzing real-time streams on AWS using Lambda, Amazon Kinesis, Spark with Amazon EMR, and Amazon DynamoDB. We also cover best practices around short- and long-term storage and analysis of data and, briefly, the possibility of leveraging ML.
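A Lambda function attached to a Kinesis stream receives its records base64-encoded inside the event payload. A minimal sketch of such a handler (the `status_code >= 500` anomaly rule is a toy assumption standing in for real detection logic):

```python
import base64
import json

def handler(event, context=None):
    # Lambda entry point for a Kinesis event source. Records arrive
    # base64-encoded under event["Records"][i]["kinesis"]["data"];
    # decode each one and apply a simple anomaly rule.
    alerts = []
    for rec in event["Records"]:
        payload = json.loads(base64.b64decode(rec["kinesis"]["data"]))
        if payload.get("status_code", 200) >= 500:  # toy anomaly rule
            alerts.append(payload)
    return {"alerts": alerts}
```

Flagged records would typically be forwarded to SNS, written to DynamoDB, or fed to an ML training pipeline, as the session discusses.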
Deploying Business Analytics at Enterprise Scale - AWS Online Tech TalksAmazon Web Services
Learning Objectives:
- Deploy business analytics to thousands of users using Active Directory and Federated SSO
- Securely access data sources in Amazon VPCs or on-premises and build data marts with SPICE
- Control access to your data sources, implement row-level security, and audit access to your data
FSV302_An Architecture for Trade Capture and Regulatory ReportingAmazon Web Services
For many securities organizations, post-trade processing is expensive, cumbersome, and time-consuming. This is in part due to the massive volumes of data required for processing a trade and the limited agility of the technology on which many organizations rely today. In order to create efficiencies and move faster, many financial services organizations are working with AWS to implement post-trade solutions built with AWS storage services (Amazon S3 and Amazon Glacier) and big data capabilities (Amazon Athena, Amazon EMR, Amazon Redshift, and Amazon QuickSight). In this session, we walk through a trade capture and regulatory reporting solution that uses the aforementioned AWS services. We also provide guidance around obtaining data-driven insights (from pixels to pictures); bolstering encryption with AWS KMS; and maintaining transparency and control with Amazon CloudWatch and Amazon CloudTrail (which also helps meet SEC Rule 613 that requires the creation of comprehensive consolidated audit trails).
Big Data Breakthroughs: Process and Query Data In Place with Amazon S3 Select...Amazon Web Services
Amazon S3 & Amazon Glacier provide the durable, scalable, secure and cost-effective storage you need for your data lake. But, as your data lake grows, the resources needed to analyze all the data can become expensive, or queries may take longer than desired. AWS provides query-in-place services like Amazon Athena and Amazon Redshift Spectrum to help you analyze this data easily and more cost-effectively than ever before. In this session, we will talk about how AWS query-in-place services and other tools work with Amazon S3 & Amazon Glacier and the optimizations you can use to analyze and process this data, cheaply and effectively.
A Look Under the Hood – How Amazon.com Uses AWS Services for Analytics at Mas...Amazon Web Services
Amazon’s consumer business continues to grow, and so does the volume of data and the number and complexity of the analytics done in support of the business. In this session, we talk about how Amazon.com uses AWS technologies to build a scalable environment for data and analytics. We look at how Amazon is evolving the world of data warehousing with a combination of a data lake and parallel, scalable compute engines such as Amazon EMR and Amazon Redshift.
AWS offers customers multiple solutions for federating identities on the AWS Cloud. In this session, we will embark on a tour of these solutions and the use cases they support. Along the way, we will dive deep with demonstrations and best practices to help you be successful managing identities on the AWS Cloud. We will cover how and when to use Security Assertion Markup Language 2.0 (SAML), OpenID Connect (OIDC), and other AWS native federation mechanisms. You will learn how these solutions enable federated access to the AWS Management Console, APIs, and CLI, AWS Infrastructure and Managed Services, your web and mobile applications running on the AWS Cloud, and much more.
How Netflix Monitors Applications in Near Real-time w Amazon Kinesis - ABD401...Amazon Web Services
Thousands of services work in concert to deliver millions of hours of video streams to Netflix customers every day. These applications vary in size, function, and technology, but they all make use of the Netflix network to communicate. Understanding the interactions between these services is a daunting challenge both because of the sheer volume of traffic and the dynamic nature of deployments. In this session, we first discuss why Netflix chose Kinesis Streams to address these challenges at scale. We then dive deep into how Netflix uses Kinesis Streams to enrich network traffic logs and identify usage patterns in real time. Lastly, we cover how Netflix uses this system to build comprehensive dependency maps, increase network efficiency, and improve failure resiliency. From this session, you'll learn how to build a real-time application monitoring system using network traffic logs and get real-time, actionable insights.
AMF305_Autonomous Driving Algorithm Development on Amazon AIAmazon Web Services
Over the next decade, advances in autonomous driving technology, including artificial intelligence, sensors, cameras, radar, and data analytics, are set to transform how we commute. In this session, you learn how to use Amazon AI for a highly productive, on-demand, and scalable autonomous driving development environment. We compare the most popular AI frameworks, including TensorFlow and MXNet, for use in autonomous driving workloads. You learn about the AWS optimizations on MXNet that yield near-linear scalability for training deep neural networks and convolutional neural networks. We demonstrate how easy it is to get started with AWS AI by using a sample training dataset to build an object detection model. This session is intended for audiences who have some exposure to the underlying concepts of AI-based autonomous driving development.
STG311_Deep Dive on Amazon S3 & Amazon Glacier Storage ManagementAmazon Web Services
Learn best practices for Amazon Simple Storage Service (Amazon S3) performance optimization, security, data protection, storage management, and much more. Learn how to optimize key naming to increase throughput, apply the appropriate AWS Identity and Access Management (IAM) and encryption configurations, and leverage object tagging and other features to enhance security.
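The key-naming advice above can be sketched as a hash-prefix helper (note: S3 has since improved automatic partitioning, so treat this as an illustration of the technique the session describes, not a current requirement):

```python
import hashlib

def randomized_key(logical_key, prefix_len=4):
    # Prepend a short, deterministic hash so object keys spread across
    # partitions instead of piling onto one lexicographic prefix
    # (e.g. every key starting with today's date).
    prefix = hashlib.md5(logical_key.encode()).hexdigest()[:prefix_len]
    return f"{prefix}/{logical_key}"

randomized_key("2017/11/27/access.log")
```

Because the prefix is derived from the key itself, readers can recompute it and still address objects directly.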
Given the complexity and scale of contemporary games, it’s been a dream of game creators to algorithmically generate game content. Nexon wanted to create a large-scale, open-world MMORPG called Durango, where algorithmic generation is desperately needed to minimize development costs and maintain game longevity. In-game objects such as trees and plants are placed based on complex rules, with the intention of mimicking a realistic ecosystem that evolves continuously. However, a large amount of computation and a careful orchestration of various computing resources are required due to the immense size of the in-game lands. Nexon achieved this goal by leveraging AWS services to take advantage of the massive parallelism supported by the infrastructure. In this talk, Nexon discusses the architecture they settled on for large-scale algorithmic generation of game content, and the AWS services involved, such as Amazon SQS and Amazon ECS with automatic scaling and Spot Instances.
Real-time Analytics using Data from IoT Devices - AWS Online Tech TalksAmazon Web Services
Learning Objectives:
- Learn the different options available to stream data from IoT sensors to AWS
- Understand how to architect an analytics solution using AWS services to ingest and process IoT data
- Take away best practices for building IoT applications with scalability, cost-effectiveness, and security
How TrueCar Gains Actionable Insights with Splunk Cloud PPTAmazon Web Services
The vast amount of big data that today’s companies generate makes it difficult to separate the signal from the noise. Organizations need to derive meaningful insights into operations and business to take action. TrueCar needed a better way to manage, search, and analyze their hybrid environment. In this webinar, you’ll learn how TrueCar centralized all of their data in one place using Amazon Kinesis and Splunk Cloud, gaining deep visibility, scalability, and the ability to monitor and troubleshoot operational issues – all while migrating to AWS.
NEW LAUNCH! AWS PrivateLink: Bringing SaaS Solutions into Your VPCs and Your ...Amazon Web Services
Many customers are hesitant to adopt SaaS solutions because of concerns about network traffic traversing the public internet, and because managing firewall rules, NAT gateways, or VPN connections is difficult. AWS PrivateLink provides a solution that lets customer applications, whether in a VPC or in their own data center, connect to SaaS solutions in a highly scalable and highly available manner, while keeping all network traffic within the AWS network.
Slides from my talk at the IP Expo Nordic 2017:
https://www.ipexponordic.com/Speakers-2017/Adrian-Hornsby
Speed and agility are essential for today’s businesses. The quicker you can get from an idea to first results, the more you can experiment and innovate with your data, perform ad-hoc analysis, and drive answers to new business questions. During this talk, Adrian walks through key features of the AWS IoT platform, the latest developments, and live demos.
When migrating lots of applications to the AWS Cloud, it’s important to architect cloud environments that are efficient, secure, and compliant. Landing zones are a prescriptive set of instructions for deploying an AWS-recommended foundation of interrelated AWS accounts, networks, and core services for your initial AWS application environments. In this session, we will review the benefits and best practices for developing landing zones as well as how to incorporate them into your migration process.
ABD206-Building Visualizations and Dashboards with Amazon QuickSightAmazon Web Services
Just as a picture is worth a thousand words, a visual is worth a thousand data points. A key aspect of our ability to gain insights from our data is to look for patterns, and these patterns are often not evident when we simply look at data in tables. The right visualization will help you gain a deeper understanding in a much quicker timeframe. In this session, we will show you how to quickly and easily visualize your data using Amazon QuickSight. We will show you how you can connect to data sources, generate custom metrics and calculations, create comprehensive business dashboards with various chart types, and set up filters and drill-downs to slice and dice the data.
NEW LAUNCH! AWS IoT Analytics from Consumer IoT to Industrial IoT - IOT211 - ...Amazon Web Services
This session is an overview of IoT Analytics challenges and use cases with our customers. This session will cover analytics use cases from Consumer IoT to Industrial IoT. It will then show how AWS IoT Analytics helps customers solve these challenges in different IoT verticals.
IOT311_Customer Stories of Things, Cloud, and Analytics on AWSAmazon Web Services
In this session, AWS IoT customers talk about the nuances, successes, and challenges of running large-scale IoT deployments on AWS. Hear from customers who have been operating on AWS IoT. Learn from their war stories of development and their architectural recommendations on technical best practices on IoT.
Many customers want a disaster recovery environment, and they want to use this environment daily and know that it's in sync with and can support a production workload. This leads them to an active-active architecture. In other cases, users like Netflix and Lyft are distributed over large geographies. In these cases, multi-region active-active deployments are not optional. Designing these architectures is more complicated than it appears, as data being generated at one end needs to be synced with data at the other end. There are also consistency issues to consider. One needs to make trade-off decisions on cost, performance, and consistency. Further complicating matters, the variety of data stores used in the architecture results in a variety of replication methods. In this session, we explore how to design an active-active multi-region architecture using AWS services, including Amazon Route 53, Amazon RDS multi-region replication, AWS DMS, and Amazon DynamoDB Streams. We discuss the challenges, trade-offs, and solutions.
I Want to Analyze and Visualize Website Access Logs, but Why Do I Need Server...Amazon Web Services
Nowadays, it’s common for a web server to be fronted by a global content delivery service, such as Amazon CloudFront, to accelerate delivery of websites, APIs, media content, and other web assets. Website administrators and developers want to generate insights in order to improve website availability through bot detection and mitigation, by optimizing web content based on the devices and browser used, by reducing perceived latency by caching a popular object closer to its viewer, and so on. In this session, we dive deep into building an end-to-end serverless analytics solution to analyze Amazon CloudFront access logs, both at rest and in transit, using Amazon Athena and Amazon Kinesis Analytics, respectively, and we generate visualization insights using Amazon QuickSight. Join a discussion with AWS solution architects to learn more about the various ways to generate insights to improve the overall perceived experience for your website users.
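CloudFront access logs are tab-separated text, so before querying them with Athena it helps to know the column layout. A toy parser over an abbreviated field list (real CloudFront logs carry more columns than shown here; the names below are shortened for the example):

```python
CF_FIELDS = ("date", "time", "edge", "sc_bytes", "c_ip",
             "method", "host", "uri", "status")  # abbreviated list

def parse_cf_line(line, fields=CF_FIELDS):
    # CloudFront log lines are tab-separated; zip the leading columns
    # to names so they can be filtered or aggregated.
    return dict(zip(fields, line.rstrip("\n").split("\t")))

row = parse_cf_line("2017-11-27\t12:00:01\tIAD79-C1\t2048\t"
                    "203.0.113.9\tGET\texample.com\t/index.html\t200")
```

An Athena table definition for the same logs encodes exactly this column mapping as DDL, letting you run the parse as SQL over the data at rest.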
NEW LAUNCH! Data Driven Apps with GraphQL: AWS AppSync Deep Dive - MBL402 - r...Amazon Web Services
State and data management are the foundations of high-quality web and mobile applications in use today. Unfortunately, it’s often difficult to set up infrastructure for accessing data in a scalable, efficient, and secure manner and to integrate it easily into your frontend or client framework of choice. In this session you’ll see how using GraphQL with AWS AppSync allows your clients to quickly and securely access data in the cloud using Amazon DynamoDB, AWS Lambda, or Amazon Elasticsearch. You’ll see how flexible the system is in letting web and mobile application developers define their data structures, mix and match the backing stores using GraphQL resolvers, and customize mapping templates as needed. We’ll take you through effective use of GraphQL queries, mutations, and subscriptions for batch and real-time needs. Finally, you’ll see how easy it is for application developers to leverage the system using React Native, iOS, and JavaScript web applications.
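A client talking to a GraphQL endpoint such as AppSync POSTs a JSON body containing the query text and its variables. A minimal sketch (the `listEvents` schema is an illustrative assumption, not part of AppSync's API):

```python
import json

LIST_EVENTS = """
query ListEvents($limit: Int) {
  listEvents(limit: $limit) {
    items { id name when }
  }
}
"""

def graphql_request(query, variables=None):
    # JSON body an HTTP client would POST to a GraphQL endpoint;
    # the server resolves fields via its configured resolvers.
    return json.dumps({"query": query, "variables": variables or {}})

body = graphql_request(LIST_EVENTS, {"limit": 10})
```

Mutations and subscriptions use the same envelope; only the operation keyword in the query text changes.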
Design, Build, and Modernize Your Web Applications with AWSDonnie Prakoso
The cloud makes it super easy for you to spin up your desired IT resources. But the true value of the cloud lies in its capability to provide you a set of building blocks for your applications. Join us in this hands-on session to understand how to use Amazon Virtual Private Cloud (VPC) and Amazon Elastic Compute Cloud (EC2), along with Amazon EC2 Auto Scaling and Elastic Load Balancing, to design a scalable architecture and build your applications in no time. Moreover, we discover how to modernize your application with the help of our serverless service, AWS Lambda.
ARC306_High Resiliency & Availability Of Online Entertainment Communities Usi...Amazon Web Services
As online engagement grows in popularity as a means of entertainment, a wide range of online communities has emerged. These communities need to be highly available and resilient at scale, because a loss of availability can be fatal to the products customers use. We share the process you should use to develop architectural principles that allow you to reap the benefits of reduced complexity.
Containers on AWS - State of the Union - CON201 - re:Invent 2017Amazon Web Services
Just over four years after the first public release of Docker, and three years to the day after the launch of Amazon EC2 Container Service, the use of containers has surged to run a significant percentage of production workloads at startups and enterprise organizations. Join Deepak Singh, General Manager of Amazon Container Services, as we cover the state of containerized application development and deployment trends, new container capabilities on AWS that are available now, options for running containerized applications on AWS, and how AWS customers successfully run container workloads in production.
Healthcare Payers and Serverless Batch Processing Engines - HLC308 - re:Inven...Amazon Web Services
In this session, hear how Cambia Health Solutions, a not-for-profit total health solutions company, created a self-service data model to convert a large-scale, on-premises batch processing model to a cloud-based, real-time pub-sub and RESTful API model. Learn how Cambia leveraged AWS services like Amazon Aurora, AWS Database Migration Service (AWS DMS), AWS Lambda, and AWS messaging services to create an architecture that provides a reasonable runway for legacy customers to convert from old mode to new mode and, at the same time, offer a fast track for onboarding new customers.
Similar to ABD208: Cox Automotive Empowered to Scale with Splunk Cloud & AWS and Explores New Innovation with Amazon Kinesis Firehose
How to Build Forecasting Services Using ML and Deep Learn...Amazon Web Services
Forecasting is an important process for many companies and is used in many areas to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session, we show how to pre-process data that contains a time component and then use an algorithm that, starting from the type of data analyzed, produces an accurate forecast.
Big Data for Startups: How to Create Big Data Applications in Server...Amazon Web Services
The variety and quantity of data created every day is growing ever faster and represents an unrepeatable opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale big data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud, and serverless services in particular, let us break through these limits.
Let's see, then, how to develop big data applications quickly, without worrying about infrastructure, dedicating all our resources to developing the ideas behind innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session, we present the main features of the service and how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing the pace of innovation. Over this period, we learned how changing our approach to application development greatly increased our agility and release velocity, and ultimately allowed us to build more reliable and scalable applications. In this session, we explain how we define modern applications and how building modern apps affects not only application architecture but also organizational structure, development release pipelines, and even the operating model. We also describe common approaches to modernization, including the one used by Amazon.com itself.
How to Spend up to 90% Less with Containers and Spot Instances Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can all take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand Instances. In this session, we discover the characteristics of Spot Instances and how they can easily be used on AWS. We also learn how Spreaker uses Spot Instances to run applications of different kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda :
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make Your Startup's Market Offering Unique with Machine Lea...Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides services that are ready to use and, at the same time, lets you customize and create the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we see how to select the artificial intelligence services offered by AWS and, through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: Automate the Management and Deployments of...Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years: they often involved manual work, leading from time to time to application downtime that interrupted users' operations. With the advent of the cloud, DevOps techniques are now within everyone's reach at low cost for any kind of workload, guaranteeing greater system reliability and delivering significant improvements in business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Learn how to use AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to Support Your Windows WorkloadsAmazon Web Services
Want to learn the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it's important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis powered by artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar, we explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event next Wednesday, October 14th, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in VMware vSphere®-based cloud environments and access a wide range of AWS services, taking full advantage of the AWS cloud while protecting existing VMware investments.
Build Your First Serverless Ledger-Based App with QLDB and NodeJSAmazon Web Services
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply-chain flow of their products.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session, we discover how to build a complete serverless application that uses QLDB's features.
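QLDB's journal is cryptographically verifiable. As a toy illustration (this is not the QLDB driver API, and Python is used here for consistency with the other examples even though the session uses NodeJS), an append-only hash chain shows why tampering with history is detectable:

```python
import hashlib
import json

def append(ledger, doc):
    # Each entry's digest covers the previous digest plus the new
    # document, so altering any past document breaks every later digest.
    prev = ledger[-1]["digest"] if ledger else "0" * 64
    data = (prev + json.dumps(doc, sort_keys=True)).encode()
    ledger.append({"doc": doc, "digest": hashlib.sha256(data).hexdigest()})
    return ledger

def verify(ledger):
    # Recompute the chain from the start; any mismatch means tampering.
    prev = "0" * 64
    for entry in ledger:
        data = (prev + json.dumps(entry["doc"], sort_keys=True)).encode()
        if hashlib.sha256(data).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True
```

QLDB performs an analogous (Merkle-tree-based) computation for you and exposes it through its digest and proof APIs.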
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for offering end users an excellent user experience. In this session, we learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We dig into several scenarios, understanding how AppSync can help solve these use cases by creating modern APIs with real-time and offline data update capabilities.
We also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle Databases and VMware Cloud™ on AWS: Myths to DebunkAmazon Web Services
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips for easing and simplifying the migration of Oracle workloads while accelerating the transformation toward the cloud; they dive into the architecture and show how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and lifecycle. In this session, we present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
A leading provider of products and services that span the automotive ecosystem worldwide. More than 20 brands that together provide end-to-end solutions for customers large and small. A subsidiary of Cox Enterprises, Inc.