Join us for a series of introductory and technical sessions on AWS Big Data solutions. Gain a thorough understanding of what Amazon Web Services offers across the big data lifecycle and learn architectural best practices for applying those solutions to your projects.
We will kick off this technical seminar in the morning with an introduction to the AWS Big Data platform, including a discussion of popular use cases and reference architectures. In the afternoon, we will deep dive into Machine Learning and Streaming Analytics. We will then walk everyone through building your first Big Data application with AWS.
AWS re:Invent 2016: How to Build a Big Data Analytics Data Lake (LFS303) | Amazon Web Services
For discovery-phase research, life sciences companies have to support infrastructure that processes millions to billions of transactions. The advent of a data lake to accomplish such a task is showing itself to be a stable and productive data platform pattern to meet the goal. We discuss how to build a data lake on AWS, using services and techniques such as AWS CloudFormation, Amazon EC2, Amazon S3, IAM, and AWS Lambda. We also review a reference architecture from Amgen that uses a data lake to aid in their Life Science Research.
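As a rough illustration of the infrastructure-as-code approach the session describes, here is a minimal boto3 sketch that launches a hypothetical data lake CloudFormation stack; the stack name, template URL, and parameter are placeholders, not Amgen's actual architecture:

```python
# Hypothetical sketch: bootstrap a data lake foundation stack via CloudFormation.
# The stack name, template URL, and bucket name are placeholders for illustration.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

response = cfn.create_stack(
    StackName="datalake-foundation",  # hypothetical stack name
    TemplateURL="https://s3.amazonaws.com/my-templates/datalake.yaml",  # placeholder
    Capabilities=["CAPABILITY_IAM"],  # the template is assumed to create IAM roles
    Parameters=[
        {"ParameterKey": "LandingBucketName", "ParameterValue": "my-datalake-landing"},
    ],
)

# Block until the stack is fully created before loading any data into it.
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName=response["StackId"])
print("Data lake stack ready:", response["StackId"])
```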
Antoine Genereux takes us on a detailed overview of the Database solutions available on the AWS Cloud, addressing the needs and requirements of customers at all levels. He also discusses Business Intelligence and Analytics solutions.
AWS Summit Singapore - Architecting a Serverless Data Lake on AWS | Amazon Web Services
Unni Pillai, Specialist Solution Architect, ASEAN, AWS.
Daniel Muller, Head of Cloud Infrastructure, Spuul.
As the volume and types of data continue to grow, customers often have valuable data that is not easily discoverable and available for analytics. A common challenge for data engineering teams is architecting a data lake that can cater to the needs of diverse users - from developers to business analysts to data scientists.
In this session, we will dive deep into building a data lake using Amazon S3, Amazon Kinesis, Amazon Athena and AWS Glue. We will also see how AWS Glue crawlers can automatically discover your data, extracting and cataloguing relevant metadata to reduce the manual work of preparing your data for downstream consumers (see the sketch after this abstract).
Furthermore, learn from our customer Spuul how they moved from data-warehouse-based analytics to a serverless data lake. Why and how did Spuul undertake this journey? Hear about the benefits and challenges they encountered.
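A minimal boto3 sketch of the crawler pattern mentioned above; the crawler name, IAM role, catalog database, and S3 path are all assumptions for illustration:

```python
# Illustrative sketch: register raw S3 data in the Glue Data Catalog with a crawler.
# The role ARN, database name, and S3 path are placeholders, not a real deployment.
import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="raw-events-crawler",  # hypothetical crawler name
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # placeholder role
    DatabaseName="datalake_raw",  # catalog database the crawler populates
    Targets={"S3Targets": [{"Path": "s3://my-datalake/raw/events/"}]},
    Schedule="cron(0 2 * * ? *)",  # optional: re-crawl nightly for new partitions
)

# Run it once immediately; discovered tables become queryable from Athena and EMR.
glue.start_crawler(Name="raw-events-crawler")
```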
ENT314 Automate Best Practices and Operational Health for Your AWS Resources | Amazon Web Services
"It can be challenging to optimize AWS resources across cost, performance, security and fault-tolerance, much less do it automatically. AWS Trusted Advisor is an online resource to help you do just that, by providing real time guidance to help you provision your resources following AWS best practices. In this session, we will go over how to safely automate these best practices using Amazon CloudWatch events and AWS Lambda along with samples for you to use.
AWS Personal Health Dashboard (PHD) provides alerts and remediation guidance when AWS is experiencing events that may impact your AWS environment. The AWS Health API, the underlying service powering PHD integrates with Amazon CloudWatch Events, enabling you to trigger AWS Lambda functions to define automated remediation actions. We will also introduce you to AWS Health tools, a community-based source of tools to automate remediation actions and customize Health alerts.
Come join us to see how you can implement automation of AWS best practice recommendations from Trusted Advisor and remediation from the AWS Health API on your AWS resources."
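A minimal sketch of the event-driven pattern the session describes, assuming a CloudWatch Events rule that forwards AWS Health events (source "aws.health") to this Lambda function; the SNS topic ARN is a placeholder, and real remediation logic would go where noted:

```python
# Hedged sketch: a Lambda handler invoked by a CloudWatch Events rule matching
# AWS Health events. The topic ARN is a placeholder; this only notifies operators.
import json
import boto3

sns = boto3.client("sns")
ALERT_TOPIC = "arn:aws:sns:us-east-1:123456789012:ops-alerts"  # hypothetical topic

def handler(event, context):
    detail = event.get("detail", {})
    service = detail.get("service", "unknown")
    event_type = detail.get("eventTypeCode", "unknown")

    # Notify operators; a real setup might branch here into automated remediation
    # (e.g., restarting instances or rotating resources) based on event_type.
    sns.publish(
        TopicArn=ALERT_TOPIC,
        Subject=f"AWS Health event: {service}",
        Message=json.dumps(detail, indent=2),
    )
    return {"service": service, "eventTypeCode": event_type}
```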
Collecting, maintaining, and analyzing data is key to keeping pace within any industry today. In addition to being a critical competitive asset, maintaining corporate data requires careful foundational planning to ensure that the data is secure at all stages. Your big data may include not only proprietary non-public information, but also controlled data that must adhere to regulations such as HIPAA or ITAR. Securing this data while maintaining access for authorized data analytics and reporting workloads can pose significant challenges. In this talk, you’ll learn about strategies leveraging tools such as AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), Amazon S3, and Amazon EMR to secure your big data workloads in the cloud.
Level: 200
Speaker: Hannah Marlowe - Consultant, Federal, WWPS Professional Services
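One concrete strategy along these lines is to make SSE-KMS the default encryption for a bucket that holds regulated data. A minimal boto3 sketch, with a placeholder bucket name and key alias:

```python
# Small sketch of one encryption-at-rest strategy: make SSE-KMS the bucket
# default. The bucket name and CMK alias are placeholders for illustration.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="my-secure-bigdata-bucket",  # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/bigdata-key",  # placeholder KMS key alias
                }
            }
        ]
    },
)
# From here on, objects written without explicit encryption headers are
# encrypted with the KMS key by default.
```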
An overview of Amazon Kinesis Firehose, Amazon Kinesis Analytics, and Amazon Kinesis Streams so you can quickly get started with real-time streaming data.
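A quick-start sketch for the Streams piece, assuming a pre-existing stream with a hypothetical name; it writes one record and reads it back:

```python
# Quick-start sketch for Amazon Kinesis Streams. The stream name is a
# placeholder and the stream (with at least one shard) is assumed to exist.
import json
import boto3

kinesis = boto3.client("kinesis")
STREAM = "clickstream-demo"  # hypothetical stream

# Producer side: records are routed to shards by partition key.
kinesis.put_record(
    StreamName=STREAM,
    Data=json.dumps({"user": "u-42", "action": "page_view"}).encode("utf-8"),
    PartitionKey="u-42",
)

# Consumer side: iterate one shard from its oldest available record.
shard_id = kinesis.describe_stream(StreamName=STREAM)["StreamDescription"]["Shards"][0]["ShardId"]
it = kinesis.get_shard_iterator(
    StreamName=STREAM, ShardId=shard_id, ShardIteratorType="TRIM_HORIZON"
)["ShardIterator"]
for record in kinesis.get_records(ShardIterator=it)["Records"]:
    print(record["Data"].decode("utf-8"))
```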
AWS re:Invent 2016 | HLC301 | Data Science and Healthcare: Running Large Scale... | Amazon Web Services
Working with Amazon Web Services (AWS) and 1Strategy, an Advanced AWS Consulting Partner, the Cambia Health data science teams have been able to deploy HIPAA-compliant, secured Amazon EMR data pipelines in the cloud. In this session, we will dive deep into the architectural components of this solution, and you will learn how utilizing AWS services has helped Cambia decrease processing time for analytics, increase application flexibility, and accelerate speed to production. The second part of the session covers machine learning and its role in reducing cost and improving quality of care. The healthcare community must rely on advanced analytics and machine learning to analyze multiple facets of healthcare data and process it at scale to gain insights on the things that matter. You will learn why AWS is a well-suited platform for machine learning. We will take you through the steps of building a machine learning model using Amazon ML for the real-world problem of predicting patient readmissions.
The introductory morning session will discuss big data challenges and provide an overview of the AWS Big Data Platform. We will also cover:
• How AWS customers leverage the platform to manage massive volumes of data from a variety of sources while containing costs.
• Reference architectures for popular use cases, including: connected devices (IoT), log streaming, real-time intelligence, and analytics.
• The AWS big data portfolio of services, including Amazon S3, Kinesis, DynamoDB, Elastic MapReduce (EMR) and Redshift.
• The latest relational database engine, Amazon Aurora - a MySQL-compatible, highly available relational database engine that provides up to five times better performance than MySQL at one-tenth the cost of a commercial database.
• Amazon Machine Learning - the latest big data service from AWS, which provides visualization tools and wizards that guide you through the process of creating machine learning (ML) models without having to learn complex ML algorithms and technology.
A data lake allows an organisation to store all of its data, structured and unstructured, in one centralised repository. Since data can be stored as-is, there is no need to convert it to a predefined schema, and you no longer need to know in advance what questions you want to ask of your data. In this session we will explore the architecture of a data lake on AWS and cover topics such as storage, processing and security.
Speakers:
Tom McMeekin, Associate Solutions Architect, Amazon Web Services
This overview presentation discusses big data challenges and provides an overview of the AWS Big Data Platform by covering:
- How AWS customers leverage the platform to manage massive volumes of data from a variety of sources while containing costs.
- Reference architectures for popular use cases, including connected devices (IoT), log streaming, real-time intelligence, and analytics.
- The AWS big data portfolio of services, including Amazon S3, Kinesis, DynamoDB, Elastic MapReduce (EMR), and Redshift.
- The latest relational database engine, Amazon Aurora - a MySQL-compatible, highly available relational database engine which provides up to five times better performance than MySQL at one-tenth the cost of a commercial database.
Created by: Rahul Pathak, Sr. Manager of Software Development
Big Data Architectural Patterns and Best Practices on AWS | Amazon Web Services
The world is producing an ever-increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many technologies for solving big data problems. But what services should you use, why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architectures, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
Two years ago, if someone had claimed they could stand up a petabyte-scale data warehouse in under an hour and then have a non-technical business user querying it live 30 minutes later without knowing any SQL or coding language, they would have been laughed out of the room. These days, that’s called taking advantage of disruptive technology. Amazon Web Services and Tableau Software have shifted the entire paradigm by which organizations not only store and access their data, but ultimately how they innovate with it. The fast, scalable, and inexpensive services that AWS provides for housing data, combined with Tableau’s remarkably flexible and user-friendly visual analytics solution, mean that within hours an organization can securely put the power of its massive data assets into the hands of its domain experts without expensive overhead or lengthy ramp-up time. Attend this webinar to learn how Amazon Web Services and Tableau Software are leveraged together every day to:
• Empower visual ad-hoc data discovery against big data
• Revolutionize corporate reporting and dashboards
• Promote data-driven decision making at every level
The presentation will include:
• A live demonstration of AWS and Tableau working together
• A real customer case study focused on fraud detection and online video metrics
• Live Q&A and an opportunity to trial both solutions
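For a sense of the provisioning half of that claim, here is a hedged boto3 sketch that launches a small Amazon Redshift cluster; the identifiers and credentials are placeholders, not a production configuration:

```python
# Hedged sketch: provision a small Redshift cluster programmatically, then point
# Tableau (or any PostgreSQL-compatible client) at its endpoint. All identifiers
# and credentials below are placeholders.
import boto3

redshift = boto3.client("redshift")

redshift.create_cluster(
    ClusterIdentifier="analytics-demo",  # hypothetical cluster name
    NodeType="dc2.large",
    NumberOfNodes=2,
    DBName="analytics",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME_Str0ng!",  # never hard-code real credentials
)

# Wait until the cluster is available before connecting BI tools to it.
redshift.get_waiter("cluster_available").wait(ClusterIdentifier="analytics-demo")
print("Cluster ready for queries")
```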
Building Data Lakes and Analytics on AWS; Patterns and Best Practices - BDA30... | Amazon Web Services
In this session, we show you how to understand what data you have, how to drive insights, and how to make predictions using purpose-built AWS services. Learn about the common pitfalls of building data lakes, and discover how to successfully drive analytics and insights from your data. Also learn how services such as Amazon S3, AWS Glue, Amazon Redshift, Amazon Athena, Amazon EMR, Amazon Kinesis, and Amazon ML services work together to build a successful data lake for various roles, including data scientists and business users.
AWS re:Invent 2016: Visualizing Big Data Insights with Amazon QuickSight (BDM... | Amazon Web Services
Amazon QuickSight is a fast BI service that makes it easy for you to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data. QuickSight is built to harness the power and scalability of the cloud, so you can easily run analysis on large datasets and support hundreds of thousands of users. In this session, we’ll demonstrate how you can easily get started with Amazon QuickSight - uploading files, connecting to S3 and Redshift, and creating analyses from visualizations that are optimized based on the underlying data. Once we’ve built our analysis and dashboard, we’ll show you how easy it is to share them with colleagues and stakeholders in just a few seconds. And with SPICE - QuickSight’s in-memory calculation engine - you can go from data to insights faster than ever.
AWS Webcast - Managing Big Data in the AWS Cloud_20140924 | Amazon Web Services
This presentation deck will cover specific services such as Amazon S3, Kinesis, Redshift, Elastic MapReduce, and DynamoDB, including their features and performance characteristics. It will also cover architectural designs for the optimal use of these services based on dimensions of your data source (structured or unstructured data, volume, item size and transfer rates) and application considerations - for latency, cost and durability. It will also share customer success stories and resources to help you get started.
ENT305 Migrating Your Databases to AWS: Deep Dive on Amazon Relational Databa... | Amazon Web Services
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity, automates time-consuming database administration tasks, and provides you with six familiar database engines to choose from: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB. In this session, we will take a close look at the capabilities of Amazon RDS and explain how it works. We’ll also discuss the AWS Database Migration Service and AWS Schema Conversion Tool, which help you migrate databases and data warehouses with minimal downtime from on-premises and cloud environments to Amazon RDS and other Amazon services. Gain your freedom from expensive, proprietary databases while providing your applications with the fast performance, scalability, high availability, and compatibility they need.
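As a rough sketch of that same "few clicks" launch done through the API instead of the console; all identifiers are placeholders and this is not a recommended production configuration:

```python
# Illustrative sketch: launch a Multi-AZ PostgreSQL instance on Amazon RDS.
# All names and credentials are placeholders for the example.
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="app-db",  # hypothetical instance name
    Engine="postgres",              # one of the six supported engines
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,           # GiB
    MasterUsername="appadmin",
    MasterUserPassword="REPLACE_ME_Str0ng!",  # never hard-code real credentials
    MultiAZ=True,                   # synchronous standby for high availability
    BackupRetentionPeriod=7,        # automated daily backups, retained 7 days
)
```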
By using a Data Lake, you no longer need to worry about structuring or transforming data before storing it. A Data Lake on AWS enables your organization to more rapidly analyze data, helping you quickly discover new business insights. Join us for our webinar to learn about the benefits of building a Data Lake on AWS and how your organization can begin reaping its rewards. In this webinar, select APN Partners will share their specific methodologies for implementing a Data Lake on AWS and best practices for getting the most from your Data Lake.
Today, organizations find themselves in a data-rich world with a growing need for increased agility and accessibility of all this data, both for analysis and for deriving keen insights that drive strategic decisions. Creating a data lake helps you manage all the disparate sources of data you are collecting in their original format and extract value from them. In this session, learn how to architect and implement an analytics data lake. Hear customer examples of best practices and learn from their architectural blueprints.
Learn how you can point Amazon QuickSight to AWS data stores, flat files, or other third-party data sources and begin visualizing your data in minutes.
Businesses are generating more data than ever before.
Doing real-time data analytics requires IT infrastructure that often needs to be scaled up quickly, and running an on-premises environment in this setting has its limitations.
Organisations often require a massive amount of IT resources to analyse their data, and the upfront capital cost can deter them from embarking on these projects.
What’s needed is scalable, agile and secure cloud-based infrastructure at the lowest possible cost, so they can spin up servers that support their data analysis projects exactly when they are required. This infrastructure must enable them to create proofs of concept quickly and cheaply - to fail fast and move on.
The AWS cloud computing platform has disrupted big data. Managing big data applications used to be only for well-funded research organizations and large corporations, but not any longer. Hear from Ben Butler, Big Data Solutions Marketing Manager for AWS, to learn how our customers are using big data services in the AWS cloud to innovate faster than ever before. Not only is AWS technology available to everyone, but it is self-service and on-demand, featuring innovative technology and flexible pricing models at low cost with no commitments. Learn from customer success stories as Ben shares real-world case studies describing the specific big data challenges being solved on AWS. We will conclude with a discussion of the tutorials, public datasets, test drives, and our grants program - all of the resources needed to get you started quickly.
AWS January 2016 Webinar Series - Getting Started with Big Data on AWS | Amazon Web Services
With hundreds of new and sometimes disparate tools, it’s hard to keep pace. Amazon Web Services provides a broad and fully integrated portfolio of cloud computing services to help you build, secure and deploy your big data applications.
Attend this webinar to get an overview of the different big data options available in the AWS Cloud – including popular big data frameworks such as Hadoop, Spark, NoSQL databases, and more. Learn about ideal use cases, cases to avoid, performance, interfaces, and more. Finally, learn how you can build valuable applications with a real-life example.
Learning Objectives:
Learn about big data tools available at AWS
Understand ideal use cases
Learn some of the key considerations such as performance, scalability, elasticity and availability, when selecting big data tools
Who Should Attend:
Data Architects, Data Scientists, Developers
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS... | Amazon Web Services
The world is producing an ever-increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many technologies for solving big data problems. But what services should you use, why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architectures, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
AWS March 2016 Webinar Series - Building Big Data Solutions with Amazon EMR a... | Amazon Web Services
Building big data applications often requires integrating a broad set of technologies to store, process, and analyze the increasing variety, velocity, and volume of data being collected by many organizations.
Using a combination of Amazon EMR, a managed Hadoop framework, and Amazon Redshift, a managed petabyte-scale data warehouse, organizations can effectively address many of these requirements.
In this webinar, we will show how organizations are using Amazon EMR and Amazon Redshift to build more agile and scalable architectures for big data. We will look into how you can leverage Spark and Presto running on EMR to address multiple data processing requirements. We will also share best practices and common use cases for integrating EMR and Redshift.
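A minimal sketch of provisioning such an EMR cluster with Spark and Presto via boto3; the release label, instance types, and log bucket are illustrative assumptions, not recommendations:

```python
# Hedged sketch: spin up an EMR cluster running Spark and Presto. Release label,
# instance types, roles, and the log bucket are placeholders for illustration.
import boto3

emr = boto3.client("emr")

emr.run_job_flow(
    Name="spark-presto-demo",
    ReleaseLabel="emr-5.36.0",  # example release; pick a current one
    Applications=[{"Name": "Spark"}, {"Name": "Presto"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,  # keep alive for interactive queries
    },
    JobFlowRole="EMR_EC2_DefaultRole",  # default EMR instance profile
    ServiceRole="EMR_DefaultRole",
    LogUri="s3://my-emr-logs/",  # placeholder log bucket
)
```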
Learning Objectives:
• Best practices for building a big data architecture that includes Amazon EMR and Amazon Redshift
• Understand how to use technologies such as Amazon EMR, Presto and Spark to complement your data warehousing environment
• Learn key use cases for Amazon EMR and Amazon Redshift
Who Should Attend:
• Data architects, Data management professionals, Data warehousing professionals, BI professionals
Big Data Analytics: Reference Architectures and Case Studies by Serhiy Haziye... | SoftServe
BI architecture drivers have to change to satisfy new requirements in format, volume, latency, hosting, analysis, reporting, and visualization. In this presentation delivered at the 2014 SATURN conference, SoftServe's Serhiy and Olha showcased a number of reference architectures that address these challenges and speed up the design and implementation process, making it more predictable and economical:
- Traditional architecture based on an RDBMS data warehouse but modernized with column-based storage to handle a high load and capacity
- NoSQL-based architectures that address Big Data batch and stream-based processing and use popular NoSQL and complex event-processing solutions
- Hybrid architecture that combines traditional and NoSQL approaches to achieve completeness that would not be possible with either alone
The architectures are accompanied by real-life projects and case studies that the presenters have performed for multiple companies, including Fortune 100 and start-ups.
(BDT310) Big Data Architectural Patterns and Best Practices on AWS | Amazon Web Services
The world is producing an ever-increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many technologies for solving big data problems. But what services should you use, why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architectures, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
Amazon Web Services gives you fast access to flexible and low-cost IT resources, so you can rapidly scale and build virtually any big data and analytics application - including data warehousing, clickstream analytics, fraud detection, recommendation engines, event-driven ETL, serverless computing, and Internet of Things processing - regardless of the volume, velocity, and variety of your data.
In this one-hour webinar, we will look at the portfolio of AWS Big Data services and how they can be used to build a modern data architecture.
We will cover:
Using different SQL engines to analyze large amounts of structured data
Analysing streaming data in near-real time
Architectures for batch processing
Best practices for Data Lake architectures
This session is suited for:
Solution and enterprise architects
Data architects / Data warehouse owners
IT & Innovation team members
Rolta was invited to co-present our OneView™ Enterprise Suite with a specific use case on IoT analytics. The Rolta CTO, Rajesh Ramachandran, co-presented with SAP, and during SAP’s portion of the presentation they highlighted Rolta as the key partner for Industry IoT.
Big data is changing business at a fast pace. Rolta provides a complete answer for accelerating the end-to-end BI and big data analytics development journey, transforming information into business results and growth.
This is a guest lecture provided by Sam Lalonde at Concordia University's Continuing Education Department for the class: Search Engine Marketing (CEMK 175).
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity Options | Amazon Web Services
In this session, we will walk through the fundamentals of Amazon Virtual Private Cloud (VPC). First, we will cover build-out and design fundamentals for VPC, including picking your IP space, subnetting, routing, security, NAT, and much more. We will then transition into different approaches and use cases for optionally connecting your VPC to your physical data center with VPN or AWS Direct Connect. This mid-level architecture discussion is aimed at architects, network administrators, and technology decision-makers interested in understanding the building blocks AWS makes available with VPC and how you can connect this with your offices and current data center footprint.
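A minimal boto3 sketch of the build-out steps listed above (pick IP space, create a subnet, set up public routing); the CIDR ranges and Availability Zone are illustrative choices, not recommendations:

```python
# Illustrative sketch of VPC build-out: IP space, a subnet, and a route to the
# internet. CIDR blocks and the AZ are example values only.
import boto3

ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]  # pick your IP space
subnet = ec2.create_subnet(
    VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]

# Public routing: attach an internet gateway and add a default route to it.
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"])

rt = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
ec2.create_route(
    RouteTableId=rt["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw["InternetGatewayId"],
)
ec2.associate_route_table(RouteTableId=rt["RouteTableId"], SubnetId=subnet["SubnetId"])
```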
AWS re:Invent 2016: Relational and NoSQL Databases on AWS: NBC, MarkLogic, an... | Amazon Web Services
Learn how the AWS Marketplace brings together customers who have challenges with ISVs who have solutions to those challenges. See how to use relational and NoSQL technologies on AWS to build enterprise and consumer apps. NBC used MarkLogic to deliver an award-winning app that can handle high traffic levels and unexpected usage spikes. NBC’s popular, Emmy-winning “SNL 40” app was launched to celebrate the 40th anniversary of Saturday Night Live, and delivers four decades of sketches and performances. Hosted on AWS, the app - as well as a browser-based platform - is powered by the MarkLogic Enterprise NoSQL database. Come learn from the team who collaborated on this project how to run your own database on AWS, and how to integrate with Amazon RDS and other data stores. A world-recognized automotive brand needed to deliver real-time responses about their worldwide fleet vehicles. You will learn how they used a combination of AWS services and FileMaker Cloud (from an Apple subsidiary, procured through AWS Marketplace) to deliver high-scale dealer-facing applications.
Big data solutions for advanced marketing analytics | Natalino Busa
Our retail banking market demands, now more than ever, that we stay close to our customers and carefully understand which services, products, and wishes are relevant for each customer at any given time. This sort of marketing research is often beyond the capacity of traditional BI reporting frameworks. In this talk, we illustrate how we team up data scientists and big data engineers in order to create and scale distributed analyses on a big data platform. By using Hadoop and open source statistical languages and tools such as R and Python, we can execute a variety of machine learning algorithms and scale them out on a distributed computing framework.
AWS is an elastic, secure, flexible, and developer-centric ecosystem that serves as an ideal platform for Docker deployments. AWS offers the scalable infrastructure, APIs, and SDKs that integrate tightly into a development lifecycle and accentuate the benefits of the lightweight and portable containers that Docker offers to its users. This session will familiarize you with the benefits of containers, introduce Amazon EC2 Container Service (ECS), and demonstrate how to use Amazon ECS for your applications.
This is the complete deck presented at the Westin Calgary Hotel on August 16th, 2016.
It covers the current state of the AWS Big Data solution set, and contains several use cases for big data and machine learning, along with a tutorial on how to implement and use big data on the AWS Cloud platform.
Understanding AWS Managed Database and Analytics Services | AWS Public Sector... | Amazon Web Services
The world is creating more data in more ways than ever before. The average internet user in 2017 generates 1.5GB of data per day, with the rate doubling every 18 months. A single autonomous vehicle can generate 4TB per day. Each smart manufacturing plant generates 1PB per day. Storing, managing, and analyzing this data requires integrated database and analytic services that provide reliability and security at scale. AWS offers a range of managed data services that let customers focus on making data useful, including Amazon Aurora, RDS, DynamoDB, Redshift, Spectrum, ElastiCache, Kinesis, EMR, Elasticsearch Service, and Glue. In this session, we discuss these services, share our vision for innovation, and show how our customers use these services today. Learn More: https://aws.amazon.com/government-education/
This session focuses on showing how companies can optimize their resources through cloud-based solutions, with an emphasis on differentiation, innovation, and risk reduction in infrastructure.
By Ricardo Rentería of Amazon
Speaker: Ivan Cheng, Solution Architect, AWS
You’re interested in the cloud, and you want to start learning more. In this webinar we will answer the following questions:
• What is Cloud Computing?
• What are the benefits of Cloud Computing?
• What are AWS’s products and what workloads can I run with them?
• Who is using the cloud and what are they using it for?
Presenter: Jeff Barr, AWS Chief Evangelist
Introduction to the AWS Cloud from Digital Tuesday Meetup | Ian Massingham
These are the slides that I used for my Introduction to AWS talk at the South Wales Digital Tuesday Meetup on the 2nd of December 2014.
Find out more about Digital Tuesday at their website here: http://www.digital-tuesday.com/
Understanding AWS Managed Databases and Analytic Services - AWS Innovate Otta... | Amazon Web Services
• Overview of database services to elevate your applications, analytic services to engage your data, and migration services to help you reach database freedom.
• Survey of how Canadian and other organizations are using the cloud to make data scalable, reliable, and secure.
Over 90% of today’s data was generated in the last 2 years, and the rate of data growth isn’t slowing down. In this session, we’ll step through the challenges and best practices of capturing all the data being generated, understanding what data you have, and starting to drive insights - and even predict the future - using purpose-built AWS services. We’ll frame the session and demonstrations around common pitfalls of building data lakes and how to successfully drive analytics and insights from the data. This session will focus on the architecture patterns that bring key AWS services together, rather than a deep dive on any single service. We’ll show how services such as Amazon S3, AWS Glue, Amazon Redshift, Amazon Athena, Amazon EMR, Amazon Kinesis, and Amazon Machine Learning are put together to build a successful data lake for various roles, including both data scientists and business users.
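As a small illustration of the query side of such a lake, here is a hedged boto3 sketch that runs an ad-hoc Athena query over catalogued S3 data; the database, table, and result bucket are placeholders:

```python
# Hedged sketch: run an ad-hoc SQL query against S3 data through Athena.
# The Glue database, table, and result bucket are placeholders for illustration.
import time
import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT action, COUNT(*) AS n FROM events GROUP BY action",
    QueryExecutionContext={"Database": "datalake_raw"},  # hypothetical catalog database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Poll until the query leaves the QUEUED/RUNNING states.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state not in ("QUEUED", "RUNNING"):
        break
    time.sleep(1)

# Print the result rows (first row is the header).
for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
    print([col.get("VarCharValue") for col in row["Data"]])
```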
MSC203_How Citrix Uses AWS Marketplace Solutions To Accelerate Analytic Workl... | Amazon Web Services
Find out how Citrix built a solution using Matillion ETL for Amazon Redshift from AWS Marketplace to load all data into an Amazon Redshift cluster, allowing them to run analytics on the entire environment at one time. We’ll discuss the transition made to consolidate multiple disparate databases in order to run analytic workloads, get a holistic view of all their data sources, and prevent inconsistent data from being captured.
Best Practices Using Big Data on AWS | AWS Public Sector Summit 2017 | Amazon Web Services
Join us for this general session where AWS big data experts present an in-depth look at the current state of big data. Learn about the latest big data trends and industry use cases. Hear how other organizations are using the AWS big data platform to innovate and remain competitive. Take a look at some of the most recent AWS big data developments. Learn More: https://aws.amazon.com/government-education/
What is innovation? How can cloud computing help you innovate? How can you make your applications smarter, even predictive? How can you interpret data and anticipate trends? This session explores these questions with AWS artificial intelligence solutions - Machine Learning, Rekognition, Polly - and with serverless services such as Lambda and Step Functions.
AWS October Webinar Series - Introducing Amazon QuickSight | Amazon Web Services
Amazon QuickSight is a very fast, cloud-powered business intelligence (BI) service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data.
In this webinar, we will demonstrate how you can point Amazon QuickSight to AWS data stores, flat files, or other third-party data sources and begin visualizing your data in minutes. We will also introduce SPICE, a new Super-fast, Parallel, In-memory Calculation Engine in Amazon QuickSight, which performs advanced calculations and renders visualizations rapidly without requiring any additional infrastructure, SQL programming, or dimensional modeling, so you can seamlessly scale to hundreds of thousands of users. Lastly, you will see how Amazon QuickSight provides you with smart visualizations and graphs that are optimized for your different data types, ensuring the most suitable and appropriate visualization for your analysis, and how to share these visualization stories using the built-in collaboration tools.
Using real time big data analytics for competitive advantage | Amazon Web Services
Many organisations find it challenging to successfully perform real-time data analytics using their own on premise IT infrastructure. Building a system that can adapt and scale rapidly to handle dramatic increases in transaction loads can potentially be quite a costly and time consuming exercise.
Most of the time, infrastructure is under-utilised and it’s near impossible for organisations to forecast the amount of computing power they will need in the future to serve their customers and suppliers.
To overcome these challenges, organisations can instead utilise the cloud to support their real-time data analytics activities. Scalable, agile and secure, cloud-based infrastructure enables organisations to quickly spin up infrastructure to support their data analytics projects exactly when it is needed. Importantly, they can ‘switch off’ infrastructure when it is not.
BluePi Consulting and Amazon Web Services (AWS) are giving you the opportunity to discover how organisations are using real-time data analytics to gain new insights from their information, improve the customer experience, and drive competitive advantage.
Scientists, developers, and many other technologists from many different industries are taking advantage of Amazon Web Services to meet the challenges of the increasing volume, variety, and velocity of digital information. Amazon Web Services offers an end-to-end portfolio of cloud computing resources to help you manage big data by reducing costs, gaining a competitive advantage and increasing the speed of innovation.
In this presentation from a webinar focusing on running data analytics on AWS, AWS Technical Evangelist Ian Massingham discusses the role that AWS services can play in helping you derive value from your data. Topics include stream processing with Amazon Kinesis, processing data with Amazon Elastic MapReduce (EMR) and its ecosystem of tools, and running large-scale data warehouses on AWS with Redshift.
Topics covered in this session:
• Discover how AWS customers are extracting value from Big Data
• Understand the role that AWS services could play in helping you to manage your data
• Learn about running Hadoop on AWS with Amazon EMR and its ecosystem of tools for data processing and analysis
See a recording of this webinar on YouTube here: http://youtu.be/ueRarqsCbJM
See past and future webinars in the Journey Through the Cloud series here: http://aws.amazon.com/campaigns/emea/journey/
For a deep dive into specific AWS services, you might also be interested in the Masterclass webinar series, which you can find here: http://aws.amazon.com/campaigns/emea/masterclass/
How to build Forecasting services leveraging ML and deep learning algorithms... | Amazon Web Services
Forecasting is an important process for a great many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the usage of resources needed on production lines, financial presentations, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will illustrate how to pre-process data that contains a temporal component, and then use an algorithm that, starting from the type of data analyzed, produces an accurate forecast.
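A minimal sketch of that pre-processing step, assuming a hypothetical CSV of sales events with timestamp and units_sold columns; this is not the session's actual code:

```python
# Not the session's code; just a sketch of regularizing a time series before
# handing it to a forecasting algorithm. File and column names are assumptions.
import pandas as pd

# Raw events with an irregular timestamp column.
df = pd.read_csv("sales.csv", parse_dates=["timestamp"])  # hypothetical input

# Aggregate to one observation per day; resample yields an evenly spaced
# daily index, filling days with no events with a sum of zero.
series = df.set_index("timestamp")["units_sold"].resample("D").sum()

# `series` now has the regular temporal shape most forecasting algorithms
# (including those behind Amazon's forecasting services) expect as input.
print(series.head())
```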
Big Data for Startups: how to create Big Data applications in Serverless mode... | Amazon Web Services
The variety and quantity of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large quantities of data can appear complex: creating large-scale Big Data clusters seems to be an investment accessible only to established companies. But the elasticity of the cloud and, in particular, Serverless services allow us to break through these limits.
Let's see, then, how it is possible to develop Big Data applications rapidly, without worrying about the infrastructure, dedicating all our resources instead to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and show how to deploy your application in a few steps.
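A minimal sketch of the key configuration step, assuming an existing EKS cluster, pod execution role, and private subnets (all identifiers are placeholders):

```python
# Hedged sketch: create a Fargate profile telling an existing EKS cluster which
# pods to schedule on Fargate. All identifiers below are placeholders.
import boto3

eks = boto3.client("eks")

eks.create_fargate_profile(
    fargateProfileName="default-namespace",
    clusterName="my-cluster",  # existing EKS cluster
    podExecutionRoleArn="arn:aws:iam::123456789012:role/EKSFargatePodRole",
    subnets=["subnet-aaa111", "subnet-bbb222"],  # private subnets
    # Pods in this namespace (optionally filtered by labels) run on Fargate.
    selectors=[{"namespace": "default"}],
)
```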
Twenty years ago, Amazon went through a radical transformation with the goal of increasing the pace of innovation. In that period we learned how changing our approach to application development allowed us to significantly increase agility and release velocity and, ultimately, enabled us to create more reliable and scalable applications. In this session we will illustrate how we define modern applications and how building modern apps affects not only the application architecture, but also the organizational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
Amazon ECS, Amazon EKS, and Kubernetes on EC2 can all take advantage of Spot Instances, yielding average savings of 70% compared to On-Demand Instances. In this session we will look at the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run several kinds of applications, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's market offering unique with Machine Lea...Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and create the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, also through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployments of...Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years; they often involved manual activities that occasionally led to application downtime, interrupting users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and resulting in significant improvements to business continuity.
AWS offers AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances using Chef and Puppet.
Learn how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows WorkloadsAmazon Web Services
Want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment with the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis based on artificial intelligence techniques is evolving and maturing at a fast pace. In this webinar we will explore what AWS services make possible when applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event on Wednesday, October 14th from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a broad range of AWS services, taking full advantage of the AWS cloud while protecting your existing VMware investments.
Build your first serverless ledger-based app with QLDB and NodeJSAmazon Web Services
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply chain flow of their products.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservice architectures and rich mobile and web applications, APIs matter more than ever for giving end users a great user experience. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dig into several scenarios, seeing how AppSync can help address these use cases by building modern APIs with real-time and offline data-update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to the users of its web portal.
Oracle databases and VMware Cloud™ on AWS: myths to debunkAmazon Web Services
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, on top of which performance risks can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads while accelerating the transformation to the cloud; they dig into the architecture and show how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies the management of Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we will present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas struggle to keep up with the competition. Fostering a culture of innovation, however, takes real work: vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains will only materialize when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
1. Big Data Solutions Day
Introductory Session
Wesley Wilk
Solutions Architect, AWS
2. 12:30 PM Introductory Session
2:00 PM Best Practices on Real-time Streaming Analytics
3:00 PM Break
3:15 PM Getting started with Amazon Machine Learning
4:15 PM Building your first big data application on AWS
Today
5. Largest Ecosystem of ISVs
Solutions vetted by the AWS Partner Competency Program
• Infrastructure Intelligence: harness data generated from your systems and infrastructure
• Advanced Analytics: anticipate future behaviors and conduct what-if analysis
• Data Integration: move, synchronize, cleanse, and manage data
• Data Analysis & Visualization: turn data into actionable insight and enhance decision making
6. Largest Ecosystem of Integrators
Qualified consulting partners with AWS big data expertise
North America
Europe, Middle East and Africa
Asia Pacific and Japan
7. Big Data Analytics in AWS Marketplace
http://aws.analytic.business
Thousands of products pre-integrated with the AWS Cloud; 290 specific to big data.
• Solutions are pre-configured and ready to run on AWS
• Deploy when you need it: 1-Click launch in multiple regions around the world
• Metered pricing by the hour; pay only for what you use; volume licensing available
• Analysis & Visualization: extract valuable information from your historical and current data
• Advanced Analytics: predict future business performance; location, text, social, sentiment analysis
• Data Integration: extract, migrate, or prepare and clean your data for accurate analysis
9. Volume, Velocity, Variety
Big Data is Breaking Traditional IT Infrastructure
- Too many tools to set up, manage & scale
- Unsustainable costs
- Difficult & undifferentiated heavy lifting just to get started
- New & expensive skills
- Bigger responsibility (sensitive data)
10. The Evolution of Big Data Processing
• Descriptive (what happened before): dashboards; traditional query & reporting
• Real-time (what's happening now): clickstream analysis; ad bidding; streaming data
• Predictive (probability of "x" happening): inventory forecasting; fraud detection; recommendation engines
11. Big Data on AWS
1. Broadest & Deepest Functionality
2. Computational Power that is Second to None
3. Petabyte-scale Analytics Made Affordable
4. Easiest & Most Powerful Tools for Migrating Data
5. Security You Can Trust
12. 1. Broadest & Deepest Functionality
Big Data Storage, Data Warehousing, Real-time Streaming, Distributed Analytics (Hadoop, Spark, Presto), NoSQL Databases, Business Intelligence, Relational Databases, Internet of Things (IoT), Machine Learning, Serverless Compute, Elasticsearch
16. Amazon Kinesis: Streaming Data Made Easy
Services make it easy to capture, deliver and process streams on AWS
• Amazon Kinesis Streams: for technical developers; build your own custom applications that process or analyze streaming data
• Amazon Kinesis Firehose: for all developers and data scientists; easily load massive volumes of streaming data into Amazon S3, Amazon Redshift and Amazon Elasticsearch
• Amazon Kinesis Analytics: for all developers and data scientists; easily analyze data streams using standard SQL queries (coming soon)
17. Amazon EMR: Distributed Analytics
• Managed Hadoop framework
• Spark, Presto, HBase, Tez, Hive, etc.
• Cost effective; easy to use
• On-demand and Spot pricing
• HDFS & S3 file systems
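Since the slide stays at bullet level, here is a minimal sketch of launching such a cluster with boto3; the cluster name, release label, instance sizing, and log bucket below are illustrative assumptions, not values from the deck:

import boto3

emr = boto3.client("emr", region_name="us-east-1")  # region is an assumption

# Launch a small managed Hadoop/Spark cluster; all sizing values are illustrative.
response = emr.run_job_flow(
    Name="demo-emr-cluster",                      # hypothetical name
    ReleaseLabel="emr-4.7.0",                     # a release current at the time of this deck
    Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
    LogUri="s3://my-bucket/emr-logs/",            # hypothetical bucket
    Instances={
        "MasterInstanceType": "m4.large",
        "SlaveInstanceType": "m4.large",
        "InstanceCount": 3,                       # 1 master + 2 core nodes
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])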
19. Durable, Low Latency – At Scale
• WRITES: continuously replicated to 3 AZs; quorum acknowledgment; persisted to disk (custom SSD)
• READS: strongly or eventually consistent; no trade-off in latency
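As a hedged illustration of the read trade-off described above, a boto3 call can request either consistency mode per read; the table and key names below are hypothetical:

import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")  # region is an assumption

# Write an item; DynamoDB replicates it across three AZs before acknowledging.
ddb.put_item(
    TableName="demo-table",                       # hypothetical table with partition key "id"
    Item={"id": {"S": "user-42"}, "score": {"N": "17"}},
)

# Eventually consistent read (default): lowest cost, may briefly miss the write.
weak = ddb.get_item(TableName="demo-table", Key={"id": {"S": "user-42"}})

# Strongly consistent read: reflects all acknowledged writes.
strong = ddb.get_item(
    TableName="demo-table",
    Key={"id": {"S": "user-42"}},
    ConsistentRead=True,
)
print(strong["Item"])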
20. Amazon Redshift: Fully Managed Petabyte-scale Data Warehouse
• Relational data warehouse
• Massively parallel; petabyte scale
• Fully managed
• HDD and SSD platforms
• $1,000/TB/year; start at $0.25/hour
21. Scale from 160GB to 2PB; push-button global DR; built-in end-to-end security
22. Amazon Redshift has security built-in
• SSL to secure data in transit
• Encryption to secure data at rest: AES-256, hardware accelerated; all blocks on disks and in Amazon S3 encrypted; HSM support
• No direct access to compute nodes
• Audit logging, AWS CloudTrail, AWS KMS integration
• Amazon VPC support
• SOC 1/2/3, PCI-DSS Level 1, FedRAMP, HIPAA
[Architecture diagram: SQL clients/BI tools connect via JDBC/ODBC to a leader node in the customer VPC; the leader node coordinates compute nodes (128GB RAM, 16TB disk, 16 cores each) over 10 GigE (HPC) in an internal VPC, with ingestion, backup, and restore against Amazon S3 / Amazon DynamoDB.]
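To make the SSL and S3 points concrete, here is a minimal sketch of loading data into Redshift over an encrypted connection with the COPY command; the host, credentials, table, bucket, and IAM role are placeholders, not values from the deck:

import psycopg2  # standard PostgreSQL driver; Redshift speaks the Postgres protocol

# All connection details and object names below are placeholders.
conn = psycopg2.connect(
    host="demo-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="admin",
    password="...",
    sslmode="require",   # SSL in transit, matching the slide
)
conn.autocommit = True

with conn.cursor() as cur:
    # Bulk-load gzipped CSV files from S3 through the leader node.
    cur.execute("""
        COPY events
        FROM 's3://my-bucket/events/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        CSV GZIP;
    """)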
23. Amazon Machine Learning
• Easy to use, managed machine learning service built for developers
• Robust, powerful machine learning technology based on Amazon's internal systems
• Create models using your data already stored in the AWS cloud
• Deploy models to production in seconds
25. Sample Use Cases
• Fraud detection: detecting fraudulent transactions, filtering spam emails, flagging suspicious reviews, …
• Personalization: recommending content, predictive content loading, improving user experience, …
• Targeted marketing: matching customers and offers, choosing marketing campaigns, cross-selling and up-selling, …
• Content classification: categorizing documents, matching hiring managers and resumes, …
• Churn prediction: finding customers who are likely to stop using the service, free-tier upgrade targeting, …
• Customer support: predictive routing of customer emails, social media listening, …
26. 2. Computational Power that is Second to None
Over 2x the computational instance types of any other cloud platform to address a wide range of big data use cases:
• Compute Optimized (C3, C4)
• Memory Optimized (R3, X1)
• GPU Instances (G2)
• Storage Optimized (I2, D2)
Plus general purpose (M4), low cost burstable (T2), and dedicated instances
28. 3. Affordable Petabyte-scale Analytics
AWS helps customers maximize the value of Big Data investments while reducing overall IT costs:
• Amazon S3 (secure, highly durable storage): $28.16 / TB / month
• Amazon Glacier (data archiving): $7.16 / TB / month
• Amazon Kinesis (real-time streaming data load): $0.035 / GB
• Amazon EMR (10-node Spark cluster): $0.15 / hr
• Amazon Redshift (petabyte-scale data warehouse): $0.25 / hr
29. NASDAQ migrated to Amazon Redshift, achieving cost savings of 57% compared to its legacy system
• Volume: avg 5.5B rows per day; peak of 14B rows
• Performance: queries running faster; loads finish before 11 PM
• Scale: loading more data than ever stored by the legacy DWH
30. 4. Easy, Powerful Tools for Migrating Data
AWS provides the broadest range of tools for easy, fast, secure data movement to and from the AWS cloud:
AWS Direct Connect, ISV Connectors, Amazon Kinesis Firehose, Amazon S3 Transfer Acceleration, AWS Storage Gateway, Database Migration Service, AWS Snowball
31. What is Snowball? Petabyte-scale data transport
• E-ink shipping label
• Ruggedized case ("8.5G Impact")
• All data encrypted end-to-end
• 50TB & 80TB models
• 10G network
• Rain & dust resistant
• Tamper-resistant case & electronics
32. AWS Database Migration Service
• Start your first migration in 10 minutes or less
• Keep your apps running during the migration
• Replicate within, to, or from Amazon EC2 or Amazon RDS
• Move data to the same or a different database engine
33. 5. Security & Compliance You Can Trust
• AWS manages 1800+ security controls so you don't have to
• You benefit from an environment built for the most security-sensitive organizations
• You get to define the right security controls for your workload sensitivity
• You always have full ownership and control of your data
34. FINRA handles approximately 75 billion market events every day to build a holistic picture of trading in the U.S.
• Deter misconduct by enforcing the rules
• Detect and prevent wrongdoing in the U.S. markets
• Discipline those who break the rules
35. Broadest Services to Secure Applications
• NETWORKING: Virtual Private Cloud, Web Application Firewall
• IDENTITY: IAM, Active Directory Integration, SAML Federation
• ENCRYPTION: KMS, CloudHSM, Server-side Encryption, Encryption SDK
• COMPLIANCE: Service Catalog, CloudTrail, Config, Config Rules, Inspector
38. Simplify Big Data Processing
data → collect → store → process / analyze → consume / visualize → answers
At each stage, balance time to answer (latency), throughput, and cost.
40. COLLECT STORE CONSUMEPROCESS / ANALYZE
Mobile apps
Web apps
Devices
Messaging
Message
Sensors &
IoT platforms
AWS IoT
Data centers
AWS Direct
Connect
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
Reference architecture
Logging
IoT
Applications
Transport
Messaging
ETL
Collect
41. COLLECT STORE CONSUMEPROCESS / ANALYZE
Mobile apps
Web apps
Devices
Messaging
Message
Sensors &
IoT platforms
AWS IoT
Data centers
AWS Direct
Connect
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
Reference architecture
Logging
IoT
Applications
Transport
Messaging
ETL
Collect
Applications
42. COLLECT STORE CONSUMEPROCESS / ANALYZE
Mobile apps
Web apps
Devices
Messaging
Message
Sensors &
IoT platforms
AWS IoT
Data centers
AWS Direct
Connect
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
Reference architecture
Logging
IoT
Applications
Transport
Messaging
ETL
Collect
Applications
Logging
43. COLLECT STORE CONSUMEPROCESS / ANALYZE
Mobile apps
Web apps
Devices
Messaging
Message
Sensors &
IoT platforms
AWS IoT
Data centers
AWS Direct
Connect
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
Reference architecture
Logging
IoT
Applications
Transport
Messaging
ETL
Collect
Applications
Logging
Transport
44. COLLECT STORE CONSUMEPROCESS / ANALYZE
Mobile apps
Web apps
Devices
Messaging
Message
Sensors &
IoT platforms
AWS IoT
Data centers
AWS Direct
Connect
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
Reference architecture
Logging
IoT
Applications
Transport
Messaging
ETL
Collect
Applications
Logging
Transport
Messaging
45. COLLECT STORE CONSUMEPROCESS / ANALYZE
Mobile apps
Web apps
Devices
Messaging
Message
Sensors &
IoT platforms
AWS IoT
Data centers
AWS Direct
Connect
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
Reference architecture
Logging
IoT
Applications
Transport
Messaging
ETL
Collect
Applications
Logging
Transport
Messaging
IOT
46. COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
Database
Types of Data
CONSUMEPROCESS / ANALYZEETL
47. COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
Database
Types of Data
Database records
CONSUMEPROCESS / ANALYZEETL
48. COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
Database Database records
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS Search Search documents
Logging
Types of Data
CONSUMEPROCESS / ANALYZEETL
49. COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
Database Database records
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS
FILES
Transport
Search
File Store
Search documents
Log files
Logging
Types of Data
CONSUMEPROCESS / ANALYZEETL
50. COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
Database Database records
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS
FILES
Transport
Search
File Store
Search documents
Log files
Logging
Messaging
Message MESSAGES
Messaging
Queue Messaging events
Types of Data
CONSUMEPROCESS / ANALYZEETL
51. COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
Database Database records
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS
FILES
Transport
Search
File Store
Search documents
Log files
Logging
Messaging
Message MESSAGES
Messaging
Queue Messaging events
Devices
Sensors &
IoT platforms
AWS IoT STREAMS
IoT
Stream
storage
Devices / sensors /
IoT stream
Types of Data
CONSUMEPROCESS / ANALYZEETL
52. COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS
FILES
Transport
Search
File Store
Logging
Messaging
Message MESSAGES
Messaging
Queue
Devices
Sensors &
IoT platforms
AWS IoT STREAMS
IoT
Stream
storage
Database
CONSUMEPROCESS / ANALYZEETL
Storage
53. COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS
FILES
Transport
File Store
Logging
Messaging
Message MESSAGES
Messaging
Queue
Devices
Sensors &
IoT platforms
AWS IoT STREAMS
IoT
Stream
storage
Amazon ElastiCache
Amazon DynamoDB
Amazon RDS
Amazon Elasticsearch
Service
Warm
SQL
Cache
NoSQL
Search
Storage
CONSUMEPROCESS / ANALYZEETL
54. COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS
FILES
Transport
Logging
Messaging
Message MESSAGES
Messaging
Queue
Devices
Sensors &
IoT platforms
AWS IoT STREAMS
IoT
Stream
storage
Amazon Elasticsearch
Service
Amazon DynamoDB
Amazon S3
Amazon ElastiCache
Amazon RDS
HotWarm
File
SQL
Cache
NoSQL
Search
Storage
CONSUMEPROCESS / ANALYZEETL
55. COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS
FILES
Transport
Logging
Devices
Sensors &
IoT platforms
AWS IoT STREAMS
IoT
Messaging
Message MESSAGES
Messaging
Amazon Elasticsearch
Service
Amazon DynamoDB
Amazon S3
Amazon ElastiCache
Amazon RDS
HotWarm
SearchSQLNoSQLCache
File
Stream
storage
Queue
Message & Stream Storage
CONSUMEPROCESS / ANALYZEETL
56. COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
Message & Stream Storage
Amazon SQS
• Managed message queue service
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS
FILES
Transport
Logging
Devices
Sensors &
IoT platforms
AWS IoT STREAMS
IoT
Messaging
Message MESSAGES
Messaging
Amazon SQS
Amazon Elasticsearch
Service
Amazon DynamoDB
Amazon S3
Amazon ElastiCache
Amazon RDS
HotWarm
SearchSQLNoSQLCache
File
Queue
Stream
storage
CONSUMEPROCESS / ANALYZEETL
57. COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
Amazon SQS
• Managed message queue service
Apache Kafka
• High throughput distributed
messaging system
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS
FILES
Transport
Logging
Stream
Devices
Sensors &
IoT platforms
AWS IoT STREAMS
IoT
Messaging
Message MESSAGES
Messaging
Amazon SQS
Apache Kafka
Amazon Elasticsearch
Service
Amazon DynamoDB
Amazon S3
Amazon ElastiCache
Amazon RDS
HotWarm
SearchSQLNoSQLCache
File
Queue
Message & Stream Storage
CONSUMEPROCESS / ANALYZEETL
58. COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
Amazon SQS
• Managed message queue service
Apache Kafka
• High throughput distributed
messaging system
Amazon Kinesis Streams
• Managed stream storage +
processing
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS
FILES
Transport
Logging
Stream
Devices
Sensors &
IoT platforms
AWS IoT STREAMS
IoT
Messaging
Message MESSAGES
Messaging
Amazon SQS
Apache Kafka
Amazon Kinesis
Streams
Amazon Elasticsearch
Service
Amazon DynamoDB
Amazon S3
Amazon ElastiCache
Amazon RDS
HotWarm
SearchSQLNoSQLCache
File
Queue
Message & Stream Storage
CONSUMEPROCESS / ANALYZEETL
59. COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
Amazon SQS
• Managed message queue service
Apache Kafka
• High throughput distributed
messaging system
Amazon Kinesis Streams
• Managed stream storage +
processing
Amazon Kinesis Firehose
• Managed data delivery
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS
FILES
Transport
Logging
Stream
Devices
Sensors &
IoT platforms
AWS IoT STREAMS
IoT
Messaging
Message MESSAGES
Messaging
Amazon SQS
Apache Kafka
Amazon Kinesis
Streams
Amazon Kinesis
Firehose
Amazon Elasticsearch
Service
Amazon DynamoDB
Amazon S3
Amazon ElastiCache
Amazon RDS
HotWarm
SearchSQLNoSQLCache
File
Queue
Message & Stream Storage
CONSUMEPROCESS / ANALYZEETL
60. COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
Amazon SQS
• Managed message queue service
Apache Kafka
• High throughput distributed
messaging system
Amazon Kinesis Streams
• Managed stream storage +
processing
Amazon Kinesis Firehose
• Managed data delivery
Amazon DynamoDB
• Managed NoSQL database
• Tables can be stream-enabled
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS
FILES
Transport
Logging
Stream
Devices
Sensors &
IoT platforms
AWS IoT STREAMS
IoT
Messaging
Message MESSAGES
Messaging
Amazon SQS
Apache Kafka
Amazon Kinesis
Streams
Amazon Kinesis
Firehose
Amazon DynamoDB
Streams
Amazon Elasticsearch
Service
Amazon DynamoDB
Amazon S3
Amazon ElastiCache
Amazon RDS
Warm
SearchSQLNoSQLCache
File
Queue
Hot
Message & Stream Storage
CONSUMEPROCESS / ANALYZEETL
67. COLLECT STORE PROCESS / ANALYZE
Amazon QuickSight
Apps & Services
Analysis&visualization
Notebooks
IDE
API
Reference architecture
ETL CONSUME
Consume
68. COLLECT STORE PROCESS / ANALYZE
Amazon QuickSight
Apps & Services
Analysis&visualization
Notebooks
IDE
API
Reference architecture
ETL CONSUME
Applications & API
Consume
69. COLLECT STORE PROCESS / ANALYZE
Amazon QuickSight
Apps & Services
Analysis&visualization
Notebooks
IDE
API
Reference architecture
ETL CONSUME
Applications & API
Analysis and visualization
Consume
70. COLLECT STORE PROCESS / ANALYZE
Amazon QuickSight
Apps & Services
Analysis&visualization
Notebooks
IDE
API
Reference architecture
ETL CONSUME
Applications & API
Analysis and visualization
Notebooks
Consume
71. COLLECT STORE PROCESS / ANALYZE
Amazon QuickSight
Apps & Services
Analysis&visualization
Notebooks
IDE
API
Reference architecture
ETL CONSUME
Applications & API
Analysis and visualization
Notebooks
IDE
Consume
74. Multi-Stage Decoupled “Data Bus”
Multiple stages; storage decouples multiple processing stages.
Data → Store → Process → Store → Process
75. Multiple Stream Processing Applications Can Read from Amazon Kinesis
[Diagram: an Amazon Kinesis stream (process stage) is read both by AWS Lambda, which writes to Amazon DynamoDB, and by the Amazon Kinesis S3 Connector, which writes to Amazon S3 (store stage).]
76. Analysis Frameworks Can Read from Multiple Data Stores
[Diagram: data enters through Amazon Kinesis; AWS Lambda and the Amazon Kinesis S3 Connector store it in Amazon S3 and Amazon DynamoDB; Spark Streaming and Spark SQL on Amazon EMR then read from these stores to produce the answer.]
77. Real-time Analytics
[Diagram: an Amazon Kinesis data stream (or Apache Kafka) feeds the KCL, AWS Lambda, Spark Streaming on Amazon EMR, and Apache Storm; results flow to Amazon SNS for notifications and alerts, Amazon ML for real-time prediction, and Amazon ElastiCache (Redis), Amazon DynamoDB (with DynamoDB Streams), Amazon RDS, and Amazon ES for app state and KPIs.]
78. Case Study: Clickstream Analysis
Hearst Corporation monitors trending content for over 250 digital properties worldwide and processes more than 30TB of data per day, using an architecture that includes Amazon Kinesis and Spark running on Amazon EMR.
Store → Process | Analyze → Answers
81. Case Study: On-demand Big Data Analytics
Redfin uses Amazon EMR with Spot Instances, dynamically spinning Apache Hadoop clusters up and down, to perform large data transformations and deliver data to internal and external customers.
Store → Process → Store → Analyze → Answers
82. Architectural Principles
• Decoupled “data bus”: Data → Store → Process → Store → Analyze → Answers
• Use the right tool for the job: data structure, latency, throughput, access patterns
• Use Lambda architecture ideas: immutable (append-only) log; batch/speed/serving layers
• Leverage AWS managed services: scalable/elastic, available, reliable, secure, no/low admin
• Big data ≠ big cost
84. Building a Big Data Application
web clients
mobile clients
DBMS
corporate data center
Getting Started
85. Building a Big Data Application
web clients
mobile clients
DBMS
Amazon Redshift
Amazon
QuickSight
AWS cloud
corporate data center
Adding a data warehouse
86. Building a Big Data Application
web clients
mobile clients
DBMS
Raw data
Amazon Redshift
Staging Data
Amazon
QuickSight
AWS cloud
Bringing in Log Data
corporate data center
87. Building a Big Data Application
web clients
mobile clients
DBMS
Raw data
Amazon Redshift
Staging Data
Orc/Parquet
(Query optimized)
Amazon
QuickSight
AWS cloud
Extending your DW to S3
corporate data center
88. Building a Big Data Application
web clients
mobile clients
DBMS
Raw data
Amazon Redshift
Staging Data
Orc/Parquet
(Query optimized)
Amazon
QuickSight
Kinesis
Streams
AWS cloud
Adding a real-time layer
corporate data center
89. Building a Big Data Application
web clients
mobile clients
DBMS
Raw data
Amazon Redshift
Staging Data
Orc/Parquet
(Query optimized)
Amazon
QuickSight
Kinesis
Streams
AWS cloud
Adding predictive analytics
corporate data center
90. Building a Big Data Application
web clients
mobile clients
DBMS
Raw data
Amazon Redshift
Staging Data
Orc/Parquet
(Query optimized)
Amazon
QuickSight
Kinesis
Streams
AWS cloud
Adding encryption at rest with AWS KMS
corporate data center
AWS KMS
91. Building a Big Data Application
web clients
mobile clients
DBMS
Raw data
Amazon Redshift
Staging Data
Orc/Parquet
(Query optimized)
Amazon
QuickSight
Kinesis
Streams
AWS cloud
AWS KMS
VPC subnet
SSL/TLS
SSL/TLS
Protecting Data in Transit & Adding Network Isolation
corporate data center
92. The Evolution of Big Data Processing
• Descriptive (what happened before): dashboards; traditional query & reporting (Amazon Redshift, Amazon QuickSight, Amazon EMR)
• Real-time (what's happening now): clickstream analysis; ad bidding; streaming data (Amazon Kinesis, Amazon EC2, AWS Lambda, Amazon DynamoDB, Amazon EMR)
• Predictive (probability of "x" happening): inventory forecasting; fraud detection; recommendation engines (Amazon Machine Learning, Amazon EMR)
93. aws.amazon.com/big-data
AWS Courses and Self-paced Online Labs:
• Big Data Quest: learn at your own pace and practice working with AWS services for big data on qwikLABS (3 Hours | Online)
• Big Data on AWS: how to use AWS services to process data with Hadoop & create big data environments (3 Days | Classroom)
• Big Data Technology Fundamentals (FREE!): overview of AWS big data solutions for architects or data scientists new to big data (3 Hours | Online)
94. 2:00 PM Best Practices on Real-time Streaming Analytics
3:00 PM Break
3:15 PM Getting started with Amazon Machine Learning
4:15 PM Building your first big data application on AWS
Next…
95. Real-time Streaming Data on AWS
Deep Dive & Best Practices Using Amazon Kinesis,
Spark Streaming, AWS Lambda, and Amazon EMR
Steve Abraham
Solutions Architect, AWS
96. It’s All About the Pace
Batch processing: hourly server logs; weekly or monthly bills; daily website clickstream; daily fraud reports
Stream processing: real-time metrics; real-time spending alerts/caps; real-time clickstream analysis; real-time detection
97. Streaming Data Scenarios Across Verticals
Scenario columns: Accelerated Ingest-Transform-Load | Continuous Metrics Generation | Responsive Data Analysis
• Digital Ad Tech / Marketing: publisher and bidder data aggregation | advertising metrics like coverage, yield, and conversion | user engagement with ads, optimized bid/buy engines
• IoT: sensor and device telemetry data ingestion | operational metrics and dashboards | device operational intelligence and alerts
• Gaming: online data aggregation, e.g., top 10 players | massively multiplayer online game (MMOG) live dashboard | leaderboard generation, player-skill match
• Consumer Online: clickstream analytics | metrics like impressions and page views | recommendation engines, proactive care
98. Customer Use Cases
• Sonos runs near real-time streaming analytics on device data logs from their connected hi-fi audio equipment.
• Analyzing 30TB+ of clickstream data, enabling real-time insights for publishers.
• Glu Mobile collects billions of gaming event data points from millions of user devices in real time every single day.
• Nordstrom's recommendation team built an online stylist using Amazon Kinesis Streams and AWS Lambda.
99. Streaming Data Challenges: Variety & Velocity
• Streaming data comes in different types and formats: metering records, logs, and sensor data; JSON, CSV, TSV
• Records can vary in size from a few bytes to kilobytes or megabytes
• High velocity and continuous
Example records:
Metering record:
{ "payerId": "Joe", "productCode": "AmazonS3", "clientProductCode": "AmazonS3", "usageType": "Bandwidth", "operation": "PUT", "value": "22490", "timestamp": "1216674828" }
Common log entry:
127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326
MQTT record:
"SeattlePublicWater/Kinesis/123/Realtime" – 412309129140
Syslog entry:
<165>1 2003-10-11T22:14:15.003Z mymachine.example.com evntslog - ID47 [exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"] [examplePriority@32473 class="high"]
100. Two Main Processing Patterns
Stream processing (real time)
• Real-time response to events in data streams
Examples:
• Proactively detect hardware errors in device logs
• Notify when inventory drops below a threshold
• Fraud detection
Micro-batching (near real time)
• Near real-time operations on small batches of events in data streams
Examples:
• Aggregate and archive events
• Monitor performance SLAs
102. Amazon Kinesis: Streaming Data Made Easy
Services make it easy to capture, deliver and process streams on AWS
• Amazon Kinesis Streams: for technical developers; build your own custom applications that process or analyze streaming data
• Amazon Kinesis Firehose: for all developers and data scientists; easily load massive volumes of streaming data into Amazon S3, Amazon Redshift and Amazon Elasticsearch
• Amazon Kinesis Analytics: for all developers and data scientists; easily analyze data streams using standard SQL queries (coming soon)
103. Amazon Kinesis Firehose
Load massive volumes of streaming data into Amazon S3, Amazon Redshift and Amazon Elasticsearch.
• Zero administration: capture and deliver streaming data into Amazon S3, Amazon Redshift and Amazon Elasticsearch without writing an application or managing infrastructure.
• Direct-to-data-store integration: batch, compress, and encrypt streaming data for delivery into data destinations in as little as 60 seconds using simple configurations.
• Seamless elasticity: seamlessly scales to match data throughput without intervention.
Capture and submit streaming data to Firehose; Firehose loads it continuously into S3, Amazon Redshift and Amazon Elasticsearch; analyze it using your favorite BI tools.
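A minimal producer sketch against this API, assuming a pre-created, hypothetically named delivery stream:

import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")  # region is an assumption

record = {"sensor": "thermostat-1", "temp": 21.5}  # illustrative event

# Firehose buffers incoming records and delivers them to the configured
# destination (S3, Redshift, or Elasticsearch) without any servers to manage.
firehose.put_record(
    DeliveryStreamName="demo-delivery-stream",    # hypothetical, pre-created stream
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)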
105. • Streams are made of shards
• Each shard ingests up to 1MB/sec and 1,000 records/sec
• Each shard emits up to 2MB/sec
• All data is stored for 24 hours by default; storage can be extended for up to 7 days
• Scale Kinesis streams using the scaling utility
• Replay data inside the 24-hour window
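These shard and retention properties are all visible through the API; a short boto3 sketch (the stream name is an assumption):

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")  # region is an assumption

# Inspect the shards behind a stream (name is hypothetical).
desc = kinesis.describe_stream(StreamName="demo-stream")
for shard in desc["StreamDescription"]["Shards"]:
    print(shard["ShardId"], shard["HashKeyRange"])

# Extend retention from the 24-hour default to the 7-day maximum (168 hours).
kinesis.increase_stream_retention_period(
    StreamName="demo-stream",
    RetentionPeriodHours=168,
)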
106. Amazon Kinesis Firehose vs. Amazon Kinesis Streams
Amazon Kinesis Streams is for use cases that require custom processing, per incoming record, with sub-1-second processing latency, and a choice of stream processing frameworks.
Amazon Kinesis Firehose is for use cases that require zero administration, the ability to use existing analytics tools based on Amazon S3, Amazon Redshift and Amazon Elasticsearch, and a data latency of 60 seconds or higher.
108. Putting Data into Amazon Kinesis Streams
Determine your partition key strategy:
• Managed buffer or streaming MapReduce job
• Ensure high cardinality for your shards
Provision adequate shards:
• For ingress needs
• For egress needs of all consuming applications: if more than two simultaneous applications
• Include headroom for catching up with data in the stream
109. Putting Data into Amazon Kinesis
• Amazon Kinesis Agent (supports pre-processing): http://docs.aws.amazon.com/firehose/latest/dev/writing-with-agents.html
• Pre-batch before Puts for better efficiency; consider Flume or Fluentd as collectors/agents: see https://github.com/awslabs/aws-fluent-plugin-kinesis
• Make a tweak to your existing logging: log4j appender option, see https://github.com/awslabs/kinesis-log4j-appender
• AWS IoT: the Rules Engine can also route messages to AWS endpoints including Amazon Kinesis.
110. Putting Data into Amazon Kinesis Streams
Simple Put* interface to capture and store data in Streams:
• A provisioned entity called a Stream is composed of Shards
• Producers use a PUT call to store data in a Stream: PutRecord or PutRecords, with each record <= 1 MB
• A partition key supplied by the producer is used to distribute (MD5 hash) the PUTs across the hash key range of the Shards
• A unique sequence number is returned upon a successful PUT call
• An approximate arrival timestamp is affixed to each record
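A minimal sketch of this Put* interface with boto3, showing the partition key going in and the sequence number coming back; the stream and key names are illustrative:

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")  # region is an assumption

# A single PutRecord: the partition key is MD5-hashed to pick the target shard,
# and a unique sequence number comes back on success.
resp = kinesis.put_record(
    StreamName="demo-stream",        # hypothetical stream
    Data=b'{"event": "click", "page": "/home"}',
    PartitionKey="session-1234",
)
print(resp["ShardId"], resp["SequenceNumber"])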
111. Amazon Kinesis Producer Library
• Writes to one or more Amazon Kinesis streams with an automatic, configurable retry mechanism
• Collects records and uses PutRecords to write multiple records to multiple shards per request
• Aggregates user records to increase payload size and improve throughput
• Integrates seamlessly with the KCL to de-aggregate batched records
• Use the Amazon Kinesis Producer Library with AWS Lambda (New!)
• Submits Amazon CloudWatch metrics on your behalf to provide visibility into producer performance
112. Record Order and Multiple Shards
• Unordered processing: randomize the partition key to distribute events over many shards and use multiple workers
• Exact-order processing: control the partition key to ensure events are grouped into the same shard and read by the same worker
• Need both? Use a global sequence number
[Diagram: the producer gets a global sequence number, writes to an unordered stream, and consumers get event metadata for the campaign-centric and fraud-inspection streams.]
113. Sample Code for Scaling Shards
java -cp KinesisScalingUtils.jar-complete.jar -Dstream-name=MyStream -Dscaling-action=scaleUp -Dcount=10 -Dregion=eu-west-1 ScalingClient
Options:
• stream-name: the name of the stream to be scaled
• scaling-action: the action to be taken to scale; must be one of "scaleUp", "scaleDown" or "resize"
• count: number of shards by which to absolutely scale up or down, or resize to
See https://github.com/awslabs/amazon-kinesis-scaling-utils
114. Putting Data into Amazon Kinesis Streams
Determine your partition key strategy:
Managed Buffer:
• Care about a reliable, scalable way to capture data
• Defer all other aggregation to the consumer
• Generate random partition keys
• Ensure a high cardinality of partition keys with respect to shards, to spray evenly across available shards
Streaming Map-Reduce:
• Leverage partition keys as a natural way to aggregate data (e.g. partition keys per billing customer, per DeviceId, per stock symbol)
• Design partition keys to scale
• Be aware of "hot" partition keys or shards
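The two strategies differ only in how the PartitionKey is chosen; a small illustrative sketch:

import uuid

# Managed buffer: random keys with high cardinality spray records evenly
# across all available shards.
def random_partition_key():
    return uuid.uuid4().hex

# Streaming map-reduce: a natural key (here a hypothetical device ID) groups
# related records on one shard, at the risk of hot shards if one key dominates.
def natural_partition_key(device_id):
    return "device-{}".format(device_id)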
115. Putting Data into Amazon Kinesis Streams
Provision adequate shards:
• For ingress needs
• For egress needs of all consuming applications: if more than two simultaneous applications
• Include headroom for catching up with data in the stream
117. Amazon Kinesis Client Library
• Build Kinesis applications with the Kinesis Client Library (KCL)
• Open-source client library available for Java, Ruby, Python, and Node.js development
• Deploy on your EC2 instances
• A KCL application includes three components:
1. Record Processor Factory: creates the record processor
2. Record Processor: processing unit that processes data from a shard in Amazon Kinesis Streams
3. Worker: processing unit that maps to each application instance
118. State Management with Kinesis Client Library
• One record processor maps to one shard and processes data records from that shard
• One worker maps to one or more record processors
• Balances shard-worker associations when worker / instance counts change
• Balances shard-worker associations when shards split or merge
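The KCL automates shard iteration, checkpointing, and worker balancing; underneath, it drives the same shard-iterator API that a minimal raw boto3 consumer would use. A sketch, with the stream name and shard ID as assumptions:

import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")  # region is an assumption

# Read one shard from the oldest available record; the KCL automates this loop
# per shard, plus checkpointing and worker balancing.
it = kinesis.get_shard_iterator(
    StreamName="demo-stream",            # hypothetical stream
    ShardId="shardId-000000000000",
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]

while it:
    out = kinesis.get_records(ShardIterator=it, Limit=100)
    for rec in out["Records"]:
        print(rec["SequenceNumber"], rec["Data"])
    it = out.get("NextShardIterator")
    time.sleep(1)  # stay under the per-shard read limits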
119. Other Options
• Third-party connectors (for example, Splunk)
• AWS IoT Platform
• AWS Lambda
• Amazon EMR with Apache Spark, Pig or Hive
120. Apache Spark and Amazon Kinesis Streams
Apache Spark is an in-memory analytics cluster that uses RDDs for fast processing.
Spark Streaming can read directly from an Amazon Kinesis stream.
Amazon Software License linking: add the ASL dependency to your SBT/Maven project, artifactId = spark-streaming-kinesis-asl_2.10
Example: counting tweets on a sliding window:
KinesisUtils.createStream("twitter-stream")
  .filter(_.getText.contains("Open-Source"))
  .countByWindow(Seconds(5))
122. Using Spark Streaming with Amazon Kinesis Streams
1. Use Spark 1.6+ with the EMRFS consistent view option if you use Amazon S3 as storage for Spark checkpoints
2. Amazon DynamoDB table name: make sure there is only one instance of the application running with Spark Streaming
3. Enable Spark-based checkpoints
4. Make the number of Amazon Kinesis receivers a multiple of the number of executors so they are load-balanced
5. Keep total processing time less than the batch interval
6. Keep the number of executors the same as the number of cores per executor
7. Spark Streaming uses a default of 1 sec with the KCL
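For reference, the Scala example from the earlier slide can also be written against the PySpark Kinesis API. A minimal sketch, assuming the spark-streaming-kinesis-asl package is on the classpath and using illustrative stream/region/app names:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kinesis import KinesisUtils, InitialPositionInStream

sc = SparkContext(appName="tweet-counter")
ssc = StreamingContext(sc, 5)  # 5-second batch interval

# The KCL app name below also names the DynamoDB checkpoint table (tip 2 above).
lines = KinesisUtils.createStream(
    ssc, "tweet-counter", "twitter-stream",
    "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
    InitialPositionInStream.LATEST, 5)

# Count matching records over a 5-second sliding window.
lines.filter(lambda rec: "Open-Source" in rec).countByWindow(5, 5).pprint()

ssc.start()
ssc.awaitTermination()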
126. Conclusion
• Amazon Kinesis offers: a managed service to build applications, streaming data ingestion, and continuous processing
• Ingest and aggregate data using the Amazon Kinesis Producer Library
• Process data using the Amazon Kinesis Connector Library and open-source connectors
• Determine your partition key strategy
• Try out Amazon Kinesis at http://aws.amazon.com/kinesis/
127. • Technical documentation
• Amazon Kinesis Agent
• Amazon Kinesis Streams and Spark Streaming
• Amazon Kinesis Producer Library Best Practice
• Amazon Kinesis Firehose and AWS Lambda
• Building Near Real-Time Discovery Platform with Amazon Kinesis
• Public case studies
• Glu mobile – Real-Time Analytics
• Hearst Publishing – Clickstream Analytics
• How Sonos Leverages Amazon Kinesis
• Nordstrom Online Stylist
Reference
128. 3:00 PM Break
3:15 PM Getting started with Amazon Machine Learning
4:15 PM Building your first big data application on AWS
Next…
130. Agenda
• Machine learning and the data ecosystem
• Smart applications by example (and counterexample)
• Amazon Machine Learning (Amazon ML) features and
benefits
• Developing with Amazon ML
• Q&A
131. Three types of data-driven development
Retrospective
analysis and
reporting
Amazon Redshift,
Amazon RDS
Amazon S3
Amazon EMR
132. Three types of data-driven development
Retrospective
analysis and
reporting
Here-and-now
real-time processing
and dashboards
Amazon Kinesis
Amazon EC2
AWS Lambda
Amazon Redshift,
Amazon RDS
Amazon S3
Amazon EMR
133. Three types of data-driven development
Retrospective
analysis and
reporting
Here-and-now
real-time processing
and dashboards
Predictions
to enable smart
applications
Amazon Kinesis
Amazon EC2
AWS Lambda
Amazon Redshift,
Amazon RDS
Amazon S3
Amazon EMR
134. Machine learning and smart applications
Machine learning is the technology that
automatically finds patterns in your data and
uses them to make predictions for new data
points as they become available.
135. Machine learning and smart applications
Machine learning is the technology that
automatically finds patterns in your data and
uses them to make predictions for new data
points as they become available.
Your data + machine learning = smart applications
136. Smart applications by example
Based on what you
know about the user:
Will they use your
product?
137. Smart applications by example
Based on what you
know about the user:
Will they use your
product?
Based on what you
know about an order:
Is this order
fraudulent?
138. Smart applications by example
Based on what you
know about the user:
Will they use your
product?
Based on what you
know about an order:
Is this order
fraudulent?
Based on what you know
about a news article:
What other articles are
interesting?
139. And a few more examples…
• Fraud detection: detecting fraudulent transactions, filtering spam emails, flagging suspicious reviews, …
• Personalization: recommending content, predictive content loading, improving user experience, …
• Targeted marketing: matching customers and offers, choosing marketing campaigns, cross-selling and up-selling, …
• Content classification: categorizing documents, matching hiring managers and resumes, …
• Churn prediction: finding customers who are likely to stop using the service, upgrade targeting, …
• Customer support: predictive routing of customer emails, social media listening, …
140. Smart applications by counterexample
Dear Alex,
This awesome quadcopter is on sale
for just $49.99!
141. Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
  ON c.ID = o.customer
GROUP BY c.ID
HAVING o.date > GETDATE() - 30
We can start by sending the offer to all customers who placed an order in the last 30 days.
142. Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
  ON c.ID = o.customer
GROUP BY c.ID
HAVING O.CATEGORY = 'TOYS'
  AND o.date > GETDATE() - 30
…let's narrow it down to just customers who bought toys.
143. Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
  ON c.ID = o.customer
LEFT JOIN PRODUCTS P
  ON P.ID = O.PRODUCT
GROUP BY c.ID
HAVING o.category = 'toys'
  AND ((P.DESCRIPTION LIKE '%HELICOPTER%' AND O.DATE > GETDATE() - 60)
    OR (COUNT(*) > 2 AND SUM(o.price) > 200 AND o.date > GETDATE() - 30))
…and expand the query to customers who purchased other toy helicopters recently, or made several expensive toy purchases.
144. Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
ON c.ID = o.customer
LEFT JOIN products p
ON p.ID = o.product
GROUP BY c.ID
HAVING o.category = ‘toys’
AND ((p.description LIKE ‘%COPTER%’
AND o.date > GETDATE() - 60)
OR (COUNT(*) > 2
AND SUM(o.price) > 200
AND o.date > GETDATE() – 30)
)
…but what about
quadcopters?
145. Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
ON c.ID = o.customer
LEFT JOIN products p
ON p.ID = o.product
GROUP BY c.ID
HAVING o.category = ‘toys’
AND ((p.description LIKE ‘%copter%’
AND o.date > GETDATE() - 120)
OR (COUNT(*) > 2
AND SUM(o.price) > 200
AND o.date > GETDATE() – 30)
)
…maybe we should go
back further in time
146. Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
ON c.ID = o.customer
LEFT JOIN products p
ON p.ID = o.product
GROUP BY c.ID
HAVING o.category = ‘toys’
AND ((p.description LIKE ‘%copter%’
AND o.date > GETDATE() - 120)
OR (COUNT(*) > 2
AND SUM(o.price) > 200
AND o.date > GETDATE() – 40)
)
…tweak the query more
147. Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
ON c.ID = o.customer
LEFT JOIN products p
ON p.ID = o.product
GROUP BY c.ID
HAVING o.category = ‘toys’
AND ((p.description LIKE ‘%copter%’
AND o.date > GETDATE() - 120)
OR (COUNT(*) > 2
AND SUM(o.price) > 150
AND o.date > GETDATE() – 40)
)
…again
148. Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
ON c.ID = o.customer
LEFT JOIN products p
ON p.ID = o.product
GROUP BY c.ID
HAVING o.category = ‘toys’
AND ((p.description LIKE ‘%copter%’
AND o.date > GETDATE() - 90)
OR (COUNT(*) > 2
AND SUM(o.price) > 150
AND o.date > GETDATE() – 40)
)
…and again
149. Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
ON c.ID = o.customer
LEFT JOIN products p
ON p.ID = o.product
GROUP BY c.ID
HAVING o.category = ‘toys’
AND ((p.description LIKE ‘%copter%’
AND o.date > GETDATE() - 90)
OR (COUNT(*) > 2
AND SUM(o.price) > 150
AND o.date > GETDATE() – 40)
)
Use machine learning
technology to learn your
business rules from data!
150. Why aren’t there more smart applications?
1. Machine learning expertise is rare.
2. Building and scaling machine learning technology is hard.
3. Closing the gap between models and applications is time-consuming and expensive.
151. Building smart applications today
• Expertise: limited supply of data scientists; expensive to hire or outsource
• Technology: many choices, few mainstays; difficult to use and scale
• Operationalization: complex and error-prone data workflows; custom platforms and APIs; many moving pieces lead to custom solutions every time; reinventing the model lifecycle management wheel
153. Introducing Amazon Machine Learning
Easy-to-use, managed machine learning service built for developers
Robust, powerful machine learning technology based on Amazon’s internal systems
Create models using your data already stored in the AWS cloud
Deploy models to production in seconds
154. Easy-to-use and developer-friendly
Use the intuitive, powerful service console to build and explore your initial models
• Data retrieval
• Model training, quality evaluation, fine-tuning
• Deployment and management
Automate the model lifecycle with fully featured APIs and SDKs
• Java, Python, .NET, JavaScript, Ruby, PHP
Easily create smart iOS and Android applications with the AWS Mobile SDK
155. Powerful machine learning technology
Based on Amazon’s battle-hardened internal systems
Not just the algorithms:
• Smart data transformations
• Input data and model quality alerts
• Built-in industry best practices
Grows with your needs
• Train on up to 100 GB of data
• Generate billions of predictions
• Obtain predictions in batches or in real time
156. Integrated with the AWS data ecosystem
Access data stored in Amazon S3, Amazon Redshift, or MySQL databases in Amazon RDS
Output predictions to Amazon S3 for easy integration with your data flows
Use AWS Identity and Access Management (IAM) for fine-grained data access permission policies
157. Fully managed model and prediction services
End-to-end service, with no servers to provision and manage
One-click production model deployment
Programmatically query model metadata to enable automatic retraining workflows
Monitor prediction usage patterns with Amazon CloudWatch metrics
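A minimal boto3 sketch of that metadata query (the model ID is a hypothetical placeholder):
import boto3

ml = boto3.client('machinelearning')

# Inspect an existing model; Status and timestamps can drive retraining logic.
model = ml.get_ml_model(MLModelId='my_model')
print(model['Status'], model['LastUpdatedAt'])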
158. Pay-as-you-go and inexpensive
Data analysis, model training, and evaluation: $0.42 per instance hour
Batch predictions: $0.10 per 1,000 predictions
Real-time predictions: $0.10 per 1,000 predictions, plus an hourly capacity reservation charge
159. Three supported types of predictions
Binary classification
Predict the answer to a Yes/No question
Multiclass classification
Predict the correct category from a list
Regression
Predict the value of a numeric variable
164. Train your model
>>> import boto
>>> ml = boto.connect_machinelearning()
>>> model = ml.create_ml_model(
        ml_model_id='my_model',
        ml_model_type='REGRESSION',
        training_data_source_id='my_datasource')
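The call above uses the original boto library; in boto3 the same training flow looks roughly like the sketch below. The bucket, file names, and schema location are placeholders of ours, not from the deck:
import boto3

ml = boto3.client('machinelearning')

# Create a training datasource from a CSV in S3 (the schema file describes the columns).
ml.create_data_source_from_s3(
    DataSourceId='my_datasource',
    DataSpec={
        'DataLocationS3': 's3://examplebucket/training.csv',
        'DataSchemaLocationS3': 's3://examplebucket/training.csv.schema'
    },
    ComputeStatistics=True)

# Train a regression model against it.
ml.create_ml_model(
    MLModelId='my_model',
    MLModelType='REGRESSION',
    TrainingDataSourceId='my_datasource')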
170. Batch predictions
Asynchronous, large-volume prediction generation
Request through service console or API
Best for applications that deal with batches of data records
>>> import boto
>>> ml = boto.connect_machinelearning()
>>> model = ml.create_batch_prediction(
        batch_prediction_id='my_batch_prediction',
        batch_prediction_data_source_id='my_datasource',
        ml_model_id='my_model',
        output_uri='s3://examplebucket/output/')
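create_batch_prediction only starts the job; before reading the output from S3, the job has to finish. A sketch in boto3, reusing the IDs above:
import time
import boto3

ml = boto3.client('machinelearning')

# Batch jobs run asynchronously; poll until a terminal status is reached.
while True:
    job = ml.get_batch_prediction(BatchPredictionId='my_batch_prediction')
    if job['Status'] in ('COMPLETED', 'FAILED'):
        break
    time.sleep(30)
print(job['Status'], job.get('OutputUri'))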
171. Real-time predictions
Synchronous, low-latency, high-throughput prediction generation
Request through service API, server, or mobile SDKs
Best for interactive applications that deal with individual data records
>>> import boto
>>> ml = boto.connect_machinelearning()
>>> ml.predict(
        ml_model_id='my_model',
        predict_endpoint='example_endpoint',
        record={'key1': 'value1', 'key2': 'value2'})
{
    'Prediction': {
        'predictedValue': 13.284348,
        'details': {
            'Algorithm': 'SGD',
            'PredictiveModelType': 'REGRESSION'
        }
    }
}
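Before predict can be called, the model needs a real-time endpoint. A hedged boto3 sketch (hypothetical model ID; the endpoint takes a few minutes to become ready):
import boto3

ml = boto3.client('machinelearning')

# Create the real-time endpoint for the model (one-time setup).
endpoint = ml.create_realtime_endpoint(MLModelId='my_model')
url = endpoint['RealtimeEndpointInfo']['EndpointUrl']

# Request a single prediction against it.
result = ml.predict(
    MLModelId='my_model',
    Record={'key1': 'value1', 'key2': 'value2'},
    PredictEndpoint=url)
print(result['Prediction']['predictedValue'])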
173. Batch predictions with EMR
Raw data in S3 → process data with EMR → aggregated data in S3 → get predictions with the Amazon ML batch API → predictions in S3 → your application
174. Batch predictions with Amazon Redshift
Structured data in Amazon Redshift → get predictions with the Amazon ML batch API → predictions in S3 → load predictions into Amazon Redshift, or read prediction results directly from S3 → your application
175. Real-time predictions for interactive applications
Your application → get predictions with the Amazon ML real-time API → Amazon ML service
179. Building Your First Big Data
Application on AWS
Radhika Ravirala
Solutions Architect
180. Your First Big Data Application on AWS
COLLECT → STORE → PROCESS → ANALYZE & VISUALIZE
181. Your First Big Data Application on AWS
COLLECT: Amazon Kinesis Firehose
STORE: Amazon S3
PROCESS: Amazon EMR with Spark & Hive
ANALYZE & VISUALIZE: Amazon Redshift and Amazon QuickSight
184. Data Storage with Amazon S3
Download all the CLI steps: http://bit.ly/aws-big-data-steps
Create an Amazon S3 bucket to store the data collected with Amazon Kinesis Firehose:
aws s3 mb s3://YOUR-S3-BUCKET-NAME
185. Access Control with IAM
Create an IAM role to allow Firehose to write to the S3 bucket
firehose-policy.json:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Principal": {"Service": "firehose.amazonaws.com"},
"Action": "sts:AssumeRole"
}
}
186. Access Control with IAM
Create an IAM role to allow Firehose to write to the S3 bucket
s3-rw-policy.json:
{ "Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::YOUR-S3-BUCKET-NAME",
"arn:aws:s3:::YOUR-S3-BUCKET-NAME/*"
]
} }
187. Access Control with IAM
Create an IAM role to allow Firehose to write to the S3 bucket
aws iam create-role --role-name firehose-demo \
  --assume-role-policy-document file://firehose-policy.json
Copy the value of "Arn" in the output,
e.g., arn:aws:iam::123456789:role/firehose-demo
aws iam put-role-policy --role-name firehose-demo \
  --policy-name firehose-s3-rw \
  --policy-document file://s3-rw-policy.json
188. Data Collection with Amazon Kinesis Firehose
Create a Firehose delivery stream for incoming log data
aws firehose create-delivery-stream \
  --delivery-stream-name demo-firehose-stream \
  --s3-destination-configuration \
  'RoleARN=YOUR-FIREHOSE-ARN,BucketARN="arn:aws:s3:::YOUR-S3-BUCKET-NAME",Prefix=firehose/,BufferingHints={IntervalInSeconds=60},CompressionFormat=GZIP'
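Stream creation takes a short while; a boto3 sketch (same stream name) to confirm the stream is ready before writing to it:
import boto3

firehose = boto3.client('firehose')

# Status moves from CREATING to ACTIVE once the stream is ready.
desc = firehose.describe_delivery_stream(DeliveryStreamName='demo-firehose-stream')
print(desc['DeliveryStreamDescription']['DeliveryStreamStatus'])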
189. Data Processing with Amazon EMR
Launch an Amazon EMR cluster with Spark and Hive
aws emr create-cluster \
  --name "demo" \
  --release-label emr-4.5.0 \
  --instance-type m3.xlarge \
  --instance-count 2 \
  --ec2-attributes KeyName=YOUR-AWS-SSH-KEY \
  --use-default-roles \
  --applications Name=Hive Name=Spark Name=Zeppelin-Sandbox
Record your ClusterId from the output.
190. Data Analysis with Amazon Redshift
Create a single-node Amazon Redshift data warehouse:
aws redshift create-cluster \
  --cluster-identifier demo \
  --db-name demo \
  --node-type dc1.large \
  --cluster-type single-node \
  --master-username master \
  --master-user-password YOUR-REDSHIFT-PASSWORD \
  --publicly-accessible \
  --port 8192
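Cluster creation takes several minutes; a boto3 sketch (same cluster identifier) that waits for the cluster and prints the endpoint used in the psql step later:
import boto3

redshift = boto3.client('redshift')

# Blocks, polling on our behalf, until the cluster reaches the "available" state.
redshift.get_waiter('cluster_available').wait(ClusterIdentifier='demo')

cluster = redshift.describe_clusters(ClusterIdentifier='demo')['Clusters'][0]
print(cluster['Endpoint']['Address'], cluster['Endpoint']['Port'])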
192. Weblogs – Common Log Format (CLF)
75.35.230.210 - - [20/Jul/2009:22:22:42 -0700] "GET /images/pigtrihawk.jpg HTTP/1.1" 200 29236 "http://www.swivel.com/graphs/show/1163466" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.11) Gecko/2009060215 Firefox/3.0.11 (.NET CLR 3.5.30729)"
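For orientation, the nine fields of this "combined" log line (IP, identity, user, timestamp, request, status, size, referrer, user agent) can be pulled apart with a regular expression; a short Python sketch, with a pattern of our own rather than from the deck:
import re

# Common Log Format plus referrer and user agent (the "combined" format).
CLF = re.compile(r'(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\S+) (\S+) "([^"]*)" "([^"]*)"')

line = ('75.35.230.210 - - [20/Jul/2009:22:22:42 -0700] '
        '"GET /images/pigtrihawk.jpg HTTP/1.1" 200 29236 '
        '"http://www.swivel.com/graphs/show/1163466" "Mozilla/5.0 (...)"')

ip, ident, user, when, request, status, size, referrer, agent = CLF.match(line).groups()
print(ip, when, request, status, size)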
193. Writing into Amazon Kinesis Firehose
Download the demo weblog: http://bit.ly/aws-big-data
Open Python and run the following code to import the log into the stream:
import boto3

firehose = boto3.client('firehose')

with open('weblog', 'r') as f:
    for line in f:
        firehose.put_record(
            DeliveryStreamName='demo-firehose-stream',
            Record={'Data': line}
        )
        print('Record added')
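put_record makes one API call per log line; for the full weblog it is much faster to batch writes with PutRecordBatch, which accepts up to 500 records per call. A hedged sketch:
import boto3

firehose = boto3.client('firehose')

def send_batches(path, stream, batch_size=500):
    # Firehose accepts up to 500 records (4 MB total) per PutRecordBatch call.
    batch = []
    with open(path, 'r') as f:
        for line in f:
            batch.append({'Data': line})
            if len(batch) == batch_size:
                # Production code should also check FailedPutCount in the response.
                firehose.put_record_batch(DeliveryStreamName=stream, Records=batch)
                batch = []
    if batch:
        firehose.put_record_batch(DeliveryStreamName=stream, Records=batch)

send_batches('weblog', 'demo-firehose-stream')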
195. Apache Spark
• Fast, general-purpose engine for large-scale data processing
• Write applications quickly in Java, Scala, or Python
• Combine SQL, streaming, and complex analytics
196. Spark SQL
Spark's module for working with structured data using SQL
Run unmodified Hive queries on existing data
197. Apache Zeppelin
• Web-based notebook for interactive analytics
• Multiple language backend
• Apache Spark integration
• Data visualization
• Collaboration
https://zeppelin.incubator.apache.org/
198. View the Output Files in Amazon S3
After about 1 minute, you should see files in your S3
bucket:
aws s3 ls s3://YOUR-S3-BUCKET-NAME/firehose/ --recursive
199. Connect to Your EMR Cluster and Zeppelin
aws emr describe-cluster --cluster-id YOUR-EMR-CLUSTER-ID
Copy the MasterPublicDnsName. Use port forwarding so you can access Zeppelin at http://localhost:18890 on your local machine.
ssh -i PATH-TO-YOUR-SSH-KEY -L 18890:localhost:8890 \
  hadoop@YOUR-EMR-DNS-NAME
Open Zeppelin with your local web browser and create a new "Note":
http://localhost:18890
200. Exploring the Data in Amazon S3 using Spark
Download the Zeppelin notebook: http://bit.ly/aws-big-data-zeppelin
// Load all the files from S3 into an RDD
val accessLogLines = sc.textFile("s3://YOUR-S3-BUCKET-NAME/firehose/*/*/*/*")
// Count the lines
accessLogLines.count
// Print one line as a string
accessLogLines.first
// Lines are delimited by spaces, so split them into fields
var accessLogFields = accessLogLines.map(_.split(" ").map(_.trim))
// Print the fields of a line
accessLogFields.first
201. Combine Fields: "A, B, C" → "A B C"
// Rejoin quoted strings and bracketed timestamps that the space-split broke apart
// (the escaped quotes below are reconstructed; the extraction garbled them).
var accessLogColumns = accessLogFields
  .map(arrayOfFields => { var temp1 = ""; for (field <- arrayOfFields) yield {
    var temp2 = ""
    if (temp1.replaceAll("\\[", "\"").startsWith("\"") && !temp1.endsWith("\""))
      temp1 = temp1 + " " + field.replaceAll("[\\[\\]]", "\"")
    else temp1 = field.replaceAll("[\\[\\]]", "\"")
    temp2 = temp1
    if (temp1.endsWith("\"")) temp1 = ""
    temp2
  }})
  .map(fields => fields.filter(field => (field.startsWith("\"") &&
    field.endsWith("\"")) || !field.startsWith("\"")))
  .map(fields => fields.map(_.replaceAll("\"", "")))
202. Create a Data Frame and Transform the Data
import java.sql.Timestamp
import java.net.URL
case class accessLogs(
ipAddress: String,
requestTime: Timestamp,
requestMethod: String,
requestPath: String,
requestProtocol: String,
responseCode: String,
responseSize: String,
referrerHost: String,
userAgent: String
)
203. Create a Data Frame and Transform the Data
val accessLogsDF = accessLogColumns.map(line => {
  var ipAddress = line(0)
  var requestTime = new Timestamp(new java.text.SimpleDateFormat(
    "dd/MMM/yyyy:HH:mm:ss Z").parse(line(3)).getTime())
  var requestString = line(4).split(" ").map(_.trim())
  var requestMethod = if (line(4).toString() != "-") requestString(0) else ""
  var requestPath = if (line(4).toString() != "-") requestString(1) else ""
  var requestProtocol = if (line(4).toString() != "-") requestString(2) else ""
  var responseCode = line(5).replaceAll("-", "")
  var responseSize = line(6).replaceAll("-", "")
  var referrerHost = line(7)
  var userAgent = line(8)
  accessLogs(ipAddress, requestTime, requestMethod, requestPath,
    requestProtocol, responseCode, responseSize, referrerHost, userAgent)
}).toDF()
204. Create an External Table Backed by Amazon S3
%sql
CREATE EXTERNAL TABLE access_log_processed
(
  ip_address String,
  request_time Timestamp,
  request_method String,
  request_path String,
  request_protocol String,
  response_code String,
  response_size String,
  referrer_host String,
  user_agent String
)
PARTITIONED BY (year STRING, month STRING, day STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION 's3://YOUR-S3-BUCKET-NAME/access-log-processed'
205. Configure Hive Partitioning and Compression
// Set up Hive's "dynamic partitioning"
%sql
SET hive.exec.dynamic.partition=true
// Compress output files on Amazon S3 using Gzip
%sql
SET hive.exec.compress.output=true
%sql
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec
%sql
SET io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec
%sql
SET hive.exec.dynamic.partition.mode=nonstrict;
207. Query the Data Using Spark SQL
// Check the count of records
%sql
select count(*) from access_log_processed
// Fetch the first 10 records
%sql
select * from access_log_processed limit 10
208. View the Output Files in Amazon S3
Leave Zeppelin and go back to the console…
List the partition prefixes and output files:
aws s3 ls s3://YOUR-S3-BUCKET-NAME/access-log-processed/
--recursive
210. Connect to Amazon Redshift
Using the PostgreSQL CLI:
psql -h YOUR-REDSHIFT-ENDPOINT -p 8192 -U master demo
Or use any JDBC or ODBC SQL client with the PostgreSQL 8.x drivers or native Amazon Redshift support
• Aginity Workbench for Amazon Redshift
• SQL Workbench/J
211. Create an Amazon Redshift Table to Hold Your Data
CREATE TABLE accesslogs
(
host_address varchar(512),
request_time timestamp,
request_method varchar(5),
request_path varchar(1024),
request_protocol varchar(10),
response_code Int,
response_size Int,
referrer_host varchar(1024),
user_agent varchar(512)
)
DISTKEY(host_address)
SORTKEY(request_time);
212. Loading Data into Amazon Redshift
The COPY command loads files in parallel from Amazon S3:
COPY accesslogs
FROM 's3://YOUR-S3-BUCKET-NAME/access-log-processed'
CREDENTIALS
'aws_iam_role=arn:aws:iam::YOUR-AWS-ACCOUNT-ID:role/ROLE-NAME'
DELIMITER '\t'
MAXERROR 0
GZIP;
213. Amazon Redshift Test Queries
-- Find the distribution of response codes over days
SELECT TRUNC(request_time), response_code, COUNT(1)
FROM accesslogs GROUP BY 1,2 ORDER BY 1,3 DESC;
-- Count the 404 (PAGE NOT FOUND) status codes
SELECT COUNT(1) FROM accesslogs WHERE response_code = 404;
-- Show the most requested PAGE NOT FOUND path
SELECT TOP 1 request_path, COUNT(1) FROM accesslogs
WHERE response_code = 404 GROUP BY 1 ORDER BY 2 DESC;
215. Automating the Big Data Application
Amazon Kinesis Firehose → Amazon S3 → S3 event notification → AWS Lambda passes the list of new objects → Spark job on Amazon EMR → Amazon S3 → Amazon Redshift (written using spark-redshift) → Amazon QuickSight
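A sketch of the Lambda piece of this pipeline (the cluster ID and script path below are hypothetical placeholders): triggered by the S3 event notification, it forwards the new object keys to a Spark step on the EMR cluster:
import boto3

emr = boto3.client('emr')

def lambda_handler(event, context):
    # Collect the S3 objects named in the event notification
    # (keys arrive URL-encoded; decode them in production code).
    objects = ['s3://%s/%s' % (r['s3']['bucket']['name'], r['s3']['object']['key'])
               for r in event['Records']]
    # Submit a Spark step to the running EMR cluster.
    emr.add_job_flow_steps(
        JobFlowId='YOUR-EMR-CLUSTER-ID',
        Steps=[{
            'Name': 'process-new-objects',
            'ActionOnFailure': 'CONTINUE',
            'HadoopJarStep': {
                'Jar': 'command-runner.jar',
                'Args': ['spark-submit',
                         's3://YOUR-S3-BUCKET-NAME/process.py'] + objects
            }
        }])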
216. Learn from big data experts
blogs.aws.amazon.com/bigdata