Big Data Solutions Day
Introductory Session
Wesley Wilk
Solutions Architect, AWS
12:30 PM Introductory Session
2:00 PM Best Practices on Real-time Streaming Analytics
3:00 PM Break
3:15 PM Getting started with Amazon Machine Learning
4:15 PM Building your first big data application on AWS
Today
Thank you to our Sponsor
$10B+
1,000,000+ Active Customers Per Month
AWS In 2016:
64%
YOY Growth
Q1 2015 to Q1 2016
10x
More Compute Capacity In Use
Than the Next 14 Providers, Combined
Infrastructure
Intelligence
Harness data generated from
your systems and infrastructure
Advanced
Analytics
Anticipate future behaviors and
conduct what-if analysis
Largest Ecosystem of ISVs
Solutions vetted by the AWS Partner Competency Program
Data
Integration
Move, synchronize, cleanse,
and manage data
Data Analysis &
Visualization
Turn data into actionable insight
and enhance decision making
Largest Ecosystem of Integrators
Qualified consulting partners with AWS big data expertise
North America
Europe, Middle East and Africa
Asia Pacific and Japan
http://aws.analytic.business
Big Data Analytics in
AWS Marketplace
Thousands of products pre-integrated with the AWS Cloud.
290 specific to big data
Analysis & Visualization
Solutions are pre-configured and ready to run
on AWS
Advanced Analytics
Deploy when you need it, 1-Click launch
in multiple regions around the world
Metered pricing by the hour. Pay only for
what you use. Volume licensing available
Data Integration
Extract valuable information from your
historical and current data
Predict future business performance; location,
text, social, sentiment analysis
Extract, migrate, or prepare and clean
your data for accurate analysis
Proven Customer Success
The vast majority of big data use cases deployed in the
cloud today run on AWS.
Volume
Velocity
Variety
Big Data is Breaking Traditional IT Infrastructure
-  Too many tools to set up, manage & scale
-  Unsustainable costs
-  Difficult & undifferentiated heavy
lifting just to get started
-  New & expensive skills
-  Bigger responsibility (sensitive data)
The Evolution of Big Data Processing
Descriptive: what happened before. Dashboards; traditional query & reporting.
Real-time: what's happening now. Clickstream analysis; ad bidding; streaming data.
Predictive: probability of "x" happening. Inventory forecasting; fraud detection; recommendation engines.
Big Data on AWS
1.  Broadest & Deepest Functionality
2.  Computational Power that is Second to None
3.  Petabyte-scale Analytics Made Affordable
4.  Easiest & Most Powerful Tools for Migrating Data
5.  Security You Can Trust
1. Broadest & Deepest Functionality
Big Data
Storage
Data
Warehousing
Real-time
Streaming
Distributed Analytics
(Hadoop, Spark, Presto)
NoSQL
Databases
Business
Intelligence
Relational
Databases
Internet of
Things (IoT)
Machine
Learning
Serverless
Compute
Elasticsearch
Core Components for
Big Data Workloads
Big Data Storage for Virtually All AWS Services
Store anything
Object storage
Scalable
99.999999999% durability
Extremely low cost
Amazon S3
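That durability figure is easier to internalize with a quick back-of-the-envelope model (a simplification assuming each object independently has the stated annual survival probability):

```python
def expected_annual_losses(num_objects, durability=0.99999999999):
    """Expected number of objects lost per year if each object
    independently survives a year with probability `durability`
    (eleven nines, per the slide)."""
    return num_objects * (1.0 - durability)

# With 10,000,000 objects you'd expect ~0.0001 losses per year:
# on average one lost object roughly every 10,000 years.
losses = expected_annual_losses(10_000_000)
```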
Real-time processing
High throughput; elastic
Easy to use
Integration with S3, EMR, Redshift,
DynamoDB
Amazon
Kinesis
Real-time Streaming Data
Amazon Kinesis
Streams
•  For Technical Developers
•  Build your own custom
applications that process
or analyze streaming
data
Amazon Kinesis
Firehose
•  For all developers, data
scientists
•  Easily load massive
volumes of streaming data
into S3, Amazon Redshift
and Amazon Elasticsearch
Amazon Kinesis
Analytics
•  For all developers, data
scientists
•  Easily analyze data
streams using standard
SQL queries
•  Coming soon
Amazon Kinesis: Streaming Data Made Easy
Services make it easy to capture, deliver and process streams on AWS
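As a rough sketch of what a custom Kinesis producer looks like, the snippet below shapes events into the entry format the `PutRecords` API expects; the event fields and `key_fn` are illustrative assumptions, and the `send` helper needs boto3 and AWS credentials to actually run:

```python
import json

def build_records(events, key_fn):
    """Shape raw events into the entry format Kinesis PutRecords expects:
    a Data blob plus a PartitionKey, which determines the target shard."""
    return [
        {"Data": json.dumps(event).encode("utf-8"), "PartitionKey": key_fn(event)}
        for event in events
    ]

def send(stream_name, records):
    """Illustration only: requires boto3 and AWS credentials to run."""
    import boto3
    client = boto3.client("kinesis")
    return client.put_records(StreamName=stream_name, Records=records)
```

Keying by a stable attribute (here, a device ID) keeps all of one device's records on one shard, preserving per-device ordering.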
Managed Hadoop framework
Spark, Presto, HBase, Tez, Hive, etc.
Cost effective; Easy to use
On-demand and spot pricing
HDFS & S3 File systems
Amazon EMR
Distributed Analytics
NoSQL Database
Seamless scalability
Zero admin
Single digit millisecond latency
Amazon
DynamoDB
Fast & Flexible NoSQL Database Service
WRITES
Continuously replicated to 3 AZs
Quorum acknowledgment
Persisted to disk (custom SSD)
READS
Strongly or eventually consistent
No trade-off in latency
Durable Low Latency – At Scale
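The quorum idea behind those guarantees can be sketched in a few lines (an illustrative simplification of majority-quorum replication, not DynamoDB's actual implementation):

```python
REPLICAS = 3  # each item is replicated to three Availability Zones

def write_durable(acks_received, replicas=REPLICAS):
    """A write is acknowledged once a quorum (majority) of replicas
    has persisted it - here, 2 of 3."""
    return acks_received > replicas // 2

def read_quorum(strongly_consistent, replicas=REPLICAS):
    """Strongly consistent reads consult a quorum, so they always overlap
    an acknowledged write; eventually consistent reads may hit one replica."""
    return (replicas // 2 + 1) if strongly_consistent else 1
```

Because a write quorum (2) and a read quorum (2) out of 3 always overlap in at least one replica, a strongly consistent read is guaranteed to observe the latest acknowledged write.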
Relational data warehouse
Massively parallel; Petabyte scale
Fully managed
HDD and SSD Platforms
$1,000/TB/Year; start at $0.25/hour
Amazon
Redshift
Fully Managed Petabyte-scale Data Warehouse
Scale from 160GB to 2PB
Push-button Global DR
Built in end-to-end security
Amazon Redshift has security built-in
SSL to secure data in transit
Encryption to secure data at rest
•  AES-256; hardware accelerated
•  All blocks on disks and in Amazon S3 encrypted
•  HSM Support
No direct access to compute nodes
Audit logging, AWS CloudTrail, AWS KMS
integration
Amazon VPC support
SOC 1/2/3, PCI-DSS Level 1, FedRAMP, HIPAA
[Diagram: SQL clients and BI tools connect via JDBC/ODBC to the Amazon Redshift leader node from the customer VPC. Compute nodes (each with 16 cores, 128GB RAM, and 16TB disk) run in an internal VPC on 10 GigE (HPC) networking; ingestion, backup, and restore flow between the compute nodes and Amazon S3 / Amazon DynamoDB.]
Easy to use, managed machine learning service
built for developers
Robust, powerful machine learning technology
based on Amazon’s internal systems
Create models using your data already stored in the
AWS cloud
Deploy models to production in seconds
Amazon
Machine Learning
Machine Learning
Simple Model Tuning
Sample Use Cases
Fraud detection Detecting fraudulent transactions, filtering spam emails,
flagging suspicious reviews, …
Personalization Recommending content, predictive content loading,
improving user experience, …
Targeted marketing Matching customers and offers, choosing marketing
campaigns, cross-selling and up-selling, …
Content classification Categorizing documents, matching hiring managers and
resumes, …
Churn prediction Finding customers who are likely to stop using the
service, free-tier upgrade targeting, …
Customer support Predictive routing of customer emails, social media
listening, …
2. Computational Power that is Second to None
Over 2x computational instance types as any other cloud
platform to address a wide range of big data use cases.
Compute
Optimized
(C3, C4)
Memory
Optimized
(R3, X1)
GPU
Instances
(G2)
Storage
Optimized
(I2, D2)
Plus general purpose (M4), low cost burstable (T2), and dedicated instances
Intel® Processor Technologies
Intel® AVX – Dramatically increases performance for highly parallel HPC workloads such as life science engineering, data mining, financial analysis, media processing
Intel® AES-NI – Enhances security with new encryption instructions that reduce the performance penalty associated with encrypting/decrypting data
Intel® Turbo Boost Technology – Increases computing power with performance that adapts to spikes in workloads
Intel® Transactional Synchronization Extensions (TSX) – Enables execution of independent transactions to accelerate throughput
P-state & C-state control – Provides granular performance tuning for cores and sleep states to improve overall application performance
3. Affordable Petabyte-scale Analytics
AWS helps customers maximize the value of Big Data
investments while reducing overall IT costs
Secure,
Highly Durable storage
$28.16 / TB / month
Data
Archiving
$7.16 / TB / month
Real-time
streaming data load
$0.035 / GB
10-node
Spark Cluster
$0.15 / hr
Petabyte-scale
Data Warehouse
$0.25 / hr
Amazon Glacier | Amazon S3 | Amazon Redshift | Amazon EMR | Amazon Kinesis
NASDAQ migrated to Amazon Redshift, achieving cost
savings of 57% compared to legacy system
Volume
Avg 5.5B rows per day.
Peak of 14B rows
Performance
Queries running faster.
Loads finish before 11PM
Scale
Loading more data than
ever stored by legacy DWH
4. Easy, Powerful Tools for Migrating Data
AWS provides the broadest range of tools for easy, fast,
secure data movement to and from the AWS cloud.
AWS Direct
Connect
ISV
Connectors
Amazon
Kinesis
Firehose
Amazon S3
Transfer
Acceleration
AWS Storage
Gateway
Database
Migration
Service
AWS
Snowball
What is Snowball? Petabyte scale data transport
E-ink shipping
label
Ruggedized
case
“8.5G Impact”
All data encrypted
end-to-end
50TB & 80TB
10G network
Rain & dust
resistant
Tamper-resistant
case & electronics
Start your first migration in 10 minutes or less
Keep your apps running during the migration
Replicate within, to, or from Amazon EC2 or Amazon RDS
Move data to the same or different database engine
AWS
Database Migration
Service
5. Security & Compliance You Can Trust
•  AWS manages 1800+ security controls so you don’t have to
•  You benefit from an environment built for the most security sensitive
organizations
•  You get to define the right security controls for your workload
sensitivity
•  You always have full ownership and control of your data
FINRA handles approximately 75 billion market events every
day to build a holistic picture of trading in the U.S.
Deter
misconduct by
enforcing the rules
Detect
and prevent wrongdoing
in the U.S. markets
Discipline
those who break
the rules
Broadest Services to Secure Applications
NETWORKING
IDENTITY
ENCRYPTION
COMPLIANCE
Virtual
Private Cloud
Web Application
Firewall
IAM Active Directory
Integration
SAML Federation
KMS Cloud HSM Server-side
Encryption
Encryption
SDK
Service Catalog CloudTrail Config Config Rules Inspector
Key AWS Certifications and Assurance Programs
Design Patterns &
Architectures
Amazon Web Services
Time to Answer (Latency)
Throughput
collect store
process /
analyze
consume /
visualize
data answers
Time to Answer (Latency)
Throughput
Cost
Simplify Big Data Processing
Amazon SQS apps
Streaming
Amazon Kinesis
Analytics
Amazon KCL
apps
AWS Lambda
Amazon Redshift
COLLECT | STORE | PROCESS / ANALYZE | CONSUME
Amazon Machine
Learning
Presto
Amazon
EMR
Amazon Elasticsearch
Service
Apache Kafka
Amazon SQS
Amazon Kinesis
Streams
Amazon Kinesis
Firehose
Amazon DynamoDB
Amazon S3
Amazon ElastiCache
Amazon RDS
Amazon DynamoDB
Streams
Hot | Hot | Warm
Fast | Slow | Fast
Batch
Message
Interactive
Stream
ML
Search | SQL | NoSQL | Cache
File
Queue
Stream
Amazon EC2
Amazon EC2
Mobile apps
Web apps
Devices
Messaging
Message
Sensors &
IoT platforms
AWS IoT
Data centers
AWS Direct
Connect
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
RECORDS
DOCUMENTS
FILES
MESSAGES
STREAMS
Amazon QuickSight
Apps & Services
Analysis & visualization
Notebooks
IDE
API
Reference architecture
Logging
IoT
Applications
Transport
Messaging
ETL
COLLECT | STORE | PROCESS / ANALYZE | CONSUME
Mobile apps
Web apps
Devices
Messaging
Message
Sensors &
IoT platforms
AWS IoT
Data centers
AWS Direct
Connect
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
Reference architecture
Logging
IoT
Applications
Transport
Messaging
ETL
Collect
Applications
Logging
Transport
Messaging
IOT
COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
Database Database records
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS
FILES
Transport
Search
File Store
Search documents
Log files
Logging
Messaging
Message MESSAGES
Messaging
Queue Messaging events
Devices
Sensors &
IoT platforms
AWS IoT STREAMS
IoT
Stream
storage
Devices / sensors /
IoT stream
Types of Data
ETL | PROCESS / ANALYZE | CONSUME
COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
Amazon S3
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS
FILES
Transport
Logging
Messaging
Message MESSAGES
Messaging
Queue
Devices
Sensors &
IoT platforms
AWS IoT STREAMS
IoT
Stream
storage
Amazon Elasticsearch
Service
Amazon DynamoDB
Amazon S3
Amazon ElastiCache
Amazon RDS
Hot | Warm
File
SQL
Cache
NoSQL
Search
Storage
ETL | PROCESS / ANALYZE | CONSUME
COLLECT STORE
Mobile apps
Web apps
Data centers
AWS Direct
Connect
RECORDS
Reference architecture
Applications
Amazon SQS
•  Managed message queue service
Apache Kafka
•  High throughput distributed
messaging system
Amazon Kinesis Streams
•  Managed stream storage +
processing
Amazon Kinesis Firehose
•  Managed data delivery
Amazon DynamoDB
•  Managed NoSQL database
•  Tables can be stream-enabled
AWS Import/Export
Snowball
Logging
Amazon
CloudWatch
AWS
CloudTrail
DOCUMENTS
FILES
Transport
Logging
Stream
Devices
Sensors &
IoT platforms
AWS IoT STREAMS
IoT
Messaging
Message MESSAGES
Messaging
Amazon SQS
Apache Kafka
Amazon Kinesis
Streams
Amazon Kinesis
Firehose
Amazon DynamoDB
Streams
Amazon Elasticsearch
Service
Amazon DynamoDB
Amazon S3
Amazon ElastiCache
Amazon RDS
Warm
Search | SQL | NoSQL | Cache
File
Queue
Hot
Message & Stream Storage
ETL | PROCESS / ANALYZE | CONSUME
Reference architecture
COLLECT STORE ETL
Amazon SQS apps
Streaming
Amazon Kinesis
Analytics
Amazon KCL
apps
AWS Lambda
Amazon Redshift
PROCESS / ANALYZE
Amazon Machine
Learning
Presto
Amazon
EMR
Fast | Slow | Fast
Batch
Message
Interactive
Stream
ML
Amazon EC2
Amazon EC2
Tools and Frameworks
Machine Learning
•  Amazon ML, Amazon EMR (Spark ML)
Interactive
•  Amazon Redshift, Amazon EMR (Presto, Spark)
Batch
•  Amazon EMR (MapReduce, Hive, Pig, Spark)
Messaging
•  Amazon SQS application on Amazon EC2
Streaming
•  Micro-batch: Spark Streaming, KCL
•  Real-time: Amazon Kinesis Analytics, Storm, AWS
Lambda, KCL
COLLECT STORE PROCESS / ANALYZE
Amazon QuickSight
Apps & Services
Analysis & visualization
Notebooks
IDE
API
Reference architecture
ETL CONSUME
Applications & API
Analysis and visualization
Notebooks
IDE
Consume
Storage Compute
•  Decoupled “data bus”
•  Completely autonomous components
•  Independently scale storage & compute resources on-demand
Collect → Store → Process | Analyze → Consume
Multi-Stage Decoupled “Data Bus”
Multi-Stage Decoupled “Data Bus”
Multiple stages
Storage decouples multiple processing stages
Data → Store → Process → Store → Process
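A minimal sketch of the pattern: each stage reads from one store, writes to the next, and shares no state, so stages can scale and fail independently (store and field names are illustrative):

```python
def process_stage(store_in, transform):
    """One decoupled stage: read everything from an input store,
    apply a transform, hand results to the next store."""
    return [transform(item) for item in store_in]

# Store 1: raw events landed by the collect tier.
raw_store = [{"user": "u1", "ms": 1200}, {"user": "u2", "ms": 300}]

# Stage 1: enrich raw events; results land in an intermediate store.
enriched_store = process_stage(raw_store, lambda e: {**e, "slow": e["ms"] > 1000})

# Stage 2: a different consumer analyzes the intermediate store.
slow_users = process_stage(
    [e for e in enriched_store if e["slow"]], lambda e: e["user"]
)
```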
process
store
Multiple Stream Processing Applications Can Read
from Amazon Kinesis
Amazon
Kinesis
AWS Lambda
Amazon
DynamoDB
Amazon
Kinesis S3
Connector
process
store
Amazon S3
Amazon EMR
Analysis Frameworks Can Read from Multiple
Data Stores
Amazon
Kinesis
AWS
Lambda
Amazon S3
Data
Amazon
DynamoDB
Answer
Spark Streaming
Amazon
Kinesis S3
Connector
process
store
Spark
SQL
Amazon EMR
Real-time Analytics
Apache
Kafka
KCL
AWS Lambda
Spark
Streaming
Apache
Storm
Amazon
SNS
Amazon
ML
Notifications
Amazon
ElastiCache
(Redis)
Amazon
DynamoDB
Amazon
RDS
Amazon
ES
Alert
App state
Real-time prediction
KPI
process
store
DynamoDB
Streams
Amazon
Kinesis
Data
stream
Case Study: Clickstream Analysis
Hearst Corporation monitors trending content for over 250 digital properties
worldwide and processes more than 30TB of data per day, using an architecture
that includes Amazon Kinesis and Spark running on Amazon EMR.
Store → Process | Analyze → Answers
Interactive
&
Batch
Analytics
Amazon S3
Amazon EMR
Hive
Pig
Spark
Amazon
ML
process
store
Consume
Amazon
Redshift
Amazon EMR
Presto
Spark
Batch
Interactive
Batch prediction
Real-time prediction
Amazon
Kinesis
Firehose
Batch layer
Amazon
Kinesis
Data
stream
process
store
Amazon
Kinesis S3
Connector
Applications
Amazon
Redshift
Amazon EMR
Presto
Hive
Pig
Spark
answer
Amazon S3
Speed layer
answer
Serving
layer
Amazon
ElastiCache
Amazon
DynamoDB
Amazon
RDS
Amazon
ES
answer
Amazon
ML
KCL
AWS Lambda
Storm
Lambda Architecture
Spark Streaming
on Amazon EMR
Case Study: On-demand Big Data Analytics
Redfin uses Amazon EMR with spot instances – dynamically spinning up & down
Apache Hadoop clusters – to perform large data transformations and deliver data
to internal and external customers.
Store → Process → Store → Analyze → Answers
Architectural Principles
Decoupled “data bus”
•  Data → Store → Process → Store → Analyze → Answers
Use the right tool for the job
•  Data structure, latency, throughput, access patterns
Use Lambda architecture ideas
•  Immutable (append-only) log, batch/speed/serving layer
Leverage AWS managed services
•  Scalable/elastic, available, reliable, secure, no/low admin
Big data ≠ big cost
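The batch/speed/serving split over an immutable, append-only log can be sketched in a few lines (a toy count example; function names are illustrative):

```python
def batch_view(log, up_to):
    """Batch layer: recompute counts from the immutable log up to the
    last batch boundary."""
    counts = {}
    for ts, key in log:
        if ts <= up_to:
            counts[key] = counts.get(key, 0) + 1
    return counts

def speed_view(log, after):
    """Speed layer: incremental counts for events the batch layer
    hasn't absorbed yet."""
    counts = {}
    for ts, key in log:
        if ts > after:
            counts[key] = counts.get(key, 0) + 1
    return counts

def serve(batch, speed):
    """Serving layer: merge batch and speed views to answer a query."""
    merged = dict(batch)
    for key, n in speed.items():
        merged[key] = merged.get(key, 0) + n
    return merged
```

Because the log is append-only, the batch view can always be recomputed from scratch, and the speed view only ever covers the small tail since the last batch run.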
Building a Big Data
Application
Building a Big Data Application
web clients
mobile clients
DBMS
corporate data center
Getting Started
Building a Big Data Application
web clients
mobile clients
DBMS
Amazon Redshift
Amazon
QuickSight
AWS cloudcorporate data center
Adding a data warehouse
Building a Big Data Application
web clients
mobile clients
DBMS
Raw data
Amazon Redshift
Staging Data
Amazon
QuickSight
AWS cloud
Bringing in Log Data
corporate data center
Building a Big Data Application
web clients
mobile clients
DBMS
Raw data
Amazon Redshift
Staging Data
Orc/Parquet
(Query optimized)
Amazon
QuickSight
AWS cloud
Extending your DW to S3
corporate data center
Building a Big Data Application
web clients
mobile clients
DBMS
Raw data
Amazon Redshift
Staging Data
Orc/Parquet
(Query optimized)
Amazon
QuickSight
Kinesis
Streams
AWS cloud
Adding a real-time layer
corporate data center
Building a Big Data Application
web clients
mobile clients
DBMS
Raw data
Amazon Redshift
Staging Data
Orc/Parquet
(Query optimized)
Amazon
QuickSight
Kinesis
Streams
AWS cloud
Adding predictive analytics
corporate data center
Building a Big Data Application
web clients
mobile clients
DBMS
Raw data
Amazon Redshift
Staging Data
Orc/Parquet
(Query optimized)
Amazon
QuickSight
Kinesis
Streams
AWS cloud
Adding encryption at rest with AWS KMS
corporate data center
AWS KMS
Building a Big Data Application
web clients
mobile clients
DBMS
Raw data
Amazon Redshift
Staging Data
Orc/Parquet
(Query optimized)
Amazon
QuickSight
Kinesis
Streams
AWS cloud
AWS KMS
VPC subnet
SSL/TLS
SSL/TLS
Protecting Data in Transit & Adding Network Isolation
corporate data center
The Evolution of Big Data Processing
Descriptive: what happened before. Dashboards; traditional query & reporting.
Real-time: what's happening now. Clickstream analysis; ad bidding; streaming data.
Predictive: probability of "x" happening. Inventory forecasting; fraud detection; recommendation engines.
Amazon Kinesis
Amazon EC2
AWS Lambda
Amazon DynamoDB
Amazon EMR
Amazon Redshift
Amazon QuickSight
Amazon EMR
Amazon Machine Learning
Amazon EMR
aws.amazon.com/big-data
Big Data Quest
Learn at your own pace and practice working with AWS services for big data on qwikLABS. (3 Hours | Online)
Big Data on AWS
How to use AWS services to process data with Hadoop & create big data environments. (3 Days | Classroom)
Big Data Technology Fundamentals (FREE!)
Overview of AWS big data solutions for architects or data scientists new to big data. (3 Hours | Online)
AWS Courses
Self-paced Online Labs
2:00 PM Best Practices on Real-time Streaming Analytics
3:00 PM Break
3:15 PM Getting started with Amazon Machine Learning
4:15 PM Building your first big data application on AWS
Next…
Real-time Streaming Data on AWS
Deep Dive & Best Practices Using Amazon Kinesis,
Spark Streaming, AWS Lambda, and Amazon EMR
Steve Abraham
Solutions Architect, AWS
Batch Processing
Hourly server logs
Weekly or monthly bills
Daily website clickstream
Daily fraud reports
Stream Processing
Real-time metrics
Real-time spending alerts/caps
Real-time clickstream analysis
Real-time detection
It’s All About the Pace
Streaming Data Scenarios Across Verticals

Vertical | Accelerated Ingest-Transform-Load | Continuous Metrics Generation | Responsive Data Analysis
Digital Ad Tech / Marketing | Publisher, bidder data aggregation | Advertising metrics like coverage, yield, and conversion | User engagement with ads, optimized bid/buy engines
IoT | Sensor, device telemetry data ingestion | Operational metrics and dashboards | Device operational intelligence and alerts
Gaming | Online data aggregation, e.g., top 10 players | Massively multiplayer online game (MMOG) live dashboard | Leaderboard generation, player-skill match
Consumer Online | Clickstream analytics | Metrics like impressions and page views | Recommendation engines, proactive care
Customer Use Cases
Sonos runs near real-time streaming
analytics on device data logs from
their connected hi-fi audio equipment.
Analyzing 30TB+ of clickstream data, enabling real-time insights for publishers.
Glu Mobile collects billions of
gaming events data points from
millions of user devices in
real-time every single day.
Nordstrom
Nordstrom's recommendation team built an online stylist using Amazon Kinesis Streams and AWS Lambda.
Streaming Data Examples

Metering record:
{
  "payerId": "Joe",
  "productCode": "AmazonS3",
  "clientProductCode": "AmazonS3",
  "usageType": "Bandwidth",
  "operation": "PUT",
  "value": "22490",
  "timestamp": "1216674828"
}

Common log entry:
127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326

MQTT record:
"SeattlePublicWater/Kinesis/123/Realtime" – 412309129140

Syslog entry:
<165>1 2003-10-11T22:14:15.003Z mymachine.example.com evntslog - ID47 [exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"] [examplePriority@32473 class="high"]
Streaming Data Challenges: Variety & Velocity
•  Streaming data comes in
different types and formats
−  Metering records,
logs and sensor data
−  JSON, CSV, TSV
•  Can vary in size from a
few bytes to kilobytes or
megabytes
•  High velocity and
continuous
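A consumer typically normalizes that variety at the edge of the pipeline. A minimal sketch, assuming only JSON, CSV, and TSV inputs with a header row:

```python
import csv
import io
import json

def normalize(raw, fmt):
    """Parse a raw streaming record of a supported wire format into a dict
    (or a list of dicts for multi-row CSV/TSV payloads)."""
    if fmt == "json":
        return json.loads(raw)
    delimiter = "," if fmt == "csv" else "\t"
    rows = list(csv.DictReader(io.StringIO(raw), delimiter=delimiter))
    return rows[0] if len(rows) == 1 else rows
```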
Two Main Processing Patterns
Stream processing (real time)
•  Real-time response to events in data streams
Examples:
•  Proactively detect hardware errors in device logs
•  Notify when inventory drops below a threshold
•  Fraud detection
Micro-batching (near real time)
•  Near real-time operations on small batches of events in data streams
Examples:
•  Aggregate and archive events
•  Monitor performance SLAs
Amazon Kinesis Deep Dive
Amazon Kinesis
Streams
•  For Technical Developers
•  Build your own custom
applications that process
or analyze streaming
data
Amazon Kinesis
Firehose
•  For all developers, data
scientists
•  Easily load massive
volumes of streaming data
into S3, Amazon Redshift
and Amazon Elasticsearch
Amazon Kinesis
Analytics
•  For all developers, data
scientists
•  Easily analyze data
streams using standard
SQL queries
•  Coming soon
Amazon Kinesis: Streaming Data Made Easy
Services make it easy to capture, deliver and process streams on AWS
Amazon Kinesis Firehose
Load massive volumes of streaming data into Amazon S3, Amazon
Redshift and Amazon Elasticsearch
Zero administration: Capture and deliver streaming data into Amazon S3, Amazon Redshift
and Amazon Elasticsearch without writing an application or managing infrastructure.
Direct-to-data store integration: Batch, compress, and encrypt streaming data for
delivery into data destinations in as little as 60 secs using simple configurations.
Seamless elasticity: Seamlessly scales to match data throughput without intervention
Capture and submit
streaming data to Firehose
Analyze streaming data using your
favorite BI tools
Firehose loads streaming data
continuously into S3, Amazon Redshift
and Amazon Elasticsearch
[Diagram: multiple data sources put records to the AWS endpoint; the
stream's shards (Shard 1, Shard 2, … Shard N) are replicated across three
Availability Zones; consuming applications read in parallel: App.1
[Aggregate & De-Duplicate], App.2 [Metric Extraction] writing to Amazon S3
and Amazon Redshift, App.3 [Sliding Window Analysis], and App.4 [Machine
Learning]]
Amazon Kinesis Streams
Managed service for real-time streaming
AWS Lambda
Amazon EMR
• Streams are made of shards
• Each shard ingests up to 1MB/sec, and
1000 records/sec
• Each shard emits up to 2 MB/sec
• All data is stored for 24 hours by
default; storage can be extended for
up to 7 days
• Scale Kinesis streams using the scaling utility
• Replay data inside of 24-hour window
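The per-shard limits above give a quick back-of-the-envelope sizing rule: shards needed is the max of ingress MB/s, ingress records/s ÷ 1,000, and total consumer egress MB/s ÷ 2. A minimal sketch (the 25% headroom factor is an assumption for catch-up, not an AWS parameter):

```python
import math

def shards_needed(ingress_mb_s, ingress_records_s, egress_mb_s, headroom=1.25):
    """Estimate shard count from the per-shard limits:
    1 MB/s or 1,000 records/s in, 2 MB/s out per shard.
    `headroom` leaves room for catching up with data in the stream."""
    by_ingress_mb = ingress_mb_s / 1.0
    by_ingress_recs = ingress_records_s / 1000.0
    by_egress_mb = egress_mb_s / 2.0
    return math.ceil(headroom * max(by_ingress_mb, by_ingress_recs, by_egress_mb))

# 3 MB/s in, 2,500 records/s, two consumers each reading the full 3 MB/s:
print(shards_needed(3, 2500, 6))  # egress dominates: ceil(1.25 * 3) -> 4
```

Note that egress is summed across all consuming applications, which is why more than two simultaneous consumers quickly becomes the binding constraint.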
Amazon Kinesis Firehose vs. Amazon Kinesis
Streams
Amazon Kinesis Streams is for use cases that require custom
processing, per incoming record, with sub-1 second processing
latency, and a choice of stream processing frameworks.
Amazon Kinesis Firehose is for use cases that require zero
administration, ability to use existing analytics tools based on
Amazon S3, Amazon Redshift and Amazon Elasticsearch, and a
data latency of 60 seconds or higher.
Streaming Data Ingestion
Putting Data into Amazon Kinesis Streams
Determine your partition key strategy
•  Managed buffer or streaming MapReduce job
•  Ensure high cardinality for your shards
Provision adequate shards
•  For ingress needs
•  Egress needs for all consuming applications: if more
than two simultaneous applications
•  Include headroom for catching up with data in stream
Putting Data into Amazon Kinesis
Amazon Kinesis Agent – (supports pre-processing)
•  http://docs.aws.amazon.com/firehose/latest/dev/writing-with-agents.html
Pre-batch before Puts for better efficiency
•  Consider Flume, Fluentd as collectors/agents
•  See https://github.com/awslabs/aws-fluent-plugin-kinesis
Make a tweak to your existing logging
•  log4j appender option
•  See https://github.com/awslabs/kinesis-log4j-appender
AWS IoT
•  The Rules Engine can also route messages to AWS endpoints including Amazon
Kinesis.
•  A provisioned entity called a Stream composed of Shards
•  Producers use a PUT call to store data in a Stream.
•  Each record <= 1 MB. PutRecord or PutRecords
•  A partition key is supplied by the producer and used to distribute PUTs
across shards (an MD5 hash of the key maps into a shard's hash key range)
•  Unique Sequence# returned upon successful PUT call
•  Approximate arrival timestamp affixed to each record
Putting Data into Amazon Kinesis Streams
Simple Put* interface to capture and store data in Streams
Amazon Kinesis Producer Library
•  Writes to one or more Amazon Kinesis streams with automatic,
configurable retry mechanism
•  Collects records and uses PutRecords to write multiple records to
multiple shards per request
•  Aggregates user records to increase payload size and improve
throughput
•  Integrates seamlessly with KCL to de-aggregate batched records
•  Use Amazon Kinesis Producer Library with AWS Lambda (New!)
•  Submits Amazon CloudWatch metrics on your behalf to provide
visibility into producer performance
Record Order and Multiple Shards
Unordered processing
•  Randomize partition key to distribute events over
many shards and use multiple workers
Exact order processing
•  Control partition key to ensure events are
grouped into the same shard and read by the
same worker
Need both? Use global sequence number
[Diagram: the producer calls "Get Global Sequence", writes to an unordered
stream, and downstream processors use "Get Event Metadata" to fan events
out into campaign-centric and fraud-inspection streams]
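The routing above rests on how Kinesis maps a partition key to a shard: the key is MD5-hashed into the 128-bit hash key space, and each shard owns a contiguous range. A local sketch of that mapping (the even range split is an assumption; real shards carry explicit start/end hash keys):

```python
import hashlib
import uuid

def shard_for_key(partition_key, num_shards):
    """Mimic Kinesis routing: MD5-hash the partition key into the
    128-bit hash key space, split evenly here across shards."""
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    range_size = 2 ** 128 // num_shards
    return min(h // range_size, num_shards - 1)

# Random keys spread records over many shards (unordered processing)...
random_spread = {shard_for_key(uuid.uuid4().hex, 4) for _ in range(200)}
# ...while a fixed key pins all records to one shard (exact order).
pinned = {shard_for_key("device-42", 4) for _ in range(200)}
print(len(random_spread), len(pinned))
```

This is why randomizing the key gives parallelism while a controlled key guarantees that one worker sees all events for that key, in order.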
Sample Code for Scaling Shards
java -cp KinesisScalingUtils-complete.jar \
-Dstream-name=MyStream \
-Dscaling-action=scaleUp \
-Dcount=10 \
-Dregion=eu-west-1 ScalingClient
Options:
•  stream-name - The name of the stream to be scaled
•  scaling-action - The action to be taken to scale. Must be one of "scaleUp”, "scaleDown"
or “resize”
•  count - Number of shards by which to absolutely scale up or down, or resize
See https://github.com/awslabs/amazon-kinesis-scaling-utils
Managed Buffer
•  Care about a reliable, scalable
way to capture data
•  Defer all other aggregation to
consumer
•  Generate Random Partition
Keys
•  Ensure a high cardinality for
Partition Keys with respect to
shards, to spray evenly across
available shards
Putting Data into Amazon Kinesis Streams
Streaming Map-Reduce
•  Streaming Map-Reduce: leverage
partition keys as a natural way to
aggregate data
•  E.g., partition keys per billing
customer, per DeviceId, per stock
symbol
•  Design partition keys to scale
•  Be aware of "hot" partition keys or
shards
Determine your partition key strategy
Putting Data into Amazon Kinesis Streams
Provision adequate shards
•  For ingress needs
•  Egress needs for all consuming applications: if more
than two simultaneous applications
•  Include headroom for catching up with data in stream
Amazon Kinesis Stream Processing
Amazon Kinesis Client Library
•  Build Kinesis Applications with Kinesis Client Library (KCL)
•  Open source client library available for Java, Ruby, Python,
Node.JS dev
•  Deploy on your EC2 instances
•  KCL Application includes three components:
1.  Record Processor Factory – Creates the record processor
2.  Record Processor – Processor unit that processes data from a
shard in Amazon Kinesis Streams
3.  Worker – Processing unit that maps to each application instance
State Management with Kinesis Client Library
•  One record processor maps to one shard and processes data records from
that shard
•  One worker maps to one or more record processors
•  Balances shard-worker associations when worker / instance counts change
•  Balances shard-worker associations when shards split or merge
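The balancing behavior can be pictured with a toy assignment function; the real KCL coordinates workers through shard leases in an Amazon DynamoDB table, so this round-robin split only illustrates the outcome when the worker count changes:

```python
def assign_shards(shards, workers):
    """Spread shards evenly across workers, round-robin.
    Illustrative only: the KCL does this via lease stealing
    against a DynamoDB checkpoint table, not a central split."""
    assignment = {w: [] for w in workers}
    for i, shard in enumerate(shards):
        assignment[workers[i % len(workers)]].append(shard)
    return assignment

shards = ["shard-%d" % i for i in range(6)]
before = assign_shards(shards, ["worker-a", "worker-b"])
after = assign_shards(shards, ["worker-a", "worker-b", "worker-c"])
print({w: len(s) for w, s in before.items()})  # 3 shards each
print({w: len(s) for w, s in after.items()})   # rebalanced to 2 each
```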
Other Options
•  Third-party connectors (for example, Splunk)
•  AWS IoT Platform
•  AWS Lambda
•  Amazon EMR with Apache Spark, Pig or Hive
Apache Spark and Amazon Kinesis Streams
Apache Spark is an in-memory analytics cluster using
RDD for fast processing
Spark Streaming can read directly from an Amazon
Kinesis stream
Amazon software license linking – Add ASL
dependency to SBT/MAVEN project, artifactId =
spark-streaming-kinesis-asl_2.10
KinesisUtils.createStream("twitter-stream")
.filter(_.getText.contains("Open-Source"))
.countByWindow(Seconds(5))
Example: Counting tweets on a sliding window
[Diagram: streaming input from Amazon Kinesis Streams feeds Amazon EMR,
which performs tumbling/fixed window aggregation and produces periodic
output loaded into Amazon Redshift via COPY from Amazon EMR]
Common Integration Pattern with Amazon EMR
Tumbling Window Reporting
Using Spark Streaming with Amazon Kinesis
Streams
1.  Use Spark 1.6+ with EMRFS consistent view option – if you
use Amazon S3 as storage for Spark checkpoint
2.  Amazon DynamoDB table name – make sure there is only one
instance of the application running with Spark Streaming
3.  Enable Spark-based checkpoints
4.  Number of Amazon Kinesis receivers is multiple of executors so
they are load-balanced
5.  Total processing time is less than the batch interval
6.  Number of executors is the same as number of cores per
executor
7.  Spark Streaming uses default of 1 sec with KCL
Demo
Sensors
Amazon
Kinesis
Streams
Streaming
Process
Data Source
Collect /
Store
Consume
- Simulated using EC2
- 20K events/sec
- Scalable
- Managed
-  Millisecond latency
-  Highly available
-  Scalable
-  Managed
-  Micro-batch
-  Checkpoint in DynamoDB
Amazon
DynamoDB
Device_Id, temperature, timestamp
1,65,2016-03-25 01:01:20
2,68,2016-03-25 01:01:20
3,67,2016-03-25 01:01:20
4,85,2016-03-25 01:01:20
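A micro-batch consumer of the sensor records above might flag overheating devices. A local sketch with the sample rows hard-coded (the 80-degree cutoff is an assumption, not part of the demo):

```python
import csv
import io

# The demo record format: Device_Id, temperature, timestamp
ROWS = """\
1,65,2016-03-25 01:01:20
2,68,2016-03-25 01:01:20
3,67,2016-03-25 01:01:20
4,85,2016-03-25 01:01:20
"""

def overheating(rows, threshold=80):
    """Return device IDs whose temperature exceeds `threshold`."""
    alerts = []
    for device_id, temp, ts in csv.reader(io.StringIO(rows)):
        if int(temp) > threshold:
            alerts.append(device_id)
    return alerts

print(overheating(ROWS))  # ['4']
```

In the demo itself, this kind of check would run inside the Spark Streaming micro-batch, with progress checkpointed to DynamoDB.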
Amazon Kinesis Streams with AWS Lambda
Build your Streaming Pipeline with Amazon
Kinesis, AWS Lambda and Amazon EMR
Conclusion
•  Amazon Kinesis offers: managed service to build applications, streaming
data ingestion, and continuous processing
•  Ingest and aggregate data using the Amazon Kinesis Producer Library
•  Process data using the Amazon Kinesis Connector Library and open source connectors
•  Determine your partition key strategy
•  Try out Amazon Kinesis at http://aws.amazon.com/kinesis/
•  Technical documentation
•  Amazon Kinesis Agent
•  Amazon Kinesis Streams and Spark Streaming
•  Amazon Kinesis Producer Library Best Practice
•  Amazon Kinesis Firehose and AWS Lambda
•  Building Near Real-Time Discovery Platform with Amazon Kinesis
•  Public case studies
•  Glu mobile – Real-Time Analytics
•  Hearst Publishing – Clickstream Analytics
•  How Sonos Leverages Amazon Kinesis
•  Nordstrom Online Stylist
Reference
3:00 PM Break
3:15 PM Getting started with Amazon Machine Learning
4:15 PM Building your first big data application on AWS
Next…
Getting Started with
Amazon Machine Learning
Fabio Silva
Solution Architect, AWS
Agenda
•  Machine learning and the data ecosystem
•  Smart applications by example (and counterexample)
•  Amazon Machine Learning (Amazon ML) features and
benefits
•  Developing with Amazon ML
•  Q&A
Three types of data-driven development
Retrospective
analysis and
reporting
Amazon Redshift,
Amazon RDS
Amazon S3
Amazon EMR
Three types of data-driven development
Retrospective
analysis and
reporting
Here-and-now
real-time processing
and dashboards
Amazon Kinesis
Amazon EC2
AWS Lambda
Amazon Redshift,
Amazon RDS
Amazon S3
Amazon EMR
Three types of data-driven development
Retrospective
analysis and
reporting
Here-and-now
real-time processing
and dashboards
Predictions
to enable smart
applications
Amazon Kinesis
Amazon EC2
AWS Lambda
Amazon Redshift,
Amazon RDS
Amazon S3
Amazon EMR
Machine learning and smart applications
Machine learning is the technology that
automatically finds patterns in your data and
uses them to make predictions for new data
points as they become available.
Machine learning and smart applications
Machine learning is the technology that
automatically finds patterns in your data and
uses them to make predictions for new data
points as they become available.
Your data + machine learning = smart applications
Smart applications by example
Based on what you
know about the user:
Will they use your
product?
Smart applications by example
Based on what you
know about the user:
Will they use your
product?
Based on what you
know about an order:
Is this order
fraudulent?
Smart applications by example
Based on what you
know about the user:
Will they use your
product?
Based on what you
know about an order:
Is this order
fraudulent?
Based on what you know
about a news article:
What other articles are
interesting?
And a few more examples…
Fraud detection Detecting fraudulent transactions, filtering spam emails,
flagging suspicious reviews,…
Personalization Recommending content, predictive content loading,
improving user experience,…
Targeted marketing Matching customers and offers, choosing marketing
campaigns, cross-selling and up-selling,…
Content classification Categorizing documents, matching hiring managers and
resumes,…
Churn prediction Finding customers who are likely to stop using the
service, upgrade targeting,…
Customer support Predictive routing of customer emails, social media
listening,…
Smart applications by counterexample
Dear Alex,
This awesome quadcopter is on sale
for just $49.99!
Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
ON c.ID = o.customer
GROUP BY c.ID
HAVING o.date > GETDATE() - 30
We can start by sending
the offer to all customers
who placed an order in
the last 30 days
Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
ON c.ID = o.customer
GROUP BY c.ID
HAVING O.CATEGORY = 'TOYS'
AND o.date > GETDATE() - 30
…let’s narrow it down to
just customers who
bought toys
Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
ON c.ID = o.customer
LEFT JOIN PRODUCTS P
ON P.ID = O.PRODUCT
GROUP BY c.ID
HAVING o.category = 'toys'
AND ((P.DESCRIPTION LIKE '%HELICOPTER%'
AND O.DATE > GETDATE() - 60)
OR (COUNT(*) > 2
AND SUM(o.price) > 200
AND o.date > GETDATE() - 30)
)
…and expand the query
to customers who
purchased other toy
helicopters recently, or
made several expensive
toy purchases
Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
ON c.ID = o.customer
LEFT JOIN products p
ON p.ID = o.product
GROUP BY c.ID
HAVING o.category = 'toys'
AND ((p.description LIKE '%COPTER%'
AND o.date > GETDATE() - 60)
OR (COUNT(*) > 2
AND SUM(o.price) > 200
AND o.date > GETDATE() - 30)
)
…but what about
quadcopters?
Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
ON c.ID = o.customer
LEFT JOIN products p
ON p.ID = o.product
GROUP BY c.ID
HAVING o.category = 'toys'
AND ((p.description LIKE '%copter%'
AND o.date > GETDATE() - 120)
OR (COUNT(*) > 2
AND SUM(o.price) > 200
AND o.date > GETDATE() - 30)
)
…maybe we should go
back further in time
Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
ON c.ID = o.customer
LEFT JOIN products p
ON p.ID = o.product
GROUP BY c.ID
HAVING o.category = 'toys'
AND ((p.description LIKE '%copter%'
AND o.date > GETDATE() - 120)
OR (COUNT(*) > 2
AND SUM(o.price) > 200
AND o.date > GETDATE() - 40)
)
…tweak the query more
Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
ON c.ID = o.customer
LEFT JOIN products p
ON p.ID = o.product
GROUP BY c.ID
HAVING o.category = 'toys'
AND ((p.description LIKE '%copter%'
AND o.date > GETDATE() - 120)
OR (COUNT(*) > 2
AND SUM(o.price) > 150
AND o.date > GETDATE() - 40)
)
…again
Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
ON c.ID = o.customer
LEFT JOIN products p
ON p.ID = o.product
GROUP BY c.ID
HAVING o.category = 'toys'
AND ((p.description LIKE '%copter%'
AND o.date > GETDATE() - 90)
OR (COUNT(*) > 2
AND SUM(o.price) > 150
AND o.date > GETDATE() - 40)
)
…and again
Smart applications by counterexample
SELECT c.ID
FROM customers c
LEFT JOIN orders o
ON c.ID = o.customer
LEFT JOIN products p
ON p.ID = o.product
GROUP BY c.ID
HAVING o.category = 'toys'
AND ((p.description LIKE '%copter%'
AND o.date > GETDATE() - 90)
OR (COUNT(*) > 2
AND SUM(o.price) > 150
AND o.date > GETDATE() - 40)
)
Use machine learning
technology to learn your
business rules from data!
Why aren’t there more smart applications?
1.  Machine learning expertise is rare.
2.  Building and scaling machine learning technology is
hard.
3.  Closing the gap between models and applications is
time-consuming and expensive.
Building smart applications today
Expertise: limited supply of data scientists;
expensive to hire or outsource
Technology: many choices, few mainstays;
difficult to use and scale;
many moving pieces lead to custom solutions every time
Operationalization: complex and error-prone data workflows;
custom platforms and APIs;
reinventing the model lifecycle management wheel
What if there were a better
way?
Introducing Amazon Machine Learning
Easy-to-use, managed machine learning
service built for developers
Robust, powerful machine learning
technology based on Amazon’s internal
systems
Create models using your data already
stored in the AWS cloud
Deploy models to production in seconds
Easy-to-use and developer-friendly
Use the intuitive, powerful service console to build
and explore your initial models
•  Data retrieval
•  Model training, quality evaluation, fine-tuning
•  Deployment and management
Automate model lifecycle with fully featured APIs and
SDKs
•  Java, Python, .NET, JavaScript, Ruby, PHP
Easily create smart iOS and Android applications with
AWS Mobile SDK
Powerful machine learning technology
Based on Amazon’s battle-hardened internal systems
Not just the algorithms:
•  Smart data transformations
•  Input data and model quality alerts
•  Built-in industry best practices
Grows with your needs
•  Train on up to 100 GB of data
•  Generate billions of predictions
•  Obtain predictions in batches or real-time
Integrated with the AWS data ecosystem
Access data that is stored in Amazon S3,
Amazon Redshift, or MySQL databases in
Amazon RDS
Output predictions to Amazon S3 for easy
integration with your data flows
Use AWS Identity and Access Management
(IAM) for fine-grained data access permission
policies
Fully-managed model and prediction services
End-to-end service, with no servers to provision
and manage
One-click production model deployment
Programmatically query model metadata to
enable automatic retraining workflows
Monitor prediction usage patterns with Amazon
CloudWatch metrics
Pay-as-you-go and inexpensive
Data analysis, model training, and
evaluation: $0.42/instance hour
Batch predictions: $0.10/1000
Real-time predictions: $0.10/1000
+ hourly capacity reservation charge
Three supported types of predictions
Binary classification
Predict the answer to a Yes/No question
Multiclass classification
Predict the correct category from a list
Regression
Predict the value of a numeric variable
Train
model
Evaluate and
optimize
Retrieve
predictions
1 2 3
Building smart applications with Amazon ML
Train
model
Evaluate and
optimize
Retrieve
predictions
1 2 3
Building smart applications with Amazon ML
-  Create a datasource object pointing to your data
-  Explore and understand your data
-  Transform data and train your model
Create a datasource object
>>> import boto
>>> ml = boto.connect_machinelearning()
>>> ds = ml.create_data_source_from_s3(
data_source_id = 'my_datasource',
data_spec = {
'DataLocationS3': 's3://bucket/input/data.csv',
'DataSchemaLocationS3': 's3://bucket/input/data.schema'},
compute_statistics = True )
Explore and understand your data
Train your model
>>> import boto
>>> ml = boto.connect_machinelearning()
>>> model = ml.create_ml_model(
ml_model_id = 'my_model',
ml_model_type = 'REGRESSION',
training_data_source_id = 'my_datasource')
Train
model
Evaluate and
optimize
Retrieve
predictions
1 2 3
Building smart applications with Amazon ML
-  Measure and understand model quality
-  Adjust model interpretation
Explore model quality
Fine-tune model interpretation
Fine-tune model interpretation
Train
model
Evaluate and
optimize
Retrieve
predictions
1 2 3
Building smart applications with Amazon ML
-  Batch predictions
-  Real-time predictions
Batch predictions
Asynchronous, large-volume prediction generation
Request through service console or API
Best for applications that deal with batches of data records
>>> import boto
>>> ml = boto.connect_machinelearning()
>>> model = ml.create_batch_prediction(
batch_prediction_id = 'my_batch_prediction',
batch_prediction_data_source_id = 'my_datasource',
ml_model_id = 'my_model',
output_uri = 's3://examplebucket/output/')
Real-time predictions
Synchronous, low-latency, high-throughput prediction generation
Request through service API, server, or mobile SDKs
Best for interaction applications that deal with individual data records
>>> import boto
>>> ml = boto.connect_machinelearning()
>>> ml.predict(
ml_model_id = 'my_model',
predict_endpoint = 'example_endpoint',
record = {'key1': 'value1', 'key2': 'value2'})
{
'Prediction': {
'predictedValue': 13.284348,
'details': {
'Algorithm': 'SGD',
'PredictiveModelType': 'REGRESSION’
}
}
}
Architecture patterns for
smart applications
Get predictions with
Amazon ML batch API
Process data with
EMR
Raw data in S3
Aggregated data
in S3
Predictions
in S3 Your application
Batch predictions with EMR
Structured data
In Amazon Redshift
Load predictions into
Amazon Redshift
- or -
Read prediction results
directly from S3
Predictions
in S3
Get predictions with
Amazon ML batch API
Your application
Batch predictions with Amazon Redshift
Your application
Get predictions with
Amazon ML real-time API
Amazon ML service
Real-time predictions for interactive
applications
Your application Amazon
DynamoDB
+
Trigger events with Lambda
+
Get predictions with
Amazon ML real-time API
Adding predictions to an existing data flow
Q&A
http://aws.amazon.com/machine-learning
4:15 PM Building your first big data application on AWS
Next…
Building Your First Big Data
Application on AWS
Radhika Ravirala
Solutions Architect
Your First Big Data Application on AWS
PROCESS
STORE
ANALYZE & VISUALIZE
COLLECT
Your First Big Data Application on AWS
PROCESS:
Amazon EMR with Spark & Hive
STORE
ANALYZE & VISUALIZE:
Amazon Redshift and Amazon QuickSight
COLLECT:
Amazon Kinesis Firehose
http://aws.amazon.com/big-data/use-cases/
A Modern Take on the Classic Data Warehouse
Setting up the environment
Data Storage with Amazon S3
Download all the CLI steps: http://bit.ly/aws-big-data-steps
Create an Amazon S3 bucket to store the data collected
with Amazon Kinesis Firehose
aws s3 mb s3://YOUR-S3-BUCKET-NAME
Access Control with IAM
Create an IAM role to allow Firehose to write to the S3 bucket
firehose-policy.json:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Principal": {"Service": "firehose.amazonaws.com"},
"Action": "sts:AssumeRole"
}
}
Access Control with IAM
Create an IAM role to allow Firehose to write to the S3 bucket
s3-rw-policy.json:
{ "Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::YOUR-S3-BUCKET-NAME",
"arn:aws:s3:::YOUR-S3-BUCKET-NAME/*"
]
} }
Access Control with IAM
Create an IAM role to allow Firehose to write to the S3 bucket
aws iam create-role --role-name firehose-demo \
--assume-role-policy-document file://firehose-policy.json
Copy the value in “Arn” in the output,
e.g., arn:aws:iam::123456789:role/firehose-demo
aws iam put-role-policy --role-name firehose-demo \
--policy-name firehose-s3-rw \
--policy-document file://s3-rw-policy.json
Data Collection with Amazon Kinesis Firehose
Create a Firehose stream for incoming log data
aws firehose create-delivery-stream \
--delivery-stream-name demo-firehose-stream \
--s3-destination-configuration \
RoleARN=YOUR-FIREHOSE-ARN,
BucketARN="arn:aws:s3:::YOUR-S3-BUCKET-NAME",
Prefix=firehose/,
BufferingHints={IntervalInSeconds=60},
CompressionFormat=GZIP
Data Processing with Amazon EMR
Launch an Amazon EMR cluster with Spark and Hive
aws emr create-cluster \
--name "demo" \
--release-label emr-4.5.0 \
--instance-type m3.xlarge \
--instance-count 2 \
--ec2-attributes KeyName=YOUR-AWS-SSH-KEY \
--use-default-roles \
--applications Name=Hive Name=Spark Name=Zeppelin-Sandbox
Record your ClusterId from the output.
Data Analysis with Amazon Redshift
Create a single-node Amazon Redshift data warehouse:
aws redshift create-cluster \
--cluster-identifier demo \
--db-name demo \
--node-type dc1.large \
--cluster-type single-node \
--master-username master \
--master-user-password YOUR-REDSHIFT-PASSWORD \
--publicly-accessible \
--port 8192
Collect
Weblogs – Common Log Format (CLF)
75.35.230.210 - - [20/Jul/2009:22:22:42 -0700]
"GET /images/pigtrihawk.jpg HTTP/1.1" 200 29236
"http://www.swivel.com/graphs/show/1163466"
"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.11)
Gecko/2009060215 Firefox/3.0.11 (.NET CLR 3.5.30729)"
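Before the Spark steps later on, it helps to see what parsing a CLF line involves. A minimal regex sketch covering the core fields (the extended-format referrer and user-agent fields would need two more quoted groups):

```python
import re

# Common Log Format: host ident user [time] "request" status size
CLF_PATTERN = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

LINE = ('75.35.230.210 - - [20/Jul/2009:22:22:42 -0700] '
        '"GET /images/pigtrihawk.jpg HTTP/1.1" 200 29236')

def parse_clf(line):
    """Return the core CLF fields as a dict, or None on no match."""
    m = CLF_PATTERN.match(line)
    return m.groupdict() if m else None

fields = parse_clf(LINE)
print(fields["host"], fields["status"], fields["request"])
```

The Scala code in the workshop does the equivalent job by splitting on spaces and re-joining quoted segments.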
Writing into Amazon Kinesis Firehose
Download the demo weblog: http://bit.ly/aws-big-data
Open Python and run the following code to import the log into the stream:
import boto3
iam = boto3.client('iam')
firehose = boto3.client('firehose')
with open('weblog', 'r') as f:
for line in f:
firehose.put_record(
DeliveryStreamName='demo-firehose-stream',
Record={'Data': line}
)
print 'Record added'
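Record-at-a-time PutRecord works for the demo, but Firehose also accepts up to 500 records per PutRecordBatch call. A sketch of the batching side (only the chunking runs locally; the commented call shows where each batch would be sent):

```python
def chunk_records(lines, batch_size=500):
    """Group log lines into Firehose-style batches; PutRecordBatch
    accepts up to 500 records per call, cutting API round-trips
    versus one PutRecord per line."""
    batch = []
    for line in lines:
        batch.append({'Data': line})
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

# Each batch would then be sent with:
#   firehose.put_record_batch(
#       DeliveryStreamName='demo-firehose-stream', Records=batch)
batches = list(chunk_records(['line-%d\n' % i for i in range(1200)]))
print([len(b) for b in batches])  # [500, 500, 200]
```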
Process
Apache Spark
•  Fast, general purpose engine
for large-scale data processing
•  Write applications quickly in
Java, Scala, or Python
•  Combine SQL, streaming, and
complex analytics
Spark SQL
Spark's module for working with structured data using SQL
Run unmodified Hive queries on existing data
Apache Zeppelin
•  Web-based notebook for interactive analytics
•  Multiple language backend
•  Apache Spark integration
•  Data visualization
•  Collaboration
https://zeppelin.incubator.apache.org/
View the Output Files in Amazon S3
After about 1 minute, you should see files in your S3
bucket:
aws s3 ls s3://YOUR-S3-BUCKET-NAME/firehose/ --recursive
Connect to Your EMR Cluster and Zeppelin
aws emr describe-cluster --cluster-id YOUR-EMR-CLUSTER-ID
Copy the MasterPublicDnsName. Use port forwarding so you can
access Zeppelin at http://localhost:18890 on your local machine.
ssh -i PATH-TO-YOUR-SSH-KEY -L 18890:localhost:8890 \
hadoop@YOUR-EMR-DNS-NAME
Open Zeppelin with your local web browser and create a new "Note":
http://localhost:18890
Exploring the Data in Amazon S3 using Spark
Download the Zeppelin notebook: http://bit.ly/aws-big-data-zeppelin
// Load all the files from S3 into an RDD
val accessLogLines = sc.textFile("s3://YOUR-S3-BUCKET-NAME/firehose/*/*/*/*")
// Count the lines
accessLogLines.count
// Print one line as a string
accessLogLines.first
// Fields are delimited by spaces, so split each line into fields
var accessLogFields = accessLogLines.map(_.split(" ").map(_.trim))
// Print the fields of a line
accessLogFields.first
Combine Fields: "A, B, C" → "A B C"
var accessLogColumns = accessLogFields
.map( arrayOfFields => { var temp1 = ""; for (field <- arrayOfFields) yield {
var temp2 = ""
if (temp1.replaceAll("\\[", "\"").startsWith("\"") && !temp1.endsWith("\""))
temp1 = temp1 + " " + field.replaceAll("[\\[\\]]", "\"")
else temp1 = field.replaceAll("[\\[\\]]", "\"")
temp2 = temp1
if (temp1.endsWith("\"")) temp1 = ""
temp2
}})
.map( fields => fields.filter(field => (field.startsWith("\"") &&
field.endsWith("\"")) || !field.startsWith("\"") ))
.map(fields => fields.map(_.replaceAll("\"", "")))
Create a Data Frame and Transform the Data
import java.sql.Timestamp
import java.net.URL
case class accessLogs(
ipAddress: String,
requestTime: Timestamp,
requestMethod: String,
requestPath: String,
requestProtocol: String,
responseCode: String,
responseSize: String,
referrerHost: String,
userAgent: String
)
Create a Data Frame and Transform the Data
val accessLogsDF = accessLogColumns.map(line => {
var ipAddress = line(0)
var requestTime = new Timestamp(new java.text.SimpleDateFormat(
"dd/MMM/yyyy:HH:mm:ss Z").parse(line(3)).getTime())
var requestString = line(4).split(" ").map(_.trim())
var requestMethod = if (line(4).toString() != "-") requestString(0) else ""
var requestPath = if (line(4).toString() != "-") requestString(1) else ""
var requestProtocol = if (line(4).toString() != "-") requestString(2) else ""
var responseCode = line(5).replaceAll("-","")
var responseSize = line(6).replaceAll("-","")
var referrerHost = line(7)
var userAgent = line(8)
accessLogs(ipAddress, requestTime, requestMethod, requestPath,
requestProtocol,responseCode, responseSize, referrerHost, userAgent)
}).toDF()
Create an External Table Backed by Amazon S3
%sql
CREATE EXTERNAL TABLE access_logs
(
ip_address String,
request_time Timestamp,
request_method String,
request_path String,
request_protocol String,
response_code String,
response_size String,
referrer_host String,
user_agent String
)
PARTITIONED BY (year STRING,month STRING, day STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION 's3://YOUR-S3-BUCKET-NAME/access-log-processed'
Configure Hive Partitioning and Compression
// set up Hive's "dynamic partitioning"
%sql
SET hive.exec.dynamic.partition=true
// compress output files on Amazon S3 using Gzip
%sql
SET hive.exec.compress.output=true
%sql
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec
%sql
SET io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec
%sql
SET hive.exec.dynamic.partition.mode=nonstrict;
Write Output to Amazon S3
import org.apache.spark.sql.SaveMode
accessLogsDF
.withColumn("year", year(accessLogsDF("requestTime")))
.withColumn("month", month(accessLogsDF("requestTime")))
.withColumn("day", dayofmonth(accessLogsDF("requestTime")))
.write
.partitionBy("year","month","day")
.mode(SaveMode.Overwrite)
.insertInto("access_logs")
Query the Data Using Spark SQL
// Check the count of records
%sql
select count(*) from access_logs
// Fetch the first 10 records
%sql
select * from access_logs limit 10
View the Output Files in Amazon S3
Leave Zeppelin and go back to the console…
List the partition prefixes and output files:
aws s3 ls s3://YOUR-S3-BUCKET-NAME/access-log-processed/ \
--recursive
Analyze
Connect to Amazon Redshift
Using the PostgreSQL CLI
psql -h YOUR-REDSHIFT-ENDPOINT \
-p 8192 -U master demo
Or use any JDBC or ODBC SQL client with the PostgreSQL 8.x drivers
or native Amazon Redshift support
•  Aginity Workbench for Amazon Redshift
•  SQL Workbench/J
Create an Amazon Redshift Table to Hold Your Data
CREATE TABLE accesslogs
(
host_address varchar(512),
request_time timestamp,
request_method varchar(5),
request_path varchar(1024),
request_protocol varchar(10),
response_code Int,
response_size Int,
referrer_host varchar(1024),
user_agent varchar(512)
)
DISTKEY(host_address)
SORTKEY(request_time);
Loading Data into Amazon Redshift
“COPY” command loads files in parallel from
Amazon S3:
COPY accesslogs
FROM 's3://YOUR-S3-BUCKET-NAME/access-log-processed'
CREDENTIALS
'aws_iam_role=arn:aws:iam::YOUR-AWS-ACCOUNT-ID:role/ROLE-NAME'
DELIMITER '\t'
MAXERROR 0
GZIP;
Amazon Redshift Test Queries
-- find distribution of response codes over days
SELECT TRUNC(request_time), response_code, COUNT(1) FROM
accesslogs GROUP BY 1,2 ORDER BY 1,3 DESC;
-- find the 404 status codes
SELECT COUNT(1) FROM accessLogs WHERE response_code = 404;
-- show the most-requested path returning PAGE NOT FOUND (404)
SELECT TOP 1 request_path,COUNT(1) FROM accesslogs WHERE
response_code = 404 GROUP BY 1 ORDER BY 2 DESC;
Visualize the Results
DEMO
Amazon QuickSight
Automating the Big Data Application
Amazon
Kinesis
Firehose
Amazon
EMR
Amazon
S3
Amazon
Redshift
Amazon
QuickSight
Amazon
S3
Event
Notification
Spark job
List of objects from Lambda
Write to Amazon Redshift using spark-redshift
Learn from big data experts
blogs.aws.amazon.com/bigdata
 
Building your First Big Data Application on AWS
Building your First Big Data Application on AWSBuilding your First Big Data Application on AWS
Building your First Big Data Application on AWSAmazon Web Services
 
ENT305 Migrating Your Databases to AWS: Deep Dive on Amazon Relational Databa...
ENT305 Migrating Your Databases to AWS: Deep Dive on Amazon Relational Databa...ENT305 Migrating Your Databases to AWS: Deep Dive on Amazon Relational Databa...
ENT305 Migrating Your Databases to AWS: Deep Dive on Amazon Relational Databa...Amazon Web Services
 
Best Practices for Building a Data Lake on AWS
Best Practices for Building a Data Lake on AWSBest Practices for Building a Data Lake on AWS
Best Practices for Building a Data Lake on AWSAmazon Web Services
 
Visualizing Big Data Insights with Amazon QuickSight
Visualizing Big Data Insights with Amazon QuickSightVisualizing Big Data Insights with Amazon QuickSight
Visualizing Big Data Insights with Amazon QuickSightAmazon Web Services
 
Tapping the cloud for real time data analytics
 Tapping the cloud for real time data analytics Tapping the cloud for real time data analytics
Tapping the cloud for real time data analyticsAmazon Web Services
 

What's hot (20)

ENT314 Automate Best Practices and Operational Health for Your AWS Resources
ENT314 Automate Best Practices and Operational Health for Your AWS ResourcesENT314 Automate Best Practices and Operational Health for Your AWS Resources
ENT314 Automate Best Practices and Operational Health for Your AWS Resources
 
Securing Your Big Data on AWS
Securing Your Big Data on AWSSecuring Your Big Data on AWS
Securing Your Big Data on AWS
 
Real-Time Streaming: Intro to Amazon Kinesis
Real-Time Streaming: Intro to Amazon KinesisReal-Time Streaming: Intro to Amazon Kinesis
Real-Time Streaming: Intro to Amazon Kinesis
 
AWS re:Invent 2016| HLC301 | Data Science and Healthcare: Running Large Scale...
AWS re:Invent 2016| HLC301 | Data Science and Healthcare: Running Large Scale...AWS re:Invent 2016| HLC301 | Data Science and Healthcare: Running Large Scale...
AWS re:Invent 2016| HLC301 | Data Science and Healthcare: Running Large Scale...
 
Think Big Data, Think Cloud - AWS Presentation - AWS Cloud Storage for the En...
Think Big Data, Think Cloud - AWS Presentation - AWS Cloud Storage for the En...Think Big Data, Think Cloud - AWS Presentation - AWS Cloud Storage for the En...
Think Big Data, Think Cloud - AWS Presentation - AWS Cloud Storage for the En...
 
The AWS Big Data Platform – Overview
The AWS Big Data Platform – OverviewThe AWS Big Data Platform – Overview
The AWS Big Data Platform – Overview
 
Building Your Data Lake on AWS - Level 200
Building Your Data Lake on AWS - Level 200Building Your Data Lake on AWS - Level 200
Building Your Data Lake on AWS - Level 200
 
16h00 globant - aws globant-big-data_summit2012
16h00   globant - aws globant-big-data_summit201216h00   globant - aws globant-big-data_summit2012
16h00 globant - aws globant-big-data_summit2012
 
AWS Big Data Platform
AWS Big Data PlatformAWS Big Data Platform
AWS Big Data Platform
 
Big Data Architectural Patterns and Best Practices on AWS
Big Data Architectural Patterns and Best Practices on AWSBig Data Architectural Patterns and Best Practices on AWS
Big Data Architectural Patterns and Best Practices on AWS
 
AWS Webcast - Tableau Big Data Solution Showcase
AWS Webcast - Tableau Big Data Solution ShowcaseAWS Webcast - Tableau Big Data Solution Showcase
AWS Webcast - Tableau Big Data Solution Showcase
 
Building Data Lakes and Analytics on AWS; Patterns and Best Practices - BDA30...
Building Data Lakes and Analytics on AWS; Patterns and Best Practices - BDA30...Building Data Lakes and Analytics on AWS; Patterns and Best Practices - BDA30...
Building Data Lakes and Analytics on AWS; Patterns and Best Practices - BDA30...
 
AWS re:Invent 2016: Visualizing Big Data Insights with Amazon QuickSight (BDM...
AWS re:Invent 2016: Visualizing Big Data Insights with Amazon QuickSight (BDM...AWS re:Invent 2016: Visualizing Big Data Insights with Amazon QuickSight (BDM...
AWS re:Invent 2016: Visualizing Big Data Insights with Amazon QuickSight (BDM...
 
AWS Webcast - Managing Big Data in the AWS Cloud_20140924
AWS Webcast - Managing Big Data in the AWS Cloud_20140924AWS Webcast - Managing Big Data in the AWS Cloud_20140924
AWS Webcast - Managing Big Data in the AWS Cloud_20140924
 
Building your First Big Data Application on AWS
Building your First Big Data Application on AWSBuilding your First Big Data Application on AWS
Building your First Big Data Application on AWS
 
ENT305 Migrating Your Databases to AWS: Deep Dive on Amazon Relational Databa...
ENT305 Migrating Your Databases to AWS: Deep Dive on Amazon Relational Databa...ENT305 Migrating Your Databases to AWS: Deep Dive on Amazon Relational Databa...
ENT305 Migrating Your Databases to AWS: Deep Dive on Amazon Relational Databa...
 
Building a Data Lake on AWS
Building a Data Lake on AWSBuilding a Data Lake on AWS
Building a Data Lake on AWS
 
Best Practices for Building a Data Lake on AWS
Best Practices for Building a Data Lake on AWSBest Practices for Building a Data Lake on AWS
Best Practices for Building a Data Lake on AWS
 
Visualizing Big Data Insights with Amazon QuickSight
Visualizing Big Data Insights with Amazon QuickSightVisualizing Big Data Insights with Amazon QuickSight
Visualizing Big Data Insights with Amazon QuickSight
 
Tapping the cloud for real time data analytics
 Tapping the cloud for real time data analytics Tapping the cloud for real time data analytics
Tapping the cloud for real time data analytics
 

Viewers also liked

Big Data Use Cases and Solutions in the AWS Cloud
Big Data Use Cases and Solutions in the AWS CloudBig Data Use Cases and Solutions in the AWS Cloud
Big Data Use Cases and Solutions in the AWS CloudAmazon Web Services
 
AWS January 2016 Webinar Series - Getting Started with Big Data on AWS
AWS January 2016 Webinar Series - Getting Started with Big Data on AWSAWS January 2016 Webinar Series - Getting Started with Big Data on AWS
AWS January 2016 Webinar Series - Getting Started with Big Data on AWSAmazon Web Services
 
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...Amazon Web Services
 
AWS March 2016 Webinar Series - Building Big Data Solutions with Amazon EMR a...
AWS March 2016 Webinar Series - Building Big Data Solutions with Amazon EMR a...AWS March 2016 Webinar Series - Building Big Data Solutions with Amazon EMR a...
AWS March 2016 Webinar Series - Building Big Data Solutions with Amazon EMR a...Amazon Web Services
 
Big Data Analytics: Reference Architectures and Case Studies by Serhiy Haziye...
Big Data Analytics: Reference Architectures and Case Studies by Serhiy Haziye...Big Data Analytics: Reference Architectures and Case Studies by Serhiy Haziye...
Big Data Analytics: Reference Architectures and Case Studies by Serhiy Haziye...SoftServe
 
Big Data & Analytics Architecture
Big Data & Analytics ArchitectureBig Data & Analytics Architecture
Big Data & Analytics ArchitectureArvind Sathi
 
(BDT310) Big Data Architectural Patterns and Best Practices on AWS
(BDT310) Big Data Architectural Patterns and Best Practices on AWS(BDT310) Big Data Architectural Patterns and Best Practices on AWS
(BDT310) Big Data Architectural Patterns and Best Practices on AWSAmazon Web Services
 
Building A Modern Data Analytics Architecture on AWS
Building A Modern Data Analytics Architecture on AWSBuilding A Modern Data Analytics Architecture on AWS
Building A Modern Data Analytics Architecture on AWSAmazon Web Services
 
Rolta iot analytics 17 mar 2015
Rolta iot analytics 17 mar 2015Rolta iot analytics 17 mar 2015
Rolta iot analytics 17 mar 2015Ron Elias
 
Turning big data to business outcomes
Turning big data to business outcomes Turning big data to business outcomes
Turning big data to business outcomes Rolta
 
Dell Cloud Manager Overview
Dell Cloud Manager OverviewDell Cloud Manager Overview
Dell Cloud Manager OverviewDell World
 
Search Engine Marketing
Search Engine MarketingSearch Engine Marketing
Search Engine MarketingSam Lalonde
 
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity Options
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity OptionsCreating Your Virtual Data Center: VPC Fundamentals and Connectivity Options
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity OptionsAmazon Web Services
 
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity Options
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity OptionsCreating Your Virtual Data Center: VPC Fundamentals and Connectivity Options
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity OptionsAmazon Web Services
 
AWS re:Invent 2016: Relational and NoSQL Databases on AWS: NBC, MarkLogic, an...
AWS re:Invent 2016: Relational and NoSQL Databases on AWS: NBC, MarkLogic, an...AWS re:Invent 2016: Relational and NoSQL Databases on AWS: NBC, MarkLogic, an...
AWS re:Invent 2016: Relational and NoSQL Databases on AWS: NBC, MarkLogic, an...Amazon Web Services
 
Eurisc analytics and big data solutions
Eurisc analytics and big data solutionsEurisc analytics and big data solutions
Eurisc analytics and big data solutionsmarcoxplace
 
Advance marketing analytics
Advance marketing analyticsAdvance marketing analytics
Advance marketing analyticsRajiv Kumar
 
Big data solutions for advanced marketing analytics
Big data solutions for advanced marketing analyticsBig data solutions for advanced marketing analytics
Big data solutions for advanced marketing analyticsNatalino Busa
 
Getting Started with Docker on AWS
Getting Started with Docker on AWSGetting Started with Docker on AWS
Getting Started with Docker on AWSAmazon Web Services
 

Viewers also liked (20)

Big Data and Analytics on AWS
Big Data and Analytics on AWS Big Data and Analytics on AWS
Big Data and Analytics on AWS
 
Big Data Use Cases and Solutions in the AWS Cloud
Big Data Use Cases and Solutions in the AWS CloudBig Data Use Cases and Solutions in the AWS Cloud
Big Data Use Cases and Solutions in the AWS Cloud
 
AWS January 2016 Webinar Series - Getting Started with Big Data on AWS
AWS January 2016 Webinar Series - Getting Started with Big Data on AWSAWS January 2016 Webinar Series - Getting Started with Big Data on AWS
AWS January 2016 Webinar Series - Getting Started with Big Data on AWS
 
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...
 
AWS March 2016 Webinar Series - Building Big Data Solutions with Amazon EMR a...
AWS March 2016 Webinar Series - Building Big Data Solutions with Amazon EMR a...AWS March 2016 Webinar Series - Building Big Data Solutions with Amazon EMR a...
AWS March 2016 Webinar Series - Building Big Data Solutions with Amazon EMR a...
 
Big Data Analytics: Reference Architectures and Case Studies by Serhiy Haziye...
Big Data Analytics: Reference Architectures and Case Studies by Serhiy Haziye...Big Data Analytics: Reference Architectures and Case Studies by Serhiy Haziye...
Big Data Analytics: Reference Architectures and Case Studies by Serhiy Haziye...
 
Big Data & Analytics Architecture
Big Data & Analytics ArchitectureBig Data & Analytics Architecture
Big Data & Analytics Architecture
 
(BDT310) Big Data Architectural Patterns and Best Practices on AWS
(BDT310) Big Data Architectural Patterns and Best Practices on AWS(BDT310) Big Data Architectural Patterns and Best Practices on AWS
(BDT310) Big Data Architectural Patterns and Best Practices on AWS
 
Building A Modern Data Analytics Architecture on AWS
Building A Modern Data Analytics Architecture on AWSBuilding A Modern Data Analytics Architecture on AWS
Building A Modern Data Analytics Architecture on AWS
 
Rolta iot analytics 17 mar 2015
Rolta iot analytics 17 mar 2015Rolta iot analytics 17 mar 2015
Rolta iot analytics 17 mar 2015
 
Turning big data to business outcomes
Turning big data to business outcomes Turning big data to business outcomes
Turning big data to business outcomes
 
Dell Cloud Manager Overview
Dell Cloud Manager OverviewDell Cloud Manager Overview
Dell Cloud Manager Overview
 
Search Engine Marketing
Search Engine MarketingSearch Engine Marketing
Search Engine Marketing
 
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity Options
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity OptionsCreating Your Virtual Data Center: VPC Fundamentals and Connectivity Options
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity Options
 
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity Options
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity OptionsCreating Your Virtual Data Center: VPC Fundamentals and Connectivity Options
Creating Your Virtual Data Center: VPC Fundamentals and Connectivity Options
 
AWS re:Invent 2016: Relational and NoSQL Databases on AWS: NBC, MarkLogic, an...
AWS re:Invent 2016: Relational and NoSQL Databases on AWS: NBC, MarkLogic, an...AWS re:Invent 2016: Relational and NoSQL Databases on AWS: NBC, MarkLogic, an...
AWS re:Invent 2016: Relational and NoSQL Databases on AWS: NBC, MarkLogic, an...
 
Eurisc analytics and big data solutions
Eurisc analytics and big data solutionsEurisc analytics and big data solutions
Eurisc analytics and big data solutions
 
Advance marketing analytics
Advance marketing analyticsAdvance marketing analytics
Advance marketing analytics
 
Big data solutions for advanced marketing analytics
Big data solutions for advanced marketing analyticsBig data solutions for advanced marketing analytics
Big data solutions for advanced marketing analytics
 
Getting Started with Docker on AWS
Getting Started with Docker on AWSGetting Started with Docker on AWS
Getting Started with Docker on AWS
 

Similar to 2016 AWS Big Data Solution Days

Big Data Solutions Day - Calgary
Big Data Solutions Day - CalgaryBig Data Solutions Day - Calgary
Big Data Solutions Day - CalgaryAmazon Web Services
 
Understanding AWS Managed Database and Analytics Services | AWS Public Sector...
Understanding AWS Managed Database and Analytics Services | AWS Public Sector...Understanding AWS Managed Database and Analytics Services | AWS Public Sector...
Understanding AWS Managed Database and Analytics Services | AWS Public Sector...Amazon Web Services
 
Understanding AWS Managed Database and Analytics Services | AWS Public Sector...
Understanding AWS Managed Database and Analytics Services | AWS Public Sector...Understanding AWS Managed Database and Analytics Services | AWS Public Sector...
Understanding AWS Managed Database and Analytics Services | AWS Public Sector...Amazon Web Services
 
¿Quién es Amazon Web Services?
¿Quién es Amazon Web Services?¿Quién es Amazon Web Services?
¿Quién es Amazon Web Services?Software Guru
 
Welcome & AWS Big Data Solution Overview
Welcome & AWS Big Data Solution OverviewWelcome & AWS Big Data Solution Overview
Welcome & AWS Big Data Solution OverviewAmazon Web Services
 
Aws what is cloud computing deck 08 14 13
Aws what is cloud computing deck 08 14 13Aws what is cloud computing deck 08 14 13
Aws what is cloud computing deck 08 14 13Amazon Web Services
 
AWS webinar what is cloud computing 13 09 11
AWS webinar what is cloud computing 13 09 11AWS webinar what is cloud computing 13 09 11
AWS webinar what is cloud computing 13 09 11Amazon Web Services
 
Introduction to the AWS Cloud from Digital Tuesday Meetup
Introduction to the AWS Cloud from Digital Tuesday MeetupIntroduction to the AWS Cloud from Digital Tuesday Meetup
Introduction to the AWS Cloud from Digital Tuesday MeetupIan Massingham
 
Understanding AWS Managed Databases and Analytic Services - AWS Innovate Otta...
Understanding AWS Managed Databases and Analytic Services - AWS Innovate Otta...Understanding AWS Managed Databases and Analytic Services - AWS Innovate Otta...
Understanding AWS Managed Databases and Analytic Services - AWS Innovate Otta...Amazon Web Services
 
Finding Meaning in the Noise: Understanding Big Data with AWS Analytics
Finding Meaning in the Noise: Understanding Big Data with AWS AnalyticsFinding Meaning in the Noise: Understanding Big Data with AWS Analytics
Finding Meaning in the Noise: Understanding Big Data with AWS AnalyticsAmazon Web Services
 
BDA305 Building Data Lakes and Analytics on AWS
BDA305 Building Data Lakes and Analytics on AWSBDA305 Building Data Lakes and Analytics on AWS
BDA305 Building Data Lakes and Analytics on AWSAmazon Web Services
 
State of the Union: Database & Analytics
State of the Union: Database & AnalyticsState of the Union: Database & Analytics
State of the Union: Database & AnalyticsAmazon Web Services
 
How Citrix Uses AWS Marketplace Solutions to Accelerate Analytic Workloads on...
How Citrix Uses AWS Marketplace Solutions to Accelerate Analytic Workloads on...How Citrix Uses AWS Marketplace Solutions to Accelerate Analytic Workloads on...
How Citrix Uses AWS Marketplace Solutions to Accelerate Analytic Workloads on...Amazon Web Services
 
MSC203_How Citrix Uses AWS Marketplace Solutions To Accelerate Analytic Workl...
MSC203_How Citrix Uses AWS Marketplace Solutions To Accelerate Analytic Workl...MSC203_How Citrix Uses AWS Marketplace Solutions To Accelerate Analytic Workl...
MSC203_How Citrix Uses AWS Marketplace Solutions To Accelerate Analytic Workl...Amazon Web Services
 
Best Practices Using Big Data on AWS | AWS Public Sector Summit 2017
Best Practices Using Big Data on AWS | AWS Public Sector Summit 2017Best Practices Using Big Data on AWS | AWS Public Sector Summit 2017
Best Practices Using Big Data on AWS | AWS Public Sector Summit 2017Amazon Web Services
 
Vancouver keynote - AWS Innovate - Sam Elmalak
Vancouver keynote - AWS Innovate - Sam ElmalakVancouver keynote - AWS Innovate - Sam Elmalak
Vancouver keynote - AWS Innovate - Sam ElmalakAmazon Web Services
 
AWS October Webinar Series - Introducing Amazon QuickSight
AWS October Webinar Series - Introducing Amazon QuickSightAWS October Webinar Series - Introducing Amazon QuickSight
AWS October Webinar Series - Introducing Amazon QuickSightAmazon Web Services
 
Using real time big data analytics for competitive advantage
 Using real time big data analytics for competitive advantage Using real time big data analytics for competitive advantage
Using real time big data analytics for competitive advantageAmazon Web Services
 
Data Analysis - Journey Through the Cloud
Data Analysis - Journey Through the CloudData Analysis - Journey Through the Cloud
Data Analysis - Journey Through the CloudIan Massingham
 
Journey Through the Cloud - Data Analysis
Journey Through the Cloud - Data AnalysisJourney Through the Cloud - Data Analysis
Journey Through the Cloud - Data AnalysisAmazon Web Services
 

Similar to 2016 AWS Big Data Solution Days (20)

Big Data Solutions Day - Calgary
Big Data Solutions Day - CalgaryBig Data Solutions Day - Calgary
Big Data Solutions Day - Calgary
 
Understanding AWS Managed Database and Analytics Services | AWS Public Sector...
Understanding AWS Managed Database and Analytics Services | AWS Public Sector...Understanding AWS Managed Database and Analytics Services | AWS Public Sector...
Understanding AWS Managed Database and Analytics Services | AWS Public Sector...
 
Understanding AWS Managed Database and Analytics Services | AWS Public Sector...
Understanding AWS Managed Database and Analytics Services | AWS Public Sector...Understanding AWS Managed Database and Analytics Services | AWS Public Sector...
Understanding AWS Managed Database and Analytics Services | AWS Public Sector...
 
¿Quién es Amazon Web Services?
¿Quién es Amazon Web Services?¿Quién es Amazon Web Services?
¿Quién es Amazon Web Services?
 
Welcome & AWS Big Data Solution Overview
Welcome & AWS Big Data Solution OverviewWelcome & AWS Big Data Solution Overview
Welcome & AWS Big Data Solution Overview
 
Aws what is cloud computing deck 08 14 13
Aws what is cloud computing deck 08 14 13Aws what is cloud computing deck 08 14 13
Aws what is cloud computing deck 08 14 13
 
AWS webinar what is cloud computing 13 09 11
AWS webinar what is cloud computing 13 09 11AWS webinar what is cloud computing 13 09 11
AWS webinar what is cloud computing 13 09 11
 
Introduction to the AWS Cloud from Digital Tuesday Meetup
Introduction to the AWS Cloud from Digital Tuesday MeetupIntroduction to the AWS Cloud from Digital Tuesday Meetup
Introduction to the AWS Cloud from Digital Tuesday Meetup
 
Understanding AWS Managed Databases and Analytic Services - AWS Innovate Otta...
Understanding AWS Managed Databases and Analytic Services - AWS Innovate Otta...Understanding AWS Managed Databases and Analytic Services - AWS Innovate Otta...
Understanding AWS Managed Databases and Analytic Services - AWS Innovate Otta...
 
Finding Meaning in the Noise: Understanding Big Data with AWS Analytics
Finding Meaning in the Noise: Understanding Big Data with AWS AnalyticsFinding Meaning in the Noise: Understanding Big Data with AWS Analytics
Finding Meaning in the Noise: Understanding Big Data with AWS Analytics
 
BDA305 Building Data Lakes and Analytics on AWS
BDA305 Building Data Lakes and Analytics on AWSBDA305 Building Data Lakes and Analytics on AWS
BDA305 Building Data Lakes and Analytics on AWS
 
State of the Union: Database & Analytics
State of the Union: Database & AnalyticsState of the Union: Database & Analytics
State of the Union: Database & Analytics
 
How Citrix Uses AWS Marketplace Solutions to Accelerate Analytic Workloads on...
How Citrix Uses AWS Marketplace Solutions to Accelerate Analytic Workloads on...How Citrix Uses AWS Marketplace Solutions to Accelerate Analytic Workloads on...
How Citrix Uses AWS Marketplace Solutions to Accelerate Analytic Workloads on...
 
MSC203_How Citrix Uses AWS Marketplace Solutions To Accelerate Analytic Workl...
MSC203_How Citrix Uses AWS Marketplace Solutions To Accelerate Analytic Workl...MSC203_How Citrix Uses AWS Marketplace Solutions To Accelerate Analytic Workl...
MSC203_How Citrix Uses AWS Marketplace Solutions To Accelerate Analytic Workl...
 
Best Practices Using Big Data on AWS | AWS Public Sector Summit 2017
Best Practices Using Big Data on AWS | AWS Public Sector Summit 2017Best Practices Using Big Data on AWS | AWS Public Sector Summit 2017
Best Practices Using Big Data on AWS | AWS Public Sector Summit 2017
 
Vancouver keynote - AWS Innovate - Sam Elmalak
Vancouver keynote - AWS Innovate - Sam ElmalakVancouver keynote - AWS Innovate - Sam Elmalak
Vancouver keynote - AWS Innovate - Sam Elmalak
 
AWS October Webinar Series - Introducing Amazon QuickSight
AWS October Webinar Series - Introducing Amazon QuickSightAWS October Webinar Series - Introducing Amazon QuickSight
AWS October Webinar Series - Introducing Amazon QuickSight
 
Using real time big data analytics for competitive advantage
 Using real time big data analytics for competitive advantage Using real time big data analytics for competitive advantage
Using real time big data analytics for competitive advantage
 
Data Analysis - Journey Through the Cloud
Data Analysis - Journey Through the CloudData Analysis - Journey Through the Cloud
Data Analysis - Journey Through the Cloud
 
Journey Through the Cloud - Data Analysis
Journey Through the Cloud - Data AnalysisJourney Through the Cloud - Data Analysis
Journey Through the Cloud - Data Analysis
 

More from Amazon Web Services

Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn...
Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn...Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn...
Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn...Amazon Web Services
 
Big Data per le Startup: come creare applicazioni Big Data in modalità Server...
Big Data per le Startup: come creare applicazioni Big Data in modalità Server...Big Data per le Startup: come creare applicazioni Big Data in modalità Server...
Big Data per le Startup: come creare applicazioni Big Data in modalità Server...Amazon Web Services
 
Esegui pod serverless con Amazon EKS e AWS Fargate
Esegui pod serverless con Amazon EKS e AWS FargateEsegui pod serverless con Amazon EKS e AWS Fargate
Esegui pod serverless con Amazon EKS e AWS FargateAmazon Web Services
 
Costruire Applicazioni Moderne con AWS
Costruire Applicazioni Moderne con AWSCostruire Applicazioni Moderne con AWS
Costruire Applicazioni Moderne con AWSAmazon Web Services
 
Come spendere fino al 90% in meno con i container e le istanze spot
Come spendere fino al 90% in meno con i container e le istanze spot Come spendere fino al 90% in meno con i container e le istanze spot
Come spendere fino al 90% in meno con i container e le istanze spot Amazon Web Services
 
Rendi unica l’offerta della tua startup sul mercato con i servizi Machine Lea...
Rendi unica l’offerta della tua startup sul mercato con i servizi Machine Lea...Rendi unica l’offerta della tua startup sul mercato con i servizi Machine Lea...
Rendi unica l’offerta della tua startup sul mercato con i servizi Machine Lea...Amazon Web Services
 
OpsWorks Configuration Management: automatizza la gestione e i deployment del...
OpsWorks Configuration Management: automatizza la gestione e i deployment del...OpsWorks Configuration Management: automatizza la gestione e i deployment del...
OpsWorks Configuration Management: automatizza la gestione e i deployment del...Amazon Web Services
 
Microsoft Active Directory su AWS per supportare i tuoi Windows Workloads
Microsoft Active Directory su AWS per supportare i tuoi Windows WorkloadsMicrosoft Active Directory su AWS per supportare i tuoi Windows Workloads
Microsoft Active Directory su AWS per supportare i tuoi Windows WorkloadsAmazon Web Services
 
Database Oracle e VMware Cloud on AWS i miti da sfatare
Database Oracle e VMware Cloud on AWS i miti da sfatareDatabase Oracle e VMware Cloud on AWS i miti da sfatare
Database Oracle e VMware Cloud on AWS i miti da sfatareAmazon Web Services
 
Crea la tua prima serverless ledger-based app con QLDB e NodeJS
Crea la tua prima serverless ledger-based app con QLDB e NodeJSCrea la tua prima serverless ledger-based app con QLDB e NodeJS
Crea la tua prima serverless ledger-based app con QLDB e NodeJSAmazon Web Services
 
API moderne real-time per applicazioni mobili e web
API moderne real-time per applicazioni mobili e webAPI moderne real-time per applicazioni mobili e web
API moderne real-time per applicazioni mobili e webAmazon Web Services
 
Database Oracle e VMware Cloud™ on AWS: i miti da sfatare
Database Oracle e VMware Cloud™ on AWS: i miti da sfatareDatabase Oracle e VMware Cloud™ on AWS: i miti da sfatare
Database Oracle e VMware Cloud™ on AWS: i miti da sfatareAmazon Web Services
 
Tools for building your MVP on AWS
Tools for building your MVP on AWSTools for building your MVP on AWS
Tools for building your MVP on AWSAmazon Web Services
 
How to Build a Winning Pitch Deck
How to Build a Winning Pitch DeckHow to Build a Winning Pitch Deck
How to Build a Winning Pitch DeckAmazon Web Services
 
Building a web application without servers
Building a web application without serversBuilding a web application without servers
Building a web application without serversAmazon Web Services
 
2016 AWS Big Data Solution Days

  • 1. Big Data Solutions Day Introductory Session Wesley Wilk Solutions Architect, AWS
  • 2. 12:30 PM Introductory Session 2:00 PM Best Practices on Real-time Streaming Analytics 3:00 PM Break 3:15 PM Getting started with Amazon Machine Learning 4:15 PM Building your first big data application on AWS Today
  • 3. Thank you to our Sponsor
  • 4. AWS In 2016: 1,000,000+ Active Customers Per Month; $10B+ business; 64% YOY Growth, Q1 2015 to Q1 2016; 10x the Compute Capacity In Use of 14 others, combined
  • 5. Infrastructure Intelligence Harness data generated from your systems and infrastructure Advanced Analytics Anticipate future behaviors and conduct what-if analysis Largest Ecosystem of ISVs Solutions vetted by the AWS Partner Competency Program Data Integration Move, synchronize, cleanse, and manage data Data Analysis & Visualization Turn data into actionable insight and enhance decision making
  • 6. Largest Ecosystem of Integrators Qualified consulting partners with AWS big data expertise North America Europe, Middle East and Africa Asia Pacific and Japan
  • 7. Big Data Analytics in AWS Marketplace (http://aws.analytic.business): thousands of products pre-integrated with the AWS Cloud, 290 specific to big data. Analysis & Visualization: extract valuable information from your historical and current data. Advanced Analytics: predict future business performance; location, text, social, sentiment analysis. Data Integration: extract, migrate, or prepare and clean your data for accurate analysis. Solutions are pre-configured and ready to run on AWS; deploy when you need it with 1-Click launch in multiple regions around the world; metered pricing by the hour, pay only for what you use; volume licensing available.
  • 8. Proven Customer Success The vast majority of big data use cases deployed in the cloud today run on AWS.
  • 9. Volume Velocity Variety Big Data is Breaking Traditional IT Infrastructure -  Too many tools to setup, manage & scale -  Unsustainable costs -  Difficult & undifferentiated heavy lifting just to get started -  New & expensive skills -  Bigger responsibility (sensitive data)
  • 10. The Evolution of Big Data Processing Descriptive Real-time Predictive Dashboards; Traditional query & reporting Clickstream analysis; Ad bidding; streaming data Inventory forecasting; Fraud detection; Recommendation engines What happened before What’s happening now Probability of “x” happening
  • 11. Big Data on AWS 1.  Broadest & Deepest Functionality 2.  Computational Power that is Second to None 3.  Petabyte-scale Analytics Made Affordable 4.  Easiest & Most Powerful Tools for Migrating Data 5.  Security You Can Trust
  • 12. 1. Broadest & Deepest Functionality Big Data Storage Data Warehousing Real-time Streaming Distributed Analytics (Hadoop, Spark, Presto) NoSQL Databases Business Intelligence Relational Databases Internet of Things (IoT) Machine Learning Server-less Compute Elasticsearch
  • 13. Core Components for Big Data Workloads
  • 14. Big Data Storage for Virtually All AWS Services Store anything Object storage Scalable 99.999999999% durability Extremely low cost Amazon S3
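For intuition, eleven nines of durability implies a vanishingly small expected loss. A back-of-the-envelope sketch (`expected_losses` is a hypothetical helper, and independence of object losses is a simplifying assumption, not an AWS guarantee):

```python
# Back-of-the-envelope: expected annual object loss at 99.999999999% durability.
DURABILITY = 0.99999999999          # 11 nines, per object per year
annual_loss_rate = 1 - DURABILITY   # ~1e-11

def expected_losses(num_objects: int) -> float:
    """Expected objects lost in one year, assuming independent losses."""
    return num_objects * annual_loss_rate

print(expected_losses(10_000_000))  # ~0.0001 objects per year
```

In other words, storing ten million objects you would expect to lose roughly one object every ten thousand years under these assumptions.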
  • 15. Real-time processing High throughput; elastic Easy to use Integration with S3, EMR, Redshift, DynamoDB Amazon Kinesis Real-time Streaming Data
  • 16. Amazon Kinesis Streams •  For Technical Developers •  Build your own custom applications that process or analyze streaming data Amazon Kinesis Firehose •  For all developers, data scientists •  Easily load massive volumes of streaming data into S3, Amazon Redshift and Amazon Elasticsearch Amazon Kinesis Analytics •  For all developers, data scientists •  Easily analyze data streams using standard SQL queries •  Coming soon Amazon Kinesis: Streaming Data Made Easy Services make it easy to capture, deliver and process streams on AWS
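The Kinesis Streams path on slide 16 means writing producer code yourself. A minimal sketch of one producer concern, batching, assuming the documented PutRecords cap of 500 records per request (`chunk_records` is a hypothetical helper; a real producer would hand each batch to boto3's `put_records`):

```python
# Hypothetical helper: split records into batches that respect the
# Kinesis Streams PutRecords limit of 500 records per request.
MAX_RECORDS_PER_CALL = 500

def chunk_records(records, max_per_call=MAX_RECORDS_PER_CALL):
    """Split a record list into PutRecords-sized batches."""
    return [records[i:i + max_per_call]
            for i in range(0, len(records), max_per_call)]

# 1,200 sample records -> three PutRecords calls.
batches = chunk_records(
    [{"Data": b"event", "PartitionKey": str(i)} for i in range(1200)]
)
print([len(b) for b in batches])  # [500, 500, 200]
```

A production producer would also respect the per-request payload cap and retry records that come back as failed within a batch.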
  • 17. Managed Hadoop framework Spark, Presto, HBase, Tez, Hive, etc. Cost effective; Easy to use On-demand and spot pricing HDFS & S3 File systems Amazon EMR Distributed Analytics
  • 18. NoSQL Database Seamless scalability Zero admin Single digit millisecond latency Amazon DynamoDB Fast & Flexible NoSQL Database Service
  • 19. Durable, Low Latency, At Scale. WRITES: continuously replicated to 3 AZs; quorum acknowledgment; persisted to disk (custom SSD). READS: strongly or eventually consistent; no trade-off in latency.
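The quorum acknowledgment on slide 19 follows the classic replication rule that read and write quorums must overlap. A sketch of that arithmetic (`quorums_overlap` is a hypothetical helper; DynamoDB's actual internals are not public in this detail):

```python
# Replication-quorum rule: a read quorum R and write quorum W overlap
# whenever R + W > N, so at least one replica in any read set is
# guaranteed to hold the latest acknowledged write.
def quorums_overlap(n: int, w: int, r: int) -> bool:
    return r + w > n

print(quorums_overlap(n=3, w=2, r=2))  # True: read always sees latest write
print(quorums_overlap(n=3, w=2, r=1))  # False: read may lag (eventual consistency)
```

This is the trade-off the slide names: a strongly consistent read touches enough replicas to overlap the write quorum, while an eventually consistent read can be served by a single replica.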
  • 20. Relational data warehouse Massively parallel; Petabyte scale Fully managed HDD and SSD Platforms $1,000/TB/Year; start at $0.25/hour Amazon Redshift Fully Managed Petabyte-scale Data Warehouse
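As a quick sanity check on the entry price (simple arithmetic; `annual_cost` is a hypothetical helper, and the $1,000/TB/year figure on the slide assumes reserved pricing rather than this on-demand rate):

```python
# Annualize the quoted $0.25/hour on-demand entry price for the
# smallest Redshift node. Reserved pricing, not this rate, is what
# reaches the marketed $1,000/TB/year.
HOURS_PER_YEAR = 24 * 365

def annual_cost(hourly_rate: float) -> float:
    return hourly_rate * HOURS_PER_YEAR

print(annual_cost(0.25))  # 2190.0 dollars per year, running continuously
```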
  • 21. Scale from 160GB to 2PB Push-button Global DR Built in end-to-end security
  • 22. Amazon Redshift has security built-in: SSL to secure data in transit; encryption to secure data at rest (AES-256, hardware accelerated; all blocks on disks and in Amazon S3 encrypted; HSM support); no direct access to compute nodes; audit logging, AWS CloudTrail, AWS KMS integration; Amazon VPC support; SOC 1/2/3, PCI-DSS Level 1, FedRAMP, HIPAA. [Architecture diagram: SQL clients/BI tools connect via JDBC/ODBC to a leader node inside an internal VPC, fronted by the customer VPC; compute nodes (128GB RAM, 16TB disk, 16 cores each) on 10 GigE (HPC) handle ingestion, backup, and restore against Amazon S3 / Amazon DynamoDB.]
  • 23. Easy to use, managed machine learning service built for developers Robust, powerful machine learning technology based on Amazon’s internal systems Create models using your data already stored in the AWS cloud Deploy models to production in seconds Amazon Machine Learning Machine Learning
  • 25. Sample Use Cases Fraud detection Detecting fraudulent transactions, filtering spam emails, flagging suspicious reviews, … Personalization Recommending content, predictive content loading, improving user experience, … Targeted marketing Matching customers and offers, choosing marketing campaigns, cross-selling and up-selling, … Content classification Categorizing documents, matching hiring managers and resumes, … Churn prediction Finding customers who are likely to stop using the service, free-tier upgrade targeting, … Customer support Predictive routing of customer emails, social media listening, …
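Amazon ML's binary classifier returns a score between 0 and 1 that you compare against an adjustable cut-off. A toy sketch of that scoring step (`sigmoid` and `classify` are illustrative stand-ins, not the service API):

```python
import math

# Illustrative only: a binary classifier of the kind Amazon ML trains
# (logistic regression) produces a score in [0, 1]; the cut-off trades
# precision against recall. For fraud detection you might lower the
# threshold to catch more suspicious transactions at the cost of
# false positives.
def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def classify(score: float, threshold: float = 0.5) -> str:
    return "flag" if score >= threshold else "pass"

score = sigmoid(1.2)                   # roughly 0.77
print(classify(score, threshold=0.3))  # flag
```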
  • 26. 2. Computational Power that is Second to None. More than 2x the computational instance types of any other cloud platform, addressing a wide range of big data use cases: Compute Optimized (C3, C4); Memory Optimized (R3, X1); GPU Instances (G2); Storage Optimized (I2, D2); plus general purpose (M4), low-cost burstable (T2), and dedicated instances.
  • 28. 3. Affordable Petabyte-scale Analytics. AWS helps customers maximize the value of Big Data investments while reducing overall IT costs: secure, highly durable storage (Amazon S3) $28.16 / TB / month; data archiving (Amazon Glacier) $7.16 / TB / month; real-time streaming data load (Amazon Kinesis) $0.035 / GB; 10-node Spark cluster (Amazon EMR) $0.15 / hr; petabyte-scale data warehouse (Amazon Redshift) $0.25 / hr.
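Using the per-TB rates quoted above, a rough monthly bill for a tiered layout might look like this (`monthly_storage_cost` is a hypothetical helper; 2016 list prices, before request and transfer charges):

```python
# Rough monthly storage bill: keep recent data in S3, archive the
# rest to Glacier, at the 2016 per-TB rates quoted on the slide.
S3_PER_TB = 28.16       # $/TB/month
GLACIER_PER_TB = 7.16   # $/TB/month

def monthly_storage_cost(hot_tb: float, archive_tb: float) -> float:
    return hot_tb * S3_PER_TB + archive_tb * GLACIER_PER_TB

# 100 TB total, 10% hot: tiering cuts the bill versus all-S3.
print(round(monthly_storage_cost(hot_tb=10, archive_tb=90), 2))   # 926.0
print(round(monthly_storage_cost(hot_tb=100, archive_tb=0), 2))   # 2816.0
```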
  • 29. NASDAQ migrated to Amazon Redshift, achieving cost savings of 57% compared to legacy system Volume Avg 5.5B rows per day. Peak of 14B rows Performance Queries running faster. Loads finish before 11PM Scale Loading more data than ever stored by legacy DWH
  • 30. 4. Easy, Powerful Tools for Migrating Data AWS provides the broadest range of tools for easy, fast, secure data movement to and from the AWS cloud. AWS Direct Connect ISV Connectors Amazon Kinesis Firehose Amazon S3 Transfer Acceleration AWS Storage Gateway Database Migration Service AWS Snowball
  • 31. What is Snowball? Petabyte scale data transport E-ink shipping label Ruggedized case “8.5G Impact” All data encrypted end-to-end 50TB & 80TB 10G network Rain & dust resistant Tamper-resistant case & electronics
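The case for shipping data physically is easy to estimate (`transfer_days` is a hypothetical helper; assumes full, sustained link utilization and decimal units):

```python
# Days needed to push a dataset over a sustained network link, versus
# shipping an 80TB Snowball appliance (AWS quotes roughly a week
# end-to-end). Illustrative arithmetic only.
SECONDS_PER_DAY = 86_400

def transfer_days(terabytes: float, gbps: float) -> float:
    bits = terabytes * 1e12 * 8
    return bits / (gbps * 1e9) / SECONDS_PER_DAY

print(round(transfer_days(80, 1.0), 1))  # 7.4 days on a dedicated 1 Gbps link
print(round(transfer_days(80, 0.1), 1))  # 74.1 days on 100 Mbps
```

At typical office uplink speeds, a single appliance beats the network by an order of magnitude, which is the point of the service.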
  • 32. Start your first migration in 10 minutes or less Keep your apps running during the migration Replicate within, to, or from Amazon EC2 or Amazon RDS Move data to the same or different database engine AWS Database Migration Service
  • 33. 5. Security & Compliance You Can Trust •  AWS manages 1800+ security controls so you don’t have to •  You benefit from an environment built for the most security sensitive organizations •  You get to define the right security controls for your workload sensitivity •  You always have full ownership and control of your data
  • 34. FINRA handles approximately 75 billion market events every day to build a holistic picture of trading in the U.S. Deter misconduct by enforcing the rules Detect and prevent wrongdoing in the U.S. markets Discipline those who break the rules
  • 35. Broadest Services to Secure Applications NETWORKING IDENTITY ENCRYPTION COMPLIANCE Virtual Private Cloud Web Application Firewall IAM Active Directory Integration SAML Federation KMS Cloud HSM Server-side Encryption Encryption SDK Service Catalog CloudTrail Config Config Rules Inspector
  • 36. Key AWS Certifications and Assurance Programs
  • 38. Simplify Big Data Processing: collect → store → process / analyze → consume / visualize, turning data into answers while balancing time to answer (latency), throughput, and cost.
  • 39. Reference architecture, end to end. COLLECT: applications (mobile apps, web apps), logging (Amazon CloudWatch, AWS CloudTrail), transport (AWS Direct Connect, AWS Import/Export Snowball), messaging, and IoT (devices, sensors & IoT platforms via AWS IoT) produce records, documents, files, messages, and streams. STORE: file (Amazon S3), queue (Amazon SQS), stream (Amazon Kinesis Streams, Amazon Kinesis Firehose, Apache Kafka), NoSQL (Amazon DynamoDB, with Amazon DynamoDB Streams), SQL (Amazon RDS), cache (Amazon ElastiCache), and search (Amazon Elasticsearch Service), spanning hot to warm tiers. PROCESS / ANALYZE (with ETL in between): batch (Amazon EMR), interactive (Amazon Redshift, Presto), stream (Amazon Kinesis Analytics, Amazon KCL apps, AWS Lambda, streaming apps on Amazon EC2), message (Amazon SQS apps on Amazon EC2), and ML (Amazon Machine Learning), spanning fast to slow. CONSUME: analysis & visualization via Amazon QuickSight, apps & services, notebooks, IDE, and API.
  • 40–45. Reference architecture, COLLECT stage in detail: applications (mobile apps, web apps); logging (Amazon CloudWatch, AWS CloudTrail); transport (AWS Direct Connect, AWS Import/Export Snowball); messaging; and IoT (devices, sensors & IoT platforms via AWS IoT).
  • 46–51. Types of data and where each lands in STORE: database records from applications → database; search documents → search; log files (via logging and transport) → file store; messaging events → message queue; device / sensor / IoT streams (via AWS IoT) → stream storage.
  • 52–55. STORE layer services: Amazon ElastiCache (cache), Amazon DynamoDB (NoSQL), Amazon RDS (SQL), Amazon Elasticsearch Service (search), Amazon S3 (file; hot and warm tiers), plus message storage (queue) and stream storage.
  • 56–60. Message & stream storage options: Amazon SQS (managed message queue service); Apache Kafka (high-throughput distributed messaging system); Amazon Kinesis Streams (managed stream storage plus processing); Amazon Kinesis Firehose (managed data delivery); Amazon DynamoDB (managed NoSQL database; tables can be stream-enabled via Amazon DynamoDB Streams).
  • 61–66. PROCESS / ANALYZE tools and frameworks: machine learning (Amazon ML, Amazon EMR with Spark ML); interactive (Amazon Redshift, Amazon EMR with Presto and Spark); batch (Amazon EMR with MapReduce, Hive, Pig, Spark); messaging (Amazon SQS application on Amazon EC2); streaming, both micro-batch (Spark Streaming, KCL) and real-time (Amazon Kinesis Analytics, Storm, AWS Lambda, KCL).
  • 67. COLLECT STORE PROCESS / ANALYZE Amazon QuickSight Apps & Services Analysis & visualization Notebooks IDE API Reference architecture ETL CONSUME Consume
  • 68. COLLECT STORE PROCESS / ANALYZE Amazon QuickSight Apps & Services Analysis & visualization Notebooks IDE API Reference architecture ETL CONSUME Applications & API Consume
  • 69. COLLECT STORE PROCESS / ANALYZE Amazon QuickSight Apps & Services Analysis & visualization Notebooks IDE API Reference architecture ETL CONSUME Applications & API Analysis and visualization Consume
  • 70. COLLECT STORE PROCESS / ANALYZE Amazon QuickSight Apps & Services Analysis & visualization Notebooks IDE API Reference architecture ETL CONSUME Applications & API Analysis and visualization Notebooks Consume
  • 71. COLLECT STORE PROCESS / ANALYZE Amazon QuickSight Apps & Services Analysis & visualization Notebooks IDE API Reference architecture ETL CONSUME Applications & API Analysis and visualization Notebooks IDE Consume
  • 72. Amazon SQS apps Streaming Amazon Kinesis Analytics Amazon KCL apps AWS Lambda Amazon Redshift COLLECT STORE CONSUME PROCESS / ANALYZE Amazon Machine Learning Presto Amazon EMR Amazon Elasticsearch Service Apache Kafka Amazon SQS Amazon Kinesis Streams Amazon Kinesis Firehose Amazon DynamoDB Amazon S3 Amazon ElastiCache Amazon RDS Amazon DynamoDB Streams Hot Hot Warm Fast Slow Fast Batch Message Interactive Stream ML Search SQL NoSQL Cache File Queue Stream Amazon EC2 Amazon EC2 Mobile apps Web apps Devices Messaging Message Sensors & IoT platforms AWS IoT Data centers AWS Direct Connect AWS Import/Export Snowball Logging Amazon CloudWatch AWS CloudTrail RECORDS DOCUMENTS FILES MESSAGES STREAMS Amazon QuickSight Apps & Services Analysis & visualization Notebooks IDE API Reference architecture Logging IoT Applications Transport Messaging ETL
  • 73. Storage Compute •  Decoupled “data bus” •  Completely autonomous components •  Independently scale storage & compute resources on-demand Collect → Store → Process | Analyze → Consume Multi-Stage Decoupled “Data Bus”
  • 74. Multi-Stage Decoupled “Data Bus” Multiple stages Storage decouples multiple processing stages Store Process Store Process Data process store
  • 75. Multiple Stream Processing Applications Can Read from Amazon Kinesis Amazon Kinesis AWS Lambda Amazon DynamoDB Amazon Kinesis S3 Connector process store Amazon S3
  • 76. Amazon EMR Analysis Frameworks Can Read from Multiple Data Stores Amazon Kinesis AWS Lambda Amazon S3 Data Amazon DynamoDB Answer Spark Streaming Amazon Kinesis S3 Connector process store Spark SQL
  • 77. Amazon EMR Real-time Analytics Apache Kafka KCL AWS Lambda Spark Streaming Apache Storm Amazon SNS Amazon ML Notifications Amazon ElastiCache (Redis) Amazon DynamoDB Amazon RDS Amazon ES Alert App state Real-time prediction KPI process store DynamoDB Streams Amazon Kinesis Data stream
  • 78. Case Study: Clickstream Analysis Hearst Corporation monitors trending content for over 250 digital properties worldwide and processes more than 30TB of data per day, using an architecture that includes Amazon Kinesis and Spark running on Amazon EMR. Store → Process | Analyze → Answers
  • 79. Interactive & Batch Analytics Amazon S3 Amazon EMR Hive Pig Spark Amazon ML process store Consume Amazon Redshift Amazon EMR Presto Spark Batch Interactive Batch prediction Real-time prediction Amazon Kinesis Firehose
  • 80. Batch layer Amazon Kinesis Data stream process store Amazon Kinesis S3 Connector Applications Amazon Redshift Amazon EMR Presto Hive Pig Spark answer Amazon S3 Speed layer answer Serving layer Amazon ElastiCache Amazon DynamoDB Amazon RDS Amazon ES answer Amazon ML KCL AWS Lambda Storm Lambda Architecture Spark Streaming on Amazon EMR
  • 81. Case Study: On-demand Big Data Analytics Redfin uses Amazon EMR with spot instances – dynamically spinning up & down Apache Hadoop clusters – to perform large data transformations and deliver data to internal and external customers. Store → Process → Store → Analyze → Answers
  • 82. Architectural Principles Decoupled “data bus” •  Data → Store → Process → Store → Analyze → Answers Use the right tool for the job •  Data structure, latency, throughput, access patterns Use Lambda architecture ideas •  Immutable (append-only) log, batch/speed/serving layer Leverage AWS managed services •  Scalable/elastic, available, reliable, secure, no/low admin Big data ≠ big cost
  • 83. Building a Big Data Application
  • 84. Building a Big Data Application web clients mobile clients DBMS corporate data center Getting Started
  • 85. Building a Big Data Application web clients mobile clients DBMS Amazon Redshift Amazon QuickSight AWS cloudcorporate data center Adding a data warehouse
  • 86. Building a Big Data Application web clients mobile clients DBMS Raw data Amazon Redshift Staging Data Amazon QuickSight AWS cloud Bringing in Log Data corporate data center
  • 87. Building a Big Data Application web clients mobile clients DBMS Raw data Amazon Redshift Staging Data Orc/Parquet (Query optimized) Amazon QuickSight AWS cloud Extending your DW to S3 corporate data center
  • 88. Building a Big Data Application web clients mobile clients DBMS Raw data Amazon Redshift Staging Data Orc/Parquet (Query optimized) Amazon QuickSight Kinesis Streams AWS cloud Adding a real-time layer corporate data center
  • 89. Building a Big Data Application web clients mobile clients DBMS Raw data Amazon Redshift Staging Data Orc/Parquet (Query optimized) Amazon QuickSight Kinesis Streams AWS cloud Adding predictive analytics corporate data center
  • 90. Building a Big Data Application web clients mobile clients DBMS Raw data Amazon Redshift Staging Data Orc/Parquet (Query optimized) Amazon QuickSight Kinesis Streams AWS cloud Adding encryption at rest with AWS KMS corporate data center AWS KMS
  • 91. Building a Big Data Application web clients mobile clients DBMS Raw data Amazon Redshift Staging Data Orc/Parquet (Query optimized) Amazon QuickSight Kinesis Streams AWS cloud AWS KMS VPC subnet SSL/TLS SSL/TLS Protecting Data in Transit & Adding Network Isolation corporate data center
  • 92. The Evolution of Big Data Processing Descriptive Real-time Predictive Dashboards; Traditional query & reporting Clickstream analysis; Ad bidding; streaming data Inventory forecasting; Fraud detection; Recommendation engines What happened before What’s happening now Probability of “x” happening Amazon Kinesis Amazon EC2 AWS Lambda Amazon DynamoDB Amazon EMR Amazon Redshift Amazon QuickSight Amazon EMR Amazon Machine Learning Amazon EMR
  • 93. aws.amazon.com/big-data Big Data Quest Learn at your own pace and practice working with AWS services for big data on qwikLABS. (3 Hours | Online)
Big Data on AWS How to use AWS services to process data with Hadoop & create big data environments (3 Days | Classroom) Big Data Technology Fundamentals FREE! Overview of AWS big data solutions for architects or data scientists new to big data. (3 Hours | Online)
 AWS Courses Self-paced Online Labs
  • 94. 2:00 PM Best Practices on Real-time Streaming Analytics 3:00 PM Break 3:15 PM Getting started with Amazon Machine Learning 4:15 PM Building your first big data application on AWS Next…
  • 95. Real-time Streaming Data on AWS Deep Dive & Best Practices Using Amazon Kinesis, Spark Streaming, AWS Lambda, and Amazon EMR Steve Abraham Solutions Architect, AWS
  • 96. Batch Processing Hourly server logs Weekly or monthly bills Daily web-site clickstream Daily fraud reports Stream Processing Real-time metrics Real-time spending alerts/caps Real-time clickstream analysis Real-time detection It’s All About the Pace
  • 97. Streaming Data Scenarios Across Verticals Scenarios/ Verticals Accelerated Ingest- Transform-Load Continuous Metrics Generation Responsive Data Analysis Digital Ad Tech/ Marketing Publisher, bidder data aggregation Advertising metrics like coverage, yield, and conversion User engagement with ads, optimized bid/buy engines IoT Sensor, device telemetry data ingestion Operational metrics and dashboards Device operational intelligence and alerts Gaming Online data aggregation, e.g., top 10 players Massively multiplayer online game (MMOG) live dashboard Leader board generation, player-skill match Consumer Online Clickstream analytics Metrics like impressions and page views Recommendation engines, proactive care
  • 98. Customer Use Cases Sonos runs near real-time streaming analytics on device data logs from their connected hi-fi audio equipment. Hearst analyzes 30TB+ of clickstream data, enabling real-time insights for publishers. Glu Mobile collects billions of gaming event data points from millions of user devices in real time every single day. Nordstrom's recommendation team built an online stylist using Amazon Kinesis Streams and AWS Lambda.
  • 99. Metering Record Common Log Entry MQTT Record Syslog Entry { "payerId": "Joe", "productCode": "AmazonS3", "clientProductCode": "AmazonS3", "usageType": "Bandwidth", "operation": "PUT", "value": "22490", "timestamp": "1216674828" } { 127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 } { "SeattlePublicWater/Kinesis/123/Realtime" – 412309129140 } { <165>1 2003-10-11T22:14:15.003Z mymachine.example.com evntslog - ID47 [exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"] [examplePriority@32473 class="high"] } Streaming Data Challenges: Variety & Velocity •  Streaming data comes in different types and formats −  Metering records, logs and sensor data −  JSON, CSV, TSV •  Can vary in size from a few bytes to kilobytes or megabytes •  High velocity and continuous
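Because formats this varied (JSON metering records, Apache common-log lines, MQTT and syslog entries) land on the same stream, a common first step is to normalize each raw record into a uniform envelope before processing. A minimal sketch, assuming the envelope shape and the `normalize` helper are illustrative and not part of any AWS SDK:

```python
import json
import re

# Apache common-log-format line, as in the slide example.
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\d+)')

def normalize(raw):
    """Map a raw record (JSON metering record or common-log line)
    to a uniform {source, payload} envelope."""
    raw = raw.strip()
    if raw.startswith('{'):
        return {'source': 'metering', 'payload': json.loads(raw)}
    m = LOG_PATTERN.match(raw)
    if m:
        return {'source': 'weblog', 'payload': m.groupdict()}
    return {'source': 'unknown', 'payload': raw}

metering = '{"payerId": "Joe", "productCode": "AmazonS3", "value": "22490"}'
weblog = ('127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] '
          '"GET /apache_pb.gif HTTP/1.0" 200 2326')
```

A consumer downstream can then branch on `source` instead of sniffing each record's shape.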
  • 100. Two Main Processing Patterns Stream processing (real time) •  Real-time response to events in data streams Examples: •  Proactively detect hardware errors in device logs •  Notify when inventory drops below a threshold •  Fraud detection Micro-batching (near real time) •  Near real-time operations on small batches of events in data streams Examples: •  Aggregate and archive events •  Monitor performance SLAs
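The micro-batching pattern above can be pictured as a small buffer that flushes whenever it fills, trading a little latency for per-batch efficiency. A toy sketch (the class and the record-count threshold are illustrative, not an AWS API):

```python
class MicroBatcher:
    """Buffer incoming records and emit them in small batches,
    mimicking the near-real-time micro-batch pattern."""

    def __init__(self, max_records=3):
        self.max_records = max_records
        self.buffer = []
        self.flushed = []          # batches "delivered" downstream

    def put(self, record):
        self.buffer.append(record)
        if len(self.buffer) >= self.max_records:
            self.flush()

    def flush(self):
        # Emit whatever is buffered as one batch.
        if self.buffer:
            self.flushed.append(list(self.buffer))
            self.buffer.clear()

b = MicroBatcher(max_records=3)
for event in ['e1', 'e2', 'e3', 'e4']:
    b.put(event)
b.flush()   # drain the remainder at end of the interval
```

A real implementation would also flush on a timer so a slow trickle of events still meets the latency SLA.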
  • 102. Amazon Kinesis Streams •  For Technical Developers •  Build your own custom applications that process or analyze streaming data Amazon Kinesis Firehose •  For all developers, data scientists •  Easily load massive volumes of streaming data into S3, Amazon Redshift and Amazon Elasticsearch Amazon Kinesis Analytics •  For all developers, data scientists •  Easily analyze data streams using standard SQL queries •  Coming soon Amazon Kinesis: Streaming Data Made Easy Services make it easy to capture, deliver and process streams on AWS
  • 103. Amazon Kinesis Firehose Load massive volumes of streaming data into Amazon S3, Amazon Redshift and Amazon Elasticsearch Zero administration: Capture and deliver streaming data into Amazon S3, Amazon Redshift and Amazon Elasticsearch without writing an application or managing infrastructure. Direct-to-data store integration: Batch, compress, and encrypt streaming data for delivery into data destinations in as little as 60 secs using simple configurations. Seamless elasticity: Seamlessly scales to match data throughput w/o intervention Capture and submit streaming data to Firehose Analyze streaming data using your favorite BI tools Firehose loads streaming data continuously into S3, Amazon Redshift and Amazon Elasticsearch
  • 105. • Streams are made of shards • Each shard ingests up to 1MB/sec, and 1000 records/sec • Each shard emits up to 2 MB/sec • All data is stored for 24 hours by default; storage can be extended for up to 7 days • Scale Kinesis streams using scaling util • Replay data inside of 24-hour window
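Those per-shard limits translate directly into a sizing rule: the shard count is the maximum of the ingress-bandwidth, ingress-record-rate, and egress-bandwidth requirements. A quick illustrative calculator (the helper name and example workload are mine, not from the deck):

```python
import math

def shards_needed(ingress_mb_per_sec, ingress_records_per_sec,
                  egress_mb_per_sec):
    """Minimum shard count given the per-shard limits above:
    1 MB/s and 1,000 records/s in, 2 MB/s out."""
    return max(
        math.ceil(ingress_mb_per_sec / 1.0),
        math.ceil(ingress_records_per_sec / 1000.0),
        math.ceil(egress_mb_per_sec / 2.0),
    )

# e.g. 5 MB/s in, 4,000 records/s, two consumers each reading 5 MB/s
print(shards_needed(5, 4000, 10))   # prints 5
```

Remember the slide's advice to add headroom on top of this minimum so a lagging consumer can catch up.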
  • 106. Amazon Kinesis Firehose vs. Amazon Kinesis Streams Amazon Kinesis Streams is for use cases that require custom processing, per incoming record, with sub-1 second processing latency, and a choice of stream processing frameworks. Amazon Kinesis Firehose is for use cases that require zero administration, ability to use existing analytics tools based on Amazon S3, Amazon Redshift and Amazon Elasticsearch, and a data latency of 60 seconds or higher.
  • 108. Putting Data into Amazon Kinesis Streams Determine your partition key strategy •  Managed buffer or streaming MapReduce job •  Ensure high cardinality for your shards Provision adequate shards •  For ingress needs •  Egress needs for all consuming applications: if more than two simultaneous applications •  Include headroom for catching up with data in stream
  • 109. Putting Data into Amazon Kinesis Amazon Kinesis Agent – (supports pre-processing) •  http://docs.aws.amazon.com/firehose/latest/dev/writing-with-agents.html Pre-batch before Puts for better efficiency •  Consider Flume, Fluentd as collectors/agents •  See https://github.com/awslabs/aws-fluent-plugin-kinesis Make a tweak to your existing logging •  log4j appender option •  See https://github.com/awslabs/kinesis-log4j-appender AWS IoT •  The Rules Engine can also route messages to AWS endpoints including Amazon Kinesis.
  • 110. •  A provisioned entity called a Stream composed of Shards •  Producers use a PUT call to store data in a Stream. •  Each record <= 1 MB. PutRecord or PutRecords •  A partition key is supplied by producer and used to distribute (MD5 hash) the PUTs across (hash key range) of Shards •  Unique Sequence# returned upon successful PUT call •  Approximate arrival timestamp affixed to each record Putting Data into Amazon Kinesis Streams Simple Put* interface to capture and store data in Streams
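The MD5-based distribution can be illustrated without calling the service: hash the partition key into the 128-bit hash-key space and see which shard's range it lands in. The shard-range math below mirrors an even split of the key space, but it is a local sketch only, not the service's API:

```python
import hashlib

KEYSPACE = 2 ** 128   # MD5 yields a 128-bit hash key

def shard_for_key(partition_key, num_shards):
    """Return the index of the shard whose hash-key range
    contains MD5(partition_key), assuming evenly split ranges."""
    h = int(hashlib.md5(partition_key.encode('utf-8')).hexdigest(), 16)
    return h * num_shards // KEYSPACE

# High-cardinality keys spread across shards; the same key always
# lands on the same shard, which preserves per-key record order.
keys = ['device-%d' % i for i in range(1000)]
hits = [0, 0, 0, 0]
for k in keys:
    hits[shard_for_key(k, 4)] += 1
```

This is why a low-cardinality partition key creates "hot" shards: every record with the same key maps to one shard regardless of how many shards you provision.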
  • 111. Amazon Kinesis Producer Library •  Writes to one or more Amazon Kinesis streams with automatic, configurable retry mechanism •  Collects records and uses PutRecords to write multiple records to multiple shards per request •  Aggregates user records to increase payload size and improve throughput •  Integrates seamlessly with KCL to de-aggregate batched records •  Use Amazon Kinesis Producer Library with AWS Lambda (New!) •  Submits Amazon CloudWatch metrics on your behalf to provide visibility into producer performance
  • 112. Record Order and Multiple Shards Unordered processing •  Randomize partition key to distribute events over many shards and use multiple workers Exact order processing •  Control partition key to ensure events are grouped into the same shard and read by the same worker Need both? Use global sequence number Producer Get Global Sequence Unordered Stream Campaign Centric Stream Fraud Inspection Stream Get Event Metadata
  • 113. Sample Code for Scaling Shards java -cp KinesisScalingUtils.jar-complete.jar -Dstream-name=MyStream -Dscaling-action=scaleUp -Dcount=10 -Dregion=eu-west-1 ScalingClient Options: •  stream-name - The name of the stream to be scaled •  scaling-action - The action to be taken to scale. Must be one of "scaleUp”, "scaleDown" or “resize” •  count - Number of shards by which to absolutely scale up or down, or resize See https://github.com/awslabs/amazon-kinesis-scaling-utils
  • 114. Managed Buffer •  Care about a reliable, scalable way to capture data •  Defer all other aggregation to consumer •  Generate random partition keys •  Ensure a high cardinality for partition keys with respect to shards, to spray evenly across available shards Putting Data into Amazon Kinesis Streams Streaming Map-Reduce •  Streaming Map-Reduce: leverage partition keys as a natural way to aggregate data •  E.g., partition keys per billing customer, per DeviceId, per stock symbol •  Design partition keys to scale •  Be aware of "hot" partition keys or shards Determine your partition key strategy
  • 115. Putting Data into Amazon Kinesis Streams Provision adequate shards •  For ingress needs •  Egress needs for all consuming applications: if more than two simultaneous applications •  Include headroom for catching up with data in stream
  • 116. Amazon Kinesis Stream Processing
  • 117. Amazon Kinesis Client Library •  Build Kinesis applications with the Kinesis Client Library (KCL) •  Open source client library available for Java, Ruby, Python, and Node.js developers •  Deploy on your EC2 instances •  KCL Application includes three components: 1.  Record Processor Factory – Creates the record processor 2.  Record Processor – Processor unit that processes data from a shard in Amazon Kinesis Streams 3.  Worker – Processing unit that maps to each application instance
  • 118. State Management with Kinesis Client Library •  One record processor maps to one shard and processes data records from that shard •  One worker maps to one or more record processors •  Balances shard-worker associations when worker / instance counts change •  Balances shard-worker associations when shards split or merge
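The shard-to-worker balancing described above can be pictured as re-dealing shard leases evenly whenever the worker count changes. This is a toy model of lease balancing, not the KCL's actual DynamoDB lease-table protocol:

```python
def assign_shards(shards, workers):
    """Deal shards across workers round-robin, the way the KCL
    rebalances leases when workers join or leave (simplified)."""
    assignment = {w: [] for w in workers}
    for i, shard in enumerate(shards):
        assignment[workers[i % len(workers)]].append(shard)
    return assignment

shards = ['shard-%d' % i for i in range(6)]
before = assign_shards(shards, ['worker-a', 'worker-b'])
after = assign_shards(shards, ['worker-a', 'worker-b', 'worker-c'])
```

With two workers each holds three shards; after a third worker joins, the same six shards are spread two apiece, which is the steady state the KCL converges toward.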
  • 119. Other Options •  Third-party connectors (for example, Splunk) •  AWS IoT Platform •  AWS Lambda •  Amazon EMR with Apache Spark, Pig or Hive
  • 120. Apache Spark and Amazon Kinesis Streams Apache Spark is an in-memory analytics cluster using RDD for fast processing Spark Streaming can read directly from an Amazon Kinesis stream Amazon software license linking – Add ASL dependency to SBT/MAVEN project, artifactId = spark-streaming-kinesis-asl_2.10 KinesisUtils.createStream("twitter-stream") .filter(_.getText.contains("Open-Source")) .countByWindow(Seconds(5)) Example: Counting tweets on a sliding window
  • 121. Amazon EMR Amazon Kinesis StreamsStreaming Input Tumbling/Fixed Window Aggregation Periodic Output Amazon Redshift COPY from Amazon EMR Common Integration Pattern with Amazon EMR Tumbling Window Reporting
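The tumbling/fixed-window aggregation in the diagram can be sketched locally: bucket each event into a non-overlapping window by `timestamp // window_size` and aggregate per bucket (the field names and five-second window are illustrative):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Count events per fixed, non-overlapping time window.
    events: iterable of (timestamp_seconds, payload) pairs."""
    counts = defaultdict(int)
    for ts, _payload in events:
        window_start = (ts // window_secs) * window_secs
        counts[window_start] += 1
    return dict(counts)

events = [(0, 'a'), (3, 'b'), (5, 'c'), (9, 'd'), (12, 'e')]
print(tumbling_window_counts(events, 5))   # prints {0: 2, 5: 2, 10: 1}
```

Each periodic output row (one per window) is what the pattern above would COPY into Amazon Redshift for reporting.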
  • 122. Using Spark Streaming with Amazon Kinesis Streams 1.  Use Spark 1.6+ with EMRFS consistent view option – if you use Amazon S3 as storage for Spark checkpoint 2.  Amazon DynamoDB table name – make sure there is only one instance of the application running with Spark Streaming 3.  Enable Spark-based checkpoints 4.  Number of Amazon Kinesis receivers is multiple of executors so they are load-balanced 5.  Total processing time is less than the batch interval 6.  Number of executors is the same as number of cores per executor 7.  Spark Streaming uses default of 1 sec with KCL
  • 123. Demo Sensors Amazon Kinesis Streams Streaming Process Data Source Collect / Store Consume - Simulated using EC2 - 20K events/sec - Scalable - Managed -  Millisecond latency -  Highly available -  Scalable -  Managed -  Micro-batch -  Checkpoint in DynamoDB Amazon DynamoDB Device_Id, temperature, timestamp 1,65,2016-03-25 01:01:20 2,68,2016-03-25 01:01:20 3,67,2016-03-25 01:01:20 4,85,2016-03-25 01:01:20
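The demo records above are simple CSV triples (device_id, temperature, timestamp); the consumer's per-device aggregation step might look like the sketch below. This is a local illustration only; the DynamoDB checkpointing shown in the diagram is omitted:

```python
from collections import defaultdict

# The four sample records from the demo slide.
RAW = """1,65,2016-03-25 01:01:20
2,68,2016-03-25 01:01:20
3,67,2016-03-25 01:01:20
4,85,2016-03-25 01:01:20"""

def average_by_device(lines):
    """Average the temperature readings per device_id."""
    totals = defaultdict(lambda: [0, 0])   # device -> [sum, count]
    for line in lines:
        device_id, temp, _ts = line.split(',', 2)
        totals[device_id][0] += int(temp)
        totals[device_id][1] += 1
    return {d: s / n for d, (s, n) in totals.items()}

print(average_by_device(RAW.splitlines()))
```

In the actual demo this aggregation runs in a Spark Streaming micro-batch over records pulled from the Kinesis stream.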
  • 124. Amazon Kinesis Streams with AWS Lambda
  • 125. Build your Streaming Pipeline with Amazon Kinesis, AWS Lambda and Amazon EMR
  • 126. Conclusion •  Amazon Kinesis offers: managed service to build applications, streaming data ingestion, and continuous processing •  Ingest and aggregate data using the Amazon Kinesis Producer Library •  Process data using the Amazon Kinesis Client Library and open source connectors •  Determine your partition key strategy •  Try out Amazon Kinesis at http://aws.amazon.com/kinesis/
  • 127. •  Technical documentation •  Amazon Kinesis Agent •  Amazon Kinesis Streams and Spark Streaming •  Amazon Kinesis Producer Library Best Practice •  Amazon Kinesis Firehose and AWS Lambda •  Building Near Real-Time Discovery Platform with Amazon Kinesis •  Public case studies •  Glu Mobile – Real-Time Analytics •  Hearst Publishing – Clickstream Analytics •  How Sonos Leverages Amazon Kinesis •  Nordstrom Online Stylist Reference
  • 128. 3:00 PM Break 3:15 PM Getting started with Amazon Machine Learning 4:15 PM Building your first big data application on AWS Next…
  • 129. Getting Started with Amazon Machine Learning Fabio Silva Solution Architect, AWS
  • 130. Agenda •  Machine learning and the data ecosystem •  Smart applications by example (and counterexample) •  Amazon Machine Learning (Amazon ML) features and benefits •  Developing with Amazon ML •  Q&A
  • 131. Three types of data-driven development Retrospective analysis and reporting Amazon Redshift, Amazon RDS Amazon S3 Amazon EMR
  • 132. Three types of data-driven development Retrospective analysis and reporting Here-and-now real-time processing and dashboards Amazon Kinesis Amazon EC2 AWS Lambda Amazon Redshift, Amazon RDS Amazon S3 Amazon EMR
  • 133. Three types of data-driven development Retrospective analysis and reporting Here-and-now real-time processing and dashboards Predictions to enable smart applications Amazon Kinesis Amazon EC2 AWS Lambda Amazon Redshift, Amazon RDS Amazon S3 Amazon EMR
  • 134. Machine learning and smart applications Machine learning is the technology that automatically finds patterns in your data and uses them to make predictions for new data points as they become available.
  • 135. Machine learning and smart applications Machine learning is the technology that automatically finds patterns in your data and uses them to make predictions for new data points as they become available. Your data + machine learning = smart applications
  • 136. Smart applications by example Based on what you know about the user: Will they use your product?
  • 137. Smart applications by example Based on what you know about the user: Will they use your product? Based on what you know about an order: Is this order fraudulent?
  • 138. Smart applications by example Based on what you know about the user: Will they use your product? Based on what you know about an order: Is this order fraudulent? Based on what you know about a news article: What other articles are interesting?
  • 139. And a few more examples… Fraud detection Detecting fraudulent transactions, filtering spam emails, flagging suspicious reviews,… Personalization Recommending content, predictive content loading, improving user experience,… Targeted marketing Matching customers and offers, choosing marketing campaigns, cross-selling and up-selling,… Content classification Categorizing documents, matching hiring managers and resumes,… Churn prediction Finding customers who are likely to stop using the service, upgrade targeting,… Customer support Predictive routing of customer emails, social media listening,…
  • 140. Smart applications by counterexample Dear Alex, This awesome quadcopter is on sale for just $49.99!
  • 141. Smart applications by counterexample SELECT c.ID FROM customers c LEFT JOIN orders o ON c.ID = o.customer GROUP BY c.ID HAVING o.date > GETDATE() - 30 We can start by sending the offer to all customers who placed an order in the last 30 days
  • 142. Smart applications by counterexample SELECT c.ID FROM customers c LEFT JOIN orders o ON c.ID = o.customer GROUP BY c.ID HAVING O.CATEGORY = 'TOYS' AND o.date > GETDATE() - 30 …let's narrow it down to just customers who bought toys
  • 143. Smart applications by counterexample SELECT c.ID FROM customers c LEFT JOIN orders o ON c.ID = o.customer LEFT JOIN PRODUCTS P ON P.ID = O.PRODUCT GROUP BY c.ID HAVING o.category = 'toys' AND ((P.DESCRIPTION LIKE '%HELICOPTER%' AND O.DATE > GETDATE() - 60) OR (COUNT(*) > 2 AND SUM(o.price) > 200 AND o.date > GETDATE() - 30) ) …and expand the query to customers who purchased other toy helicopters recently, or made several expensive toy purchases
  • 144. Smart applications by counterexample SELECT c.ID FROM customers c LEFT JOIN orders o ON c.ID = o.customer LEFT JOIN products p ON p.ID = o.product GROUP BY c.ID HAVING o.category = 'toys' AND ((p.description LIKE '%COPTER%' AND o.date > GETDATE() - 60) OR (COUNT(*) > 2 AND SUM(o.price) > 200 AND o.date > GETDATE() - 30) ) …but what about quadcopters?
  • 145. Smart applications by counterexample SELECT c.ID FROM customers c LEFT JOIN orders o ON c.ID = o.customer LEFT JOIN products p ON p.ID = o.product GROUP BY c.ID HAVING o.category = 'toys' AND ((p.description LIKE '%copter%' AND o.date > GETDATE() - 120) OR (COUNT(*) > 2 AND SUM(o.price) > 200 AND o.date > GETDATE() - 30) ) …maybe we should go back further in time
  • 146. Smart applications by counterexample SELECT c.ID FROM customers c LEFT JOIN orders o ON c.ID = o.customer LEFT JOIN products p ON p.ID = o.product GROUP BY c.ID HAVING o.category = 'toys' AND ((p.description LIKE '%copter%' AND o.date > GETDATE() - 120) OR (COUNT(*) > 2 AND SUM(o.price) > 200 AND o.date > GETDATE() - 40) ) …tweak the query more
  • 147. Smart applications by counterexample SELECT c.ID FROM customers c LEFT JOIN orders o ON c.ID = o.customer LEFT JOIN products p ON p.ID = o.product GROUP BY c.ID HAVING o.category = 'toys' AND ((p.description LIKE '%copter%' AND o.date > GETDATE() - 120) OR (COUNT(*) > 2 AND SUM(o.price) > 150 AND o.date > GETDATE() - 40) ) …again
  • 148. Smart applications by counterexample SELECT c.ID FROM customers c LEFT JOIN orders o ON c.ID = o.customer LEFT JOIN products p ON p.ID = o.product GROUP BY c.ID HAVING o.category = 'toys' AND ((p.description LIKE '%copter%' AND o.date > GETDATE() - 90) OR (COUNT(*) > 2 AND SUM(o.price) > 150 AND o.date > GETDATE() - 40) ) …and again
  • 149. Smart applications by counterexample SELECT c.ID FROM customers c LEFT JOIN orders o ON c.ID = o.customer LEFT JOIN products p ON p.ID = o.product GROUP BY c.ID HAVING o.category = 'toys' AND ((p.description LIKE '%copter%' AND o.date > GETDATE() - 90) OR (COUNT(*) > 2 AND SUM(o.price) > 150 AND o.date > GETDATE() - 40) ) Use machine learning technology to learn your business rules from data!
  • 150. Why aren’t there more smart applications? 1.  Machine learning expertise is rare. 2.  Building and scaling machine learning technology is hard. 3.  Closing the gap between models and applications is time-consuming and expensive.
  • 151. Building smart applications today Expertise Technology Operationalization Limited supply of data scientists Many choices, few mainstays Complex and error-prone data workflows Expensive to hire or outsource Difficult to use and scale Custom platforms and APIs Many moving pieces lead to custom solutions every time Reinventing the model lifecycle management wheel
  • 152. What if there were a better way?
  • 153. Introducing Amazon Machine Learning Easy-to-use, managed machine learning service built for developers Robust, powerful machine learning technology based on Amazon’s internal systems Create models using your data already stored in the AWS cloud Deploy models to production in seconds
  • 154. Easy-to-use and developer-friendly Use the intuitive, powerful service console to build and explore your initial models •  Data retrieval •  Model training, quality evaluation, fine-tuning •  Deployment and management Automate model lifecycle with fully featured APIs and SDKs •  Java, Python, .NET, JavaScript, Ruby, PHP Easily create smart iOS and Android applications with AWS Mobile SDK
  • 155. Powerful machine learning technology Based on Amazon’s battle-hardened internal systems Not just the algorithms: •  Smart data transformations •  Input data and model quality alerts •  Built-in industry best practices Grows with your needs •  Train on up to 100 GB of data •  Generate billions of predictions •  Obtain predictions in batches or real-time
  • 156. Integrated with the AWS data ecosystem Access data that is stored in Amazon S3, Amazon Redshift, or MySQL databases in Amazon RDS Output predictions to Amazon S3 for easy integration with your data flows Use AWS Identity and Access Management (IAM) for fine-grained data access permission policies
  • 157. Fully-managed model and prediction services End-to-end service, with no servers to provision and manage One-click production model deployment Programmatically query model metadata to enable automatic retraining workflows Monitor prediction usage patterns with Amazon CloudWatch metrics
  • 158. Pay-as-you-go and inexpensive Data analysis, model training, and evaluation: $0.42/instance hour Batch predictions: $0.10/1000 Real-time predictions: $0.10/1000 + hourly capacity reservation charge
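At those rates a rough monthly estimate is simple arithmetic. The helper below is illustrative only; the 2016 list prices quoted on the slide vary by region, the real-time reservation charge depends on model size, and pricing has since changed:

```python
def monthly_ml_cost(training_hours, batch_predictions,
                    realtime_predictions, reserve_hourly=0.0,
                    hours_in_month=720):
    """Back-of-envelope Amazon ML bill at the listed rates:
    $0.42/compute hour, $0.10 per 1,000 predictions (batch or
    real-time), plus any hourly capacity reservation charge."""
    training = training_hours * 0.42
    batch = batch_predictions / 1000 * 0.10
    realtime = realtime_predictions / 1000 * 0.10
    reservation = reserve_hourly * hours_in_month
    return round(training + batch + realtime + reservation, 2)

# e.g. 10 compute hours of training plus 1M batch predictions
print(monthly_ml_cost(10, 1000000, 0))
```

Ten compute hours of training cost $4.20 and a million batch predictions cost $100, so the example run totals $104.20.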
  • 159. Three supported types of predictions Binary classification Predict the answer to a Yes/No question Multiclass classification Predict the correct category from a list Regression Predict the value of a numeric variable
  • 160. Train model Evaluate and optimize Retrieve predictions 1 2 3 Building smart applications with Amazon ML
  • 161. Train model Evaluate and optimize Retrieve predictions 1 2 3 Building smart applications with Amazon ML -  Create a datasource object pointing to your data -  Explore and understand your data -  Transform data and train your model
  • 162. Create a datasource object >>> import boto >>> ml = boto.connect_machinelearning() >>> ds = ml.create_data_source_from_s3( data_source_id = 'my_datasource', data_spec = { 'DataLocationS3': 's3://bucket/input/data.csv', 'DataSchemaLocationS3': 's3://bucket/input/data.schema' }, compute_statistics = True )
  • 164. Train your model >>> import boto >>> ml = boto.connect_machinelearning() >>> model = ml.create_ml_model( ml_model_id = 'my_model', ml_model_type = 'REGRESSION', training_data_source_id = 'my_datasource')
  • 165. Train model Evaluate and optimize Retrieve predictions 1 2 3 Building smart applications with Amazon ML -  Measure and understand model quality -  Adjust model interpretation
  • 169. Train model Evaluate and optimize Retrieve predictions 1 2 3 Building smart applications with Amazon ML -  Batch predictions -  Real-time predictions
  • 170. Batch predictions Asynchronous, large-volume prediction generation Request through service console or API Best for applications that deal with batches of data records >>> import boto >>> ml = boto.connect_machinelearning() >>> model = ml.create_batch_prediction( batch_prediction_id = 'my_batch_prediction', batch_prediction_data_source_id = 'my_datasource', ml_model_id = 'my_model', output_uri = 's3://examplebucket/output/')
  • 171. Real-time predictions Synchronous, low-latency, high-throughput prediction generation Request through service API, server, or mobile SDKs Best for interactive applications that deal with individual data records >>> import boto >>> ml = boto.connect_machinelearning() >>> ml.predict( ml_model_id = 'my_model', predict_endpoint = 'example_endpoint', record = {'key1': 'value1', 'key2': 'value2'}) { 'Prediction': { 'predictedValue': 13.284348, 'details': { 'Algorithm': 'SGD', 'PredictiveModelType': 'REGRESSION' } } }
  • 173. Batch predictions with EMR: Raw data in S3 → Process data with EMR → Aggregated data in S3 → Get predictions with Amazon ML batch API → Predictions in S3 → Your application
  • 174. Structured data In Amazon Redshift Load predictions into Amazon Redshift - or - Read prediction results directly from S3 Predictions in S3 Get predictions with Amazon ML batch API Your application Batch predictions with Amazon Redshift
  • 175. Your application Get predictions with Amazon ML real-time API Amazon ML service Real-time predictions for interactive applications
  • 176. Your application Amazon DynamoDB + Trigger events with Lambda + Get predictions with Amazon ML real-time API Adding predictions to an existing data flow
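The Lambda piece of this flow can be sketched as below. This is an illustrative sketch, not the demo's code: the model ID and endpoint are the ones assumed on the earlier slides, and `predict` is passed in as a callable (e.g. `lambda record: ml.predict(ml_model_id='my_model', predict_endpoint='example_endpoint', record=record)`) so the handler stays testable.

```python
def handler(event, predict):
    """Sketch of a Lambda handler fired by a DynamoDB Streams trigger.
    Scores each newly inserted item via the injected `predict` callable,
    which stands in for the Amazon ML real-time predict call."""
    predictions = []
    for rec in event.get('Records', []):
        if rec.get('eventName') != 'INSERT':
            continue  # only score newly written items
        image = rec['dynamodb']['NewImage']
        # Flatten DynamoDB's typed attributes ({'key1': {'S': 'value1'}})
        # into the flat string map the Amazon ML predict API expects.
        record = {k: list(v.values())[0] for k, v in image.items()}
        predictions.append(predict(record)['Prediction']['predictedValue'])
    return predictions
```

In a real deployment the predictions would be written somewhere (e.g. back to DynamoDB) rather than returned.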
  • 178. 4:15 PM Building your first big data application on AWS Next…
  • 179. Building Your First Big Data Application on AWS Radhika Ravirala Solutions Architect
  • 180. Your First Big Data Application on AWS COLLECT PROCESS STORE ANALYZE & VISUALIZE
  • 181. Your First Big Data Application on AWS COLLECT: Amazon Kinesis Firehose PROCESS: Amazon EMR with Spark & Hive STORE: Amazon S3 ANALYZE & VISUALIZE: Amazon Redshift and Amazon QuickSight
  • 183. Setting up the environment
  • 184. Data Storage with Amazon S3 Download all the CLI steps: http://bit.ly/aws-big-data-steps Create an Amazon S3 bucket to store the data collected with Amazon Kinesis Firehose aws s3 mb s3://YOUR-S3-BUCKET-NAME
  • 185. Access Control with IAM Create an IAM role to allow Firehose to write to the S3 bucket firehose-policy.json: { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Principal": {"Service": "firehose.amazonaws.com"}, "Action": "sts:AssumeRole" } }
  • 186. Access Control with IAM Create an IAM role to allow Firehose to write to the S3 bucket s3-rw-policy.json: { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "s3:*", "Resource": [ "arn:aws:s3:::YOUR-S3-BUCKET-NAME", "arn:aws:s3:::YOUR-S3-BUCKET-NAME/*" ] } }
  • 187. Access Control with IAM Create an IAM role to allow Firehose to write to the S3 bucket aws iam create-role --role-name firehose-demo --assume-role-policy-document file://firehose-policy.json Copy the value in “Arn” in the output, e.g., arn:aws:iam::123456789:role/firehose-demo aws iam put-role-policy --role-name firehose-demo --policy-name firehose-s3-rw --policy-document file://s3-rw-policy.json
  • 188. Data Collection with Amazon Kinesis Firehose Create a Firehose stream for incoming log data aws firehose create-delivery-stream --delivery-stream-name demo-firehose-stream --s3-destination-configuration RoleARN=YOUR-FIREHOSE-ARN, BucketARN="arn:aws:s3:::YOUR-S3-BUCKET-NAME", Prefix=firehose/, BufferingHints={IntervalInSeconds=60}, CompressionFormat=GZIP
  • 189. Data Processing with Amazon EMR Launch an Amazon EMR cluster with Spark and Hive aws emr create-cluster --name "demo" --release-label emr-4.5.0 --instance-type m3.xlarge --instance-count 2 --ec2-attributes KeyName=YOUR-AWS-SSH-KEY --use-default-roles --applications Name=Hive Name=Spark Name=Zeppelin-Sandbox Record your ClusterId from the output.
  • 190. Data Analysis with Amazon Redshift Create a single-node Amazon Redshift data warehouse: aws redshift create-cluster --cluster-identifier demo --db-name demo --node-type dc1.large --cluster-type single-node --master-username master --master-user-password YOUR-REDSHIFT-PASSWORD --publicly-accessible --port 8192
  • 192. Weblogs – Common Log Format (CLF) 75.35.230.210 - - [20/Jul/2009:22:22:42 -0700] "GET /images/pigtrihawk.jpg HTTP/1.1" 200 29236 "http://www.swivel.com/graphs/show/1163466" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.11) Gecko/2009060215 Firefox/3.0.11 (.NET CLR 3.5.30729)"
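For reference, a line in this combined log format can be parsed with a small regular expression. The pattern and field names below are illustrative, not part of the demo:

```python
import re

# ip, identity, user, [timestamp], "request", status, size, "referrer", "agent"
CLF = re.compile(
    r'(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\d{3}) (\S+) "([^"]*)" "([^"]*)"')

def parse_clf(line):
    """Parse one combined-format access log line into a dict, or None."""
    m = CLF.match(line)
    if m is None:
        return None
    ip, ident, user, ts, request, status, size, referrer, agent = m.groups()
    # The request field is itself "METHOD PATH PROTOCOL"; pad in case of "-".
    method, path, protocol = (request.split(' ') + ['', '', ''])[:3]
    return {'ip': ip, 'time': ts, 'method': method, 'path': path,
            'protocol': protocol, 'status': int(status), 'size': size,
            'referrer': referrer, 'user_agent': agent}
```

The Spark notebook that follows does the same job by splitting on spaces and rejoining quoted fields instead.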
  • 193. Writing into Amazon Kinesis Firehose Download the demo weblog: http://bit.ly/aws-big-data Open Python and run the following code to import the log into the stream: import boto3 firehose = boto3.client('firehose') with open('weblog', 'r') as f: for line in f: firehose.put_record( DeliveryStreamName='demo-firehose-stream', Record={'Data': line} ) print('Record added')
  • 195. Apache Spark •  Fast, general purpose engine for large-scale data processing •  Write applications quickly in Java, Scala, or Python •  Combine SQL, streaming, and complex analytics
  • 196. Spark SQL Spark's module for working with structured data using SQL Run unmodified Hive queries on existing data
  • 197. Apache Zeppelin •  Web-based notebook for interactive analytics •  Multiple language backend •  Apache Spark integration •  Data visualization •  Collaboration https://zeppelin.incubator.apache.org/
  • 198. View the Output Files in Amazon S3 After about 1 minute, you should see files in your S3 bucket: aws s3 ls s3://YOUR-S3-BUCKET-NAME/firehose/ --recursive
  • 199. Connect to Your EMR Cluster and Zeppelin aws emr describe-cluster --cluster-id YOUR-EMR-CLUSTER-ID Copy the MasterPublicDnsName. Use port forwarding so you can access Zeppelin at http://localhost:18890 on your local machine. ssh -i PATH-TO-YOUR-SSH-KEY -L 18890:localhost:8890 hadoop@YOUR-EMR-DNS-NAME Open Zeppelin with your local web browser and create a new “Note”: http://localhost:18890
  • 200. Exploring the Data in Amazon S3 using Spark Download the Zeppelin notebook: http://bit.ly/aws-big-data-zeppelin // Load all the files from S3 into an RDD val accessLogLines = sc.textFile("s3://YOUR-S3-BUCKET-NAME/firehose/*/*/*/*") // Count the lines accessLogLines.count // Print one line as a string accessLogLines.first // Fields are delimited by spaces, so split each line into fields var accessLogFields = accessLogLines.map(_.split(" ").map(_.trim)) // Print the fields of a line accessLogFields.first
  • 201. Combine Fields: "A, B, C" → "A B C" var accessLogColumns = accessLogFields .map( arrayOfFields => { var temp1 = ""; for (field <- arrayOfFields) yield { var temp2 = "" if (temp1.replaceAll("\\[","\"").startsWith("\"") && !temp1.endsWith("\"")) temp1 = temp1 + " " + field.replaceAll("\\[|\\]","\"") else temp1 = field.replaceAll("\\[|\\]","\"") temp2 = temp1 if (temp1.endsWith("\"")) temp1 = "" temp2 }}) .map( fields => fields.filter(field => (field.startsWith("\"") && field.endsWith("\"")) || !field.startsWith("\"") )) .map(fields => fields.map(_.replaceAll("\"","")))
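The idea of the slide above — treat `[...]` like a quoted string, then rejoin space-split tokens until the closing quote — reads more clearly outside Spark. This is a hedged Python re-implementation for illustration, not the demo's code:

```python
def combine_fields(tokens):
    """Rejoin space-split CLF tokens so that bracketed timestamps and
    quoted strings (request, referrer, user agent) become single fields."""
    out, buf = [], None
    for tok in tokens:
        tok = tok.replace('[', '"').replace(']', '"')  # treat [...] like "..."
        if buf is None:
            if tok.startswith('"') and not (len(tok) > 1 and tok.endswith('"')):
                buf = tok  # opening quote without its close: start accumulating
            else:
                out.append(tok.replace('"', ''))  # complete field on its own
        else:
            buf += ' ' + tok
            if tok.endswith('"'):  # closing quote: flush the combined field
                out.append(buf.replace('"', ''))
                buf = None
    return out
```

Applied to the space-split sample log line, it yields one field each for the timestamp and the quoted request string.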
  • 202. Create a Data Frame and Transform the Data import java.sql.Timestamp import java.net.URL case class accessLogs( ipAddress: String, requestTime: Timestamp, requestMethod: String, requestPath: String, requestProtocol: String, responseCode: String, responseSize: String, referrerHost: String, userAgent: String )
  • 203. Create a Data Frame and Transform the Data val accessLogsDF = accessLogColumns.map(line => { var ipAddress = line(0) var requestTime = new Timestamp(new java.text.SimpleDateFormat("dd/MMM/yyyy:HH:mm:ss Z").parse(line(3)).getTime()) var requestString = line(4).split(" ").map(_.trim()) var requestMethod = if (line(4).toString() != "-") requestString(0) else "" var requestPath = if (line(4).toString() != "-") requestString(1) else "" var requestProtocol = if (line(4).toString() != "-") requestString(2) else "" var responseCode = line(5).replaceAll("-","") var responseSize = line(6).replaceAll("-","") var referrerHost = line(7) var userAgent = line(8) accessLogs(ipAddress, requestTime, requestMethod, requestPath, requestProtocol, responseCode, responseSize, referrerHost, userAgent) }).toDF()
  • 204. Create an External Table Backed by Amazon S3 %sql CREATE EXTERNAL TABLE access_logs ( ip_address String, request_time Timestamp, request_method String, request_path String, request_protocol String, response_code String, response_size String, referrer_host String, user_agent String ) PARTITIONED BY (year STRING, month STRING, day STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS TEXTFILE LOCATION 's3://YOUR-S3-BUCKET-NAME/access-log-processed'
  • 205. Configure Hive Partitioning and Compression // set up Hive's "dynamic partitioning" %sql SET hive.exec.dynamic.partition=true %sql SET hive.exec.dynamic.partition.mode=nonstrict // compress output files on Amazon S3 using Gzip %sql SET hive.exec.compress.output=true %sql SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec %sql SET io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec
  • 206. Write Output to Amazon S3 import org.apache.spark.sql.SaveMode accessLogsDF .withColumn("year", year(accessLogsDF("requestTime"))) .withColumn("month", month(accessLogsDF("requestTime"))) .withColumn("day", dayofmonth(accessLogsDF("requestTime"))) .write .partitionBy("year","month","day") .mode(SaveMode.Overwrite) .insertInto("access_logs")
  • 207. Query the Data Using Spark SQL // Check the count of records %sql select count(*) from access_logs // Fetch the first 10 records %sql select * from access_logs limit 10
  • 208. View the Output Files in Amazon S3 Leave Zeppelin and go back to the console… List the partition prefixes and output files: aws s3 ls s3://YOUR-S3-BUCKET-NAME/access-log-processed/ --recursive
  • 210. Connect to Amazon Redshift Using the PostgreSQL CLI psql -h YOUR-REDSHIFT-ENDPOINT -p 8192 -U master demo Or use any JDBC or ODBC SQL client with the PostgreSQL 8.x drivers or native Amazon Redshift support •  Aginity Workbench for Amazon Redshift •  SQL Workbench/J
  • 211. Create an Amazon Redshift Table to Hold Your Data CREATE TABLE accesslogs ( host_address varchar(512), request_time timestamp, request_method varchar(5), request_path varchar(1024), request_protocol varchar(10), response_code Int, response_size Int, referrer_host varchar(1024), user_agent varchar(512) ) DISTKEY(host_address) SORTKEY(request_time);
  • 212. Loading Data into Amazon Redshift The COPY command loads files in parallel from Amazon S3: COPY accesslogs FROM 's3://YOUR-S3-BUCKET-NAME/access-log-processed' CREDENTIALS 'aws_iam_role=arn:aws:iam::YOUR-AWS-ACCOUNT-ID:role/ROLE-NAME' DELIMITER '\t' MAXERROR 0 GZIP;
  • 213. Amazon Redshift Test Queries -- find distribution of response codes over days SELECT TRUNC(request_time), response_code, COUNT(1) FROM accesslogs GROUP BY 1,2 ORDER BY 1,3 DESC; -- count the 404 (page not found) responses SELECT COUNT(1) FROM accesslogs WHERE response_code = 404; -- show the most requested path among the 404s SELECT TOP 1 request_path, COUNT(1) FROM accesslogs WHERE response_code = 404 GROUP BY 1 ORDER BY 2 DESC;
  • 215. Automating the Big Data Application: Amazon Kinesis Firehose → Amazon S3 → (S3 event notification triggers Lambda, which passes the list of objects to a Spark job on Amazon EMR) → write to Amazon Redshift using spark-redshift → Amazon QuickSight
  • 216. Learn from big data experts blogs.aws.amazon.com/bigdata
  • 217. Thank you to our Sponsor