(BDT205) Your First Big Data Application On AWS

Want to get ramped up on how to use Amazon's big data web services and launch your first big data application on AWS? Join us on our journey as we build a big data application in real time using Amazon EMR, Amazon Redshift, Amazon Kinesis, Amazon DynamoDB, and Amazon S3. We review architecture design patterns for big data solutions on AWS, and give you access to a take-home lab so that you can rebuild and customize the application yourself.

  1. © 2015, Amazon Web Services, Inc. or its Affiliates. All rights reserved. Matt Yanchyshyn, Sr. Mgr., Solutions Architecture. October 2015. BDT205: Building Your First Big Data Application
  2. Amazon S3, Amazon Kinesis, Amazon DynamoDB, Amazon RDS (Aurora), AWS Lambda, KCL apps, Amazon EMR, Amazon Redshift, Amazon Machine Learning. Collect, Store, Process, Analyze: data collection and storage, event processing, data processing, data analysis. Data → Answers.
  3. Your first big data application on AWS
  4. Collect, Process, Analyze, Store. Data → Answers.
  5. Collect, Process, Analyze, Store. Data → Answers.
  6. Collect, Process, Analyze (SQL), Store. Data → Answers.
  7. Set up with the AWS CLI
  8. Amazon Kinesis
     Create a single-shard Amazon Kinesis stream for incoming log data:
       aws kinesis create-stream --stream-name AccessLogStream --shard-count 1
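     (Follow-up not shown on the slide; a hedged sketch: you can confirm the stream has become ACTIVE before sending data.)
       aws kinesis describe-stream --stream-name AccessLogStream --query 'StreamDescription.StreamStatus'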
  9. Amazon S3
     YOUR-BUCKET-NAME
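     (The create-bucket command itself is not preserved in the transcript. A minimal sketch with the AWS CLI, assuming the YOUR-BUCKET-NAME placeholder above:)
       aws s3 mb s3://YOUR-BUCKET-NAME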
  10. Amazon EMR
      Launch a 3-node Amazon EMR cluster with Spark and Hive:
      m3.xlarge
      YOUR-AWS-SSH-KEY
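      (The launch command is not preserved. A hedged sketch with the AWS CLI of that era; the cluster name and release label are assumptions:)
        aws emr create-cluster --name "demo" --release-label emr-4.1.0 \
          --applications Name=Hive Name=Spark \
          --instance-type m3.xlarge --instance-count 3 \
          --ec2-attributes KeyName=YOUR-AWS-SSH-KEY --use-default-roles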
  11. Amazon Redshift
      CHOOSE-A-REDSHIFT-PASSWORD
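      (The create command is not preserved. A hedged sketch with the AWS CLI; the cluster identifier, database name, master user, and node type are assumptions:)
        aws redshift create-cluster --cluster-identifier demo \
          --db-name demo --node-type dc1.large --cluster-type single-node \
          --master-username master --master-user-password CHOOSE-A-REDSHIFT-PASSWORD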
  12. Your first big data application on AWS
      1. COLLECT: Stream data into Amazon Kinesis with Log4J
      2. PROCESS: Process data with Amazon EMR using Spark & Hive
      3. ANALYZE: Analyze data in Amazon Redshift using SQL
      STORE
  13. 1. Collect
  14. Amazon Kinesis Log4J Appender
      In a separate terminal window on your local machine, download Log4J Appender:
      Then download and save the sample Apache log file:
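      (The download URLs are not preserved in the transcript. A hedged sketch of the step; <APPENDER-JAR-URL> and <SAMPLE-LOG-URL> stand in for the links given in the session, and the file names are assumptions:)
        wget <APPENDER-JAR-URL>                  # e.g. kinesis-log4j-appender-1.0.0.jar
        wget <SAMPLE-LOG-URL> -O access_log_1    # sample Apache access log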
  15. Amazon Kinesis Log4J Appender
      Create a file called AwsCredentials.properties with credentials for an IAM user with
      permission to write to Amazon Kinesis:
        accessKey=YOUR-IAM-ACCESS-KEY
        secretKey=YOUR-SECRET-KEY
      Then start the Amazon Kinesis Log4J Appender:
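      (The start command is not preserved. A hedged sketch, assuming the appender jar and sample log file names from the previous step; the publisher class name is an assumption:)
        java -cp .:kinesis-log4j-appender-1.0.0.jar \
          com.amazonaws.services.kinesis.log4j.FilePublisher access_log_1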
  16. Log file format
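      (The sample line itself is not preserved. For illustration, a typical Apache combined-format record, which is what the regex on slide 29 parses: host, identity, user, timestamp, request, status, size, referrer, and user agent.)
        127.0.0.1 - frank [10/Oct/2015:13:55:36 -0700] "GET /index.html HTTP/1.1" 200 2326 "http://www.example.com/start.html" "Mozilla/5.0"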
  17. Spark
      • Fast, general-purpose engine for large-scale data processing
      • Write applications quickly in Java, Scala, or Python
      • Combine SQL, streaming, and complex analytics
  18. Amazon Kinesis and Spark Streaming
      Log4J Appender → Amazon Kinesis → Amazon EMR (Spark Streaming) → Amazon S3
      Spark Streaming uses the Kinesis Client Library; the KCL keeps its state in Amazon DynamoDB
  19. Using Spark Streaming on Amazon EMR
      -o TCPKeepAlive=yes -o ServerAliveInterval=30
      YOUR-AWS-SSH-KEY YOUR-EMR-HOSTNAME
      On your cluster, download the Amazon Kinesis client for Spark:
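      (The full commands are not preserved. A hedged sketch: the SSH options and placeholders above suggest something like the following, and the KCL jar named on slide 20 is available from Maven Central.)
        ssh -o TCPKeepAlive=yes -o ServerAliveInterval=30 -i YOUR-AWS-SSH-KEY hadoop@YOUR-EMR-HOSTNAME
        wget http://repo1.maven.org/maven2/com/amazonaws/amazon-kinesis-client/1.6.0/amazon-kinesis-client-1.6.0.jar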
  20. Using Spark Streaming on Amazon EMR
      Cut down on console noise:
      Start the Spark shell:
        spark-shell --jars /usr/lib/spark/extras/lib/spark-streaming-kinesis-asl.jar,amazon-kinesis-client-1.6.0.jar \
          --driver-java-options "-Dlog4j.configuration=file:///etc/spark/conf/log4j.properties"
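      (The noise-reduction command is not preserved. A hedged sketch of one common approach on EMR, lowering Spark's console log level; treat the exact sed expression as an assumption:)
        sudo sed -i 's/INFO/ERROR/' /etc/spark/conf/log4j.properties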
  21. Using Spark Streaming on Amazon EMR
      /* import required libraries */
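      (The import list is not preserved. A hedged sketch of the Scala imports needed by the code on slides 22-24; class names follow the Spark 1.x Kinesis integration:)
        import org.apache.spark.SparkConf
        import org.apache.spark.storage.StorageLevel
        import org.apache.spark.streaming.{Seconds, StreamingContext}
        import org.apache.spark.streaming.kinesis.KinesisUtils
        import com.amazonaws.auth.DefaultAWSCredentialsProviderChain
        import com.amazonaws.services.kinesis.AmazonKinesisClient
        import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream
        import java.util.Date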
  22. Using Spark Streaming on Amazon EMR
      /* Set up the variables as needed */
      YOUR-REGION
      YOUR-S3-BUCKET
      /* Reconfigure the spark-shell */
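      (The variable definitions are not preserved. A hedged sketch consistent with the placeholders above and the code on slides 23-25; the names streamName, endpointUrl, outputDir, and outputInterval are assumptions:)
        val endpointUrl = "https://kinesis.YOUR-REGION.amazonaws.com"
        val streamName = "AccessLogStream"
        val outputDir = "s3://YOUR-S3-BUCKET/access-log-raw"
        val outputInterval = Seconds(60)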
  23. Reading Amazon Kinesis with Spark Streaming
      /* Setup the KinesisClient */
        val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
        kinesisClient.setEndpoint(endpointUrl)
      /* Determine the number of shards from the stream */
        val numShards = kinesisClient.describeStream(streamName).getStreamDescription().getShards().size()
      /* Create one worker per Kinesis shard */
        val ssc = new StreamingContext(sc, outputInterval)
        val kinesisStreams = (0 until numShards).map { i =>
          KinesisUtils.createStream(ssc, streamName, endpointUrl, outputInterval,
            InitialPositionInStream.TRIM_HORIZON, StorageLevel.MEMORY_ONLY)
        }
  24. Writing to Amazon S3 with Spark Streaming
      /* Merge the worker Dstreams and translate the byteArray to string */
      /* Write each RDD to Amazon S3 */
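      (The code bodies are not preserved. A hedged sketch that matches the two comments above and the year/month/day/hour layout shown on slide 25; the exact output path format is an assumption:)
        val unionStreams = ssc.union(kinesisStreams)
        val accessLogs = unionStreams.map(byteArray => new String(byteArray))
        accessLogs.foreachRDD((rdd, time) => {
          if (rdd.count() > 0) {
            // partition output by the batch time: yyyy/MM/dd/HH
            val outPath = new java.text.SimpleDateFormat("yyyy/MM/dd/HH").format(new Date(time.milliseconds))
            rdd.saveAsTextFile(outputDir + "/" + outPath)
          }
        })
        ssc.start()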
  25. View the output files in Amazon S3
      YOUR-S3-BUCKET
      YOUR-S3-BUCKET
      yyyy mm dd HH
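      (The listing command is not preserved. A hedged sketch with the AWS CLI, matching the bucket placeholder and hour-based layout above; the access-log-raw prefix follows the table location on slide 29:)
        aws s3 ls s3://YOUR-S3-BUCKET/access-log-raw/ --recursive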
  26. 2. Process
  27. Spark SQL
      Spark's module for working with structured data using SQL
      Run unmodified Hive queries on existing data
  28. Using Spark SQL on Amazon EMR
      YOUR-AWS-SSH-KEY YOUR-EMR-HOSTNAME
      Start the Spark SQL shell:
        spark-sql --driver-java-options "-Dlog4j.configuration=file:///etc/spark/conf/log4j.properties"
  29. Create a table that points to your Amazon S3 bucket
        CREATE EXTERNAL TABLE access_log_raw(
          host STRING, identity STRING, user STRING, request_time STRING,
          request STRING, status STRING, size STRING, referrer STRING, agent STRING
        )
        PARTITIONED BY (year INT, month INT, day INT, hour INT, min INT)
        ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
        WITH SERDEPROPERTIES (
          "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?"
        )
        LOCATION 's3://YOUR-S3-BUCKET/access-log-raw';
        msck repair table access_log_raw;
  30. Query the data with Spark SQL
      -- return the first row in the stream
      -- count all items in the stream
      -- find the top 10 hosts
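      (The queries themselves are not preserved. A hedged sketch matching the three comments above, written in HiveQL against the table from slide 29:)
        SELECT * FROM access_log_raw LIMIT 1;
        SELECT COUNT(*) FROM access_log_raw;
        SELECT host, COUNT(*) AS cnt FROM access_log_raw GROUP BY host ORDER BY cnt DESC LIMIT 10;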
  31. Preparing the data for Amazon Redshift import
      We will transform the data returned by the query before writing it to our Amazon S3-stored
      external Hive table.
      Hive user-defined functions (UDFs) used for the text transformations: from_unixtime,
      unix_timestamp, and hour.
      The "hour" value is important: it is what's used to split and organize the output files before
      writing to Amazon S3. These splits will let us load the data into Amazon Redshift more
      efficiently later in the lab using the parallel "COPY" command.
  32. Create an external table in Amazon S3
      YOUR-S3-BUCKET
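      (The DDL is not preserved. A hedged sketch of a processed-log table consistent with the INSERT on slide 34 and the COPY path on slide 39; the column types and exact location are assumptions:)
        CREATE EXTERNAL TABLE access_log_processed (
          request_time STRING, host STRING, request STRING,
          status STRING, referrer STRING, agent STRING
        )
        PARTITIONED BY (hour INT)
        ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
        LOCATION 's3://YOUR-S3-BUCKET/access-log-processed';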
  33. Configure partition and compression
      -- setup Hive's "dynamic partitioning"
      -- this will split output files when writing to Amazon S3
      -- compress output files on Amazon S3 using Gzip
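      (The SET statements are not preserved. A hedged sketch of Hive settings matching the comments above; the exact codec property name varies across Hive versions:)
        SET hive.exec.dynamic.partition=true;
        SET hive.exec.dynamic.partition.mode=nonstrict;
        SET hive.exec.compress.output=true;
        SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;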
  34. Write output to Amazon S3
      -- convert the Apache log timestamp to a UNIX timestamp
      -- split files in Amazon S3 by the hour in the log lines
        INSERT OVERWRITE TABLE access_log_processed PARTITION (hour)
        SELECT
          from_unixtime(unix_timestamp(request_time, '[dd/MMM/yyyy:HH:mm:ss Z]')),
          host,
          request,
          status,
          referrer,
          agent,
          hour(from_unixtime(unix_timestamp(request_time, '[dd/MMM/yyyy:HH:mm:ss Z]'))) AS hour
        FROM access_log_raw;
  35. View the output files in Amazon S3
      YOUR-S3-BUCKET
      YOUR-S3-BUCKET
  36. 3. Analyze
  37. Connect to Amazon Redshift
      # using the PostgreSQL CLI
      YOUR-REDSHIFT-ENDPOINT
      Or use any JDBC or ODBC SQL client with the PostgreSQL 8.x drivers or native Amazon Redshift support:
      • Aginity Workbench for Amazon Redshift
      • SQL Workbench/J
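      (The psql invocation is not preserved. A hedged sketch; port 5439 is the Redshift default, and the database and user names assume the create-cluster sketch under slide 11:)
        psql -h YOUR-REDSHIFT-ENDPOINT -p 5439 -U master demo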
  38. Create an Amazon Redshift table to hold your data
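      (The DDL is not preserved. A hedged sketch matching the columns written by the Hive job and loaded by the COPY on slide 39; the types and sort key choice are assumptions:)
        CREATE TABLE accesslogs (
          request_timestamp TIMESTAMP,
          host VARCHAR(50),
          request VARCHAR(2048),
          status INT,
          referrer VARCHAR(2048),
          agent VARCHAR(2048)
        )
        SORTKEY (request_timestamp);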
  39. Loading data into Amazon Redshift
      "COPY" command loads files in parallel
        COPY accesslogs
        FROM 's3://YOUR-S3-BUCKET/access-log-processed'
        CREDENTIALS 'aws_access_key_id=YOUR-IAM-ACCESS-KEY;aws_secret_access_key=YOUR-IAM-SECRET-KEY'
        DELIMITER '\t' IGNOREHEADER 0 MAXERROR 0 GZIP;
  40. Amazon Redshift test queries
      -- find distribution of status codes over days
      -- find the 404 status codes
      -- show all requests for status as PAGE NOT FOUND
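      (The queries are not preserved. A hedged sketch matching the three comments, against the accesslogs table sketched under slide 38:)
        SELECT TRUNC(request_timestamp) AS day, status, COUNT(*)
          FROM accesslogs GROUP BY 1, 2 ORDER BY 1, 2;
        SELECT COUNT(*) FROM accesslogs WHERE status = 404;
        SELECT TOP 20 request FROM accesslogs WHERE status = 404;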
  41. Your first big data application on AWS
      A favicon would fix 398 of the total 977 PAGE NOT FOUND (404) errors
  42. Visualize the results
      • Client-side JavaScript example using Plottable, a library built on D3
      • Hosted on Amazon S3 for pennies a month
      • AWS Lambda function used to query Amazon Redshift
  43. Try it yourself on the AWS cloud…
      Service                  Est. Cost*
      Amazon Kinesis           $1.00
      Amazon S3 (free tier)    $0
      Amazon EMR               $0.44
      Amazon Redshift          $1.00
      Est. Total               $2.44
      …around the same cost as a cup of coffee ($3.50)
      *Estimated costs assume: use of the free tier where available, lower-cost instances, a dataset
      no bigger than 10 MB, and instances running for less than 4 hours. Costs may vary depending on
      options selected, size of dataset, and usage.
  44. Learn from AWS big data experts
      blogs.aws.amazon.com/bigdata
  45. Remember to complete your evaluations!
  46. Thank you!
