3rd meetup - Intro to Amazon EMR
 

Presentation Transcript

  • ‘Amazon EMR’ coming up… by Sujee Maniyam
  • Birmingham Big Data Group
    Amazon Elastic Map Reduce For Startups
    Sujee Maniyam
    s@sujee.net / www.sujee.net
    Sept 14, 2011
  • Hi, I’m Sujee
    10+ years of software development
    enterprise apps → web apps → iPhone apps → Hadoop
    More : http://sujee.net/tech
  • I am an ‘expert’ 
  • Ah.. Data
  • Nature of Data…
    Primary Data
    Email, blogs, pictures, tweets
    Critical for operation (Gmail can’t lose emails)
    Secondary data
    Wikipedia access logs, Google search logs
    Not ‘critical’, but used to ‘enhance’ user experience
    Search logs help predict ‘trends’
    Yelp can figure out you like Chinese food
  • Data Explosion
    Primary data has grown phenomenally
    But secondary data has exploded in recent years
    “log everything and ask questions later”
    Used for
    Recommendations (books, restaurants ..etc)
    Predict trends (job skills in demand)
    Show ADS ($$$)
    ..etc
    ‘Big Data’ is no longer just a problem for the Big Guys (Google / Facebook)
    Startups are struggling to get on top of ‘big data’
  • Big Guys
  • Startups
  • Startups and bigdata
  • Hadoop to the Rescue
    Hadoop can help with BigData
    Hadoop has been proven in the field
    Under active development
    Throw hardware at the problem
    Getting cheaper by the year
    Bleeding edge technology
    Hire good people!
  • Hadoop: It is a CAREER
  • Data Spectrum
  • Who is Using Hadoop?
  • About This Presentation
    Based on my experience with a startup
    5 people (3 Engineers)
    Ad-Serving Space
    Amazon EC2 is our ‘data center’
    Technologies:
    Web stack : Python, Tornado, PHP, mysql , LAMP
    Amazon EMR to crunch data
    Data size : 1 TB / week
  • Story of a Startup
    We served targeted ads
    Tons of click data
    Stored them in mysqldb
    Outgrew mysql pretty quickly
  • Data @ 6 months
    2 TB of data already
    50-100 G new data / day
    And we were operating at 20% of our capacity!
  • Future…
  • Solution?
    Scalable database (NOSQL)
    Hbase
    Cassandra
    Hadoop log processing / Map Reduce
  • What We Evaluated
    1) Hbase cluster
    2) Hadoop cluster
    3) Amazon EMR
  • Hadoop on Amazon EC2
    Two ways to run Hadoop on EC2
    1) Permanent Cluster
    2) On demand cluster (elastic map reduce)
  • 1) Permanent Hadoop Cluster
  • Architecture 1
  • Hadoop Cluster
    7 C1.xlarge machines
    15 TB EBS volumes
    Sqoop exports mysql log tables into HDFS
    Logs are compressed (gz) to minimize disk usage (data locality trade-off)
    All is working well…
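A minimal sketch of that Sqoop step, shelling out from Python. The connection string, credentials, table, and target dir are hypothetical placeholders, not the deck’s actual setup:

  # Pull a mysql log table into HDFS as gzipped files via Sqoop (sketch).
  import subprocess

  subprocess.check_call([
      "sqoop", "import",
      "--connect", "jdbc:mysql://db-host/adserver",  # hypothetical database
      "--username", "etl",                           # hypothetical user
      "--table", "click_log",                        # hypothetical log table
      "--target-dir", "/data/logs/click_log",
      "--compress",                                  # gzip to save HDFS space
      "--compression-codec", "org.apache.hadoop.io.compress.GzipCodec",
  ])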
  • 2 months later
    Couple of EBS volumes DIE
    Couple of EC2 instances DIE
    Maintaining the hadoop cluster is a mechanical job → less appealing
    COST!
    Our job utilization is about 50%
    But still paying for machines running 24x7
  • Lessons Learned
    C1.xlarge is pretty stable (8 core / 8G memory)
    EBS volumes
    max size is 1 TB, so string a few together for higher density per node
    DON’T RAID them; let hadoop handle them as individual disks
    Might fail
    Backup data on S3
    Skip EBS. Use instance store disks, and store data in S3
    Use Apache Whirr to set up a cluster easily
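One way the “backup data on S3” step might look with boto, the Python AWS library of that era. The bucket and local paths are hypothetical:

  # Copy local export files up to S3 (sketch).
  import os
  import boto

  conn = boto.connect_s3()                      # AWS creds from the environment
  bucket = conn.get_bucket("my-hadoop-backup")  # hypothetical bucket
  for fname in os.listdir("/mnt/exports"):      # hypothetical export dir
      key = bucket.new_key("backups/" + fname)
      key.set_contents_from_filename(os.path.join("/mnt/exports", fname))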
  • Amazon Storage Options
  • Amazon EC2 Cost
  • Hadoop cluster on EC2 cost
    $3,500 = 7 c1.xlarge @ $500 / month
    $1,500 = 15 TB EBS storage @ $0.10 per GB
    $ 500 = EBS I/O requests @ $0.10 per 1 million I/O requests
    → $5,500 / month
    $60,000 / year !
  • Buy / Rent ?
    Typical hadoop machine cost : $10-15k
    10 node cluster = $100k
    Plus data center costs
    Plus IT-ops costs
    Amazon EC2 10 node cluster:
    $500 * 10 = $5,000 / month = $60k / year
  • Buy / Rent
    Amazon EC2 is great, for
    Quickly getting started
    Startups
    Scaling on demand / rapidly adding more servers
    popular social games
    Netflix story
    Streaming is powered by EC2
    Encoding movies ..etc
    Use 1000s of instances
    Not so economical for running clusters 24x7
    http://blog.rapleaf.com/dev/2008/12/10/rent-or-own-amazon-ec2-vs-colocation-comparison-for-hadoop-clusters/
  • Buy vs Rent
  • Amazon’s Elastic Map Reduce
    Basically ‘on demand’ hadoop cluster
    Store data on Amazon S3
    Kick off a hadoop cluster to process data
    Shutdown when done
    Pay for the HOURS used
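A minimal boto sketch of that launch / process / shutdown cycle. The jar, bucket names, and instance choices are illustrative assumptions, not the deck’s actual job:

  # Start an on-demand EMR cluster, run one jar step, terminate when done.
  from boto.emr.connection import EmrConnection
  from boto.emr.step import JarStep

  conn = EmrConnection()                      # AWS creds from the environment
  step = JarStep(name="crunch-logs",
                 jar="s3://my-bucket/jars/logcruncher.jar",    # hypothetical
                 step_args=["s3://my-bucket/logs/", "s3://my-bucket/output/"])
  jobflow_id = conn.run_jobflow(
      name="nightly log crunch",
      log_uri="s3://my-bucket/emr-logs/",     # keep hadoop logs after shutdown
      steps=[step],
      num_instances=5,
      master_instance_type="m1.large",
      slave_instance_type="c1.xlarge")
  print(jobflow_id)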
  • Architecture2 : Amazon EMR
  • Moving parts
    Logs go into Scribe
    Scribe master ships logs into S3, gzipped
    Spin up EMR cluster, run job, done
    Using same old Java MR jobs for EMR
    Summary data gets written directly to mysql (no output files from reducers); a Python sketch of the idea follows below
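The deck’s reducers are Java; here is the same “write summaries straight to mysql” idea as a Hadoop Streaming reducer in Python. The table, schema, and credentials are hypothetical:

  # Streaming reducer: sum per-key counts, upsert totals into mysql (sketch).
  import sys
  import MySQLdb

  db = MySQLdb.connect(host="stats-db", user="mr", passwd="secret", db="stats")
  cur = db.cursor()

  def flush(key, count):
      # upsert the aggregated count for this key; ad_clicks is hypothetical
      cur.execute(
          "INSERT INTO ad_clicks (ad_id, clicks) VALUES (%s, %s) "
          "ON DUPLICATE KEY UPDATE clicks = clicks + %s",
          (key, count, count))

  current_key, total = None, 0
  for line in sys.stdin:                 # hadoop feeds sorted key<TAB>value
      key, value = line.rstrip("\n").split("\t", 1)
      if key != current_key and current_key is not None:
          flush(current_key, total)
          total = 0
      current_key = key
      total += int(value)
  if current_key is not None:
      flush(current_key, total)
  db.commit()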
  • EMR Wins
    Cost → only pay for use
    http://aws.amazon.com/elasticmapreduce/pricing/
    Example: EMR ran on 5 C1.xlarge for 3hrs
    EC2 instances for 3 hrs = $0.68 per hr x 5 inst x 3 hrs = $10.20
    http://aws.amazon.com/elasticmapreduce/faqs/#billing-4
    (1 hour of c1.xlarge = 8 hours normalized compute time)
    EMR cost = 5 instances x 3 hrs x 8 normalized hrs x $0.12 EMR rate = $14.40
    Plus S3 storage cost : 1TB / month = $150
    Data bandwidth from S3 to EC2 is FREE!
    → $25 bucks
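The same arithmetic in a few lines of Python, using the 2011 prices quoted above:

  instances, hours = 5, 3
  ec2 = 0.68 * instances * hours      # $10.20 of on-demand c1.xlarge time
  emr = 0.12 * instances * hours * 8  # $14.40 EMR fee, 8 normalized hrs per hr
  print("per-run cost: $%.2f" % (ec2 + emr))   # $24.60, i.e. '$25 bucks'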
  • Design Wins
    Bidders now write logs to Scribe directly
    No mysql at web server machines
    Writes much faster!
    S3 has been reliable and cheap storage
  • EMR Wins
    No hadoop cluster to maintain → no failed nodes / disks
  • EMR Wins
    Hadoop clusters can be of any size!
    Spin clusters for each job
    smaller jobs → fewer machines
    memory hungry tasks → m1.xlarge
    cpu hungry tasks → c1.xlarge
  • EMR trade-offs
    Lower performance on MR jobs compared to a dedicated cluster
    Reduced data throughput (S3 isn’t the same as local disk)
    Streaming data from S3, for each job
    EMR Hadoop is not the latest version
    Missing tools : Oozie
    Right now, trading performance for convenience and cost
  • Lessons Learned
    Debugging a failed MR job is tricky
    Because the hadoop cluster is terminated → no log files
    Save log files to S3
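A hedged sketch of pulling those saved logs back out of S3 for a post-mortem; the bucket and jobflow id are made up:

  # Download the saved hadoop logs of a terminated jobflow from S3 (sketch).
  import boto

  conn = boto.connect_s3()
  bucket = conn.get_bucket("my-bucket")                  # hypothetical bucket
  for key in bucket.list(prefix="emr-logs/j-ABC123/"):   # hypothetical jobflow
      key.get_contents_to_filename("/tmp/" + key.name.replace("/", "_"))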
  • Lessons : Script everything
    scripts
    to launch EMR jar jobs
    Custom parameters depending on job needs (instance types, size of cluster ..etc)
    monitor job progress
    Save logs for later inspection
    Job status (finished / cancelled)
    https://github.com/sujee/amazon-emr-beyond-basics
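For the “monitor job progress” piece, a small polling loop around boto’s describe_jobflow; the jobflow id is hypothetical:

  # Wait for an EMR jobflow to reach a terminal state (sketch).
  import time
  from boto.emr.connection import EmrConnection

  def wait_for(jobflow_id):
      conn = EmrConnection()
      while True:
          state = conn.describe_jobflow(jobflow_id).state
          if state in ("COMPLETED", "FAILED", "TERMINATED"):
              return state
          time.sleep(60)               # poll once a minute

  print(wait_for("j-ABC123"))          # hypothetical jobflow id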
  • Saved Logs
  • Map reduce tips: Logfile format
    CSV → JSON
    Started with CSV
    CSV: "2","26","3","07807606-7637-41c0-9bc0-8d392ac73b42","MTY4Mjk2NDk0eDAuNDk4IDEyODQwMTkyMDB4LTM0MTk3OTg2Ng","2010-09-09 03:59:56:000 EDT","70.68.3.116","908105","http://housemdvideos.com/seasons/video.php?s=01&e=07","908105","160x600","performance","25","ca","housemdvideos.com","1","1.2840192E9","0","221","0.60000","NULL","NULL
    20-40 fields… fragile, position dependent, hard to code
    url = csv[18] … counting position numbers gets old (after the 100th time around)
    If (csv.length == 29) url = csv[28] else url = csv[26]
  • Map reduce tips: Logfile format
    JSON: { "exchange_id": 2, "url": "http://housemdvideos.com/seasons/video.php?s=01&e=07" … }
    Self-describing, easy to add new fields, easy to process
    url = map.get('url')
    Flatten JSON to fit in ONE LINE
    Compresses pretty well (not much data inflation)
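What the JSON-era mapper boils down to, as a Hadoop Streaming sketch; the field names mirror the example above, and the counting logic is illustrative:

  # Streaming mapper over one-JSON-object-per-line logs (sketch).
  import sys
  import json

  for line in sys.stdin:
      record = json.loads(line)          # self-describing: no positions
      url = record.get("url")            # vs. the csv[18] / csv[26] guesswork
      if url:
          print("%s\t1" % url)           # emit (url, 1) for counting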
  • Map reduce tips: Incremental Log Processing
    Recent data (today / yesterday / this week) is more relevant than older data (6 months +)
  • Map reduce tips: Incremental Log Processing
    Adding ‘time window’ to our stats
    only process newer logs → faster (a sketch follows below)
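One simple way to implement that time window: compute only the recent S3 input prefixes and hand those to the job. The date-partitioned layout is an assumption:

  # Build S3 input prefixes for the last N days of logs (sketch).
  from datetime import date, timedelta

  def recent_log_paths(days=7, bucket="my-bucket"):      # hypothetical bucket
      today = date.today()
      return ["s3://%s/logs/%s/" % (bucket, (today - timedelta(d)).isoformat())
              for d in range(days)]

  print(recent_log_paths(3))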
  • Next steps
    New Software
    Pig, Python mrjob (from Yelp)
    Scribe → Cloudera Flume?
    Use work flow tools like Oozie
    Hive?
    Ad-hoc SQL-like queries
  • Next Steps : SPOT instances
    SPOT instances : name your price (eBay style)
    Been available on EC2 for a while
    Just became available for Elastic MapReduce!
    New cluster setup:
    10 normal instances + 10 spot instances
    Spots may go away anytime
    That is fine! Hadoop will handle node failures
    Bigger cluster : cheaper & faster
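A hedged boto sketch of such a mixed on-demand / spot cluster; the bid price and group sizes are illustrative, not a recommendation:

  # 10 on-demand core nodes plus 10 spot task nodes (sketch).
  from boto.emr.connection import EmrConnection
  from boto.emr.instance_group import InstanceGroup

  groups = [
      InstanceGroup(1,  "MASTER", "m1.large",  "ON_DEMAND", "master"),
      InstanceGroup(10, "CORE",   "c1.xlarge", "ON_DEMAND", "core"),
      InstanceGroup(10, "TASK",   "c1.xlarge", "SPOT", "tasks", bidprice="0.30"),
  ]
  conn = EmrConnection()
  print(conn.run_jobflow(name="with spot nodes", instance_groups=groups,
                         log_uri="s3://my-bucket/emr-logs/"))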
  • Example Price Comparison
  • In summary…
    Amazon EMR could be a great solution
    We are happy!
  • Take a test drive
    Just bring your credit card
    http://aws.amazon.com/elasticmapreduce/
    Forum : https://forums.aws.amazon.com/forum.jspa?forumID=52
  • Thanks
    Questions?
    Sujee Maniyam
    http://sujee.net
    hello@sujee.net
    Devil’s Slide, Pacifica