7. Hi, I’m Sujee 10+ years of software development: enterprise apps, web apps, iPhone apps, Hadoop Hands-on experience with Hadoop / HBase / Amazon ‘cloud’ More : http://sujee.net/tech
10. Nature of Data… Primary data Email, blogs, pictures, tweets Critical for operation (Gmail can’t lose emails) Secondary data Wikipedia access logs, Google search logs Not ‘critical’, but used to ‘enhance’ the user experience Search logs help predict ‘trends’ Yelp can figure out you like Chinese food
11. Data Explosion Primary data has grown phenomenally But secondary data has exploded in recent years “log everything and ask questions later” Used for Recommendations (books, restaurants, etc.) Predicting trends (job skills in demand) Showing ADS ($$$) etc. ‘Big Data’ is no longer just a problem for the big guys (Google / Facebook) Startups are struggling to get on top of ‘big data’
12. Hadoop to the Rescue Hadoop can help with Big Data Hadoop has been proven in the field Under active development Throw hardware at the problem Getting cheaper by the year Bleeding-edge technology Hire good people!
19. About This Presentation Based on my experience with a startup 5 people (3 engineers) Ad-serving space Amazon EC2 is our ‘data center’ Technologies: Web stack : Python, Tornado, PHP, MySQL, LAMP Amazon EMR to crunch data Data size : 1 TB / week
20. Story of a Startup…month-1 Each web server writes logs locally Logs were copied to a log-server and purged from web servers Log data size : ~100-200 G
21. Story of a Startup…month-6 More web servers come online Aggregate log server falls behind
22. Data @ 6 months 2 TB of data already 50-100 G new data / day And we were operating on 20% of our capacity!
29. Hadoop Cluster 7 c1.xlarge machines 15 TB EBS volumes Sqoop exports MySQL log tables into HDFS Logs are compressed (gz) to minimize disk usage (a data-locality trade-off) All is working well…
30. Lessons Learned c1.xlarge is pretty stable (8 cores / 8G memory) EBS volumes max out at 1 TB, so string a few together for higher density per node DON’T RAID them; let Hadoop handle them as individual disks ?? : Skip EBS. Use instance-store disks, and store data in S3
32. 2 months later A couple of EBS volumes DIE A couple of EC2 instances DIE Maintaining the Hadoop cluster is a mechanical, less appealing job COST! Our job utilization is about 50% But we still pay for machines running 24x7
34. Hadoop cluster on EC2 cost $3,500 = 7 c1.xlarge @ $500 / month $1,500 = 15 TB EBS storage @ $0.10 per GB $500 = EBS I/O requests @ $0.10 per 1 million I/O requests $5,500 / month = $66,000 / year !
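The monthly figure checks out arithmetically, and works out to about $66k over a year:

```python
# Back-of-envelope check of the monthly cluster cost (prices from the slide).
instances   = 7 * 500          # 7 c1.xlarge @ $500 / month
ebs_storage = 15 * 1000 // 10  # 15 TB of EBS @ $0.10 per GB-month
ebs_io      = 500              # EBS I/O requests @ $0.10 per million
monthly = instances + ebs_storage + ebs_io
yearly  = monthly * 12
print(monthly, yearly)         # 5500 66000
```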
35. Buy / Rent ? Typical Hadoop machine cost : $10k 10-node cluster = $100k Plus data-center costs Plus IT-ops costs Amazon EC2 10-node cluster: $500 * 10 = $5,000 / month = $60k / year
36. Buy / Rent Amazon EC2 is great for Quickly getting started Startups Scaling on demand / rapidly adding more servers popular social games Netflix story Streaming is powered by EC2 Encoding movies, etc. Uses 1000s of instances Not so economical for running clusters 24x7
42. Moving parts Logs go into Scribe Scribe master ships logs into S3, gzipped Spin up EMR cluster, run job, done Using the same old Java MR jobs on EMR Summary data gets updated directly in MySQL
43. EMR Launch Scripts Scripts to launch jar-based EMR jobs Custom parameters depending on job needs (instance types, size of cluster, etc.) Monitor job progress Save logs for later inspection Job status (finished / cancelled) https://github.com/sujee/amazon-emr-beyond-basics
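The launch scripts themselves aren't shown on the slide; as a rough sketch (not the repo's actual code), per-job parameters might be assembled into a boto3-style `run_job_flow` request like this. Bucket names, the jar path, and the job name are all hypothetical:

```python
def build_emr_request(name, jar, args, instance_type="c1.xlarge", count=5):
    """Assemble a boto3-EMR-style run_job_flow request dict.

    Instance type and cluster size vary per job; logs go to S3 so they
    can be inspected after the cluster terminates.
    """
    return {
        "Name": name,
        "LogUri": "s3://my-bucket/emr-logs/%s/" % name,  # hypothetical bucket
        "Instances": {
            "MasterInstanceType": instance_type,
            "SlaveInstanceType": instance_type,
            "InstanceCount": count,
        },
        "Steps": [{
            "Name": name,
            "ActionOnFailure": "TERMINATE_JOB_FLOW",
            "HadoopJarStep": {"Jar": jar, "Args": list(args)},
        }],
    }

# A CPU-hungry job on a slightly larger cluster:
req = build_emr_request("log-summary",
                        "s3://my-bucket/jobs/summary.jar",  # hypothetical jar
                        ["s3://my_bucket/logs/log_B*", "s3://my-bucket/out/"],
                        count=10)
```

In a real script the dict would be handed to something like `boto3.client("emr").run_job_flow(**req)`, followed by polling for job status.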
49. Data joining (x-ref) Data is split across log files, so we need to x-ref during the Map phase We used to load the data into the mapper’s memory (data was small and in MySQL) Now we use Membase (Memcached) Two MR jobs are chained First one processes logfile_type_A and populates Membase (very quick, takes minutes) Second one processes logfile_type_B, cross-referencing values from Membase
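A minimal sketch of the chaining, with a plain dict standing in for the Membase client and made-up field names (`campaign`, `clicks`):

```python
# In production this would be a memcached/Membase client (set/get calls);
# a dict keeps the sketch self-contained.
xref = {}

def map_type_a(record):
    """Job 1 mapper: populate the lookup store, keyed by the shared id."""
    xref[record["id"]] = record["campaign"]

def map_type_b(record):
    """Job 2 mapper: cross-reference values written by job 1."""
    return {"id": record["id"],
            "campaign": xref.get(record["id"], "UNKNOWN"),
            "clicks": record["clicks"]}

# Job 1 runs first (fast), then job 2 joins against its output:
map_type_a({"id": "42", "campaign": "fall-promo"})
out = map_type_b({"id": "42", "clicks": 3})
```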
51. EMR Wins Cost : only pay for use http://aws.amazon.com/elasticmapreduce/pricing/ Example: EMR ran on 5 c1.xlarge for 3 hrs EC2 instances for 3 hrs = $0.68 per hr x 5 instances x 3 hrs = $10.20 http://aws.amazon.com/elasticmapreduce/faqs/#billing-4 (1 hour of c1.xlarge = 8 hours normalized compute time) EMR cost = 5 instances x 3 hrs x 8 normalized hrs x $0.12 = $14.40 Plus S3 storage cost : 1 TB / month = $150 Data bandwidth from S3 to EC2 is FREE! Total : about $25 per run
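Reproducing the per-run math in Python, with the prices as quoted on the slide:

```python
# Per-run cost of a 5-node, 3-hour EMR job (slide's quoted prices).
instances, hours = 5, 3
ec2 = 0.68 * instances * hours       # on-demand c1.xlarge EC2 charge
emr = 0.12 * instances * hours * 8   # 1 c1.xlarge hr = 8 normalized hrs
total = ec2 + emr
print(round(ec2, 2), round(emr, 2), round(total, 2))  # 10.2 14.4 24.6
```

The S3 storage charge ($150 for 1 TB) is a monthly cost, not per run, which is why the run itself comes out to roughly $25.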
52. EMR Wins No Hadoop cluster to maintain; no failed nodes / disks Bonus : Can tailor the cluster for various jobs smaller jobs fewer machines memory-hungry tasks m1.xlarge cpu-hungry tasks c1.xlarge
53. Design Wins Bidders now write logs to Scribe directly No MySQL on web-server machines Writes are much faster! S3 has been reliable and cheap storage
57. Lessons learned : Logfile format CSV vs. JSON We started with CSV CSV: "2","26","3","07807606-7637-41c0-9bc0-8d392ac73b42","MTY4Mjk2NDk0eDAuNDk4IDEyODQwMTkyMDB4LTM0MTk3OTg2Ng","2010-09-09 03:59:56:000 EDT","70.68.3.116","908105","http://housemdvideos.com/seasons/video.php?s=01&e=07","908105","160x600","performance","25","ca","housemdvideos.com","1","1.2840192E9","0","221","0.60000","NULL","NULL 20-40 fields… fragile, position-dependent, hard to code url = csv[18] … counting position numbers gets old after the 100th time around If (csv.length == 29) url = csv[28] else url = csv[26] JSON: { exchange_id: 2, url : “http://housemdvideos.com/seasons/video.php?s=01&e=07”….} Self-describing, easy to add new fields, easy to process url = map.get(‘url’)
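The contrast can be shown end-to-end in a few lines of Python, using a shortened, made-up record with only three of the fields:

```python
import csv
import io
import json

# Position-dependent CSV: inserting a column shifts every later index.
csv_line = '"2","160x600","http://housemdvideos.com/seasons/video.php?s=01&e=07"'
row = next(csv.reader(io.StringIO(csv_line)))
url_from_csv = row[2]   # breaks silently if a field is added before it

# Self-describing JSON: fields are named, so new fields are harmless.
json_line = json.dumps({"exchange_id": 2, "size": "160x600",
                        "url": "http://housemdvideos.com/seasons/video.php?s=01&e=07"})
url_from_json = json.loads(json_line)["url"]
```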
58. Lessons Learned : Control the amount of input We get different types of events event A (freq: 10,000) >>> event B (100) >> event C (1) Initially we put them all into a single log file A A A A B A A B C
59. Control Input… So we have to process the entire file, even if we are interested only in ‘event C’ too much wasted processing So we split the logs log_A….gz log_B….gz log_C…gz Now we process only a fraction of our logs Input : s3://my_bucket/logs/log_B* x-ref using memcache if needed
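The split can be sketched like this, with in-memory lists standing in for the per-type gzipped files:

```python
from collections import defaultdict

# Route mixed events into per-type logs, so a job over 'event C'
# reads only s3://my_bucket/logs/log_C* instead of everything.
events = ["A", "A", "A", "A", "B", "A", "A", "B", "C"]  # mix from the slide

files = defaultdict(list)  # stand-in for one gzipped log file per type
for e in events:
    files["log_%s" % e].append(e)

print({name: len(recs) for name, recs in sorted(files.items())})
# {'log_A': 6, 'log_B': 2, 'log_C': 1}
```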
60. Lessons learned : Incremental Log Processing Recent data (today / yesterday / this week) is more relevant than older data (6 months +) Adding a ‘time window’ to our stats Only process newer logs Faster
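A sketch of the time window, with a fixed ‘today’ so the example is reproducible (the 7-day window and the dates are made up):

```python
from datetime import date, timedelta

def logs_in_window(log_dates, days=7, today=date(2011, 3, 15)):
    """Keep only log dates inside the recent window; older logs are skipped."""
    cutoff = today - timedelta(days=days)
    return [d for d in log_dates if d >= cutoff]

dates = [date(2011, 3, 14), date(2011, 3, 1), date(2010, 9, 1)]
recent = logs_in_window(dates)   # only the log from this week survives
```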
61. EMR trade-offs Lower performance on MR jobs compared to a dedicated cluster Reduced data throughput (S3 isn’t the same as local disk) Streaming data from S3 for each job EMR Hadoop is not the latest version Missing tools : Oozie Right now, trading performance for convenience and cost
62. Next steps : faster processing Streaming S3 data for each MR job is not optimal Spin up cluster Copy data from S3 to HDFS Run all MR jobs (making use of data locality) Terminate
63. Next Steps : More Processing More MR jobs More frequent data processing Frequent log rolls Smaller delta window
64. Next steps : new software New software Python, mrjob (from Yelp) Scribe Cloudera Flume? Use workflow tools like Oozie Hive? Ad-hoc SQL-like queries
65. Next Steps : SPOT instances SPOT instances : name your price (eBay style) Have been available on EC2 for a while Just became available for Elastic MapReduce! New cluster setup: 10 normal instances + 10 spot instances Spots may go away at any time That is fine! Hadoop will handle node failures Bigger cluster : cheaper & faster
67. Next Steps : nosql Summary data goes into MySQL a potential weak link (some tables have ~100 million rows and growing) Evaluating nosql solutions Using Membase in a limited capacity Watch out for Amazon’s HBase offering
68. Take a test drive Just bring your credit card http://aws.amazon.com/elasticmapreduce/ Forum : https://forums.aws.amazon.com/forum.jspa?forumID=52