Large-Scale Data Processing with Hadoop and PHP (IPC2012SE 2012-06-05)
Presentation given at International PHP Conference 2012 Spring Edition in Berlin, Germany.

Transcript

  • 1. LARGE-SCALE DATA PROCESSING WITH HADOOP AND PHP
  • 2. David Zuelke
  • 3. David Zülke
  • 4. http://en.wikipedia.org/wiki/File:München_Panorama.JPG
  • 5. Founder
  • 6. Lead Developer
  • 7. @dzuelke
  • 8. THE BIG DATA CHALLENGE Distributed And Parallel Computing
  • 9. we want to process data
  • 10. how much data exactly?
  • 11. SOME NUMBERS
      • Facebook
        • New data per day:
          • 200 GB (March 2008)
          • 2 TB (April 2009)
          • 4 TB (October 2009)
          • 12 TB (March 2010)
      • Google
        • Data processed per month: 400 PB (in 2007!)
        • Average job size: 180 GB
  • 12. what if you have that much data?
  • 13. what if you have just 1% of that amount?
  • 14. “No Problemo”, you say?
  • 15. reading 180 GB sequentially off a disk will take ~45 minutes
  • 16. and you only have 16 to 64 GB of RAM per computer
  • 17. so you can't process everything at once
  • 18. general rule of modern computers:
  • 19. data can be processed much faster than it can be read
  • 20. solution: parallelize your I/O
  • 21. but now you need to coordinate what you’re doing
  • 22. and that’s hard
  • 23. what if a node dies?
  • 24. is data lost? will other nodes in the grid have to re-start? how do you coordinate this?
  • 25. ENTER: OUR HERO Introducing MapReduce
  • 26. in the olden days, the workload was distributed across a grid
  • 27. and the data was shipped around between nodes
  • 28. or even stored centrally on something like a SAN
  • 29. which was fine for small amounts of information
  • 30. but today, on the web, we have big data
  • 31. I/O bottleneck
  • 32. along came a Google publication in 2004
  • 33. MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html
  • 34. now the data is distributed
  • 35. computing happens on the nodes where the data already is
  • 36. processes are isolated and don’t communicate (share-nothing)
  • 37. BASIC PRINCIPLE: MAPPER
      • A Mapper reads records and emits <key, value> pairs
      • Example: Apache access.log
        • Each line is a record
        • Extract client IP address and number of bytes transferred
        • Emit IP address as key, number of bytes as value
      • For hourly rotating logs, the job can be split across 24 nodes*
        (* In practice, it's a lot smarter than that)
  • 38. BASIC PRINCIPLE: REDUCER
      • A Reducer is given a key and all values for this specific key
        • Even if there are many Mappers on many computers, the results are aggregated before they are handed to Reducers
      • Example: Apache access.log
        • The Reducer is called once for each client IP (that's our key), with a list of values (transferred bytes)
        • We simply sum up the bytes to get the total traffic per IP!
  • 39. EXAMPLE OF MAPPED INPUT

        IP              Bytes
        212.122.174.13  18271
        212.122.174.13  191726
        212.122.174.13  198
        74.119.8.111    91272
        74.119.8.111    8371
        212.122.174.13  43
  • 40. REDUCER WILL RECEIVE THIS

        IP              Bytes
        212.122.174.13  18271, 191726, 198, 43
        74.119.8.111    91272, 8371
  • 41. AFTER REDUCTION

        IP              Bytes
        212.122.174.13  210238
        74.119.8.111    99643
  • 42. PSEUDOCODE

        function map($line_number, $line_text) {
            $parts = parse_apache_log($line_text);
            emit($parts['ip'], $parts['bytes']);
        }

        function reduce($key, $values) {
            $bytes = array_sum($values);
            emit($key, $bytes);
        }

    Sample input (access.log):

        212.122.174.13 - - [30/Oct/2009:18:14:32 +0100] "GET /foo HTTP/1.1" 200 18271
        212.122.174.13 - - [30/Oct/2009:18:14:32 +0100] "GET /bar HTTP/1.1" 200 191726
        212.122.174.13 - - [30/Oct/2009:18:14:32 +0100] "GET /baz HTTP/1.1" 200 198
        74.119.8.111   - - [30/Oct/2009:18:14:32 +0100] "GET /egg HTTP/1.1" 200 43
        74.119.8.111   - - [30/Oct/2009:18:14:32 +0100] "GET /moo HTTP/1.1" 200 91272
        212.122.174.13 - - [30/Oct/2009:18:14:32 +0100] "GET /yay HTTP/1.1" 200 8371

    Output after reduction:

        212.122.174.13  210238
        74.119.8.111    99643

    (A runnable PHP version of this pseudocode is sketched after the transcript.)
  • 43. A YELLOW ELEPHANT Introducing Apache Hadoop
  • 44. "The name my kid gave a stuffed yellow elephant. Short, relatively easy to spell and pronounce, meaningless and not used elsewhere: those are my naming criteria. Kids are good at generating such. Googol is a kid's term." (Doug Cutting)
  • 45. Hadoop is a MapReduce framework
  • 46. it allows us to focus on writing Mappers, Reducers etc.
  • 47. and it works extremely well
  • 48. how well exactly?
  • 49. HADOOP AT FACEBOOK (I)
      • Predominantly used in combination with Hive (~95%)
      • 8400 cores with ~12.5 PB of total storage
      • 8 cores, 12 TB storage and 32 GB RAM per node
      • 1x Gigabit Ethernet for each server in a rack
      • 4x Gigabit Ethernet from rack switch to core
        (Hadoop is aware of racks and locality of nodes)
      http://www.slideshare.net/royans/facebooks-petabyte-scale-data-warehouse-using-hive-and-hadoop
  • 50. HADOOP AT FACEBOOK (II)
      • Daily stats:
        • 25 TB logged by Scribe
        • 135 TB of compressed data scanned
        • 7500+ Hive jobs
        • ~80k compute hours
      • New data per day:
        • I/08: 200 GB
        • II/09: 2 TB (compressed)
        • III/09: 4 TB (compressed)
        • I/10: 12 TB (compressed)
      http://www.slideshare.net/royans/facebooks-petabyte-scale-data-warehouse-using-hive-and-hadoop
  • 51. HADOOP AT YAHOO!
      • Over 25,000 computers with over 100,000 CPUs
      • Biggest cluster:
        • 4000 nodes
        • 2x4 CPU cores each
        • 16 GB RAM each
      • Over 40% of jobs run using Pig
      http://wiki.apache.org/hadoop/PoweredBy
  • 52. OTHER NOTABLE USERS
      • Twitter (storage, logging, analysis; heavy users of Pig)
      • Rackspace (log analysis; data pumped into Lucene/Solr)
      • LinkedIn (contact suggestions)
      • Last.fm (charts, log analysis, A/B testing)
      • The New York Times (converted 4 TB of scans using EC2)
  • 53. JOB PROCESSING How Hadoop Works
  • 54. Just like I already described! It's MapReduce! \o/
  • 55. BASIC RULES
      • Uses Input Formats to split up your data into single records
      • You can optimize using Combiners to reduce locally on a node
        • Only possible in some cases, e.g. for max(), but not avg() (see the combiner sketch after the transcript)
      • You can control partitioning of map output yourself
        • Rarely useful; the default partitioner (key hash) is enough
      • And a million other things that really don't matter right now ;)
  • 56. HDFS: Hadoop Distributed File System
  • 57. HDFS
      • Stores data in blocks (default block size: 64 MB)
      • Designed for very large data sets
      • Designed for streaming rather than random reads
      • Write-once, read-many (although appending is possible)
      • Capable of compression and other cool things
  • 58. HDFS CONCEPTS
      • Large blocks minimize the number of seeks and maximize throughput
      • Blocks are stored redundantly (3 replicas by default)
      • Aware of infrastructure characteristics (nodes, racks, ...)
      • Datanodes hold blocks
      • Namenode holds the metadata
        (a critical component of an HDFS cluster: HA, SPOF)
  • 59. there’s just one little problem
  • 60. you need to write Java code
  • 61. however, there is hope...
  • 62. STREAMING: Hadoop Won't Force Us To Use Java
  • 63. Hadoop Streaming can use any script as Mapper or Reducer
  • 64. many configuration options (parsers, formats, combining, …)
  • 65. it works using STDIN and STDOUT
  • 66. Mappers are streamed the records (usually by line: <line>\n) and emit key/value pairs: <key>\t<value>\n
  • 67. Reducers are streamed key/value pairs:

        <keyA>\t<value1>\n
        <keyA>\t<value2>\n
        <keyA>\t<value3>\n
        <keyB>\t<value4>\n
  • 68. Caution: there is no separate Reducer process per key, but the keys arrive sorted (see the PHP streaming sketch after the transcript)
  • 69. STREAMING WITH PHP Introducing HadooPHP
  • 70. HADOOPHP
      • A little framework to help with writing mapred jobs in PHP
      • Takes care of input splitting, can do basic decoding, et cetera
        • Automatically detects and handles Hadoop settings such as key length or field separators
      • Packages jobs as one .phar archive to ease deployment
        • Also creates a ready-to-rock shell script to invoke the job
  • 71. written by
  • 72. DEMO: Hadoop Streaming & PHP in Action
  • 73. The End
  • 74. RESOURCES
      • Book: Tom White: Hadoop. The Definitive Guide. O'Reilly, 2009
      • Cloudera Distribution: http://www.cloudera.com/hadoop/
        • Also: http://www.cloudera.com/developers/learn-hadoop/
      • From this talk:
        • Logs: http://infochimps.com/datasets/star-wars-kid-data-dump
        • HadooPHP: http://github.com/dzuelke/hadoophp
  • 75. Questions?
  • 76. THANK YOU! This was http://joind.in/6639 by @dzuelke. Contact me or hire us: david.zuelke@bitextender.com
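Sketch 1, referenced from slide 42: a minimal runnable PHP take on the pseudocode. The slide does not spell out parse_apache_log(), so a simple regex-based parser is assumed here, and emit() just prints a tab-separated key/value pair in the format described on slides 65-67; both helpers are illustrative, not Hadoop or HadooPHP APIs.

    <?php
    // Runnable sketch of the slide-42 pseudocode (assumed helpers below).

    function parse_apache_log($line_text)
    {
        // client IP is the first field, bytes transferred is the last one
        if (preg_match('/^(\S+) .* (\d+|-)$/', trim($line_text), $m)) {
            return array('ip' => $m[1], 'bytes' => ($m[2] === '-') ? 0 : (int)$m[2]);
        }
        return null; // skip malformed lines
    }

    function emit($key, $value)
    {
        // one tab-separated key/value pair per line
        echo $key, "\t", $value, "\n";
    }

    function map($line_number, $line_text)
    {
        $parts = parse_apache_log($line_text);
        if ($parts !== null) {
            emit($parts['ip'], $parts['bytes']);
        }
    }

    function reduce($key, array $values)
    {
        emit($key, array_sum($values));
    }

Calling map() once per log line and reduce() once per IP with the collected byte values yields per-IP traffic totals like those shown on slide 41.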
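Sketch 2, referenced from slides 63-68: the same job written as a pair of Hadoop Streaming scripts. The file names mapper.php and reducer.php are illustrative. The mapper reads raw log lines from STDIN and writes <key>\t<value>\n pairs to STDOUT; the reducer receives the sorted pairs and must detect key changes itself, because Streaming does not start a separate reducer process per key.

mapper.php:

    #!/usr/bin/env php
    <?php
    // one access.log line in, "ip<TAB>bytes" out
    while (($line = fgets(STDIN)) !== false) {
        if (preg_match('/^(\S+) .* (\d+|-)$/', trim($line), $m)) {
            $bytes = ($m[2] === '-') ? 0 : (int)$m[2];
            echo $m[1], "\t", $bytes, "\n";
        }
    }

reducer.php:

    #!/usr/bin/env php
    <?php
    // receives "key<TAB>value" lines sorted by key and emits a total
    // whenever the key changes
    $currentKey = null;
    $sum = 0;
    while (($line = fgets(STDIN)) !== false) {
        $parts = explode("\t", rtrim($line, "\n"), 2);
        if (count($parts) !== 2) {
            continue; // ignore malformed lines
        }
        list($key, $value) = $parts;
        if ($key !== $currentKey) {
            if ($currentKey !== null) {
                echo $currentKey, "\t", $sum, "\n";
            }
            $currentKey = $key;
            $sum = 0;
        }
        $sum += (int)$value;
    }
    if ($currentKey !== null) {
        echo $currentKey, "\t", $sum, "\n"; // flush the last key
    }

For a quick local test the two scripts can be chained with a sort step, e.g. cat access.log | php mapper.php | sort | php reducer.php, which mimics the shuffle-and-sort Hadoop performs at scale. On a cluster the job is submitted through the hadoop-streaming jar (its path depends on the installation) with the standard -input, -output, -mapper, -reducer and -file options; the ready-to-rock shell script mentioned on slide 70 automates that kind of invocation.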
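Sketch 3, referenced from slide 55: why a Combiner works for sums and max() but not for avg(). A Combiner reduces locally over a partial set of values, so it is only safe when partial results can be merged; averaging partial averages gives the wrong answer unless every partition has the same size. The combine_partials() helper below is an assumption used for illustration: the common workaround is to carry (sum, count) pairs and divide only in the final reducer.

    <?php
    // Average of partial averages != overall average (unequal partitions)
    $partitionA = array(10, 20, 30);  // local average: 20
    $partitionB = array(40);          // local average: 40

    echo (20 + 40) / 2, "\n";                                         // 30 -- wrong
    echo array_sum(array_merge($partitionA, $partitionB)) / 4, "\n";  // 25 -- correct

    // Workaround: partial (sum, count) pairs ARE safely mergeable,
    // so they can be combined locally and divided only at the very end.
    function combine_partials(array $partials)
    {
        $sum = 0;
        $count = 0;
        foreach ($partials as $p) {
            $sum   += $p['sum'];
            $count += $p['count'];
        }
        return array('sum' => $sum, 'count' => $count);
    }

    $merged = combine_partials(array(
        array('sum' => array_sum($partitionA), 'count' => count($partitionA)),
        array('sum' => array_sum($partitionB), 'count' => count($partitionB)),
    ));

    echo $merged['sum'] / $merged['count'], "\n";  // 25, the correct average

The byte-sum example from the earlier sketches, by contrast, can reuse its reducer logic as a Combiner directly, since a sum of partial sums is the overall sum.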