THE BIG DATA CHALLENGE Distributed And Parallel Computing
we want to process data
how much data exactly?
SOME NUMBERS
• Facebook
  • New data per day:
    • 200 GB (March 2008)
    • 2 TB (April 2009)
    • 4 TB (October 2009)
    • 12 TB (March 2010)
• Google
  • Data processed per month: 400 PB (in 2007!)
  • Average job size: 180 GB
what if you have that much data?
what if you have just 1% of that amount?
“No Problemo”, you say?
reading 180 GB sequentially off a disk will take ~45 minutes (at roughly 70 MB/s of sequential throughput)
and you only have 16 to 64 GB of RAM per computer
so you can't process everything at once
general rule of modern computers:
data can be processed much faster than it can be read
solution: parallelize your I/O
but now you need to coordinate what you’re doing
and that’s hard
what if a node dies?
is data lost? will other nodes in the grid have to restart? how do you coordinate this?
ENTER: OUR HERO Introducing MapReduce
in the olden days, the workload was distributed across a grid
and the data was shipped around between nodes
or even stored centrally on something like a SAN
which was fine for small amounts of information
but today, on the web, we have big data
along came a Google publication in 2004
MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html
now the data is distributed
computing happens on the nodes where the data already is
processes are isolated and don’t communicate (share-nothing)
BASIC PRINCIPLE: MAPPER
• A Mapper reads records and emits <key, value> pairs
• Example: Apache access.log
  • Each line is a record
  • Extract client IP address and number of bytes transferred
  • Emit IP address as key, number of bytes as value
• For hourly rotating logs, the job can be split across 24 nodes*
* In practice, it's a lot smarter than that
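As an illustration, here is a minimal PHP sketch of that Mapper logic (the function name and the log-parsing regular expression are assumptions for this example, not part of any Hadoop API):

<?php
// Sketch of the Mapper logic for the access.log example.
// Assumes an Apache common log format line such as:
// 127.0.0.1 - - [10/Oct/2010:13:55:36 +0200] "GET /index.html HTTP/1.1" 200 2326
function map_record($line)
{
    // the client IP is the first field, the transferred bytes are the last field
    if (preg_match('/^(\S+) .* (\d+)$/', trim($line), $m)) {
        // emit <key, value>: IP address => number of bytes
        return array($m[1], (int) $m[2]);
    }
    return null; // skip records without a numeric byte count (e.g. "-")
}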
BASIC PRINCIPLE: REDUCER
• A Reducer is given a key and all values for this specific key
• Even if there are many Mappers on many computers, the results are aggregated before they are handed to Reducers
• Example: Apache access.log
  • The Reducer is called once for each client IP (that's our key), with a list of values (transferred bytes)
  • We simply sum up the bytes to get the total traffic per IP!
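And a matching sketch of the Reducer logic in PHP (again just an illustration of the principle, not an actual framework API):

<?php
// Sketch of the Reducer logic for the access.log example:
// called once per client IP with all byte counts collected for that IP.
function reduce_record($ip, array $bytes)
{
    // the total traffic for this IP is simply the sum of all emitted values
    return array($ip, array_sum($bytes));
}

// e.g. reduce_record('22.214.171.124', array(18271, 191726, 198, 43))
//      returns array('22.214.171.124', 210238)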
EXAMPLE OF MAPPED INPUT
IP              Bytes
22.214.171.124  18271
22.214.171.124  191726
22.214.171.124  198
126.96.36.199   91272
126.96.36.199   8371
22.214.171.124  43
REDUCER WILL RECEIVE THIS
IP              Bytes
22.214.171.124  18271
                191726
                198
                43
126.96.36.199   91272
                8371
AFTER REDUCTION
IP              Bytes
22.214.171.124  210238
126.96.36.199   99643
"The name my kid gave a stuffed yellow elephant. Short, relatively easy to spell and pronounce, meaningless and not used elsewhere: those are my naming criteria. Kids are good at generating such. Googol is a kid's term." Doug Cutting
Hadoop is a MapReduce framework
it allows us to focus on writing Mappers, Reducers etc.
and it works extremely well
how well exactly?
HADOOP AT FACEBOOK (I)
• Predominantly used in combination with Hive (~95%)
• 8400 cores with ~12.5 PB of total storage
• 8 cores, 12 TB storage and 32 GB RAM per node
• 1x Gigabit Ethernet for each server in a rack
• 4x Gigabit Ethernet from rack switch to core
Hadoop is aware of racks and locality of nodes
http://www.slideshare.net/royans/facebooks-petabyte-scale-data-warehouse-using-hive-and-hadoop
HADOOP AT FACEBOOK (II)
• Daily stats:
  • 25 TB logged by Scribe
  • 135 TB of compressed data scanned
  • 7500+ Hive jobs
  • ~80k compute hours
• New data per day:
  • I/08: 200 GB
  • II/09: 2 TB (compressed)
  • III/09: 4 TB (compressed)
  • I/10: 12 TB (compressed)
http://www.slideshare.net/royans/facebooks-petabyte-scale-data-warehouse-using-hive-and-hadoop
HADOOP AT YAHOO!
• Over 25,000 computers with over 100,000 CPUs
• Biggest cluster:
  • 4000 nodes
  • 2x4 CPU cores each
  • 16 GB RAM each
• Over 40% of jobs run using Pig
http://wiki.apache.org/hadoop/PoweredBy
OTHER NOTABLE USERS
• Twitter (storage, logging, analysis; heavy users of Pig)
• Rackspace (log analysis; data pumped into Lucene/Solr)
• LinkedIn (friend suggestions)
• Last.fm (charts, log analysis, A/B testing)
• The New York Times (converted 4 TB of scans using EC2)
JOB PROCESSING How Hadoop Works
Just like I already described! It's MapReduce! \o/
BASIC RULES
• Uses Input Formats to split up your data into single records
• You can optimize using combiners to reduce locally on a node
  • Only possible in some cases, e.g. for max(), but not avg() (the max of per-node maxima is still the overall max, but the average of per-node averages is generally not the overall average)
• You can control partitioning of map output yourself
  • Rarely useful, the default partitioner (key hash) is enough
• And a million other things that really don't matter right now ;)
HDFS Hadoop Distributed File System
HDFS
• Stores data in blocks (default block size: 64 MB)
• Designed for very large data sets
• Designed for streaming rather than random reads
• Write-once, read-many (although appending is possible)
• Capable of compression and other cool things
HDFS CONCEPTS
• Large blocks minimize amount of seeks, maximize throughput
• Blocks are stored redundantly (3 replicas as default)
• Aware of infrastructure characteristics (nodes, racks, ...)
• Datanodes hold blocks
• Namenode holds the metadata
  • Critical component for an HDFS cluster (HA, SPOF)
there’s just one little problem
you need to write Java code
however, there is hope...
STREAMING Hadoop Won't Force Us To Use Java
Hadoop Streaming can use any script as Mapper or Reducer
many configuration options (parsers, formats, combining, ...)
it works using STDIN and STDOUT
Mappers are streamed the records (usually by line: <line>\n) and emit key/value pairs: <key>\t<value>\n
Reducers are streamed key/value pairs:
<keyA>\t<value1>\n
<keyA>\t<value2>\n
<keyA>\t<value3>\n
<keyB>\t<value4>\n
Caution: no separate Reducer processes per key (but keys are sorted)
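To make this concrete, here is a rough sketch of a streaming Mapper and Reducer for the earlier access.log example (the file names, the log-parsing regex and the shebang lines are assumptions for this sketch, not something prescribed by Hadoop):

#!/usr/bin/env php
<?php
// mapper.php: reads raw access.log lines from STDIN,
// writes "<ip>\t<bytes>\n" pairs to STDOUT
while (($line = fgets(STDIN)) !== false) {
    if (preg_match('/^(\S+) .* (\d+)$/', trim($line), $m)) {
        echo $m[1], "\t", $m[2], "\n";
    }
}

#!/usr/bin/env php
<?php
// reducer.php: receives "<ip>\t<bytes>\n" lines sorted by key but not grouped,
// so the key change has to be detected manually (see the caution above)
$currentIp = null;
$total = 0;
while (($line = fgets(STDIN)) !== false) {
    list($ip, $bytes) = explode("\t", trim($line), 2);
    if ($ip !== $currentIp) {
        if ($currentIp !== null) {
            echo $currentIp, "\t", $total, "\n"; // flush the previous key
        }
        $currentIp = $ip;
        $total = 0;
    }
    $total += (int) $bytes;
}
if ($currentIp !== null) {
    echo $currentIp, "\t", $total, "\n"; // and don't forget the last key
}

Both scripts would typically be handed to the streaming jar via its -mapper and -reducer options, together with -input and -output paths.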
STREAMING WITH PHP Introducing HadooPHP
HADOOPHP
• A little framework to help with writing mapred jobs in PHP
• Takes care of input splitting, can do basic decoding et cetera
  • Automatically detects and handles Hadoop settings such as key length or field separators
• Packages jobs as one .phar archive to ease deployment
  • Also creates a ready-to-rock shell script to invoke the job
DEMOHadoop Streaming & PHP in Action
RESOURCES
• http://www.cloudera.com/developers/learn-hadoop/
• Tom White: Hadoop: The Definitive Guide. O'Reilly, 2009
• http://www.cloudera.com/hadoop/
  • Cloudera Distribution for Hadoop is easy to install and has all the stuff included: Hadoop, Hive, Flume, Sqoop, Oozie, ...
THANK YOU! This was http://join.in/3968 by @dzuelke. Contact me or hire us: email@example.com