Presentation from the technical track of #BitByte, a professional development festival held on May 19 in Saint Petersburg.
Vladimir Suvorov, Industry Solutions Engineer at EMC2: "Practical Use of Apache Hadoop, a Distributed Data Processing Technology."
3. What is MapReduce?
• Data-parallel programming model for
clusters of commodity machines
• Pioneered by Google
– Processes 20 PB of data per day
• Popularized by open-source Hadoop project
– Used by Yahoo!, Facebook, Amazon, …
4. What is MapReduce used for?
• At Google:
– Index building for Google Search
– Article clustering for Google News
– Statistical machine translation
• At Yahoo!:
– Index building for Yahoo! Search
– Spam detection for Yahoo! Mail
• At Facebook:
– Data mining
– Ad optimization
– Spam detection
7. What is MapReduce used for?
• In research:
– Analyzing Wikipedia conflicts (PARC)
– Natural language processing (CMU)
– Bioinformatics (Maryland)
– Astronomical image analysis (Washington)
– Ocean climate simulation (Washington)
– <Your application here>
8. Outline
• MapReduce architecture
• Fault tolerance in MapReduce
• Sample applications
• Getting started with Hadoop
• Higher-level languages on top of Hadoop:
Pig and Hive
9. MapReduce Design Goals
1. Scalability to large data volumes:
– Scan 100 TB on 1 node @ 50 MB/s = 23 days
– Scan on 1000-node cluster = 33 minutes
2. Cost-efficiency:
– Commodity nodes (cheap, but unreliable)
– Commodity network
– Automatic fault-tolerance (fewer admins)
– Easy to use (fewer programmers)
10. Typical Hadoop Cluster
[Diagram: racks of nodes connected by rack switches, which feed into an aggregation switch]
• 40 nodes/rack, 1000-4000 nodes in cluster
• 1 Gbps bandwidth within rack, 8 Gbps out of rack
• Node specs (Yahoo terasort):
  8 x 2.0 GHz cores, 8 GB RAM, 4 disks (= 4 TB?)
11. Typical Hadoop Cluster
Image from http://wiki.apache.org/hadoop-data/attachments/HadoopPresentations/attachments/aw-apachecon-eu-2009.pdf
12. Challenges
• Cheap nodes fail, especially if you have many
– Mean time between failures for 1 node = 3 years
– MTBF for 1000 nodes = 1 day
– Solution: Build fault-tolerance into system
• Commodity network = low bandwidth
– Solution: Push computation to the data
• Programming distributed systems is hard
– Solution: Data-parallel programming model: users
write “map” and “reduce” functions, system handles
work distribution and fault tolerance
13. Hadoop Components
• Distributed file system (HDFS)
– Single namespace for entire cluster
– Replicates data 3x for fault-tolerance
• MapReduce implementation
– Executes user jobs specified as “map” and
“reduce” functions
– Manages work distribution & fault-tolerance
14. Hadoop Distributed File System
• Files split into 128 MB blocks
• Blocks replicated across several datanodes (usually 3)
• Single namenode stores metadata (file names, block locations, etc)
• Optimized for large files, sequential reads
• Files are append-only
[Diagram: the namenode maps File1 to blocks 1-4; each block is replicated on several datanodes, so every block lives on (usually) three machines]
15. MapReduce Programming Model
• Data type: key-value records
• Map function:
  (K_in, V_in) → list(K_inter, V_inter)
• Reduce function:
  (K_inter, list(V_inter)) → list(K_out, V_out)
16. Example: Word Count
def mapper(line):
    for word in line.split():
        output(word, 1)

def reducer(key, values):
    output(key, sum(values))
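To make the model concrete, here is a minimal local sketch (plain Python, no Hadoop; names are illustrative, not part of the original slides) that runs the word-count map and reduce functions with an in-memory shuffle:

from collections import defaultdict

def mapper(line):
    # emit one (word, 1) pair per word occurrence
    return [(word, 1) for word in line.split()]

def reducer(key, values):
    return (key, sum(values))

def run_word_count(lines):
    groups = defaultdict(list)          # "shuffle & sort": group values by key
    for line in lines:
        for key, value in mapper(line):
            groups[key].append(value)
    return [reducer(k, v) for k, v in sorted(groups.items())]

print(run_word_count(["the quick brown fox",
                      "the fox ate the mouse",
                      "how now brown cow"]))
# [('ate', 1), ('brown', 2), ('cow', 1), ('fox', 2), ('how', 1),
#  ('mouse', 1), ('now', 1), ('quick', 1), ('the', 3)]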
17. Word Count Execution
Input → Map → Shuffle & Sort → Reduce → Output
Input splits: "the quick brown fox", "the fox ate the mouse", "how now brown cow"
Map output (one pair per word occurrence): (the, 1), (quick, 1), (brown, 1), (fox, 1); (the, 1), (fox, 1), (ate, 1), (the, 1), (mouse, 1); (how, 1), (now, 1), (brown, 1), (cow, 1)
Reduce output: (ate, 1), (brown, 2), (cow, 1), (fox, 2), (how, 1), (mouse, 1), (now, 1), (quick, 1), (the, 3)
[Diagram: each input line is handled by its own map task; the shuffle groups identical keys and two reduce tasks emit the final counts]
18. MapReduce Execution Details
• Single master controls job execution on
multiple slaves as well as user scheduling
• Mappers preferentially placed on same node
or same rack as their input block
– Push computation to data, minimize network use
• Mappers save outputs to local disk rather
than pushing directly to reducers
– Allows having more reducers than nodes
– Allows recovery if a reducer crashes
19. An Optimization: The Combiner
• A combiner is a local aggregation function
for repeated keys produced by same map
• For associative ops. like sum, count, max
• Decreases size of intermediate data
• Example: local counting for Word Count:
def combiner(key, values):
    output(key, sum(values))
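A rough local sketch of what the combiner buys (plain Python, illustrative names): for the second input line of the running example, the map task sends one (the, 2) pair across the network instead of two (the, 1) pairs.

import collections

def map_with_combiner(line):
    # map emits (word, 1) pairs; the combiner sums them per key before the shuffle
    local = collections.defaultdict(int)
    for word in line.split():
        local[word] += 1
    return list(local.items())

print(map_with_combiner("the fox ate the mouse"))
# [('the', 2), ('fox', 1), ('ate', 1), ('mouse', 1)]  i.e. 4 pairs instead of 5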
20. Word Count with Combiner
Input → Map & Combine → Shuffle & Sort → Reduce → Output
Same input as on slide 17; each combiner merges repeated keys inside its map task, so the second mapper sends (the, 2) once instead of (the, 1) twice.
Reduce output is unchanged: (ate, 1), (brown, 2), (cow, 1), (fox, 2), (how, 1), (mouse, 1), (now, 1), (quick, 1), (the, 3)
[Diagram: same dataflow as slide 17, with local combining before the shuffle]
21. Outline
• MapReduce architecture
• Fault tolerance in MapReduce
• Sample applications
• Getting started with Hadoop
• Higher-level languages on top of Hadoop:
Pig and Hive
22. Fault Tolerance in MapReduce
1. If a task crashes:
– Retry on another node
• Okay for a map because it had no dependencies
• Okay for reduce because map outputs are on disk
– If the same task repeatedly fails, fail the job or
ignore that input block (user-controlled)
Note: For this and the other fault tolerance
features to work, your map and reduce
tasks must be side-effect-free
23. Fault Tolerance in MapReduce
2. If a node crashes:
– Relaunch its current tasks on other nodes
– Relaunch any maps the node previously ran
• Necessary because their output files were lost
along with the crashed node
24. Fault Tolerance in MapReduce
3. If a task is going slowly (straggler):
– Launch second copy of task on another node
– Take the output of whichever copy finishes
first, and kill the other one
• Critical for performance in large clusters:
stragglers occur frequently due to failing
hardware, bugs, misconfiguration, etc
25. Takeaways
• By providing a data-parallel programming
model, MapReduce can control job
execution in useful ways:
– Automatic division of job into tasks
– Automatic placement of computation near data
– Automatic load balancing
– Recovery from failures & stragglers
• User focuses on application, not on
complexities of distributed computing
26. Outline
• MapReduce architecture
• Fault tolerance in MapReduce
• Sample applications
• Getting started with Hadoop
• Higher-level languages on top of Hadoop:
Pig and Hive
27. 1. Search
• Input: (lineNumber, line) records
• Output: lines matching a given pattern
• Map:
  if (line matches pattern):
      output(line)
• Reduce: identity function
  – Alternative: no reducer (map-only job)
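As a hedged illustration, a Hadoop Streaming version of this map-only job could be as small as the script below; the pattern and the output handling are assumptions, since the slide shows no concrete code.

import re
import sys

PATTERN = re.compile(r"error")        # hypothetical pattern; would normally be a job parameter

for line in sys.stdin:                # Streaming feeds input records on stdin
    if PATTERN.search(line):
        print(line.rstrip("\n"))      # no reducer needed: matching lines are the final output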
28. 2. Sort
• Input: (key, value) records
• Output: same records, sorted by key
• Map: identity function
• Reduce: identity function
• Trick: pick partitioning function h such that k1 < k2 => h(k1) < h(k2)
[Diagram: records (ant, bee), (cow), (zebra), (aardvark, elephant), (pig), (sheep, yak) pass through identity map tasks; an order-preserving partitioner routes keys [A-M] to one reducer and [N-Z] to the other, so the concatenated output aardvark, ant, bee, cow, elephant, pig, sheep, yak, zebra is globally sorted]
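The partitioning trick can be sketched as follows (illustrative Python for the two-reducer [A-M]/[N-Z] case above; a real Hadoop job would typically use an order-preserving partitioner such as TotalOrderPartitioner with sampled split points):

def h(key, num_reducers=2):
    # order-preserving partition function: keys starting A-M go to reducer 0, N-Z to reducer 1,
    # so concatenating the reducers' sorted outputs yields a globally sorted result
    return 0 if key[0].upper() <= "M" else 1

for k in ["aardvark", "cow", "pig", "zebra"]:
    print(k, "-> reducer", h(k))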
29. 3. Inverted Index
• Input: (filename, text) records
• Output: list of files containing each word
• Map:
  for word in text.split():
      output(word, filename)
• Combine: uniquify filenames for each word
• Reduce:
  def reduce(word, filenames):
      output(word, sort(filenames))
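A minimal local sketch of the whole pipeline (plain Python; the file contents match the example on the next slide):

from collections import defaultdict

def inverted_index(files):
    # files: dict mapping filename -> text
    index = defaultdict(set)                  # the set plays the role of the "uniquify" combiner
    for filename, text in files.items():
        for word in text.split():             # map: emit (word, filename)
            index[word].add(filename)
    return {word: sorted(names) for word, names in index.items()}   # reduce: sorted file list

print(inverted_index({"hamlet.txt": "to be or not to be",
                      "12th.txt": "be not afraid of greatness"}))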
30. Inverted Index Example
Input files:
  hamlet.txt: "to be or not to be"
  12th.txt: "be not afraid of greatness"
Map output: (to, hamlet.txt), (be, hamlet.txt), (or, hamlet.txt), (not, hamlet.txt), (to, hamlet.txt), (be, hamlet.txt); (be, 12th.txt), (not, 12th.txt), (afraid, 12th.txt), (of, 12th.txt), (greatness, 12th.txt)
Reduce output:
  afraid, (12th.txt)
  be, (12th.txt, hamlet.txt)
  greatness, (12th.txt)
  not, (12th.txt, hamlet.txt)
  of, (12th.txt)
  or, (hamlet.txt)
  to, (hamlet.txt)
31. 4. Most Popular Words
• Input: (filename, text) records
• Output: the 100 words occurring in most files
• Two-stage solution:
– Job 1:
• Create inverted index, giving (word, list(file)) records
– Job 2:
• Map each (word, list(file)) to (count, word)
• Sort these records by count as in sort job
• Optimizations:
– Map to (word, 1) instead of (word, file) in Job 1
– Estimate count distribution in advance by sampling
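A local sketch of the two-stage solution above (illustrative Python, not a Hadoop job; Job 1 is the inverted index, Job 2 re-keys by count and sorts):

def most_popular_words(index, n=100):
    # index: word -> list of files containing it (output of Job 1)
    by_count = [(len(files), word) for word, files in index.items()]  # Job 2 map: (count, word)
    by_count.sort(reverse=True)                                       # Job 2: sort by count, as in the sort job
    return [word for count, word in by_count[:n]]

print(most_popular_words({"be": ["12th.txt", "hamlet.txt"],
                          "not": ["12th.txt", "hamlet.txt"],
                          "to": ["hamlet.txt"]}, n=2))
# ['not', 'be']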
32. Outline
• MapReduce architecture
• Fault tolerance in MapReduce
• Sample applications
• Getting started with Hadoop
• Higher-level languages on top of Hadoop:
Pig and Hive
33. Getting Started with Hadoop
• Download from hadoop.apache.org
• To install locally, unzip and set JAVA_HOME
• Details: hadoop.apache.org/core/docs/current/quickstart.html
• Three ways to write jobs:
– Java API
– Hadoop Streaming (for Python, Perl, etc)
– Pipes API (C++)
34. Word Count in Java
public static class MapClass extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  private final static IntWritable ONE = new IntWritable(1);

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> output,
                  Reporter reporter) throws IOException {
    String line = value.toString();
    StringTokenizer itr = new StringTokenizer(line);
    while (itr.hasMoreTokens()) {
      output.collect(new Text(itr.nextToken()), ONE);
    }
  }
}
35. Word Count in Java
public static class Reduce extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output,
                     Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}
36. Word Count in Java
public static void main(String[] args) throws Exception {
  JobConf conf = new JobConf(WordCount.class);
  conf.setJobName("wordcount");
  conf.setMapperClass(MapClass.class);
  conf.setCombinerClass(Reduce.class);
  conf.setReducerClass(Reduce.class);
  FileInputFormat.setInputPaths(conf, args[0]);
  FileOutputFormat.setOutputPath(conf, new Path(args[1]));
  conf.setOutputKeyClass(Text.class);          // out keys are words (strings)
  conf.setOutputValueClass(IntWritable.class); // values are counts
  JobClient.runJob(conf);
}
37. Word Count in Python with Hadoop Streaming

Mapper.py:
  import sys
  for line in sys.stdin:
      for word in line.split():
          print(word.lower() + "\t" + "1")

Reducer.py:
  import sys
  counts = {}
  for line in sys.stdin:
      word, count = line.split("\t")
      counts[word] = counts.get(word, 0) + int(count)
  for word, count in counts.items():
      print(word + "\t" + str(count))
38. Outline
• MapReduce architecture
• Fault tolerance in MapReduce
• Sample applications
• Getting started with Hadoop
• Higher-level languages on top of Hadoop:
Pig and Hive
39. Motivation
• MapReduce is great, as many algorithms
can be expressed by a series of MR jobs
• But it’s low-level: must think about keys,
values, partitioning, etc
• Can we capture common “job patterns”?
40. Pig
• Started at Yahoo! Research
• Now runs about 30% of Yahoo!’s jobs
• Features:
– Expresses sequences of MapReduce jobs
– Data model: nested “bags” of items
– Provides relational (SQL) operators
(JOIN, GROUP BY, etc)
– Easy to plug in Java functions
– Pig Pen dev. env. for Eclipse
41. An Example Problem
Suppose you have user data in one file, website data in another, and you need to find the top 5 most visited pages by users aged 18-25.
[Dataflow: Load Users → Filter by age; Load Pages; Join on name → Group on url → Count clicks → Order by clicks → Take top 5]
Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt
42. In MapReduce
[The slide reproduces the full Java MapReduce implementation of this task: roughly 170 lines of code defining an MRExample class with mappers and reducers (LoadPages, LoadAndFilterUsers, Join, LoadJoined, ReduceUrls, LoadClicks, LimitClicks) and a main() that chains five JobConf jobs together with JobControl. The point is the sheer volume of boilerplate compared with the Pig Latin version on the next slide.]
Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt
43. In Pig Latin
Users = load 'users' as (name, age);
Filtered = filter Users by age >= 18 and age <= 25;
Pages = load 'pages' as (user, url);
Joined = join Filtered by name, Pages by user;
Grouped = group Joined by url;
Summed = foreach Grouped generate group, COUNT(Joined) as clicks;
Sorted = order Summed by clicks desc;
Top5 = limit Sorted 5;
store Top5 into 'top5sites';
Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt
44. Ease of Translation
Notice how naturally the components of the job translate into Pig Latin:
  Load Users      → Users = load …
  Load Pages      → Pages = load …
  Filter by age   → Fltrd = filter …
  Join on name    → Joined = join …
  Group on url    → Grouped = group …
  Count clicks    → Summed = … count()…
  Order by clicks → Sorted = order …
  Take top 5      → Top5 = limit …
Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt
45. Ease of Translation
Notice how naturally the components of the job translate into Pig Latin, and how the steps group into MapReduce jobs:
  Job 1: Load Users, Load Pages, Filter by age, Join on name
  Job 2: Group on url, Count clicks
  Job 3: Order by clicks, Take top 5
  (Pig statements as before: Users = load …, Pages = load …, Fltrd = filter …, Joined = join …, Grouped = group …, Summed = … count()…, Sorted = order …, Top5 = limit …)
Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt
46. Hive
• Developed at Facebook
• Used for majority of Facebook jobs
• “Relational database” built on Hadoop
– Maintains list of table schemas
– SQL-like query language (HQL)
– Can call Hadoop Streaming scripts from HQL
– Supports table partitioning, clustering, complex
data types, some optimizations
47. Creating a Hive Table
CREATE TABLE page_views(viewTime INT, userid BIGINT,
page_url STRING, referrer_url STRING,
ip STRING COMMENT 'User IP address')
COMMENT 'This is the page view table'
PARTITIONED BY(dt STRING, country STRING)
STORED AS SEQUENCEFILE;
• Partitioning breaks table into separate files
for each (dt, country) pair
Ex: /hive/page_view/dt=2008-06-08,country=US
/hive/page_view/dt=2008-06-08,country=CA
48. Simple Query
• Find all page views coming from xyz.com during March 2008:
SELECT page_views.*
FROM page_views
WHERE page_views.date >= '2008-03-01'
AND page_views.date <= '2008-03-31'
AND page_views.referrer_url like '%xyz.com';
• Hive reads only the matching partitions (dt=2008-03-01 through dt=2008-03-31, any country)
instead of scanning the entire table
49. Aggregation and Joins
• Count users who visited each page by gender:
SELECT pv.page_url, u.gender, COUNT(DISTINCT u.id)
FROM page_views pv JOIN user u ON (pv.userid = u.id)
WHERE pv.date = '2008-03-03'
GROUP BY pv.page_url, u.gender;
• Sample output:
page_url gender count(userid)
home.php MALE 12,141,412
home.php FEMALE 15,431,579
photo.php MALE 23,941,451
photo.php FEMALE 21,231,314
50. Using a Hadoop Streaming Mapper Script
SELECT TRANSFORM(page_views.userid, page_views.date)
USING 'map_script.py'
AS dt, uid
CLUSTER BY dt
FROM page_views;
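For illustration, map_script.py might look like the sketch below. The assumptions (not shown on the slide) are that Hive passes the selected columns to the script tab-separated on stdin and parses tab-separated rows from stdout into the declared dt, uid columns.

import sys

for line in sys.stdin:
    userid, date = line.rstrip("\n").split("\t")   # columns arrive in SELECT order
    dt = date[:10]                                 # e.g. truncate a timestamp to a date bucket
    print(dt + "\t" + userid)                      # emit (dt, uid) as declared in the AS clause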
51. Conclusions
• MapReduce’s data-parallel programming model
hides complexity of distribution and fault tolerance
• Principal philosophies:
– Make it scale, so you can throw hardware at problems
– Make it cheap, saving hardware, programmer and
administration costs (but requiring fault tolerance)
• Hive and Pig further simplify programming
• MapReduce is not suitable for all problems, but
when it works, it may save you a lot of time
52. Yahoo! Super Computer Cluster - M45
Yahoo!’s cluster is part of the Open Cirrus’ Testbed created by HP, Intel, and Yahoo! (see
press release at http://research.yahoo.com/node/2328).
The availability of the Yahoo! cluster was first announced in November 2007 (see press
release at http://research.yahoo.com/node/1879).
The cluster has approximately 4,000 processor-cores and 1.5 petabytes of disks.
The Yahoo! cluster is intended to run the Apache open source software Hadoop and Pig.
Each selected university will share the partition with up to three other universities. The
initial duration of use is 6 months, potentially renewable for another 6 months upon
written agreement.
For further information, please contact:
http://cloud.citris-uc.org/
Dr. Masoud Nikravesh
CITRIS and LBNL, Executive Director, CSE
Nikravesh@eecs.berkeley.edu
Phone: (510) 643-4522