Java MapReduce
Programming on
Apache Hadoop
Aaron T. Myers, aka ATM
with thanks to Sandy Ryza
Introductions
● Software Engineer/Tech Lead for HDFS at
Cloudera
● Committer/PMC Member on the Apache
Hadoop project
● My work focuses primarily on HDFS and Hadoop security
What is MapReduce?
● A distributed programming paradigm
What is a distributed programming
paradigm?
Help!
What is a distributed programming
paradigm?
Distributed Systems are Hard
● Monitoring
● RPC protocols, serialization
● Fault tolerance
● Deployment
● Scheduling/Resource Management
Writing Data Parallel Programs
Should Not Be
MapReduce to the Rescue
● You specify map(...) and reduce(...) functions
○ map = (list(k, v) -> list(k, v))
○ reduce = (k, list(v) -> k, v)
● The framework does the rest
○ Split up the data
○ Run several mappers over the splits
○ Shuffle the data around for the reducers
○ Run several reducers
○ Store the final results
Map
[Diagram: the input data is split and fed to parallel map() calls.
Input lines: "apple apple banana", "a happy airplane",
"airplane on the runway", "runway apple runway", "rumple on the apple".
Each map() call emits a (word, 1) pair per word, e.g. apple - 1,
apple - 1, banana - 1, a - 1, happy - 1, ...; the pairs then enter
the shuffle.]
Reduce
[Diagram: the shuffle groups each word's counts into one list per key,
e.g. apple - 1, 1, 1, 1; one reduce() call per key sums its list,
producing the output: a - 1, airplane - 1, apple - 4, banana - 1,
on - 2, rumple - 1, runway - 3, the - 2.]
What is (Core) Hadoop?
● An open source platform for storing,
processing, and analyzing enormous
amounts of data
● Consists of…
○ A distributed file system (HDFS)
○ An implementation of the Map/Reduce paradigm (Hadoop MapReduce)
● Written in Java!
What is Hadoop?
Traditional Operating System
Storage:
File System
Execution/Scheduling:
Processes
What is Hadoop?
Hadoop
(Distributed operating system)
Storage:
Hadoop Distributed
File System (HDFS)
Execution/Scheduling:
MapReduce
HDFS (briefly)
● Distributed file system that runs on all nodes
in the cluster
○ Co-located with Hadoop MapReduce daemons
● Looks like a pretty normal Unix file system
○ hadoop fs -ls /user/atm/
○ hadoop fs -cp /user/atm/data.txt /user/atm/data2.txt
○ hadoop fs -rm /user/atm/data.txt
● Don't use the normal Java File API
○ Instead use the org.apache.hadoop.fs.FileSystem API
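For example, reading a file through that API looks roughly like this
(a minimal sketch; the Configuration defaults and the path are
illustrative):

// Sketch: open an HDFS file via org.apache.hadoop.fs.FileSystem
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
FSDataInputStream in = fs.open(new Path("/user/atm/data.txt"));
try {
  // read from 'in' like any java.io.InputStream
} finally {
  in.close();
}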
Writing MapReduce programs in
Java
● Interface to MapReduce in Hadoop is Java
API
● WordCount!
Word Count Map Function
public class WordCountMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();

  public void map(LongWritable key, Text value,
      OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    String line = value.toString();
    StringTokenizer itr = new StringTokenizer(line);
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      output.collect(word, one);
    }
  }
}
Word Count Reduce Function
public static class WordCountReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  public void reduce(Text key, Iterator<IntWritable> values,
      OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}
Word Count Driver
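The driver ties the pieces together. A minimal sketch against the
classic org.apache.hadoop.mapred API used by the mapper and reducer
above (this wiring is an assumption, not the original slide's code):

public class WordCount {
  public static void main(String[] args) throws Exception {
    // Configure the job and point it at our map and reduce classes
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");
    conf.setMapperClass(WordCountMapper.class);
    conf.setReducerClass(WordCountReducer.class);

    // Declare the output key/value types
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    // Input and output paths come from the command line
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    // Submit the job and wait for it to finish
    JobClient.runJob(conf);
  }
}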
InputFormats
● TextInputFormat
○ Each line becomes <LongWritable, Text> = <byte
offset in file, whole line>
● KeyValueTextInputFormat
○ Splits lines on delimiter into Text key and Text value
● SequenceFileInputFormat
○ Reads key/value pairs from SequenceFile, a Hadoop format
● DBInputFormat
○ Uses JDBC to connect to a database
● Many more, or write your own!
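Selecting one in the driver is a single call; a sketch against the
classic JobConf API:

// e.g. read tab-delimited key/value lines instead of raw lines
conf.setInputFormat(KeyValueTextInputFormat.class);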
Serialization
● Writables
○ Native to Hadoop
○ Implement serialization for higher level structures
yourself
● Avro
○ Extensible
○ Cross-language
○ Handles serialization of higher level structures for you
● And others…
○ Parquet, Thrift, etc.
Writables
public class MyNumberAndStringWritable implements Writable {

  private int number;
  private String str;

  public void write(DataOutput out) throws IOException {
    out.writeInt(number);
    out.writeUTF(str);
  }

  public void readFields(DataInput in) throws IOException {
    number = in.readInt();
    str = in.readUTF();
  }
}
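To use the custom type as a map output value, register it in the
driver; a one-line sketch (keys would additionally need to implement
WritableComparable):

conf.setMapOutputValueClass(MyNumberAndStringWritable.class);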
Avro
protocol MyMapReduceObjects {
  record MyNumberAndString {
    string str;
    int number;
  }
}
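Avro's code generator turns the IDL into Java classes; a hypothetical
sketch of using the generated record (the accessor names assume
standard Avro specific-record generation):

// Hypothetical: MyNumberAndString is the Avro-generated class
MyNumberAndString rec = new MyNumberAndString();
rec.setStr("apple");
rec.setNumber(4);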
Testing MapReduce Programs
● First, write unit tests (duh) with MRUnit
● LocalJobRunner
○ Runs job in single process
● Single-node cluster (Cloudera VM!)
○ Multiple processes on the same machine
● On the real cluster
MRUnit
@Test
public void testMapper() throws IOException {
  MapDriver<LongWritable, Text, Text, IntWritable> mapDriver =
      new MapDriver<LongWritable, Text, Text, IntWritable>(
          new WordCountMapper());
  String line = "apple banana banana carrot";
  mapDriver.withInput(new LongWritable(0), new Text(line));
  mapDriver.withOutput(new Text("apple"), new IntWritable(1));
  mapDriver.withOutput(new Text("banana"), new IntWritable(1));
  mapDriver.withOutput(new Text("banana"), new IntWritable(1));
  mapDriver.withOutput(new Text("carrot"), new IntWritable(1));
  mapDriver.runTest();
}
MRUnit
@Test
public void testReducer() throws IOException {
  ReduceDriver<Text, IntWritable, Text, IntWritable> reduceDriver =
      new ReduceDriver<Text, IntWritable, Text, IntWritable>(
          new WordCountReducer());
  reduceDriver.withInput(new Text("apple"),
      Arrays.asList(new IntWritable(1), new IntWritable(2)));
  reduceDriver.withOutput(new Text("apple"), new IntWritable(3));
  reduceDriver.runTest();
}
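MRUnit can also exercise the mapper and reducer together; a sketch
with MapReduceDriver, assuming the same MRUnit drivers as above:

@Test
public void testMapperAndReducer() throws IOException {
  MapReduceDriver<LongWritable, Text, Text, IntWritable, Text, IntWritable>
      driver = new MapReduceDriver<LongWritable, Text, Text, IntWritable,
          Text, IntWritable>(new WordCountMapper(), new WordCountReducer());
  driver.withInput(new LongWritable(0), new Text("apple apple banana"));
  // Outputs arrive sorted by key after the in-memory shuffle
  driver.withOutput(new Text("apple"), new IntWritable(2));
  driver.withOutput(new Text("banana"), new IntWritable(1));
  driver.runTest();
}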
Counters
Map-Reduce Framework
  Map input records=183
  Map output records=183
  Map output bytes=533563
  Map output materialized bytes=534190
  Input split bytes=144
  Combine input records=0
  Combine output records=0
  Reduce input groups=183
  Reduce shuffle bytes=0
  Reduce input records=183
  Reduce output records=183
  Spilled Records=366
  Shuffled Maps =0
  Failed Shuffles=0
  Merged Map outputs=0
  GC time elapsed (ms)=7
  CPU time spent (ms)=0
  Physical memory (bytes) snapshot=0
  Virtual memory (bytes) snapshot=0
File System Counters
  FILE: Number of bytes read=1844866
  FILE: Number of bytes written=1927344
  FILE: Number of read operations=0
  FILE: Number of large read operations=0
  FILE: Number of write operations=0
File Input Format Counters
  Bytes Read=655137
File Output Format Counters
  Bytes Written=537484
Counters
if (record.isUgly()) {
context.getCounter("Ugly Record Counters",
"Ugly Records").increment(1);
}
Counters
[Same framework, file system, and file format counters as on the
previous slide, now followed by our custom group:]
Ugly Record Counters
  Ugly Records=1024
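To read a custom counter back programmatically, a sketch assuming
job is a completed org.apache.hadoop.mapreduce.Job (the newer API
that the context.getCounter() call above comes from):

long uglyRecords = job.getCounters()
    .findCounter("Ugly Record Counters", "Ugly Records")
    .getValue();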
Distributed Cache
We need some data and libraries on all the
nodes.
Distributed Cache
[Diagram: files are pushed from HDFS into the Distributed Cache, which
places a local copy on each node; the map or reduce tasks running on
that node then read the local copy.]
Distributed Cache
In our driver:
DistributedCache.addCacheFile(
    new URI("/some/path/to/ourfile.txt"), conf);

In our mapper or reducer:

@Override
public void setup(Context context)
    throws IOException, InterruptedException {
  Configuration conf = context.getConfiguration();
  localFiles = DistributedCache.getLocalCacheFiles(conf);
}
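Once localized, the cached file can be read with ordinary Java I/O;
a sketch, where localFiles is the Path[] populated in setup() above:

BufferedReader reader = new BufferedReader(
    new FileReader(localFiles[0].toString()));
String line;
while ((line = reader.readLine()) != null) {
  // use the cached data, e.g. to build an in-memory lookup table
}
reader.close();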
Java Technologies Built on MapReduce
Crunch
● Library on top of MapReduce that makes it
easy to write pipelines of jobs in Java
● Contains capabilities like joins and aggregation functions to save
programmers from writing these for each job
Crunch
public class WordCount {
  public static void main(String[] args) throws Exception {
    Pipeline pipeline = new MRPipeline(WordCount.class);
    PCollection<String> lines = pipeline.readTextFile(args[0]);
    PCollection<String> words = lines.parallelDo("my splitter",
        new DoFn<String, String>() {
          public void process(String line, Emitter<String> emitter) {
            for (String word : line.split("\\s+")) {
              emitter.emit(word);
            }
          }
        }, Writables.strings());
    PTable<String, Long> counts = Aggregate.count(words);
    pipeline.writeTextFile(counts, args[1]);
    pipeline.run();
  }
}
Mahout
● Machine Learning on Hadoop
○ Collaborative Filtering
○ User and Item based recommenders
○ K-Means, Fuzzy K-Means clustering
○ Dirichlet process clustering
○ Latent Dirichlet Allocation
○ Singular value decomposition
○ Parallel Frequent Pattern mining
○ Complementary Naive Bayes classifier
○ Random forest decision tree based classifier
Non-Java technologies that use
MapReduce
● Hive
○ SQL -> M/R translator, metadata manager
● Pig
○ Scripting DSL -> M/R translator
● Distcp
○ HDFS tool to bulk copy data from one HDFS cluster to another
Thanks!
● Questions?