Introduction to MapReduce and Hadoop

This was the first session about Hadoop and MapReduce. It introduces what Hadoop is and its main components. It also covers how to program your first MapReduce task and how to run it on a pseudo-distributed Hadoop installation.

This session was given in Arabic, and I may provide a video of the session soon.

Usage Rights: CC Attribution-ShareAlike License

  • https://sites.google.com/site/hadoopintroduction/home/what-is-hadoop
  • Lucene is a full-featured text indexing and searching library. Nutch was trying to build a complete web search engine with Lucene; it had a web crawler, an HTML parser, and so on. Problem: there are billions of web pages out there! What could poor Nutch do? Google announced GFS and MapReduce in 2004 and said they were using these techniques in their search engine. Doug and his team used these techniques for Nutch, and then Hadoop was born. (Doug Cutting)
  • Challenges in processing large data in a distributed way.
  • Accessible: Hadoop runs on large clusters of commodity machines or on cloud computing services such as Amazon's Elastic Compute Cloud (EC2). Robust: because it is intended to run on commodity hardware, Hadoop is architected with the assumption of frequent hardware malfunctions and can gracefully handle most such failures. Scalable: Hadoop scales linearly to handle larger data by adding more nodes to the cluster. Simple: Hadoop allows users to quickly write efficient parallel code. (Hadoop in Action, section 1.2)
  • REF: https://sites.google.com/site/hadoopintroduction/home/comparing-sql-databases-and-hadoop
  • REF: https://developer.yahoo.com/hadoop/tutorial/module4.html
  • Table from “Hadoop In Action”. Images source: https://developer.yahoo.com/hadoop/tutorial/module4.html
  • Pseudo-code for map and reduce functions for word counting. Source: Hadoop In Action.
  • We now have a general overview of MapReduce; let's see how Hadoop works.
  • Hadoop In Action Figure 2.1
  • Local (standalone) mode: no HDFS, no Hadoop daemons; for debugging and testing the logic of a MapReduce program. Pseudo-distributed mode: all daemons running on a single machine; for debugging your code, allowing you to examine memory usage, HDFS input/output issues, and other daemon interactions. Fully distributed mode.
  • http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
  • This slide is initially left blank.
  • https://developer.yahoo.com/hadoop/tutorial/module4.html
  • This slide is initially left blank.
  • When the reducer task receives the output from the various mappers, it sorts the incoming data on the key of the (key/value) pair and groups together all values of the same key.

Introduction to MapReduce and Hadoop: Presentation Transcript

  • Expected … what will be covered: ● History. ● What is Hadoop. ● Hadoop vs SQL. ● MapReduce. ● Hadoop Building Blocks. ● Installing, Configuring and Running Hadoop. ● Anatomy of a MapReduce program.
  • Hadoop Series Resources
  • How was Hadoop born? Doug Cutting
  • Challenges of Distributed Processing of Large Data ● How to distribute the work? ● How to store and distribute the data itself? ● How to overcome failures? ● How to balance the load? ● How to deal with unstructured data? ● ...
  • Hadoop tackles these challenges! So, what’s Hadoop?
  • What is Hadoop? Hadoop is an open source framework for writing and running distributed applications that process large amounts of data. Key distinctions of Hadoop: ● Accessible ● Robust ● Scalable ● Simple
  • Hadoop vs SQL ● Structured and Unstructured data. ● Datastore and Data Analysis. ● Scale-out and Scale-up. ● Offline batch processing and Online transactions.
  • Hadoop Uses MapReduce What is MapReduce?...
  • ● Parallel programming model for clusters of commodity machines. ● MapReduce provides: o Automatic parallelization & distribution. o Fault tolerance. o Locality of data. What is MapReduce?
  • MapReduce … Map then Reduce
  • Keys and Values ● Key/Value pairs. ● Keys divide the reduce space. Map: input <k1, v1>, output list(<k2, v2>). Reduce: input <k2, list(v2)>, output list(<k3, v3>).
  • WordCount in Action. Input: foo.txt: “This is the foo file”; bar.txt: “And this is the bar one”. Map output: <this, 1>, <is, 1>, <the, 1>, <foo, 1>, <file, 1>, <and, 1>, <this, 1>, <is, 1>, <the, 1>, <bar, 1>, <one, 1>. Reduce#1: input <this, [1, 1]>, output <this, 2>. Reduce#2: input <is, [1, 1]>, output <is, 2>. Reduce#3: input <foo, [1]>, output <foo, 1>. … Final output: this 2, is 2, the 2, foo 1, file 1, and 1, bar 1, one 1.
  • WordCount with MapReduce (pseudo-code):
    map(String filename, String document) {
      List<String> T = tokenize(document);
      for each token in T {
        emit((String) token, (Integer) 1);
      }
    }
    reduce(String token, List<Integer> values) {
      Integer sum = 0;
      for each value in values {
        sum = sum + value;
      }
      emit((String) token, (Integer) sum);
    }
  • Hadoop Building Blocks How does hadoop work?...
  • Hadoop Building Blocks 1. NameNode 2. DataNode 3. Secondary NameNode 4. JobTracker 5. TaskTracker
  • HDFS: NameNode and DataNodes
  • JobTracker and TaskTracker
  • Typical Hadoop Cluster
  • Running Hadoop Three modes to run Hadoop: 1. Local (standalone) mode. 2. Pseudo-distributed mode “cluster of one” . 3. Fully distributed mode.
  • An Action Running Hadoop on Local Machine
  • Actions ... 1. Installing Hadoop. 2. Configuring Hadoop (Pseudo-distributed mode). 3. Running WordCount example. 4. Web-based cluster UI.
  • HDFS 1. HDFS is a filesystem designed for large-scale distributed data processing. 2. HDFS isn’t a native Unix filesystem. Basic file commands: $ hadoop fs -cmd <args>, $ hadoop fs -ls, $ hadoop fs -mkdir /user/chuck, $ hadoop fs -copyFromLocal … (a programmatic sketch follows below)
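The deck lists only shell commands; as a hedged companion, here is a minimal Java sketch of the same operations through Hadoop's FileSystem API. The paths and file names are illustrative, and a reachable HDFS with default configuration is assumed.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsBasics {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);       // HDFS, or the local FS in standalone mode

        // roughly "hadoop fs -mkdir /user/chuck"
        fs.mkdirs(new Path("/user/chuck"));

        // roughly "hadoop fs -copyFromLocal foo.txt /user/chuck/foo.txt"
        fs.copyFromLocalFile(new Path("foo.txt"), new Path("/user/chuck/foo.txt"));

        // roughly "hadoop fs -ls /user/chuck"
        for (FileStatus status : fs.listStatus(new Path("/user/chuck"))) {
          System.out.println(status.getPath());
        }

        // random-access reading via FSDataInputStream (mentioned later in the deck)
        try (FSDataInputStream in = fs.open(new Path("/user/chuck/foo.txt"))) {
          in.seek(0);
          System.out.println("first byte: " + in.read());
        }
      }
    }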
  • Anatomy of a MapReduce program MapReduce and beyond
  • Hadoop 1. Data Types 2. Mapper 3. Reducer 4. Partitioner 5. Combiner 6. Reading and Writing a. InputFormat b. OutputFormat
  • Anatomy of a MapReduce program
  • Hadoop Data Types ● Hadoop has its own defined way of serializing key/value pairs. ● Values should implement the Writable interface. ● Keys should implement the WritableComparable interface. ● Some predefined classes: o BooleanWritable. o ByteWritable. o IntWritable. o … (a custom-type sketch follows below)
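To make the Writable/WritableComparable contract concrete, here is a hedged sketch of a hypothetical custom key class (not from the slides), assuming the standard org.apache.hadoop.io interfaces.

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.WritableComparable;

    // Hypothetical key type: a single word, usable as a MapReduce key.
    public class WordKey implements WritableComparable<WordKey> {
      private String word = "";

      public void set(String w) { this.word = w; }
      public String get() { return word; }

      @Override
      public void write(DataOutput out) throws IOException {
        out.writeUTF(word);                 // serialize the fields in a fixed order
      }

      @Override
      public void readFields(DataInput in) throws IOException {
        word = in.readUTF();                // deserialize in the same order
      }

      @Override
      public int compareTo(WordKey other) { // keys must be comparable for the sort phase
        return word.compareTo(other.word);
      }

      @Override
      public int hashCode() {               // used by the default HashPartitioner
        return word.hashCode();
      }
    }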
  • Mapper
  • Mapper 1. Mapper<K1,V1,K2,V2> 2. Override method: void map(K1 key, V1 value, Context context) 3. Use context.write(K2, V2) to emit key/value pairs.
  • WordCount Mapper:
    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
      private final static IntWritable one = new IntWritable(1);
      private Text word = new Text();
      public void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
          word.set(tokenizer.nextToken());
          context.write(word, one);
        }
      }
    }
  • Predefined Mappers
  • Reducer
  • Reducer 1. Extends Reducer<K1,V1,K2,V2> 2. Overrides method: void reduce(K2, Iterable<V2>, Context context) 3. Use context.write(K2, V2) to emit key/value pairs.
  • WordCount Reducer:
    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
      public void reduce(Text key, Iterable<IntWritable> values, Context context)
          throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
          sum += val.get();
        }
        context.write(key, new IntWritable(sum));
      }
    }
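The slides show the Mapper and Reducer but not the driver that wires them into a job. A minimal sketch, assuming the org.apache.hadoop.mapreduce (new) API, with the Map and Reduce classes above nested inside it and illustrative input/output paths, might look like this:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
      // ... the Map and Reduce classes from the slides go here ...

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");  // Job.getInstance in the newer API
        job.setJarByClass(WordCount.class);

        job.setMapperClass(Map.class);                  // the Map class from the slide
        job.setReducerClass(Reduce.class);              // the Reduce class from the slide

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. an HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // must not already exist

        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }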
  • Predefined Reducers
  • Partitioner
  • Partitioner. The partitioner decides which key goes to which reducer:
    class WordSizePartitioner extends Partitioner<Text, IntWritable> {
      @Override
      public int getPartition(Text word, IntWritable count, int numOfPartitions) {
        return 0;
      }
    }
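As written on the slide, this partitioner sends every key to partition 0. A hedged sketch of a body matching the WordSizePartitioner name (route words by length), together with the driver wiring, could be:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Illustrative only: same-length words end up in the same partition / reducer.
    public class WordSizePartitioner extends Partitioner<Text, IntWritable> {
      @Override
      public int getPartition(Text word, IntWritable count, int numOfPartitions) {
        return word.toString().length() % numOfPartitions;
      }
    }

    // In the driver (assuming the Job object from the WordCount sketch above):
    //   job.setPartitionerClass(WordSizePartitioner.class);
    //   job.setNumReduceTasks(4);   // more than one reducer, so partitioning matters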
  • Combiner
  • Combiner. It's a local reduce task that runs at the mapper. WordCount Mapper output: 1. Without combiner: <the, 1>, <file, 1>, <the, 1>, … 2. With combiner: <the, 2>, <file, 1>, …
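A hedged note on wiring: for a sum-style reduce like WordCount, the Reducer class itself can typically be reused as the combiner (this line is an assumption, not shown in the deck):

    // In the driver, after setMapperClass/setReducerClass:
    // the combiner runs the same summing logic on each mapper's local output,
    // which shrinks the data shuffled to the reducers; this is safe because
    // integer addition is associative and commutative.
    job.setCombinerClass(Reduce.class);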
  • Reading and Writing
  • Reading and Writing 1. Input data usually resides in large files. 2. MapReduce's processing power comes from splitting the input data into chunks (InputSplits) that are processed in parallel. 3. Hadoop's FileSystem provides the class FSDataInputStream for file reading; it extends DataInputStream with random read access.
  • InputFormat Classes ● TextInputFormat o <offset, line> ● KeyValueTextInputFormat o key<TAB>value => <key, value> ● NLineInputFormat o <offset, N lines>. You can define your own InputFormat class … (see the sketch below)
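A hedged sketch of selecting an input format in the driver (assuming the Job object from the WordCount sketch above and the org.apache.hadoop.mapreduce.lib.input classes):

    import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    // Default: TextInputFormat, the mapper receives <byte offset, line of text>.
    job.setInputFormatClass(TextInputFormat.class);

    // Lines of the form "key<TAB>value": the mapper receives <key, value> as Text/Text.
    // job.setInputFormatClass(KeyValueTextInputFormat.class);

    // A fixed number of input lines per split, so each mapper gets N lines.
    // job.setInputFormatClass(NLineInputFormat.class);
    // NLineInputFormat.setNumLinesPerSplit(job, 1000);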
  • OutputFormat 1. The output has no splits. 2. Each reducer generates an output file named part-nnnnn, where nnnnn is the partition ID of the reducer. Predefined OutputFormat classes: ● TextOutputFormat: <k, v> => k<TAB>v
  • Recap
  • END OF SESSION #1
  • Q