
Map Reduce and Hadoop on Windows

Map Reduce Introduction and Its Implementation.


  1. Map Reduce
     Muhammad Usman Shahid
     Software Engineer, Usman.shahid.st@hotmail.com
  2. Parallel Programming
     - Used for performance and efficiency.
     - Processing is broken up into parts that run concurrently.
     - The instructions of each part run on a separate CPU, with many processors connected together.
     - Identifying the set of tasks that can run concurrently is the important step.
     - Consider the Fibonacci recurrence F(k+2) = F(k) + F(k+1).
     - Clearly the Fibonacci computation cannot be parallelized, because each value depends on the previously computed ones.
     - Now consider a huge array that can be broken up into sub-arrays.
  3. Parallel Programming
     - If each element requires some processing, with no dependencies between the computations, we have an ideal parallel computing opportunity (see the sketch below).
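A minimal Java sketch of that opportunity (not part of the original deck; the workload is made up): each element's result depends only on its own input, so the runtime is free to split the array into sub-ranges and process them on separate CPUs, unlike the Fibonacci recurrence above.

    import java.util.stream.IntStream;

    public class ParallelArray {
        public static void main(String[] args) {
            int n = 10_000_000;
            // Each element's result depends only on its own index, so the
            // stream can be split into sub-ranges and processed on separate
            // cores with no coordination between the parts.
            double[] out = IntStream.range(0, n)
                                    .parallel()
                                    .mapToDouble(i -> Math.sqrt(i))
                                    .toArray();
            System.out.println(out[n - 1]); // ~3162.28
        }
    }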
  4. Google Data Center
     - Google's philosophy is to buy cheap computers, but in large numbers.
     - Google applies this parallel processing concept in its data centers.
     - Map Reduce is a parallel and distributed approach developed by Google for processing large data sets.
  5. Map Reduce Introduction
     - Map Reduce has two key components: Map and Reduce.
     - The Map function is applied to the input values to produce a set of key/value pairs.
     - Reduce aggregates this data into a scalar (one value per key).
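A minimal single-process illustration of those two components (an added sketch, not code from the deck): map turns each input into key/value pairs, and reduce collapses all values sharing a key into one number.

    import java.util.*;

    public class MiniMapReduce {
        public static void main(String[] args) {
            List<String> inputs = List.of("hello world", "hello map reduce");

            // Map: each input value yields a set of (key, value) pairs.
            List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
            for (String line : inputs)
                for (String word : line.split("\\s+"))
                    pairs.add(Map.entry(word, 1));

            // Reduce: aggregate all values sharing a key into a scalar.
            Map<String, Integer> counts = new HashMap<>();
            for (Map.Entry<String, Integer> p : pairs)
                counts.merge(p.getKey(), p.getValue(), Integer::sum);

            System.out.println(counts); // e.g. {reduce=1, world=1, hello=2, map=1}
        }
    }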
  6. Data Distribution
     - Input files are split into M pieces on the distributed file system.
     - Intermediate files created by the map tasks are written to local disks.
     - Output files are written to the distributed file system.
  7. Data Distribution (diagram)
  8. Map Reduce Function
     - To see the Map Reduce function by analogy, consider the query "SELECT SUM(stuMarks) FROM student GROUP BY studentSection".
     - In this query, the SELECT phase does the same work as Map, and the GROUP BY phase the same as Reduce.
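The same analogy can be shown in plain Java (a sketch assuming Java 16+; the Student record and its rows are invented stand-ins for the slide's "student" table): the extraction step plays the role of Map, and the grouping/summing step the role of Reduce.

    import java.util.*;
    import java.util.stream.Collectors;

    public class GroupBySum {
        // Hypothetical rows of the "student" table from the slide's query.
        record Student(String studentSection, int stuMarks) {}

        public static void main(String[] args) {
            List<Student> students = List.of(
                new Student("A", 80), new Student("A", 70), new Student("B", 90));

            // Map ~ SELECT: extract (section, marks) from each row.
            // Reduce ~ GROUP BY + SUM: fold each section's marks together.
            Map<String, Integer> sumBySection = students.stream()
                .collect(Collectors.groupingBy(Student::studentSection,
                         Collectors.summingInt(Student::stuMarks)));

            System.out.println(sumBySection); // {A=150, B=90}
        }
    }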
  9. Classical Example
     - The classical example of Map Reduce is log file analysis.
     - Big log files are split, and the mapper searches for the different web pages that were accessed.
     - Every time a web page is found in the log, a key/value pair is emitted to the reducer with key = web page and value = 1.
     - The reducer aggregates the counts for each web page.
     - The result is the total hit count for each web page.
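A hedged sketch of such a mapper in the old org.apache.hadoop.mapred API used later in this deck; the log layout (the URL sitting in the seventh space-separated field, as in common log format) is an assumption, and the reducer would be the same summing reducer as in WordCount.

    import java.io.IOException;
    import org.apache.hadoop.io.*;
    import org.apache.hadoop.mapred.*;

    public class PageHitMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text page = new Text();

        public void map(LongWritable offset, Text line,
                        OutputCollector<Text, IntWritable> out, Reporter reporter)
                throws IOException {
            // Assumed common-log-format line: field 7 is the requested page.
            String[] fields = line.toString().split(" ");
            if (fields.length > 6) {
                page.set(fields[6]);      // key = web page
                out.collect(page, ONE);   // value = 1, summed up by the reducer
            }
        }
    }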
  10. Reverse Web Link Graph
     - In this example, the Map function outputs a (URL target, source) pair for each link found in an input web page (the source).
     - The Reduce function concatenates the list of all source URLs associated with a given target URL and returns (target, list(source)).
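A single-process sketch of both phases (the toy crawl data is assumed, not from the deck):

    import java.util.*;

    public class ReverseLinkGraph {
        public static void main(String[] args) {
            // source page -> pages it links to (assumed toy crawl data).
            Map<String, List<String>> outLinks = Map.of(
                "a.html", List.of("b.html", "c.html"),
                "b.html", List.of("c.html"));

            // Map: emit (target, source) for every link on every source page.
            // Reduce: gather all sources pointing at each target.
            Map<String, List<String>> inbound = new TreeMap<>();
            for (Map.Entry<String, List<String>> e : outLinks.entrySet())
                for (String target : e.getValue())
                    inbound.computeIfAbsent(target, k -> new ArrayList<>())
                           .add(e.getKey());

            System.out.println(inbound); // e.g. {b.html=[a.html], c.html=[a.html, b.html]}
        }
    }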
  11. Other Examples
     - Map Reduce can be used for a lot of problems.
     - For example, Google used Map Reduce to calculate page ranks.
     - Counting words in a large set of documents can also be solved by Map Reduce very efficiently.
     - Google's Map Reduce library is not open source, but a Java implementation called Hadoop is open source.
  12. Implementation of Example
     - Word Count is a simple application that counts the number of occurrences of each word in a given input set.
     - The Hadoop library is used for its implementation.
     - The code is given in the attached file; a reconstruction follows below.
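The attached file did not survive in this transcript. The walkthrough on the following slides, including its quoted line numbers, matches WordCount v1.0 from the Apache Hadoop MapReduce tutorial (old org.apache.hadoop.mapred API), so that listing is reproduced here as a reference sketch; the line references on slides 14-16 refer to the tutorial's numbered version of this code.

    package org.myorg;

    import java.io.IOException;
    import java.util.*;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.*;
    import org.apache.hadoop.mapred.*;

    public class WordCount {

        public static class Map extends MapReduceBase
                implements Mapper<LongWritable, Text, Text, IntWritable> {
            private final static IntWritable one = new IntWritable(1);
            private Text word = new Text();

            public void map(LongWritable key, Text value,
                            OutputCollector<Text, IntWritable> output, Reporter reporter)
                    throws IOException {
                // Split the line into whitespace-separated tokens and emit <word, 1>.
                StringTokenizer tokenizer = new StringTokenizer(value.toString());
                while (tokenizer.hasMoreTokens()) {
                    word.set(tokenizer.nextToken());
                    output.collect(word, one);
                }
            }
        }

        public static class Reduce extends MapReduceBase
                implements Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterator<IntWritable> values,
                               OutputCollector<Text, IntWritable> output, Reporter reporter)
                    throws IOException {
                // Sum the occurrence counts for each word.
                int sum = 0;
                while (values.hasNext()) {
                    sum += values.next().get();
                }
                output.collect(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(WordCount.class);
            conf.setJobName("wordcount");

            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);

            conf.setMapperClass(Map.class);
            conf.setCombinerClass(Reduce.class);   // local aggregation after each map
            conf.setReducerClass(Reduce.class);

            conf.setInputFormat(TextInputFormat.class);
            conf.setOutputFormat(TextOutputFormat.class);

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            JobClient.runJob(conf);  // submit the job and monitor its progress
        }
    }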
  13. Usage of Implementation
     - For example, the input files are:
       $ bin/hadoop dfs -ls /usr/joe/wordcount/input/
       /usr/joe/wordcount/input/file01
       /usr/joe/wordcount/input/file02
       $ bin/hadoop dfs -cat /usr/joe/wordcount/input/file01
       Hello World Bye World
       $ bin/hadoop dfs -cat /usr/joe/wordcount/input/file02
       Hello Hadoop Goodbye Hadoop
     - Run the application (shown below).
     - Word Count is a straightforward problem.
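Following the same tutorial this deck draws on (the jar path is an assumption), the application is run and its output inspected like this:

    $ bin/hadoop jar /usr/joe/wordcount.jar org.myorg.WordCount \
        /usr/joe/wordcount/input /usr/joe/wordcount/output

    $ bin/hadoop dfs -cat /usr/joe/wordcount/output/part-00000
    Bye     1
    Goodbye 1
    Hadoop  2
    Hello   2
    World   2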
  14. Walk Through Implementation
     - The Mapper implementation (lines 14-26), via the map method (lines 18-25), processes one line at a time, as provided by the specified TextInputFormat (line 49). It then splits the line into tokens separated by whitespace, via the StringTokenizer, and emits a key/value pair of <<word>, 1>.
     - For the given sample input, the first map emits:
       < Hello, 1> < World, 1> < Bye, 1> < World, 1>
     - The second map emits:
       < Hello, 1> < Hadoop, 1> < Goodbye, 1> < Hadoop, 1>
  15. Walk Through Implementation
     - WordCount also specifies a combiner (line 46). Hence, the output of each map is passed through the local combiner (which is the same as the Reducer, as per the job configuration) for local aggregation, after being sorted on the keys.
     - The output of the first map:
       < Bye, 1> < Hello, 1> < World, 2>
     - The output of the second map:
       < Goodbye, 1> < Hadoop, 2> < Hello, 1>
  16. Walk Through Implementation
     - The Reducer implementation (lines 28-36), via the reduce method (lines 29-35), just sums up the values, which are the occurrence counts for each key (i.e. words in this example).
     - Thus the output of the job is:
       < Bye, 1> < Goodbye, 1> < Hadoop, 2> < Hello, 2> < World, 2>
     - The main method specifies various facets of the job, such as the input/output paths (passed via the command line), key/value types, input/output formats, etc., in the JobConf. It then calls JobClient.runJob (line 55) to submit the job and monitor its progress.
  17. Execution Overview (diagram)
  18. Map Reduce Execution
     - The Map Reduce library in the user program first splits the input files into M pieces. It then starts up many copies of the program on a cluster of machines.
     - One of the copies is special, the master; the others are workers. There are M map tasks and R reduce tasks to assign. The master picks idle workers and assigns each one a map task or a reduce task.
     - A worker assigned a map task reads the contents of the corresponding input split. It parses the key/value pairs and passes each one to the user-defined Map function; this generates intermediate key/value pairs that are buffered in memory.
     - Periodically, the buffered pairs are written to local disk. The locations of these buffered pairs on local disk are passed back to the master, which is responsible for forwarding them to the reduce workers.
  19. Map Reduce Execution
     - When the master notifies a reduce worker about these locations, the worker uses RPC to read the buffered data from the map workers' local disks and then sorts it by key.
     - The reduce worker iterates over the sorted intermediate data and, for each unique key, passes the key and its values to the reduce function. The output is appended to the final output file (see the sketch below).
     - Many associated issues are handled by the library, such as:
       - Parallelization
       - Fault tolerance
       - Data distribution
       - Load balancing
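A minimal single-process sketch of that sort-then-reduce step (illustrative only; a real reduce worker streams the data it fetched over RPC from many map workers' disks):

    import java.util.*;

    public class ShuffleAndReduce {
        public static void main(String[] args) {
            // Intermediate (key, value) pairs as fetched from the map workers.
            List<Map.Entry<String, Integer>> fetched = List.of(
                Map.entry("world", 1), Map.entry("hello", 1),
                Map.entry("hello", 1), Map.entry("bye", 1));

            // Sorting by key makes all values for one key adjacent.
            Map<String, List<Integer>> grouped = new TreeMap<>();
            for (Map.Entry<String, Integer> p : fetched)
                grouped.computeIfAbsent(p.getKey(), k -> new ArrayList<>())
                       .add(p.getValue());

            // For each unique key, call the reduce function (here: a sum)
            // and append the result to the (simulated) final output file.
            for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
                int sum = e.getValue().stream().mapToInt(Integer::intValue).sum();
                System.out.println(e.getKey() + "\t" + sum);
            }
        }
    }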
  20. Debugging
     - Offers human-readable status info on an HTTP server, where the user can see jobs in progress, completed, etc.
     - Allows the use of GDB and other debugging tools.
  21. Conclusions
     - Simplifies large-scale computations that fit this model.
     - Allows the user to focus on the problem without worrying about the distributed-systems details.
     - It is used by renowned companies like Google and Yahoo.
     - Google's Map Reduce library is not open source, but an Apache project called Hadoop is an open-source library for Map Reduce.
