MapReduce: Simplified Data Processing on Large Clusters

Slide notes

  • A programming model for processing and generating large data sets. MapReduce was used to completely regenerate Google's index of the World Wide Web. At 10,000s of nodes, some will fail every day. Why MapReduce? Google implemented many computations to process large amounts of raw data and compute various kinds of derived data. The major issues MapReduce handles: parallelizing the computation, distributing the data, and dealing with failures. It delivers high performance on large clusters of commodity PCs and is highly scalable.
  • Inspired by functional programming (Lisp, metaprogramming); the Map and Reduce functions are written by the user.
  • Example mappers: upper-case mapper, explode mapper, filter mapper, changing mapper.
  • Example reducers: sum reducer, identity reducer. The same machine can handle many different keys, since English has so many words that each key has relatively few matches. Only the key is ordered.
  • We can't start reducing until the map phase is completely finished: a barrier in the middle. map() functions run in parallel, creating different intermediate values from different input data sets. reduce() functions also run in parallel, each working on a different output key. All values are processed independently.
  • We can't start reducing until the map phase is completely finished: the barrier in the middle.
  • Here are some examples that can be easily expressed as MapReduce computations.
  • Create M splits of the input data. The user provides R, the number of partitions (and hence the number of output files).
  • The input splits can be processed in parallel by different machines. Reduce invocations are distributed by partitioning the intermediate key space into R pieces using a partitioning function (e.g., hash(key) mod R). The number of partitions (R) and the partitioning function are specified by the user. Output of execution: R output files, one per reduce task, with file names specified by the programmer. Typically, programmers do not combine the R output files into one file; they pass them as input to another MapReduce call (or use them with another distributed application that can deal with input partitioned into multiple files).
  • Master Data Structure: Keeps track of state of each map and reduce task.
  • Completed map tasks: their output is stored on the local disk of the failed machine and becomes inaccessible, so completed map tasks on a failed machine are re-executed. Completed reduce tasks: their output is stored in the global file system, so they do not have to be re-executed.
  • The master program divides up tasks based on the location of the data: it tries to schedule map() tasks on the same machine as the physical file data, or at least in the same rack. When running large MapReduce operations on a significant fraction of the workers in a cluster, most input data is read locally and consumes no network bandwidth.
  • Load balancing
  • No load balancing here. One of the common causes that lengthens the total time taken for a MapReduce operation is a "straggler": the cluster scheduling system may have scheduled other tasks on the machine, causing it to execute the MapReduce code more slowly due to competition for CPU, memory, local disk, or network bandwidth. There is a general mechanism to alleviate the problem of stragglers: when a MapReduce operation is close to completion, the master schedules backup executions of the remaining in-progress tasks.
  • A single straggler slows down the entire system.
  • We want all entries for a single host to end up in the same output file.
  • The difference between a reduce function and a combiner function (word-count example); the combiner exists to save bandwidth.
  • Auxiliary (supporting) files.
  • The usual course of action is to fix the bug, but sometimes this is not feasible; perhaps the bug is in a third-party library for which source code is unavailable.
  • Startup includes: propagation of the program to all worker machines; delays interacting with GFS to open the set of 1000 input files; gathering the information needed for the locality optimization.
  • Ease of use (programmers need no experience in parallel and distributed computing). Hides details of parallelization, fault tolerance, locality optimization, and load balancing.

Transcript

  • 1. Tilani Gunawardena
  • 2.  Motivation Introduction Programming Model Implementation Refinements Performance Experience Related Work Conclusions Q&A
  • 3.  Large-Scale Data Processing ◦ Want to process lots of data (> 1 TB); size of the web > 400 TB ◦ Want to parallelize across 100s/1,000s of CPUs ◦ But don't want the hassle of managing things  MapReduce provides ◦ Automatic parallelization & distribution ◦ Fault tolerance ◦ Monitoring & status update tools ◦ A clear abstraction for programmers
  • 4.  Borrows from functional programming  Users implement an interface of two functions ◦ Map ◦ Reduce  Map(in-key, in-value) -> (out-key, intermediate-value) list  Reduce(out-key, intermediate-value list) -> out-value list
  • 5.  Records from the data source (lines out of files, rows of a database, etc.) are fed into the map function as key/value pairs ◦ Ex: (filename, line)  Map() produces one or more intermediate values along with an output key from the input
  • 6.  let map(k,v) = emit(k.toUpper(), v.toUpper()) ◦ ("foo", "bar") -> ("FOO", "BAR") ◦ ("key2", "data") -> ("KEY2", "DATA")  let map(k,v) = foreach char c in v: emit(k, c) ◦ ("A", "cats") -> ("A","c"), ("A","a"), ("A","t"), ("A","s") ◦ ("B", "hi") -> ("B","h"), ("B","i")  let map(k,v) = if (isPrime(v)) then emit(k, v) ◦ ("foo", 7) -> ("foo", 7) ◦ ("test", 10) -> (nothing)  let map(k,v) = emit(v.length, v) ◦ ("hi", "test") -> (4, "test") ◦ ("x", "quux") -> (4, "quux")
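  Purely for illustration, here is a minimal Python sketch of these four example mappers, written as generators that yield (key, value) pairs; the function names and the is_prime helper are assumptions for this sketch, not part of the paper's API:

      # Sketch: the example mappers as Python generators yielding (key, value) pairs.

      def upper_case_mapper(k, v):
          # ("foo", "bar") -> ("FOO", "BAR")
          yield k.upper(), v.upper()

      def explode_mapper(k, v):
          # ("A", "cats") -> ("A", "c"), ("A", "a"), ("A", "t"), ("A", "s")
          for c in v:
              yield k, c

      def is_prime(n):
          # Helper for the filter mapper (illustrative only).
          return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

      def filter_mapper(k, v):
          # ("foo", 7) -> ("foo", 7); ("test", 10) -> nothing
          if is_prime(v):
              yield k, v

      def changing_mapper(k, v):
          # ("hi", "test") -> (4, "test")
          yield len(v), v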
  • 7.  After the map phase is over, all the intermediate values for a given output key are combined together into a list  Reduce() combines those intermediate values into one or more final values for that same output key (in practice, usually only one final value per key)
  • 8.  let reduce(k,vals)= sum=0 foreach int v in vals: sum +=v emit(k,sum) ◦ (“A”,[42,100,312])-> (“A”,454) ◦ (“B”,[12,6,-2])->(“B”,16)
  • 9.  Page 1: the weather is good  Page 2: today is good  Page 3: good weather is good.  Map output ◦ Worker 1: (the 1), (weather 1), (is 1), (good 1). ◦ Worker 2: (today 1), (is 1), (good 1). ◦ Worker 3:(good 1), (weather 1), (is 1), (good 1). Reduce Input ◦ Worker 1:(the 1) ◦ Worker 2: (is 1), (is 1), (is 1) ◦ Worker 3:(weather 1), (weather 1) ◦ Worker 4:(today 1) ◦ Worker 5:(good 1), (good 1), (good 1), (good 1)  Reduce Output ◦ Worker 1:(the 1) ◦ Worker 2:(is 3) ◦ Worker 3:(weather 2) ◦ Worker 4:(today 1) ◦ Worker 5:(good 4)
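  To make the data flow above concrete, here is a toy, single-process word-count sketch in Python; it simulates the model (map, group by key, reduce) but is in no way the distributed library itself:

      from collections import defaultdict

      pages = {
          "Page 1": "the weather is good",
          "Page 2": "today is good",
          "Page 3": "good weather is good",
      }

      def wc_map(name, text):
          for word in text.split():
              yield word, 1

      def wc_reduce(word, counts):
          yield word, sum(counts)

      # Map phase: collect intermediate values per key.
      intermediate = defaultdict(list)
      for name, text in pages.items():
          for word, count in wc_map(name, text):
              intermediate[word].append(count)

      # Reduce phase: one reduce call per distinct key.
      for word in sorted(intermediate):
          for key, total in wc_reduce(word, intermediate[word]):
              print(key, total)   # good 4, is 3, the 1, today 1, weather 2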
  • 10.  Distributed Grep ◦ Input consists of (url+offset, single line) ◦ map(key=url+offset, val=line):  If the contents match the regexp, emit (line, "1") ◦ reduce(key=line, values=uniq_counts):  Don't do anything; just emit line  Count of URL Access Frequency  Reverse Web-Link Graph  Term-Vector per Host  Inverted Index  Distributed Sort
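  A hedged sketch of the distributed-grep pair in the same style (the pattern and the exact record format are assumptions for illustration):

      import re

      PATTERN = re.compile(r"error")   # illustrative pattern

      def grep_map(key, line):
          # key is "url+offset", value is a single input line.
          if PATTERN.search(line):
              yield line, "1"

      def grep_reduce(line, counts):
          # Identity reduce: just pass each matching line through.
          yield line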
  • 11.  Several different implementations of the MapReduce interface are possible depending on the environment.  The implementation targeted at the computing environment at Google: ◦ Large cluster of commodity PCs ◦ Dual x86 processors ◦ Networking hardware – 100 Mb/s or 1 Gb/s ◦ Scheduling system
  • 12. When the user program calls the MapReduce function, the following sequence of actions occurs: 1) The MapReduce library in the user program first splits the input files into M pieces – 16 MB to 64 MB per piece. It then starts up many copies of the program on a cluster of machines. 2) One of the copies of the program is the master. The rest are workers that are assigned work by the master.
  • 13. 3) A worker who is assigned a map task:  reads the contents of the corresponding input split  parses key/value pairs out of the input data and passes each pair to the user-defined Map function. The intermediate key/value pairs produced by the Map function are buffered in memory (RAM). 4) The buffered pairs are written to local disk, partitioned into R regions by the partitioning function. The locations of these buffered pairs on the local disk are passed back to the master, who forwards them to the reduce workers.
  • 14. 5) When a reduce worker is notified by the master about these locations, it reads the buffered data from the local disks of the map workers. When a reduce worker has read all intermediate data, it sorts it by the intermediate keys so that all occurrences of the same key are grouped together. 6) The reduce worker iterates over the sorted intermediate data and, for each unique intermediate key, passes the key and the corresponding set of intermediate values to the user's Reduce function. The output of the Reduce function is appended to a final output file. 7) When all map tasks and reduce tasks have been completed, the master wakes up the user program. At this point, the MapReduce call in the user program returns to the user code. After successful completion, the output of the MapReduce execution is available in the R output files.
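  For intuition only, the seven steps above can be imitated by a single-process Python driver that splits the input into M pieces, partitions intermediate pairs into R regions with hash(key) mod R, sorts each region, and reduces it; run_mapreduce and the default M and R are illustrative names only, and the real library of course distributes this over a cluster:

      from collections import defaultdict

      def run_mapreduce(records, map_fn, reduce_fn, M=4, R=2):
          # Steps 1-2: split the input into M pieces (here: round-robin over records).
          splits = [records[i::M] for i in range(M)]

          # Steps 3-4: run the map tasks; partition intermediate pairs into R regions.
          regions = [defaultdict(list) for _ in range(R)]
          for split in splits:
              for key, value in split:
                  for ikey, ivalue in map_fn(key, value):
                      regions[hash(ikey) % R][ikey].append(ivalue)

          # Steps 5-7: each "reduce worker" sorts its region by key and reduces it,
          # producing one output list per region (standing in for the R output files).
          outputs = []
          for region in regions:
              out = []
              for ikey in sorted(region):
                  out.extend(reduce_fn(ikey, region[ikey]))
              outputs.append(out)
          return outputs

  For example, calling run_mapreduce(list(pages.items()), wc_map, wc_reduce) with the word-count functions sketched earlier yields the same counts, split across R lists.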
  • 15.  For every map and reduce task it stores ◦ State (idle, in-progress, or completed) ◦ Identity of the worker machine  Locations of intermediate files are propagated from map tasks to reduce tasks through the master. For every completed map task it stores ◦ the locations of the R intermediate files (produced by the map task) ◦ the sizes of the R intermediate files
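  A rough sketch of the per-task bookkeeping described above; the field names are assumptions that simply mirror the bullet points:

      from dataclasses import dataclass, field
      from typing import List, Optional, Tuple

      @dataclass
      class TaskState:
          # One record per map or reduce task tracked by the master.
          kind: str                       # "map" or "reduce"
          state: str = "idle"             # "idle", "in-progress", or "completed"
          worker: Optional[str] = None    # identity of the assigned worker machine
          # For completed map tasks: (location, size) of each of the R intermediate files.
          intermediate_files: List[Tuple[str, int]] = field(default_factory=list)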
  • 16.  Failure detection mechanism: the master pings workers periodically.  Re-executes completed & in-progress map() tasks o Their output was stored locally on the failed machine  Re-executes only in-progress reduce() tasks o Completed reduce output is stored in the global file system
  • 17.  Master failure unlikely Create a checkpoint and note the state of Master Data Structure Write the state to GFS filesystem New master recovers and continues
  • 18.  If the Map and Reduce operators are deterministic functions of their input values:  Rely on atomic commits of map and reduce task outputs ◦ When a map task completes, the worker sends a message to the master and includes the names of the R temporary files in the message. ◦ If the master receives a completion message for an already completed map task, it ignores the message. Otherwise, it records the names of the R files in a master data structure (for use by the reduce tasks). ◦ Output of a Reduce task is stored in GFS. High availability via replication.  The filename of the output produced by a reduce task is deterministic.  When a reduce task completes, the reduce worker atomically renames its temporary output file to the final output file.  If the same reduce task executes on multiple machines, multiple rename calls will be executed for the same output file.  If the Map and Reduce operators are NOT deterministic functions of their input values:  In this case, MapReduce provides weaker but reasonable semantics.
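  The atomic-commit idea for reduce output can be sketched locally as follows; os.replace is atomic on a local POSIX filesystem, which only imitates the GFS rename semantics described above:

      import os
      import tempfile

      def write_reduce_output(final_path, lines):
          # Write to a temporary file in the same directory, then atomically
          # rename it to the deterministic final name.
          dir_name = os.path.dirname(final_path) or "."
          fd, tmp_path = tempfile.mkstemp(dir=dir_name)
          with os.fdopen(fd, "w") as f:
              for line in lines:
                  f.write(line + "\n")
          # If several executions of the same reduce task race, the final file
          # ends up holding the output of exactly one of them.
          os.replace(tmp_path, final_path)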
  • 19. Asks GFS for the locations of replicas of the input file blocks  Map input is typically split into 64 MB chunks (the GFS block size)  Map tasks are scheduled so that a GFS replica of the input block is on the same machine or the same rack  Effect: thousands of machines read input at local disk speed
  • 20.  M and R should be much larger than the number of worker machines. Better dynamic load balancing Speeds up recovery when a worker fails
  • 21.  Straggler – machine that takes unusually long time to complete one of the last few map or reduce tasks in the computation. Straggler-Causes:  Bad disk  Cluster scheduling system  Bug in machine initialization code Straggler-Solution:  When MapReduce operation is close to completion, master schedules backup executions of the remaining in-progress tasks.  Task is marked as complete when the primary or backup execution completes.
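  A toy, thread-based sketch of the backup-execution idea: near the end of the job each remaining in-progress task is also submitted as a backup, and the task counts as complete as soon as either copy finishes (a local simulation, not the cluster scheduler):

      from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

      def run_with_backup(task_fn, args, pool):
          # Submit a primary and a backup execution of the same (idempotent) task
          # and accept whichever finishes first.
          primary = pool.submit(task_fn, *args)
          backup = pool.submit(task_fn, *args)
          done, _ = wait([primary, backup], return_when=FIRST_COMPLETED)
          return next(iter(done)).result()

      # Example:
      #   pool = ThreadPoolExecutor(max_workers=4)
      #   total = run_with_backup(sum, ([1, 2, 3],), pool)   # -> 6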
  • 22. Slow nodes (stragglers) -> run backup tasks. (Diagram: task timelines on Node 1 and Node 2.)
  • 23. A few extensions that are useful to the Map and Reduce functions:  Partitioning Function:  Users of MapReduce can specify the number of reduce tasks/output files that they desire (R).  Default: "hash(key) mod R"  Can be customized:  Ex: "hash(Hostname(urlkey)) mod R"  Ordering Guarantees:  Within a partition, intermediate key/value pairs are guaranteed to be processed in increasing key order.
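  A minimal sketch of the default and a customized partitioning function; the urlkey argument and the use of urllib's hostname parsing stand in for the slide's Hostname() and are assumptions:

      from urllib.parse import urlparse

      R = 16  # number of reduce tasks / output files, chosen by the user

      def default_partition(key):
          # Default: hash(key) mod R
          return hash(key) % R

      def host_partition(urlkey):
          # Customized: hash(Hostname(urlkey)) mod R, so all URLs from the same
          # host end up in the same output file.
          return hash(urlparse(urlkey).hostname) % R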
  • 24.  Combiner Function: runs on mapper nodes after the map phase  A "mini reduce" only on the local map output -> saves bandwidth  The same code is used to implement the combiner and the reduce function. The difference is in how MapReduce handles the output of the function: • Output of reduce is written to the final output file. • Output of the combiner is written to an intermediate file (i.e., sent to a reduce task).  Use: speeds up certain classes of MapReduce operations
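  For word count, the combiner amounts to a local "mini reduce" over one map task's output before anything crosses the network; a sketch under that assumption:

      from collections import defaultdict

      def combine(map_output):
          # map_output: iterable of (word, 1) pairs produced by a single map task.
          # Partial sums per word mean the map worker ships ("the", 417) once
          # instead of 417 copies of ("the", 1).
          partial = defaultdict(int)
          for word, count in map_output:
              partial[word] += count
          # This output goes to the intermediate files read by the reduce tasks.
          return list(partial.items())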
  • 25.  Input and Output Types:  The MapReduce library provides support for reading input data in several formats.  Users can add support for a new input type by providing an implementation of a simple reader interface.  A reader can read records from a database or from data structures mapped in memory.  Side-effects:  Auxiliary files are produced as additional outputs from the map and/or reduce operators.  The application writer makes such side-effects atomic and idempotent.  The application writes to a temporary file and atomically renames this file once it is fully generated.
  • 26.  Skipping Bad Records:  Map/Reduce functions sometimes crash deterministically on certain records due to bugs in user code.  The best solution is to debug & fix o Not always possible ~ third-party source libraries  On segmentation fault: o Send a UDP packet to the master from the signal handler o Include the sequence number of the record being processed  If the master sees two failures for the same record: o The next worker is told to skip the record  Local Execution:  Debugging a Map-Reduce function is complex  Computation occurs in a distributed environment.  Worker processes are dynamically allocated by the master.  Hence, to enable debugging, profiling and small-scale testing, use local execution.  This implementation causes sequential execution of all work for a MapReduce operation on a local machine.  Controls are provided to the user so that computation can be limited to a particular map task.
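  A rough single-process approximation of record skipping: wrap the user's map function, record the sequence number of a crashing record, and skip records that have already failed twice; the UDP packet and signal handler are omitted, and failure_counts merely stands in for the master's bookkeeping:

      from collections import Counter

      failure_counts = Counter()   # stand-in for the master's per-record failure log

      def safe_map(map_fn, seq_no, key, value):
          # Skip a record that the master has already seen fail twice.
          if failure_counts[seq_no] >= 2:
              return []
          try:
              return list(map_fn(key, value))
          except Exception:
              # In the real library the worker's signal handler would send the
              # record's sequence number to the master before the worker dies.
              failure_counts[seq_no] += 1
              raise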
  • 27.  Status Information: the master runs an internal HTTP server and provides a set of status pages to the user.  The status pages show the progress of the computation, such as:  No. of tasks completed  No. of tasks in progress  Bytes of input, intermediate and output data  Processing rates  Links to standard error and standard output files  This information can be used to predict:  How long is the computation going to take?  Should more resources be added to the computation?  Is the computation much slower than expected?  Top-Level Status Page:  Which workers failed?  Which map/reduce tasks were they working on when they failed?  Use:  Easy to detect bugs.  The master can order re-execution of a failed process
  • 28.  Counters: the MapReduce library has a counter facility to count occurrences of various events, e.g., counting the total no. of words processed.
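  A sketch of what the counter facility looks like from user code, with a collections.Counter standing in for the library's counter objects and for the master's aggregation of per-worker values:

      from collections import Counter

      words_processed = Counter()   # per-worker counter object (illustrative)

      def counting_map(key, line):
          # User code increments a named counter while mapping.
          for word in line.split():
              words_processed["total words processed"] += 1
              yield word, 1

      # The master would periodically merge the per-worker counters, e.g.:
      #   global_counters = sum(per_worker_counters, Counter())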
  • 29.  MR_Grep – scan 10^10 100-byte records to extract records matching a rare pattern (92K matching records): 150 seconds.  MR_Sort – sort 10^10 100-byte records (modeled after the TeraSort benchmark): normally 839 seconds.
  • 30.  A cluster consisting of 1800 PCs: ◦ 2 GHz Intel Xeon processors ◦ 4 GB of memory  1-1.5 GB reserved for other tasks sharing the nodes ◦ 320 GB storage: two 160 GB IDE disks
  • 31.  Scan through 10^10 100-byte records (~1 TB)  3-character pattern to be matched (pattern occurred in ~1 lakh records)  M = 15,000, R = 1  Input data chunk size = 64 MB  Execution time is 150 s  1764 workers are assigned!  (Graph annotations: map tasks finished; time to schedule tasks, startup.)
  • 32.  The Map function extracts a 10-byte sorting key from a text line, emitting the key and the original text line as the intermediate key/value pair ◦ Each intermediate key/value pair will be sorted  1800 machines used  10^10 100-byte records (~1 TB)  M = 15,000, R = 4000  Input data chunk size = 64 MB  2 TB of final output (GFS maintains 2 copies)
  • 33. (Graph panels: normal execution; no backup tasks; 200 processes killed.)  Backup tasks reduce job completion time a lot!  The system deals well with failures
  • 34.  MapReduce has proven to be a useful abstraction  Greatly simplifies large-scale computations at Google.  Indexing code rewritten using MapReduce.  Code is simpler, smaller, and more readable.  MapReduce handles failures and slow machines
  • 35.  Programming model inspired by functional language primitives Partitioning/shuffling similar to many large-scale sorting systems ◦ NOW-Sort [1] Re-execution for fault tolerance ◦ BAD-FS [5] and TACC [7] Locality optimization has parallels with Active Disks/Diamond work ◦ Active Disks [12,15] Backup tasks similar to Eager Scheduling in Charlotte system ◦ Charlotte [3] Dynamic load balancing solves similar problem as Rivers distributed queues ◦ River [2]
  • 36.  MapReduce has proven to be a useful abstraction Greatly simplifies large-scale computations at Google Fun to use: ◦ focus on problem, ◦ let library deal with messy details