Parallel Data Processing with MapReduce: A Survey*

Kyong-Ha Lee and Yoon-Joon Lee (Department of Computer Science, KAIST); Hyunsik Choi and Yon Dohn Chung (Department of Computer Science and Engineering, Korea University); Bongki Moon (Department of Computer Science, University of Arizona)

* To appear in SIGMOD Record, November 14, 2011

ABSTRACT

A prominent parallel data processing tool, MapReduce, is gaining significant momentum from both industry and academia as the volume of data to analyze grows rapidly. While MapReduce is used in many areas where massive data analysis is required, there are still debates on its performance, its efficiency per node, and its simple abstraction. This survey intends to assist the database and open source communities in understanding various technical aspects of the MapReduce framework. In this survey, we characterize the MapReduce framework and discuss its inherent pros and cons. We then introduce the optimization strategies reported in the recent literature. We also discuss the open issues and challenges raised on parallel data analysis with MapReduce.

1. INTRODUCTION

In this age of data explosion, parallel processing is essential to processing a massive volume of data in a timely manner. MapReduce, which has been popularized by Google, is a scalable and fault-tolerant data processing tool that can process a massive volume of data in parallel with many low-end computing nodes [44, 38]. By virtue of its simplicity, scalability, and fault-tolerance, MapReduce is becoming ubiquitous, gaining significant momentum from both industry and academia. However, MapReduce has inherent limitations on its performance and efficiency. Therefore, many studies have endeavored to overcome the limitations of the MapReduce framework [10, 15, 51, 32, 23].

The goal of this survey is to provide a timely overview of the status of MapReduce studies and related work, focusing on current research aimed at improving and enhancing the MapReduce framework. We give an overview of major approaches and classify them with respect to their strategies. The rest of the survey is organized as follows. Section 2 reviews the architecture and the key concepts of MapReduce. Section 3 discusses the inherent pros and cons of MapReduce. Section 4 presents the classification and details of recent approaches to improving the MapReduce framework. In Sections 5 and 6, we overview major application domains where the MapReduce framework is adopted and discuss open issues and challenges. Finally, Section 7 concludes this survey.

2. ARCHITECTURE

MapReduce is a programming model as well as a framework that supports the model. The main idea of the MapReduce model is to hide the details of parallel execution and allow users to focus only on data processing strategies. The MapReduce model consists of two primitive functions: Map and Reduce. The input for MapReduce is a list of (key1, value1) pairs, and Map() is applied to each pair to compute intermediate key-value pairs (key2, value2). The intermediate key-value pairs are then grouped together on the key-equality basis, i.e., as (key2, list(value2)). For each key2, Reduce() works on the list of all values, then produces zero or more aggregated results. Users can define the Map() and Reduce() functions however they want; the MapReduce framework takes care of executing them in parallel.

MapReduce utilizes the Google File System (GFS) as an underlying storage layer to read input and store output. GFS is a chunk-based distributed file system that supports fault-tolerance by data partitioning and replication. Apache Hadoop is an open-source Java implementation of MapReduce. We base the rest of our explanation on Hadoop, since Google's MapReduce code is proprietary and not available to the public. Other implementations (such as DISCO, written in Erlang) are also available, but not as popular as Hadoop. Like MapReduce, Hadoop consists of two layers: a data storage layer called the Hadoop Distributed File System (HDFS) and a data processing layer called the Hadoop MapReduce Framework. HDFS is a block-structured file system managed by a single master node, like Google's GFS. Each processing job in Hadoop is broken down into as many Map tasks as there are input data blocks, and into one or more Reduce tasks. Figure 1 illustrates an overview of the Hadoop architecture.
A single MapReduce (MR) job is performed in two phases: the Map and Reduce stages. The master picks idle workers and assigns each one a map or a reduce task according to the stage. Before the Map tasks start, the input file is loaded on the distributed file system. At loading, the file is partitioned into multiple data blocks of the same size, typically 64MB, and each block is triplicated to guarantee fault-tolerance. Each block is then assigned to a mapper, i.e., a worker that is assigned a map task, and the mapper applies Map() to each record in its data block. The intermediate outputs produced by the mappers are then sorted locally to group together key-value pairs that share the same key. After the local sort, Combine() is optionally applied to pre-aggregate the grouped key-value pairs so that the communication cost of transferring all intermediate outputs to the reducers is minimized. The mapped outputs are then stored on the local disks of the mappers, partitioned into R parts, where R is the number of Reduce tasks in the MR job. This partitioning is basically done by a hash function, e.g., hash(key) mod R.

[Figure 1: Hadoop Architecture]

When all Map tasks are completed, the MapReduce scheduler assigns Reduce tasks to workers. The intermediate results are shuffled and assigned to reducers over HTTP. Since all mapped outputs are already partitioned and stored on local disks, each reducer performs the shuffling by simply pulling its partition of the mapped outputs from the mappers. Basically, each record of the mapped outputs is assigned to exactly one reducer by this one-to-one shuffling strategy. Note that this data transfer is performed by the reducers pulling intermediate results, not by the mappers pushing them. A reducer reads the intermediate results and merges them by the intermediate key, i.e., key2, so that all values of the same key are grouped together. This grouping is done by external merge-sort. Each reducer then applies Reduce() to the list of intermediate values for each key2 it encounters. The outputs of the reducers are stored and triplicated in HDFS.

Note that the number of Map tasks does not depend on the number of nodes but on the number of input blocks. Each block is assigned to a single Map task. However, not all Map tasks need to execute simultaneously, and neither do Reduce tasks. For example, if an input is broken down into 400 blocks and there are 40 mappers in a cluster, the number of map tasks is 400 and the map tasks are executed through 10 waves of task runs. This behavior pattern has also been reported elsewhere in the literature.

The MapReduce framework executes its tasks based on a runtime scheduling scheme: MapReduce does not build, before execution, any execution plan that specifies which tasks will run on which nodes. While a DBMS generates a query plan tree for execution, the plan for an execution in MapReduce is determined entirely at runtime. With runtime scheduling, MapReduce achieves fault tolerance by detecting failures and reassigning the tasks of failed nodes to other healthy nodes in the cluster. Nodes which have completed their tasks are assigned another input block. This scheme naturally achieves load balancing in that faster nodes process more input chunks and slower nodes process fewer inputs in the next wave of execution. Furthermore, the MapReduce scheduler utilizes speculative and redundant execution: tasks on straggling nodes are redundantly executed on other idle nodes that have finished their assigned tasks, although the tasks are not guaranteed to end earlier on the newly assigned nodes than on the straggling nodes. Map and Reduce tasks are executed with no communication between tasks. Thus, there is no contention caused by synchronization and no communication cost between tasks during the execution of an MR job.
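To make the model and execution flow described above concrete, the following is a minimal word-count job written against Hadoop's Java API (the org.apache.hadoop.mapreduce interface). The sketch is ours, added for illustration, and is not taken from the surveyed papers: the mapper emits an intermediate (word, 1) pair per token, the same summing reducer doubles as the optional Combine() step, and Hadoop's default HashPartitioner routes each intermediate key to a partition by hash(key) mod R, exactly as described above.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
      // Map(): (byte offset, line of text) -> list of (word, 1) pairs.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);  // intermediate (key2, value2) pair
          }
        }
      }

      // Reduce(): (word, list of counts) -> (word, total count).
      // The same class also serves as the optional Combine() step.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values,
            Context context) throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Note that the grouping of intermediate pairs into (key2, list(value2)) between the map and reduce calls is done entirely by the framework; the user code above never sorts or shuffles anything itself.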
3. PROS AND CONS

3.1 Debates

As suggested by many researchers, commercial DBMSs have adopted a "one size fits all" strategy and are not suited for solving extremely large scale data processing tasks. There has been a demand for special-purpose data processing tools that are tailored for such problems [79, 50, 72]. While MapReduce is referred to as a new way of processing big data in data-center computing [77], it is also criticized as a "major step backwards" in parallel data processing in comparison with DBMS [10, 15]. However, many MapReduce proponents in industry argue that MapReduce is not a DBMS and that such an apples-to-oranges comparison is unfair. As the technical debate continued, ACM invited both sides to the January 2010 edition of CACM [51, 39]. Panels at DOLAP'10 also discussed the pros and cons of MapReduce and relational DBMS for data warehousing [23].
Pavlo et al.'s comparison shows that Hadoop is 2 to 50 times slower than a parallel DBMS, except in the case of data loading [15]. Anderson et al. also criticize that the current Hadoop system is scalable but achieves very low efficiency per node, with processing rates of less than 5MB/s, repeating a mistake that previous studies on high-performance systems often made by "focusing on scalability but missing efficiency" [32]. This poor efficiency involves many issues such as performance, total cost of ownership (TCO), and energy. Although Hadoop won first place in the GraySort benchmark for 100TB sorting (1 trillion 100-byte records) in 2009, its win was achieved with over 3,800 nodes [76]. MapReduce or Hadoop would not be a cheap solution if the cost of constructing and maintaining a cluster of that size were considered. Other studies on the performance of Hadoop are also found in the literature [28, 61]. An analysis of 10 months of MR logs from Yahoo!'s M45 Hadoop cluster and MapReduce usage statistics at Google are also available [60, 9].

The studies exhibit a clear tradeoff between efficiency and fault-tolerance. MapReduce increases the fault tolerance of long-running analysis by frequent checkpoints of completed tasks and by data replication. However, the frequent I/Os required for fault-tolerance reduce efficiency. Parallel DBMS aims at efficiency rather than fault tolerance: a DBMS actively exploits pipelining of intermediate results between query operators. However, this causes the potential danger that a large amount of work must be redone when a failure happens. With this fundamental difference in mind, we categorize the pros and cons of the MapReduce framework below.

3.2 Advantages

MapReduce is simple and efficient for computing aggregates. Thus, it is often compared with "filtering then group-by aggregation" query processing in a DBMS. The major advantages of the MapReduce framework for data processing are as follows.

Simple and easy to use: The MapReduce model is simple but expressive. With MapReduce, a programmer defines a job with only the Map and Reduce functions, without having to specify the physical distribution of the job across nodes.

Flexible: MapReduce does not have any dependency on data model or schema. With MapReduce, a programmer can deal with irregular or unstructured data more easily than with a DBMS.

Independent of the storage: MapReduce is basically independent of underlying storage layers. Thus, MapReduce can work with different storage layers such as BigTable and others.

Fault tolerance: MapReduce is highly fault-tolerant. For example, it is reported that MapReduce can continue to work in spite of an average of 1.2 failures per analysis job at Google [44, 38].

High scalability: The best advantage of using MapReduce is its high scalability. Yahoo! reported that their Hadoop gear could scale out to more than 4,000 nodes in 2008.
3.3 Pitfalls

Despite its many advantages, MapReduce lacks some of the features that have proven paramount to data analysis in DBMS. In this respect, MapReduce is often characterized as an Extract-Transform-Load (ETL) tool. We itemize the pitfalls of the MapReduce framework below, in comparison with DBMS.

No high-level language: MapReduce itself supports neither a high-level language like SQL in DBMS nor any query optimization technique. Users must code their operations in Map and Reduce functions.

No schema and no index: MapReduce is schema-free and index-free. An MR job can work right after its input is loaded into its storage. However, this impromptu processing throws away the benefits of data modeling. MapReduce must parse each item at reading input and transform it into data objects for data processing, causing performance degradation [15, 11].

A single fixed dataflow: MapReduce provides ease of use with a simple abstraction, but at the cost of a fixed dataflow. Therefore, many complex algorithms are hard to implement with Map and Reduce alone in an MR job. In addition, algorithms that require multiple inputs are not well supported, since the dataflow of MapReduce is originally designed to read a single input and generate a single output.

Low efficiency: With fault-tolerance and scalability as its primary goals, MapReduce operations are not always optimized for I/O efficiency. (Consider, for example, sort-merge based grouping, materialization of intermediate results, and data triplication on the distributed file system.) In addition, Map and Reduce are blocking operations: a transition to the next stage cannot be made until all the tasks of the current stage are finished. Consequently, pipeline parallelism may not be exploited. Moreover, block-level restarts, the one-to-one shuffling strategy, and the simple runtime scheduling can also lower per-node efficiency. MapReduce does not have specific execution plans and does not optimize plans the way a DBMS does to minimize data transfer across nodes. Therefore, MapReduce often shows poorer performance than DBMS. In addition, the MapReduce framework has a latency problem that comes from its inherent batch processing nature: all inputs for an MR job must be prepared in advance for processing.
Very young: MapReduce has been popularized by Google since 2004. Compared to the over 40 years of DBMS history, its codebase is not mature yet, and the third-party tools available are still relatively few.

4. VARIANTS AND IMPROVEMENTS

In this section we present details of the approaches taken to remedy the pitfalls of the MapReduce framework.

4.1 High-level Languages

Microsoft SCOPE, Apache Pig [22, 18], and Apache Hive [16, 17] all aim at supporting declarative query languages for the MapReduce framework. Declarative query languages allow query independence from program logic, reuse of queries, and automatic query optimization, as SQL does for DBMS. SCOPE works on top of the Cosmos system, Microsoft's clone of MapReduce, and provides functionality similar to SQL views. It is similar to SQL but comes with C# expressions. The operators in SCOPE are the same as the Map, Reduce and Merge operations supported in Map-Reduce-Merge [37].

Pig is an open source project intended to support ad-hoc analysis of very large data, motivated by Sawzall, a scripting language for Google's MapReduce. Pig consists of a high-level dataflow language called Pig Latin and its execution framework. Pig Latin supports a nested data model and a set of pre-defined UDFs that can be customized [22]. The Pig execution framework first generates a logical query plan from a Pig Latin program, then compiles the logical plan down into a series of MR jobs. Some optimization techniques are adopted in the compilation, but they are not described in detail. Pig is built on top of the Hadoop framework, and its use requires no modification to Hadoop.

Hive is an open-source project that aims at providing data warehouse solutions on top of Hadoop, supporting ad-hoc queries with an SQL-like query language called HiveQL. Hive compiles a HiveQL query into a directed acyclic graph (DAG) of MR jobs. HiveQL includes its own type system and data definition language (DDL) to manage data integrity. It also contains a system catalog, containing schema information and statistics, much like DBMS engines do. Hive currently provides only a simple, naive rule-based optimizer.

Similarly, DryadLINQ [71, 49] is developed to translate the LINQ expressions of a program into a distributed execution plan for Dryad, Microsoft's parallel data processing tool [48].

4.2 Schema Support

As described in Section 3.3, MapReduce does not provide any schema support. Thus, the MapReduce framework parses each data record at reading input, causing performance degradation [15, 51, 11]. Meanwhile, Jiang et al. report that it is mainly immutable decoding, which transforms records into immutable data objects, that severely degrades performance, rather than record parsing itself [28].

While MapReduce itself does not provide any schema support, data formats such as Google's Protocol Buffers, XML, JSON, Apache's Thrift, and others can be used for checking data integrity [47]. One notable thing about these formats is that they are self-describing formats that support a nested and irregular data model, rather than the relational model. A drawback of using these formats is that data size may grow, since the data carries its schema information in itself. Data compression is considered to address this data size problem.
4.3 Flexible Data Flow

There are many algorithms that are hard to map directly onto Map and Reduce functions. For example, some algorithms require global state information during their processing. Loops are a typical example: they require state information for execution and termination. However, MapReduce does not keep state information across executions. Thus, MapReduce reads the same data iteratively and materializes intermediate results in local disks in each iteration, requiring lots of I/Os and unnecessary computations. HaLoop, Twister, and Pregel are examples of systems that support loop programs in MapReduce.

HaLoop and Twister avoid reading unnecessary data repeatedly by identifying and keeping invariant data across iterations. Similarly, Lin et al. propose an in-mapper combining technique that preserves mapped outputs in a memory buffer across multiple map calls and emits the aggregated outputs at the last iteration [75]. In addition, Twister avoids instantiating workers repeatedly across iterations: previously instantiated workers are reused for the next iteration with different inputs. HaLoop is similar to Twister, and it additionally allows caching of both each stage's input and output to save more I/Os across iterations. Vanilla Hadoop also supports task JVM reuse to avoid the overhead of starting a new JVM for each task [81]. Pregel mainly targets graph data processing, which is usually known to require many iterations. Pregel implements a programming model motivated by the Bulk Synchronous Parallel (BSP) model: each node has its own input and transfers to other nodes only those messages that are required for the next iteration.
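The in-mapper combining pattern can be sketched in a few lines of Hadoop Java code. The sketch below is our illustration of the technique, not code from the cited work: instead of emitting a (word, 1) pair per token and relying on a separate Combine() pass over sorted output, the mapper accumulates partial counts in a HashMap across map() calls and flushes them in cleanup(), which Hadoop invokes once after the last input record of the task.

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // In-mapper combining: partial aggregates live in memory across
    // map() calls and are emitted only once, in cleanup().
    public class InMapperCombiningMapper
        extends Mapper<Object, Text, Text, IntWritable> {
      private final Map<String, Integer> counts = new HashMap<>();

      @Override
      public void map(Object key, Text value, Context context) {
        for (String token : value.toString().split("\\s+")) {
          counts.merge(token, 1, Integer::sum); // aggregate locally, emit nothing yet
        }
      }

      @Override
      protected void cleanup(Context context)
          throws IOException, InterruptedException {
        // Emit one (word, partialCount) pair per distinct word seen by
        // this mapper, instead of one (word, 1) pair per occurrence.
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
          context.write(new Text(e.getKey()), new IntWritable(e.getValue()));
        }
      }
    }

In practice the buffer must be bounded: a production version of this sketch would flush the map whenever it grows past a memory threshold, trading some aggregation opportunity for a fixed memory footprint.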
MapReduce reads a single input, but many important relational operators are binary operators that require two inputs. Map-Reduce-Merge addresses the support of such relational operators by simply adding a third Merge stage after the Reduce stage [37]. The Merge stage combines two reduced outputs from two different MR jobs into one.

Clustera, Dryad and Nephele/PACT allow more flexible dataflows than MapReduce does [31, 48, 30, 26]. Clustera is a cluster management system designed to handle a variety of job types, including MR-style jobs [31]. The job scheduler in Clustera handles MapReduce, workflow and SQL-type jobs, and jobs can be connected to form a DAG or a pipeline for complex computations.

Dryad is a notable example of a distributed data-parallel tool that lets users design and execute a dataflow graph as they wish [48]. The dataflow in Dryad has the form of a DAG that consists of vertices and channels. Each vertex represents a program, and a channel connects the vertices. For execution, a logical dataflow graph is mapped onto physical resources by a job scheduler at runtime. A vertex runs when all its inputs are ready, and outputs its results to its neighbor vertices via channels as defined in the dataflow graph. The channels can be files, TCP pipes, or shared memory. Job executions are controlled by a central job scheduler. Redundant executions are also allowed to handle apparently very slow vertices, as in MapReduce. Dryad also allows users to specify exactly how intermediate data are shuffled.

Nephele/PACT is another parallel execution engine together with its programming model [30, 26]. The PACT model extends MapReduce to support more flexible dataflows: each mapper can have a separate input, and a user can specify a dataflow with a wider variety of stages than just Map and Reduce. Nephele transforms a PACT program into a physical DAG, then executes the DAG across nodes. Executions in Nephele are scheduled at runtime, as in MapReduce.

4.4 Blocking Operators

Map and Reduce functions are blocking operations in the sense that all tasks of a stage must be completed before the next stage or job can move forward. The reason is that MapReduce relies on external merge-sort for grouping intermediate results. This property causes performance degradation and makes it difficult to support online processing.

Logothetis et al. address this problem for the first time when they build a MapReduce abstraction onto their distributed stream engine for ad-hoc data processing [29]. Their incremental MapReduce framework processes data like a streaming engine: each task runs continuously with a sliding window, and the system generates MR outputs by reading the items within the window. This stream-based MapReduce processes arriving increments of update tuples, avoiding recomputation of all the tuples from the beginning.

MapReduce Online is devised to support online aggregation and continuous queries in MapReduce. It raises the issue that pull-based communication and the checkpointing of mapped outputs limit pipelined processing. To promote pipelining between tasks, the authors modify the MapReduce architecture so that mappers periodically push their data, temporarily stored in local storage, to reducers within the same MR job. Map-side pre-aggregation is also used to reduce communication volumes further.

Li et al. and Jiang et al. have found that the merge sort in MapReduce is I/O intensive and dominates the performance of MapReduce [21, 28]. This observation leads to the use of hash tables for better performance and also for incremental processing [21]. In this study, as soon as each map task outputs its intermediate results, the results are hashed and pushed into hash tables held by the reducers. The reducers then perform aggregation on the values in each bucket. Since each bucket in the hash table holds all the values that correspond to a distinct key, no grouping is required. In addition, reducers can perform aggregation on the fly, even before all mappers have completed.
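The hash-based alternative to sort-merge grouping is easy to picture. The toy sketch below is ours (not code from the cited study): each incoming (key, value) pair is folded straight into a hash table, so per-key aggregates are available on the fly, with no sort and no blocking between stages.

    import java.util.HashMap;
    import java.util.Map;

    // Toy sketch of hash-based aggregation at a reducer: no local sort,
    // no external merge; each incoming pair is folded into its bucket.
    public class HashAggregator {
      private final Map<String, Long> buckets = new HashMap<>();

      // Called for every (key, value) pair pulled from the mappers,
      // even while other map tasks are still running.
      public void accept(String key, long value) {
        buckets.merge(key, value, Long::sum);
      }

      // Aggregates can be read at any time, e.g. for online estimates.
      public Map<String, Long> snapshot() {
        return new HashMap<>(buckets);
      }
    }

The price of this design is that the hash table of all distinct keys must fit in (or spill gracefully from) memory, which is exactly what the sort-merge approach avoids.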
4.5 I/O Optimization

There are also approaches to reducing the I/O cost of MapReduce by using index structures, column-oriented storage, or data compression.

Hadoop++ provides an index-structured file format to improve the I/O cost of Hadoop [40]. However, since it must build an index for each file partition at data loading time, loading time increases significantly. If the input data are processed only once, the additional cost of building the index may not be justified. HadoopDB also benefits from DB indexes, by leveraging a DBMS as the storage in each node [11].

There are many studies that describe how column-oriented techniques can be leveraged to improve MapReduce's performance dramatically [35, 62, 68, 12, 69]. Google's BigTable proposes the concept of the column family, which groups one or more columns as a basic working unit. Google's Dremel is a nested column-oriented storage system that is designed to complement MapReduce. The read-only nested data in Dremel are modeled with Protocol Buffers [47]. The data in Dremel are split into multiple columns, and records are assembled via finite state machines for record-oriented requests. Dremel is also known to support ad-hoc queries, like Hive.

Record Columnar File (RCFile), developed by Facebook and adopted by Hive and Pig, is a column-oriented file format on HDFS [68]. Data placement in HDFS is determined by the master node at runtime. Thus, it is argued that if each column of a relation were stored in a separate file on HDFS, there would be no guarantee that all related fields of the same record are stored on the same node. To get around this, a file format is devised that represents all values of a relation column-wise within a single file. An RCFile consists of a set of row groups, which are acquired by partitioning a relation horizontally. Within each row group, values are then enumerated column by column, similar to the PAX storage scheme [3].
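The row-group idea can be illustrated with a toy layout routine. This is our sketch, unrelated to the actual RCFile implementation: rows are partitioned horizontally into groups, and within each group the values of one column are stored contiguously, so a scan that needs only that column reads one contiguous run per row group instead of touching every record.

    import java.util.ArrayList;
    import java.util.List;

    // Toy illustration of an RCFile-style layout: partition rows
    // horizontally into row groups, then store each group column-wise.
    public class RowGroupLayout {
      // rows[i][c] is the value of column c in row i; assumes a
      // non-empty, rectangular input.
      public static List<String[][]> toRowGroups(String[][] rows, int groupSize) {
        List<String[][]> groups = new ArrayList<>();
        int numCols = rows[0].length;
        for (int start = 0; start < rows.length; start += groupSize) {
          int n = Math.min(groupSize, rows.length - start);
          String[][] columnWise = new String[numCols][n]; // one array per column
          for (int r = 0; r < n; r++)
            for (int c = 0; c < numCols; c++)
              columnWise[c][r] = rows[start + r][c];
          groups.add(columnWise); // a column scan reads columnWise[c] contiguously
        }
        return groups;
      }
    }

Because a whole row group lives in one file block, all fields of a record stay on the same node, which is the property that storing each column in a separate HDFS file cannot guarantee.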
Llama shows how column-wise data placement helps join processing [69]. A column-oriented file in Llama stores the data of a particular column together with optional index information. Llama also observes that late materialization, which delays record reconstruction until a column becomes necessary during query processing, is no better than early materialization in many cases.

Floratou et al. propose a binary column-oriented storage format that boosts the performance of Hadoop by an order of magnitude. Their storage format stores each column in a separate file but co-locates associated column files on the same node by changing the data placement policy of Hadoop. They also suggest that late materialization with a skiplist shows better performance than early materialization, contrary to the RCFile result. Both Floratou's work and RCFile use column-wise data compression within each row group and adopt a lazy decompression technique to avoid unnecessary decompression during query execution. Hadoop also supports compression of mapped outputs to save I/Os during the checkpoints.

4.6 Scheduling

MapReduce uses block-level runtime scheduling with speculative execution. A separate Map task is created to process each data block. A node that finishes its task early gets more tasks. Tasks on a straggler node are redundantly executed on other idle nodes.

The Hadoop scheduler implements speculative task scheduling with a simple heuristic that compares the progress of each task to the average progress: tasks with the lowest progress relative to the average are selected for re-execution. However, this heuristic is not well suited to heterogeneous environments where nodes have different computing power. In such an environment, even a node whose task has progressed further than the others may finish last if the node's computing power is inferior. Longest Approximate Time to End (LATE) scheduling is devised to improve the response time of Hadoop in heterogeneous environments [52]. This scheduling scheme estimates task progress with a progress rate, rather than a simple progress score.
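LATE's core heuristic fits in a few lines: the progress rate is the progress score divided by the running time, and the scheduler speculates on the task with the longest estimated time to end, (1 - score) / rate. The sketch below is ours, following that published description.

    // Sketch of LATE's straggler estimate: rank running tasks by their
    // estimated time to end rather than by their raw progress score.
    public class LateEstimator {
      /** progressScore in [0,1]; elapsedSeconds > 0 since task start. */
      public static double estimatedTimeToEnd(double progressScore,
                                              double elapsedSeconds) {
        double progressRate = progressScore / elapsedSeconds; // score per second
        return (1.0 - progressScore) / progressRate;          // seconds remaining
      }
    }

For example, a task that is 80% done after 80 seconds (rate 0.01/s) has an estimated 20 seconds left, while one that is 40% done after 50 seconds (rate 0.008/s) has 75 seconds left; LATE speculates on the latter even though its raw progress is not the lowest deviation from the average.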
Parallax is devised to estimate job progress more precisely for a series of jobs compiled from a Pig program [45]. It pre-runs with sampled data to estimate the processing speed of each stage. ParaTimer is an extended version of Parallax for DAG-style jobs written in Pig [46]. ParaTimer identifies the critical path that takes longer than the others in a parallel query plan, and makes the indicator ignore other, shorter paths when estimating progress, since the longest path determines the overall execution time. Besides, it is reported that the more data blocks there are to schedule, the more cost the scheduler pays. Thus, a rule of thumb in industry, that making data blocks bigger makes Hadoop work faster, is credible.

We now look into multi-user environments where users simultaneously execute their jobs in a cluster. Hadoop implements two scheduling schemes: fair scheduling and capacity scheduling. The default fair scheduling works with a single queue of jobs. It assigns physical resources to jobs such that all jobs get an equal share of resources over time on average. In this scheme, if there is only a single MR job running in a cluster, the job solely uses the entire resources of the cluster. Capacity scheduling supports the design of more sophisticated schedules: it provides multiple queues, each of which is guaranteed to possess a certain capacity of the cluster.

MRShare is a remarkable work on sharing multiple query executions in MapReduce [64]. MRShare, inspired by multi-query optimization techniques in databases, finds an optimal way of grouping a set of queries using dynamic programming. The authors suggest three sharing opportunities across multiple MR jobs in MapReduce, like those found in Pig [18]: scan sharing, mapped-output sharing, and Map function sharing. They also introduce a cost model for MR jobs and validate it with experiments. Their experiments show that intermediate result sharing improves execution time significantly. In addition, they found that sharing all scans yields poorer performance as the size of the intermediate results increases, because of the complexity of the merge-sort operation in MapReduce. Suppose that |D| is the size of the input data that n MR jobs share. When sharing all scans, the cost of scanning the inputs is reduced to |D|, compared with n·|D| when no scans are shared. However, the complexity of sorting the combined mapped output of all jobs then becomes O(n·|D| log(n·|D|)), since each job can generate its own mapped output of size O(|D|). This cost can in some cases be bigger than the total cost of sorting the n jobs separately, O(n·|D| log|D|).
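To see where the crossover comes from, it helps to expand the two sorting costs stated above; this is a direct algebraic restatement of the inline cost model, not an addition to it:

    sort cost, n separate jobs:  O(n·|D|·log|D|)
    sort cost, shared scan:      O(n·|D|·log(n·|D|)) = O(n·|D|·log|D| + n·|D|·log n)

Sharing all scans saves (n-1)·|D| of scan I/O but adds an extra O(n·|D|·log n) sorting term that grows with the number of grouped jobs n. Grouping therefore pays off only while the scan savings dominate the extra sort cost, which is exactly the tradeoff MRShare's dynamic programming navigates when it partitions queries into groups.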
4.7 Joins

Join is a popular operator that is not dealt with well by the Map and Reduce functions alone. Since MapReduce is designed for processing a single input, supporting joins, which require two or more inputs, has been an open issue for MapReduce. We roughly classify the join methods devised for MapReduce into two groups: map-side joins and reduce-side joins. In explaining the techniques, we also borrow some terms from Blanas et al.'s study, which compares many join techniques for the analysis of clickstream logs at Facebook [57].

Map-side Join
Map-Merge join is a common map-side join that works similarly to sort-merge join in DBMS. Map-Merge join performs in two steps: first, the two input relations are partitioned and sorted on the join keys; second, mappers read the inputs and merge them [81]. Broadcast join is another map-side join method, applicable when one relation is small [57, 7]. The smaller relation is broadcast to each mapper and kept in memory. This avoids the I/Os needed for moving and sorting both relations. Broadcast join uses an in-memory hash table to store the smaller relation and to find matches via table lookups with values from the other input relation.

Reduce-side Join
Repartition join is the most general reduce-side join [81, 57]. Each mapper tags each row of the two relations to identify which relation the row comes from. After that, rows with the same key value are copied to the same reducer during shuffling. Finally, each reducer joins the rows of the two relations on the key-equality basis. This approach is akin to hash-join in DBMS. An improved version of the repartition join is also proposed, to fix the buffering problem that all records for a given key must be buffered in memory during the joining process.

Lin et al. propose a scheme called "schimmy" to save I/O cost during reduce-side joins. The basic concept of the scheme is to separate messages from graph structure data and shuffle only the messages, avoiding shuffling the data, similar to Pregel [36]. In this scheme, mappers emit only messages. Reducers read the graph structure data directly from HDFS and perform a reduce-side merge join between the data and the messages.
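A minimal repartition join can be written directly against the Hadoop API. The sketch below is ours; real implementations, such as those compared by Blanas et al. [57], are more careful about memory. The mapper tags each record with its source relation, and the reducer separates the two tagged groups for each key and forms their cross product.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit;

    // Repartition join sketch: tag each row with its relation, shuffle
    // on the join key, and join the two tagged groups at the reducer.
    public class RepartitionJoin {
      public static class TagMapper
          extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        public void map(LongWritable offset, Text row, Context context)
            throws IOException, InterruptedException {
          // Assumes CSV rows whose first field is the join key, read by a
          // file-based input format, and input files named "R..." or "S...".
          String relation = ((FileSplit) context.getInputSplit())
              .getPath().getName().startsWith("R") ? "R" : "S";
          String[] fields = row.toString().split(",", 2);
          context.write(new Text(fields[0]),
                        new Text(relation + "|" + fields[1]));
        }
      }

      public static class JoinReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        public void reduce(Text key, Iterable<Text> rows, Context context)
            throws IOException, InterruptedException {
          // Buffering all rows of a key in memory is exactly the weakness
          // that the "improved version" mentioned above is designed to fix.
          List<String> rRows = new ArrayList<>();
          List<String> sRows = new ArrayList<>();
          for (Text t : rows) {
            String v = t.toString(); // copy: Hadoop reuses the Text object
            (v.startsWith("R|") ? rRows : sRows).add(v.substring(2));
          }
          for (String r : rRows)
            for (String s : sRows)
              context.write(key, new Text(r + "," + s));
        }
      }
    }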
MapReduce Variants
Map-Reduce-Merge is the first work that attempts to address the join problem within the MapReduce framework [37]. To support binary operations including joins, Map-Reduce-Merge extends the MapReduce model by adding a Merge stage after the Reduce stage.

Map-Join-Reduce is another variant of the MapReduce framework, designed for one-phase joining [27]. The authors propose a filtering-join-aggregation model that adds a Join stage before the Reduce stage to perform joining within a single MR job. Each mapper reads tuples from a separate relation that takes part in the join. The mapped outputs are then shuffled and moved to joiners for the actual joining, after which the Reduce() function is applied. Joiners and reducers actually run inside the same reduce task. An alternative that runs Map-Join-Reduce as two consecutive MR jobs is also proposed, to avoid modifying the MapReduce framework. For multi-way joins, join chains are represented as a left-deep tree; each joiner transfers its joined tuples to the next joiner, which is the parent operation of the previous joiners in the tree. For this, Map-Join-Reduce adopts a one-to-many shuffling scheme that shuffles and assigns each mapped output to multiple joiners at a time.

Other Join Types
Joins may involve more than two relations. If the relations are simply hash-partitioned and fed to reducers, each reducer takes only a portion of each relation; in that case, some relations must be copied to all reducers to avoid generating incomplete join results. For example, given a multi-way join that reads 4 relations with 4 reducers, we can split only 2 of the relations, making 4 partitions in total; the other relations must be copied to all reducers. The more relations are involved, or the fewer reducers are available, the more communication cost we spend. Afrati et al. focus on how to minimize the total communication cost of the data transferred to the reducers for multi-way joins [2]. They suggest a method based on Lagrangean multipliers to properly select which columns, and how many of them, should be partitioned in order to minimize the total communication cost. Lin et al. propose the concurrent join, which performs a multi-way join in parallel with MapReduce [69].

In addition to binary equi-joins, other join types have been widely studied. Okcan et al. propose how to efficiently perform θ-joins with only a single MR job [14]. Their algorithm uses a reducer-centered cost model that calculates the total cost of the Cartesian product of the mapped output. With the cost model, they assign the mapped output to reducers so as to minimize job completion time. Support for semi-joins, e.g. R ⋉ S, is proposed in [57]. Vernica et al. propose how to efficiently parallelize set-similarity joins with MapReduce [56]. They utilize prefix filtering to filter out non-candidates before the actual comparison. It requires extracting common prefixes, sorted in a global order of frequency, from the tuples, each of which consists of a set of items.

4.8 Performance Tuning

Most MapReduce programs are written for data analysis, and they usually take a long time to finish. Thus, it is natural to provide automatic optimization for MapReduce programs. Babu et al. suggest an automatic tuning approach to finding optimal system parameters for given input data [5]. It is based on speculative pre-runs with sampled data. Jahani et al. suggest a static analysis approach called MANIMAL for the automatic optimization of a single MapReduce job [34]. In their approach, an analyzer examines the program code before execution, without any runtime information. Based on the rules found during the analysis, it creates a pre-computed B+-tree index and slices the input data column-wise for later use. In addition, some semantic-aware compression techniques are used to reduce I/O. Its limitation is that the optimization considers only selection and projection, which are primarily implemented in the Map function.
4.9 Energy Issues

The energy issue is important, especially in this era of data-center computing. Since the energy cost of data centers hits 23% of the total amortized monthly operating expenses, it is prudent to devise an energy-efficient way to control the nodes in a data center when they are idle. In this respect, two extreme strategies for energy management in MapReduce clusters have been examined [74, 43]. The Covering-Set approach designates in advance some nodes that must keep at least one replica of each data block; the other nodes are powered down during low-utilization periods [43]. Since the dedicated nodes always hold at least one replica of each block, all data remain accessible in any case except multiple node failures. On the contrary, the All-In strategy saves energy in an all-or-nothing fashion [74]: all MR jobs are queued until the queue reaches a predetermined threshold; once the threshold is exceeded, all nodes in the cluster run until all queued MR jobs are finished, and then all nodes are powered down until enough new jobs are queued. Lang et al. conclude that the All-In strategy is superior to Covering-Set in that it requires neither changing the data placement policy nor degrading response time. However, the All-In strategy may not support instant execution because of its batch nature. Similarly, Chen et al. discuss the computation versus I/O tradeoffs of using data compression in a MapReduce cluster in terms of energy efficiency [67].

4.10 Hybrid Systems

HadoopDB is a hybrid system that connects multiple single-node DBMSs with MapReduce, combining MapReduce-style scalability with the performance of DBMS [11]. HadoopDB utilizes MapReduce as a distributing system that controls multiple nodes running single-node DBMS engines. Queries are written in SQL and distributed across nodes via MapReduce, and data processing is boosted by the features of the single-node DBMS engines, as workload is pushed into the DBMS engines as much as possible.
SQL/MapReduce is another hybrid framework, one that enables UDFs in SQL queries to be executed across multiple nodes in MapReduce style [33]. UDFs extend a DBMS with custom functionality. SQL/MapReduce presents an approach to implementing UDFs that can be executed across multiple nodes in parallel by virtue of MapReduce. Greenplum also provides the ability to write MR functions in its parallel DBMS. Teradata, in turn, makes an effort to combine Hadoop with its parallel DBMS [70]. The authors describe three efforts toward a tight and efficient integration of Hadoop and Teradata EDW: parallel loading of Hadoop data into EDW, retrieving EDW data from MR programs, and accessing Hadoop data from SQL via UDFs.

5. APPLICATIONS AND ADAPTATIONS

5.1 Applications

Mahout is an Apache project that aims at building scalable machine learning libraries that are executed in parallel by virtue of Hadoop [1]. RHIPE and the Ricardo project are tools that integrate the R statistical tool with Hadoop to support parallel data analysis [73, 58]. Cheetah is a data warehousing tool built on MapReduce, with a virtual view on top of star or snowflake schemas and with optimization techniques such as data compression and columnar storage [7]. Osprey is a shared-nothing database system that supports MapReduce-style fault tolerance [25]. Osprey does not directly use MapReduce or GFS. However, the fact table of the star schema is partitioned and replicated as in GFS, and tasks are scheduled by a central runtime scheduler as in MapReduce. A difference is that Osprey does not checkpoint intermediate outputs; instead, it uses a technique called chained declustering, which limits data unavailability when node failures happen.

The use of MapReduce for data-intensive scientific analysis and bioinformatics is well studied in [41, 80]. CloudBLAST parallelizes the NCBI BLAST2 algorithm using Hadoop [13]. The authors break their input sequences down into multiple blocks and run an instance of the vanilla version of NCBI BLAST2 on each block, using the Hadoop Streaming utility. CloudBurst is a parallel read-mapping tool that maps next-generation sequencing (NGS) read data to a reference genome for genotyping in parallel [78].

5.2 Adaptations to Intra-node Parallelism

Some studies use the MapReduce model to simplify complex multi-thread programming on many-core systems such as multi-core CPUs [24, 65], GPUs [20, 19], and Cell processors [8]. In these studies, mapped outputs are transferred to reducers via shared memory rather than disks. In addition, each task is executed by a single core rather than by a node. In this intra-node parallelism, fault-tolerance can be ignored, since all cores are located in a single system. A combination of intra-node and inter-node parallelism under the MapReduce model has also been suggested [54].
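As a toy illustration of intra-node parallelism (our sketch, not code from any of the cited systems), the two phases can be run by a pool of threads on one machine, with intermediate pairs exchanged through an in-memory structure instead of local disks:

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.stream.Collectors;

    // Toy shared-memory word count: one task per core, with intermediate
    // (word, count) pairs exchanged via memory rather than local disks.
    public class SharedMemoryMapReduce {
      public static Map<String, Long> wordCount(List<String> lines)
          throws InterruptedException, ExecutionException {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        ConcurrentHashMap<String, Long> result = new ConcurrentHashMap<>();
        List<Future<?>> tasks = lines.stream()
            .map(line -> pool.submit(() -> {     // a "map task" per input split
              for (String w : line.split("\\s+"))
                result.merge(w, 1L, Long::sum);  // "reduce" via atomic per-key merge
            }))
            .collect(Collectors.toList());
        for (Future<?> f : tasks) f.get();       // barrier between the stages
        pool.shutdown();
        return result;
      }
    }

The sketch collapses the reduce phase into atomic per-key merges on a concurrent map; systems such as Phoenix and Mars keep the phases separate, but the essential point is the same: no checkpointing or replication is needed when all workers share one failure domain.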
6. DISCUSSION AND CHALLENGES

MapReduce is becoming ubiquitous, even though its efficiency and performance are controversial. There is nothing new about the principles used in MapReduce [10, 51]. However, MapReduce shows that many problems can be solved with this model at a scale unprecedented before. Due to frequent checkpoints and runtime scheduling with speculative execution, MapReduce exhibits low efficiency. However, such methods would be necessary to achieve high scalability and fault tolerance in massive data processing. Thus, how to increase efficiency while guaranteeing the same level of scalability and fault tolerance is a major challenge. The efficiency problem is expected to be overcome in two ways: improving MapReduce itself, and leveraging new hardware. How to utilize the features of modern hardware has not been answered in many areas. However, modern computing devices such as chip-level multiprocessors and Solid State Disks (SSDs) can help reduce computations and I/Os in MapReduce significantly. The use of SSDs in Hadoop has been briefly examined, but not in detail. Self-tuning and job scheduling in multi-user environments are further issues that have not yet been well addressed. The size of MR clusters is continuously increasing: a 4,000-node cluster is not surprising any more. How to efficiently manage resources in clusters of that size in a multi-user environment is also challenging. Yahoo!'s M45 cluster reportedly shows only 5 to 10% resource utilization [60]. Energy efficiency in the clusters and achieving high utilization of MR clusters are also important problems to consider for achieving better TCO and return on investment in practice.

7. CONCLUSION

We discussed the pros and cons of MapReduce and classified its improvements. MapReduce is simple but provides good scalability and fault-tolerance for massive data processing. However, MapReduce is unlikely to substitute for DBMS, even for data warehousing. Instead, we expect MapReduce to complement DBMS with scalable and flexible parallel processing for various data analyses such as scientific data processing. Nonetheless, the efficiency, and especially the I/O costs, of MapReduce still need to be addressed before it can be applied successfully.

Acknowledgement

This work is supported by the National Research Fund (NRF) grant funded by the Korea government (MEST) (No. 2011-0016282).

8. REFERENCES
[1] Mahout: Scalable machine-learning and data-mining library. http://mahout.apache.org, 2010.
[2] F.N. Afrati and J.D. Ullman. Optimizing joins in a map-reduce environment. In Proceedings of the 13th EDBT, pages 99-110, 2010.
[3] A. Ailamaki, D.J. DeWitt, M.D. Hill, and M. Skounakis. Weaving relations for cache performance. The VLDB Journal, pages 169-180, 2001.
[4] A. Anand. Scaling Hadoop to 4000 nodes at Yahoo! http://goo.gl/8dRMq, 2008.
[5] S. Babu. Towards automatic optimization of MapReduce programs. In Proceedings of the 1st ACM Symposium on Cloud Computing, pages 137-142, 2010.
[6] Nokia Research Center. Disco: Massive data - minimal code. http://discoproject.org, 2010.
[7] S. Chen. Cheetah: a high performance, custom data warehouse on top of MapReduce. Proceedings of the VLDB, 3(1-2):1459-1468, 2010.
[8] M. de Kruijf and K. Sankaralingam. MapReduce for the Cell Broadband Engine architecture. IBM Journal of Research and Development, 53(5):10:1-10:12, 2009.
[9] J. Dean. Designs, lessons and advice from building large distributed systems. Keynote from LADIS, 2009.
[10] D. DeWitt and M. Stonebraker. MapReduce: A major step backwards. The Database Column, 1, 2008.
[11] A. Abouzeid et al. HadoopDB: An architectural hybrid of MapReduce and DBMS technologies for analytical workloads. Proceedings of the VLDB Endowment, 2(1):922-933, 2009.
[12] A. Floratou et al. Column-oriented storage techniques for MapReduce. Proceedings of the VLDB, 4(7), 2011.
[13] A. Matsunaga et al. CloudBLAST: Combining MapReduce and virtualization on distributed resources for bioinformatics applications. In Fourth IEEE International Conference on eScience, pages 222-229, 2008.
[14] A. Okcan et al. Processing theta-joins using MapReduce. In Proceedings of the 2011 ACM SIGMOD, 2011.
[15] A. Pavlo et al. A comparison of approaches to large-scale data analysis. In Proceedings of the ACM SIGMOD, pages 165-178, 2009.
[16] A. Thusoo et al. Hive: a warehousing solution over a map-reduce framework. Proceedings of the VLDB Endowment, 2(2):1626-1629, 2009.
[17] A. Thusoo et al. Hive - a petabyte scale data warehouse using Hadoop. In Proceedings of the 26th IEEE ICDE, pages 996-1005, 2010.
[18] A.F. Gates et al. Building a high-level dataflow system on top of Map-Reduce: the Pig experience. Proceedings of the VLDB Endowment, 2(2):1414-1425, 2009.
[19] B. Catanzaro et al. A map reduce framework for programming graphics processors. In Workshop on Software Tools for MultiCore Systems, 2008.
[20] B. He et al. Mars: a MapReduce framework on graphics processors. In Proceedings of the 17th PACT, pages 260-269, 2008.
[21] B. Li et al. A platform for scalable one-pass analytics using MapReduce. In Proceedings of the 2011 ACM SIGMOD, 2011.
[22] C. Olston et al. Pig Latin: a not-so-foreign language for data processing. In Proceedings of the ACM SIGMOD, pages 1099-1110, 2008.
[23] C. Ordonez et al. Relational versus non-relational database systems for data warehousing. In Proceedings of the ACM DOLAP, pages 67-68, 2010.
[24] C. Ranger et al. Evaluating MapReduce for multi-core and multiprocessor systems. In Proceedings of the 2007 IEEE HPCA, pages 13-24, 2007.
[25] C. Yang et al. Osprey: Implementing MapReduce-style fault tolerance in a shared-nothing distributed database. In Proceedings of the 26th IEEE ICDE, 2010.
[26] D. Battré et al. Nephele/PACTs: a programming model and execution framework for web-scale analytical processing. In Proceedings of the 1st ACM Symposium on Cloud Computing, pages 119-130, 2010.
[27] D. Jiang et al. Map-Join-Reduce: Towards scalable and efficient data analysis on large clusters. IEEE Transactions on Knowledge and Data Engineering, 2010.
[28] D. Jiang et al. The performance of MapReduce: An in-depth study. Proceedings of the VLDB Endowment, 3(1-2):472-483, 2010.
[29] D. Logothetis et al. Ad-hoc data processing in the cloud. Proceedings of the VLDB Endowment, 1(2):1472-1475, 2008.
[30] D. Warneke et al. Nephele: efficient parallel data processing in the cloud. In Proceedings of the 2nd MTAGS, pages 1-10, 2009.
[31] D.J. DeWitt et al. Clustera: an integrated computation and data management system. Proceedings of the VLDB Endowment, 1(1):28-41, 2008.
[32] E. Anderson et al. Efficiency matters! ACM SIGOPS Operating Systems Review, 44(1):40-45, 2010.
[33] E. Friedman et al. SQL/MapReduce: A practical approach to self-describing, polymorphic, and parallelizable user-defined functions. Proceedings of the VLDB Endowment, 2(2):1402-1413, 2009.
[34] E. Jahani et al. Automatic optimization for MapReduce programs. Proceedings of the VLDB, 4(6):385-396, 2011.
[35] F. Chang et al. Bigtable: A distributed storage system for structured data. ACM Transactions on Computer Systems (TOCS), 26(2):1-26, 2008.
[36] G. Malewicz et al. Pregel: a system for large-scale graph processing. In Proceedings of the 2010 ACM SIGMOD, pages 135-146, 2010.
[37] H. Yang et al. Map-Reduce-Merge: simplified relational data processing on large clusters. In Proceedings of the 2007 ACM SIGMOD, pages 1029-1040, 2007.
[38] J. Dean et al. MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1):107-113, 2008.
[39] J. Dean et al. MapReduce: a flexible data processing tool. Communications of the ACM, 53(1):72-77, 2010.
[40] J. Dittrich et al. Hadoop++: Making a yellow elephant run like a cheetah (without it even noticing). Proceedings of the VLDB Endowment, 3(1-2):515-529, 2010.
[41] J. Ekanayake et al. MapReduce for data intensive scientific analyses. In the 4th IEEE International Conference on eScience, pages 277-284, 2008.
[42] J. Ekanayake et al. Twister: A runtime for iterative MapReduce. In Proceedings of the 19th ACM HPDC, pages 810-818, 2010.
[43] J. Leverich et al. On the energy (in)efficiency of Hadoop clusters. ACM SIGOPS Operating Systems Review, 44(1):61-65, 2010.
[44] Jeffrey Dean et al. MapReduce: Simplified data processing on large clusters. In Proceedings of the 6th USENIX OSDI, pages 137-150, 2004.
[45] K. Morton et al. Estimating the progress of MapReduce pipelines. In Proceedings of the 26th IEEE ICDE, pages 681-684, 2010.
[46] K. Morton et al. ParaTimer: a progress indicator for MapReduce DAGs. In Proceedings of the 2010 ACM SIGMOD, pages 507-518, 2010.
[47] Kenton et al. Protocol Buffers: Google's data interchange format. http://code.google.com/p/protobuf/.
[48] M. Isard et al. Dryad: distributed data-parallel programs from sequential building blocks. In Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on Computer Systems, pages 59-72, 2007.
[49] M. Isard et al. Distributed data-parallel computing using a high-level programming language. In Proceedings of the ACM SIGMOD, pages 987-994, 2009.
[50] M. Stonebraker et al. One size fits all? Part 2: Benchmarking results. In Conference on Innovative Data Systems Research (CIDR), 2007.
[51] M. Stonebraker et al. MapReduce and parallel DBMSs: friends or foes? Communications of the ACM, 53(1):64-71, 2010.
[52] M. Zaharia et al. Improving MapReduce performance in heterogeneous environments. In Proceedings of the 8th USENIX OSDI, pages 29-42, 2008.
[53] R. Chaiken et al. SCOPE: easy and efficient parallel processing of massive data sets. Proceedings of the VLDB Endowment, 1(2):1265-1276, 2008.
[54] R. Farivar et al. MITHRA: Multiple data independent tasks on a heterogeneous resource architecture. In IEEE International Conference on Cluster Computing and Workshops, pages 1-10, 2009.
[55] R. Pike et al. Interpreting the data: Parallel analysis with Sawzall. Scientific Programming, 13(4):277-298, 2005.
[56] R. Vernica et al. Efficient parallel set-similarity joins using MapReduce. In Proceedings of the 2010 ACM SIGMOD, pages 495-506, 2010.
[57] S. Blanas et al. A comparison of join algorithms for log processing in MapReduce. In Proceedings of the 2010 ACM SIGMOD, pages 975-986, 2010.
[58] S. Das et al. Ricardo: integrating R and Hadoop. In Proceedings of the 2010 ACM SIGMOD, pages 987-998, 2010.
[59] S. Ghemawat et al. The Google file system. ACM SIGOPS Operating Systems Review, 37(5):29-43, 2003.
[60] S. Kavulya et al. An analysis of traces from a production MapReduce cluster. In 10th IEEE/ACM CCGrid, pages 94-103, 2010.
[61] S. Loebman et al. Analyzing massive astrophysical datasets: Can Pig/Hadoop or a relational DBMS help? In IEEE International Conference on Cluster Computing and Workshops, pages 1-10, 2009.
[62] S. Melnik et al. Dremel: interactive analysis of web-scale datasets. Proceedings of the VLDB Endowment, 3(1-2):330-339, 2010.
[63] T. Condie et al. MapReduce Online. In Proceedings of the 7th USENIX Conference on Networked Systems Design and Implementation (NSDI), pages 21-21, 2010.
[64] T. Nykiel et al. MRShare: Sharing across multiple queries in MapReduce. Proceedings of the VLDB, 3(1-2):494-505, 2010.
[65] W. Jiang et al. A Map-Reduce system with an alternate API for multi-core environments. In Proceedings of the 10th IEEE/ACM CCGrid, pages 84-93, 2010.
[66] Y. Bu et al. HaLoop: Efficient iterative data processing on large clusters. Proceedings of the VLDB Endowment, 3(1-2):285-296, 2010.
[67] Y. Chen et al. To compress or not to compress - compute vs. I/O tradeoffs for MapReduce energy efficiency. In Proceedings of the First ACM SIGCOMM Workshop on Green Networking, pages 23-28, 2010.
[68] Y. He et al. RCFile: A fast and space-efficient data placement structure in MapReduce-based warehouse systems. In Proceedings of the 2011 IEEE ICDE, 2011.
[69] Y. Lin et al. Llama: Leveraging columnar storage for scalable join processing in the MapReduce framework. In Proceedings of the 2011 ACM SIGMOD, 2011.
[70] Y. Xu et al. Integrating Hadoop and parallel DBMS. In Proceedings of the ACM SIGMOD, pages 969-974, 2010.
[71] Y. Yu et al. DryadLINQ: A system for general-purpose distributed data-parallel computing using a high-level language. In Proceedings of the 8th USENIX OSDI, pages 1-14, 2008.
[72] D. Florescu and D. Kossmann. Rethinking cost and performance of database systems. ACM SIGMOD Record, 38(1):43-48, 2009.
[73] Saptarshi Guha. RHIPE: R and Hadoop Integrated Processing Environment. http://www.stat.purdue.edu/~sguha/rhipe/, 2010.
[74] W. Lang and J.M. Patel. Energy management for MapReduce clusters. Proceedings of the VLDB, 3(1-2):129-139, 2010.
[75] J. Lin and C. Dyer. Data-intensive text processing with MapReduce. Synthesis Lectures on Human Language Technologies, 3(1):1-177, 2010.
[76] O. O'Malley and A.C. Murthy. Winning a 60 second dash with a yellow elephant. Proceedings of sort benchmark, 2009.
[77] D.A. Patterson. Technical perspective: the data center is the computer. Communications of the ACM, 51(1):105-105, 2008.
[78] M.C. Schatz. CloudBurst: highly sensitive read mapping with MapReduce. Bioinformatics, 25(11):1363, 2009.
[79] M. Stonebraker and U. Cetintemel. One size fits all: An idea whose time has come and gone. In Proceedings of the 21st IEEE ICDE, pages 2-11, 2005.
[80] R. Taylor. An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics. BMC Bioinformatics, 11(Suppl 12):S1, 2010.
[81] T. White. Hadoop: The Definitive Guide. Yahoo Press, 2010.