Beyond Map/Reduce: Getting Creative With Parallel Processing

While Map/Reduce is an excellent environment for some parallel computing tasks, there are many ways to use a cluster beyond Map/Reduce. Within the last year, YARN and the NextGen Map/Reduce have been contributed to the Hadoop trunk, Mesos has been released as an open source project, and a variety of new parallel programming environments have emerged, such as Spark, Giraph, GoldenOrb, Accumulo, and others.

We will discuss the features of YARN and Mesos, and talk about obvious yet relatively unexplored uses of these cluster schedulers as simple work queues. Examples will be provided in the context of machine learning. Next, we will provide an overview of the Bulk-Synchronous-Parallel model of computation, and compare and contrast the implementations that have emerged over the last year. We will also discuss two other alternative environments: Spark, an in-memory version of Map/Reduce which features a Scala-based interpreter; and Accumulo, a BigTable-style database that implements a novel model for parallel computation and was recently released by the NSA.

  1. Beyond Map/Reduce: Getting Creative with Parallel Processing
     Ed Kohlwey | @ekohlwey | kohlwey_edmund@bah.com
  2. Overview
     •  Within the last year:
        –  Two cluster schedulers have been released
        –  Two BSP frameworks have been released
        –  An in-memory Map/Reduce has been released
        –  Accumulo has been released
     •  More importantly:
        –  We have been given the tools to program in something besides Map/Reduce and MPI
  3. What About…
     •  This talk covers a few specific frameworks
     •  There’s lots more out there
  4. Motivations for Schedulers
     The cornerstone of new cluster computing environments
  5. Different Tasks Have Different Needs
     [Diagram: Tasks A, B, and C each need a different mix of CPU and RAM, spread across different hosts]
  6. Clusters Often Don’t Accommodate This
     [Charts: percentage of cluster load and expense of hosts required to execute Tasks A, B, and C when the cluster offers only one host type (Type 1)]
  7. This is How It Should Look
     [Charts: percentage of cluster load and expense of hosts required to execute Tasks A, B, and C when the cluster mixes host types (Type 1 and Type 2)]
  8. Economic Reasons
     [Chart: power consumption as a function of load]
  9. Simple Example: a Work Queue
     •  Data scientists execute serial implementations of machine learning algorithms
     •  Some are expensive, some are not
     •  Scientists aren’t running analyses all the time
     •  Solution 1:
        –  Give all the analysts a big workstation
     •  Solution 2:
        –  Give all the analysts thin clients and let them share a cluster
  10. Advantages of Moving to a Thin Client/Cluster Model
      •  Scalability
         –  All analyst capabilities can be enhanced by adding one host
      •  Increases resource utilization
         –  Workstations are expensive, and will be highly under-utilized
      •  Increases availability
         –  Using a distributed file system to store data
  11. Desirable Scheduler Features

      Feature                                                     YARN   Mesos
      Operate on heterogeneous clusters                           Y      Y
      Highly available                                            Y      Y
      Pluggable scheduling policies                               Y      Y
      Authentication                                              Y      N
      Task artifact distribution                                  Y      P
      Scheduling policy based on multiple resources (RAM, CPU)    N      Y
      Multiple queues                                             Y      N
      Fast accept/reject model                                    N      P
      Reusable method of describing resource requirements         Y      N
      Pluggable isolation                                         N      Y
      “Compute units”                                             N      N
  12. New Compute Environments
      BSP, In-Memory Map/Reduce, and Streaming Processing
  13. (Hadoop) Map/Reduce Pros & Cons
      •  Map/Reduce implements partitioned, parallel sorting
         –  Many (relational) algorithms express well
         –  Creates O(n lg(n)) runtime constraints for some problems that wouldn’t otherwise have them
      •  Hadoop M/R is good for bulk jobs
  14. In-Memory Map/Reduce
      •  Memory is fast
      •  Often, after the map phase, a whole data set can fit in the memory of the cluster
      •  Spark provides this, as well as a very succinct programming environment courtesy of Scala and its closures
  15. In-Memory Performance
      [Chart: logistic regression performance comparison, time (s) vs. iterations (5, 10, 20, 30) for Hadoop and Spark; numbers taken from http://spark-project.org]
  16. Spark Wordcount
      val file = spark.textFile("hdfs://...")
      file.flatMap(line => line.split(" "))
          .map(word => (word, 1))
          .reduceByKey(_ + _)
  17. Hadoop Wordcount
      public class WordCount {

        public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

          private final static IntWritable one = new IntWritable(1);
          private Text word = new Text();

          public void map(Object key, Text value, Context context)
              throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
              word.set(itr.nextToken());
              context.write(word, one);
            }
          }
        }

        public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

          private IntWritable result = new IntWritable();

          public void reduce(Text key, Iterable<IntWritable> values, Context context)
              throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
              sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
          }
        }

        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
          if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
          }
          Job job = new Job(conf, "word count");
          job.setJarByClass(WordCount.class);
          job.setMapperClass(TokenizerMapper.class);
          job.setCombinerClass(IntSumReducer.class);
          job.setReducerClass(IntSumReducer.class);
          job.setOutputKeyClass(Text.class);
          job.setOutputValueClass(IntWritable.class);
          FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
          FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
          System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
      }
  18. Streaming Processing: Accumulo
      •  Accumulo is a BigTable implementation
      •  Idea: accumulate values in a column
         –  “map” using the ETL process
      •  Summarize values (stored in sorted order) at read time
         –  “reduce” process
      •  No control over partitioning outside a row
         –  Accumulo doesn’t suffer from the column family problem that HBase has, so this is ok
      •  Less consistent than Map/Reduce because race conditions can occur with respect to the scan cursor
      •  Iterator programming environment allows you to compose “reduce” operations (see the sketch below)
      •  Implementing streaming Map/Reduce over a BigTable implementation is a hybrid of in-memory and disk-based approaches
      •  Allows revision of figures due to data provenance issues
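      The “accumulate during ingest, reduce at read time” pattern above is usually realized with Accumulo’s combining iterators. The sketch below is a minimal illustration, assuming roughly the Accumulo 1.4-era client API; the instance, ZooKeeper address, credentials, the "wordcounts" table, and the "counts" column family are all made up. It attaches a SummingCombiner so that the individual "1" values written during ETL are summed whenever the column is scanned or compacted.

      // Minimal sketch (Accumulo 1.4-era API assumed; names are hypothetical).
      import java.util.Collections;

      import org.apache.accumulo.core.client.BatchWriter;
      import org.apache.accumulo.core.client.Connector;
      import org.apache.accumulo.core.client.IteratorSetting;
      import org.apache.accumulo.core.client.ZooKeeperInstance;
      import org.apache.accumulo.core.data.Mutation;
      import org.apache.accumulo.core.data.Value;
      import org.apache.accumulo.core.iterators.Combiner;
      import org.apache.accumulo.core.iterators.LongCombiner;
      import org.apache.accumulo.core.iterators.user.SummingCombiner;
      import org.apache.hadoop.io.Text;

      public class AccumuloStreamingCount {
        public static void main(String[] args) throws Exception {
          Connector conn = new ZooKeeperInstance("instance", "zk1:2181")
              .getConnector("user", "secret".getBytes());

          // The read-time "reduce": sum every value written under the counts family.
          IteratorSetting setting = new IteratorSetting(10, "sum", SummingCombiner.class);
          Combiner.setColumns(setting,
              Collections.singletonList(new IteratorSetting.Column("counts")));
          LongCombiner.setEncodingType(setting, LongCombiner.Type.STRING);
          conn.tableOperations().attachIterator("wordcounts", setting);

          // The ETL-time "map": each ingested record simply writes a 1 for its word.
          BatchWriter writer = conn.createBatchWriter("wordcounts", 1000000L, 10000L, 2);
          Mutation m = new Mutation(new Text("hello"));
          m.put(new Text("counts"), new Text(""), new Value("1".getBytes()));
          writer.addMutation(m);
          writer.close();
          // A scan of row "hello" now returns the running total rather than raw 1s.
        }
      }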
  19. BSP
      Generalizing Map/Reduce for graph processing
  20. BSP
      •  First proposed by Valiant in 1990
      •  Good at expressing iterative computation
      •  Good at expressing graph algorithms
      •  Concerned with passing messages between virtual processors
      •  Perhaps the most famous implementation is Pregel
  21. MR Graph Traversal
      [Diagram: vertices A, B, and C enter the map phase, each carrying its state n]
  22. MR Graph Traversal
      [Diagram: during the map phase, vertex A decides “I want to send a message to C!”]
  23. MR Graph Traversal
      [Diagram: the mappers re-emit each vertex’s state under its own key; A additionally emits a message m under C’s key]
  24. MR Graph Traversal
      [Diagram: the emitted records head into the sort and shuffle]
  25. MR Graph Traversal
      [Diagram: the shuffle groups records by vertex, so C’s group now holds both its state n and the message m]
  26. MR Graph Traversal
      [Diagram: the reducers write out each vertex’s new state; C reports “I got it!”]
  27. MR Graph Traversal
      [Diagram: the sort and shuffle make the round trip O((n+m) lg(n+m))]
  28. MR Graph Traversal
      [Diagram: this can be optimized to O(m)]
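      One round of the message passing the diagrams above walk through can be expressed as an ordinary Hadoop job. The sketch below is illustrative, not the presenter’s code: it assumes input lines of the form "vertexId<TAB>neighbor1 neighbor2 ...", has every vertex send its own id to its neighbors as the message m, and lets the reducer collect the vertex record together with its incoming messages after the sort and shuffle.

      // Illustrative sketch of one Map/Reduce "superstep" of graph message passing
      // (assumed input format: "vertexId<TAB>neighbor1 neighbor2 ...").
      import java.io.IOException;
      import java.util.ArrayList;
      import java.util.List;

      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapreduce.Mapper;
      import org.apache.hadoop.mapreduce.Reducer;

      public class MessagePassingStep {

        public static class VertexMapper extends Mapper<Object, Text, Text, Text> {
          public void map(Object key, Text value, Context context)
              throws IOException, InterruptedException {
            String[] parts = value.toString().split("\t", 2);
            String vertexId = parts[0];
            String adjacency = parts.length > 1 ? parts[1] : "";
            // Re-emit the vertex itself under its own key so it survives the round.
            context.write(new Text(vertexId), new Text("vertex:" + adjacency));
            // Emit a message to each neighbor under the *destination* key; the
            // sort/shuffle is what delivers it (the O((n+m) lg(n+m)) step).
            for (String neighbor : adjacency.split(" ")) {
              if (!neighbor.isEmpty()) {
                context.write(new Text(neighbor), new Text("msg:" + vertexId));
              }
            }
          }
        }

        public static class VertexReducer extends Reducer<Text, Text, Text, Text> {
          public void reduce(Text vertexId, Iterable<Text> values, Context context)
              throws IOException, InterruptedException {
            String adjacency = "";
            List<String> messages = new ArrayList<String>();
            for (Text v : values) {
              String s = v.toString();
              if (s.startsWith("vertex:")) {
                adjacency = s.substring("vertex:".length());
              } else {
                messages.add(s.substring("msg:".length()));
              }
            }
            // "I got it!": fold the received messages into the vertex's new state.
            context.write(vertexId, new Text(adjacency + " | received: " + messages));
          }
        }
      }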
  29. The BSP Version
      [Diagram: each superstep is Compute, then Exchange Messages, then Synchronize; A sends its message m directly to C]
  30. The BSP Version
      [Diagram: notice A and C’s message exchange isn’t closely coupled, providing better I/O utilization]
  31. The BSP Version
      [Diagram: also, notice we don’t necessarily have to copy the entire graph state; we just send whatever messages need to be sent]
  32. BSP Implementations
      •  Giraph
         –  Currently an Apache Incubator project
         –  Has a growing community
         –  Runs during the Hadoop Map phase
      •  GoldenOrb
         –  Not actively maintained since the summer
      •  Both implementations are in-memory, modeled after Pregel (see the vertex-centric sketch below)
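      To make the compute / exchange / synchronize loop concrete, here is a small, self-contained simulation of the Pregel-style vertex model. It is not Giraph’s or GoldenOrb’s actual API, just the idea: every vertex keeps the smallest vertex id it has heard of and forwards it to its neighbors each superstep; when no vertex changes, no messages are sent and the computation halts with each vertex labeled by its connected component.

      import java.util.ArrayList;
      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;

      // Toy, single-process simulation of the BSP / Pregel vertex model (illustrative
      // only; real frameworks partition vertices across workers and exchange messages
      // over the network between supersteps).
      public class BspConnectedComponents {

        static class Vertex {
          final int id;
          final List<Integer> neighbors = new ArrayList<Integer>();
          int label;          // smallest vertex id seen so far
          Vertex(int id) { this.id = id; this.label = id; }
        }

        public static void main(String[] args) {
          // A tiny graph: 1-2 and 2-3 form one component; 4-5 another.
          Map<Integer, Vertex> graph = new HashMap<Integer, Vertex>();
          int[][] edges = {{1, 2}, {2, 3}, {4, 5}};
          for (int[] e : edges) {
            graph.computeIfAbsent(e[0], Vertex::new).neighbors.add(e[1]);
            graph.computeIfAbsent(e[1], Vertex::new).neighbors.add(e[0]);
          }

          // Superstep 0: every vertex announces its own label to its neighbors.
          Map<Integer, List<Integer>> inbox = new HashMap<Integer, List<Integer>>();
          for (Vertex v : graph.values()) {
            for (int n : v.neighbors) {
              inbox.computeIfAbsent(n, k -> new ArrayList<Integer>()).add(v.label);
            }
          }

          // Subsequent supersteps: compute, exchange messages, synchronize (barrier).
          int superstep = 1;
          while (!inbox.isEmpty()) {
            Map<Integer, List<Integer>> outbox = new HashMap<Integer, List<Integer>>();
            for (Map.Entry<Integer, List<Integer>> entry : inbox.entrySet()) {
              Vertex v = graph.get(entry.getKey());
              int smallest = v.label;
              for (int msg : entry.getValue()) {
                smallest = Math.min(smallest, msg);
              }
              if (smallest < v.label) {
                v.label = smallest;
                // Only changed vertices send messages; quiescence ends the run.
                for (int n : v.neighbors) {
                  outbox.computeIfAbsent(n, k -> new ArrayList<Integer>()).add(v.label);
                }
              }
            }
            inbox = outbox;   // the barrier: the next superstep sees only these messages
            superstep++;
          }

          System.out.println("converged after " + superstep + " supersteps");
          for (Vertex v : graph.values()) {
            System.out.println("vertex " + v.id + " -> component " + v.label);
          }
        }
      }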
  33. Contact Info
      Ed Kohlwey
      Booz | Allen | Hamilton
      @ekohlwey
      kohlwey_edmund@bah.com