HUG August 2010: Mesos



Mesos: A Flexible Cluster Resource Manager

Mesos is a platform for sharing clusters between multiple parallel applications, such as Hadoop, MPI, and web services. It allows organizations to consolidate workloads from multiple applications onto a single cluster, which increases utilization and simplifies management, while ensuring that each application gets suitable resources to run on (e.g. that Hadoop gets data locality). Hadoop users can leverage Mesos to run multiple isolated instances of Hadoop (e.g. production and experimental instances) on the same cluster, to run multiple versions of Hadoop concurrently (e.g. to test patches), and to run MPI and other applications alongside Hadoop. In addition, due to its simple design, Mesos is highly scalable (to tens of thousands of nodes) and fault tolerant (using ZooKeeper). In this talk, we'll provide an introduction to Mesos for Hadoop users. We have also been working to test Mesos at Twitter and Facebook, and we'll present experience and results from those deployments.
Spark: A Framework for Iterative and Interactive Cluster Computing

Although the MapReduce programming model has been highly successful, it is not suitable for all applications. We present Spark, a framework optimized for one such class of applications: iterative jobs, where a dataset is shared across multiple parallel operations, as is common in machine learning algorithms. Spark provides a functional programming model similar to MapReduce, but also lets users cache data between iterations, leading to up to 10x better performance than Hadoop. Spark also makes jobs easy to write by integrating into the Scala programming language. Finally, Spark's ability to load a dataset into memory and query it repeatedly makes it especially suitable for interactive analysis of big datasets. We have modified the Scala interpreter to make it possible to use Spark interactively, providing a much more responsive experience than Hive and Pig. Spark is built on top of Mesos and can therefore run alongside Hadoop.
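As a taste of the caching behavior described above, a short Spark session might look like the following sketch (the Spark context `spark`, the HDFS path, and the ERROR/timeout filters are illustrative; running it requires a Spark deployment):

```scala
// Hypothetical Spark session illustrating cross-operation caching.
// The HDFS path and the filter predicates are made up for illustration.
val lines = spark.textFile("hdfs://namenode/logs/app.log")
val errors = lines.filter(_.contains("ERROR")).cache()

// The first action loads and caches the filtered data in memory;
// subsequent queries reuse the cache instead of rereading from HDFS.
val total = errors.count()
val timeouts = errors.filter(_.contains("timeout")).count()
```

Because `errors` stays cached in memory, the second query avoids rescanning the log, which is what makes repeated interactive queries over the same dataset fast.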

Speaker notes:
  • Mention it’s a research project but has recently gone open source
  • Mention Hama, Pregel, Dremel, etc.; also mention Spark
  • Mention how we want to enable NEW frameworks too!
  • Note here that even users of Hadoop want Mesos for the isolation it provides
  • Explain that Hadoop port uses pluggable scheduler, hence few changes to Hadoop and no changes to job clients
  • Point out that Scala is a modern PL, etc. Mention DryadLINQ (but we go beyond it with RDDs). Point out that interactive use and iterative use go hand in hand, because both require small tasks and dataset reuse
  • Note that dataset is reused on each gradient computation
  • This is for a 29 GB dataset on 20 EC2 m1.xlarge machines (4 cores each)
  • This slide has to say we’re committed to making Nexus “real”, and that it will be open sourced

    1. Mesos
       A Resource Management Platform for Hadoop and Big Data Clusters
       Benjamin Hindman, Andy Konwinski, Matei Zaharia,
       Ali Ghodsi, Anthony D. Joseph, Randy Katz,
       Scott Shenker, Ion Stoica
       UC Berkeley
    2. Challenges in Hadoop Cluster Management
       Isolation (both fault and resource)
         E.g. if co-locating production and experimental jobs
       Testing and deploying upgrades
       JobTracker scalability (in larger clusters)
       Running non-Hadoop jobs
         E.g. MPI, streaming MapReduce, etc.
    3. Mesos
       Mesos is a cluster resource manager over which multiple instances of Hadoop and other distributed applications can run.
       [Diagram: Hadoop (production), Hadoop (ad-hoc), MPI, ... all running on Mesos across a set of nodes]
    4. What's Different About It?
       Other resource managers exist today, including:
         Hadoop on Demand
         Batch schedulers (e.g. Torque)
         VM schedulers (e.g. Eucalyptus)
       However, these have two problems with Hadoop-like workloads:
         Data locality is compromised due to static partitioning of nodes
         Utilization is hurt because jobs hold nodes for their full duration
       Mesos addresses these through two features:
         Fine-grained sharing at the level of tasks
         A two-level scheduling model where jobs control placement
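The two-level model above can be sketched in a few lines of Scala. This is a conceptual sketch only (the real Mesos API is C++-based, and `Offer`, `Scheduler`, and `LocalityScheduler` are illustrative names): the master offers free resources to frameworks, and each framework's own scheduler decides which offers to accept, e.g. based on data locality.

```scala
// Conceptual sketch of two-level scheduling, not the actual Mesos API.
case class Offer(node: String, cpus: Int, memGB: Int)

trait Scheduler {
  // The master calls this with currently free resources; the framework
  // returns true to accept the offer and launch a task on that node.
  def resourceOffer(offer: Offer): Boolean
}

// A locality-aware framework accepts offers only on nodes holding its data,
// so it gets data locality without the master knowing anything about HDFS.
class LocalityScheduler(dataNodes: Set[String]) extends Scheduler {
  def resourceOffer(offer: Offer): Boolean =
    dataNodes.contains(offer.node) && offer.cpus >= 1 && offer.memGB >= 1
}

val scheduler = new LocalityScheduler(Set("node1", "node3"))
scheduler.resourceOffer(Offer("node1", 4, 8))  // accepted: data is local
scheduler.resourceOffer(Offer("node2", 4, 8))  // declined: no local data
```

Keeping placement decisions in the framework is what lets Mesos stay a simple, minimal core while still giving each application the scheduling behavior it needs.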
    5. Fine-Grained Sharing
       [Figure: two node grids side by side. Coarse-grained sharing (HOD, etc.): each node is statically assigned to a single Hadoop instance for a job's full duration. Fine-grained sharing (Mesos): tasks from Hadoop 1, 2, and 3 are interleaved across all nodes]
    6. Mesos Architecture
       [Diagram: Hadoop 0.19 jobs, Hadoop 0.20 jobs, and MPI jobs are submitted to their own framework schedulers (Hadoop 0.19 scheduler, Hadoop 0.20 scheduler, MPI scheduler), which talk to the Mesos master; Mesos slaves on each node run framework executors (Hadoop 19 executor, Hadoop 20 executor, MPI executor) that run the frameworks' tasks]
    7. Design Goals
       Scalability (to 10,000s of nodes)
       Robustness (even to master failure)
       Flexibility (support a wide variety of frameworks)
       Resulting design: a simple, minimal core that pushes resource selection logic to frameworks
    8. Other Features
       Master fault tolerance using ZooKeeper
       Resource isolation using Linux Containers
         Isolate CPU and memory between tasks on each node
         In newest kernels, can isolate network & disk IO too
       Web UI for viewing cluster state
       Deploy scripts for private clusters and EC2
    9. Mesos Status
       Prototype in ~10,000 lines of C++
       Ported frameworks: Hadoop (0.20.2), MPI (MPICH2), Torque
       New frameworks: Spark, a Scala framework for iterative & interactive jobs
       Test deployments at Twitter and Facebook
    10. Results
        Dynamic resource sharing
        Data locality
        Scalability
    11. Mesos Demo
    12. Spark
        A framework for iterative and interactive cluster computing
        Matei Zaharia, Mosharaf Chowdhury,
        Michael Franklin, Scott Shenker, Ion Stoica
    13. Spark Goals
        Support iterative applications
          Common in machine learning but problematic for MapReduce, Dryad, etc.
        Retain MapReduce's fault tolerance & scalability
        Experiment with programmability
          Integrate into the Scala programming language
          Support interactive use from the Scala interpreter
    14. Key Abstraction
        Resilient Distributed Datasets (RDDs)
          Collections of elements distributed across the cluster that can persist across parallel operations
          Can be stored in memory, on disk, etc.
          Can be transformed with map, filter, etc.
          Automatically rebuilt on failure
    15. Example: Log Mining
        Load error messages from a log into memory, then interactively search for various patterns:

          lines = spark.textFile("hdfs://...")
          errors = lines.filter(_.startsWith("ERROR"))
          messages =' ')(2))
          cachedMsgs = messages.cache()

          cachedMsgs.filter(_.contains("foo")).count
          cachedMsgs.filter(_.contains("bar")).count
          . . .

        [Diagram: the driver ships tasks to workers, each scanning one block of the base RDD (Block 1-3); the partitions of the transformed, cached RDD (Cache 1-3) live in memory on the workers, which return results to the driver]
    16. Example: Logistic Regression
        Goal: find the best line separating two sets of points
        [Figure: a plane of + and - points; a random initial line is iteratively moved toward the target separating line]
    17. Logistic Regression Code

          val data = spark.textFile(...).map(readPoint).cache()
          var w = Vector.random(D)
          for (i <- 1 to ITERATIONS) {
            val gradient = => {
              val scale = (1 / (1 + exp(-p.y * (w dot p.x))) - 1) * p.y
              scale * p.x
            }).reduce(_ + _)
            w -= gradient
          }
          println("Final w: " + w)
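To make the computation on the slide concrete, here is a local, plain-Scala version of the same gradient update, run over a tiny hand-made dataset with ordinary collections instead of an RDD. The `Point` class, the dataset, and the iteration count are all illustrative; only the gradient formula is taken from the slide.

```scala
// Local sketch of the logistic-regression update above (no Spark needed).
import scala.math.exp

case class Point(x: Array[Double], y: Double) // y is +1 or -1

// Tiny illustrative dataset: two roughly separable groups of points.
val data = Seq(
  Point(Array(1.0, 2.0), 1.0),
  Point(Array(2.0, 1.0), 1.0),
  Point(Array(-1.0, -1.5), -1.0),
  Point(Array(-2.0, -0.5), -1.0))

val D = 2
var w = Array.fill(D)(0.1) // stand-in for Vector.random(D)

def dot(a: Array[Double], b: Array[Double]): Double =, b).map { case (u, v) => u * v }.sum

for (_ <- 1 to 10) {
  // Same per-point gradient as the Spark version, summed over the
  // local collection instead of a distributed reduce.
  val gradient = { p =>
    val scale = (1 / (1 + exp(-p.y * dot(w, p.x))) - 1) * p.y * u)
  }.reduce((a, b) =>, b).map { case (u, v) => u + v })
  w =, gradient).map { case (wi, gi) => wi - gi }
}
println("Final w: " + w.mkString(", "))
```

In the Spark version, `data` is an RDD cached in memory, so each iteration's `map`/`reduce` reuses the parsed points instead of rereading and reparsing the input file; that reuse is exactly where the speedup over Hadoop on the next slide comes from.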
    18. Logistic Regression Performance
        Hadoop: 127 s / iteration
        Spark: first iteration 174 s, further iterations 6 s
    19. Spark Demo
    20. Conclusion
        Mesos provides a stable platform to multiplex resources among diverse cluster applications
        Spark is a new cluster programming framework for iterative & interactive jobs, enabled by Mesos
        Both are open source (but in very early alpha!)