1. UC Berkeley Cloud Computing: Past, Present, and Future Professor Anthony D. Joseph*, UC Berkeley Reliable Adaptive Distributed Systems Lab RWTH Aachen, 22 March 2010 http://abovetheclouds.cs.berkeley.edu/ *Director, Intel Research Berkeley
2. RAD Lab 5-year Mission Enable 1 person to develop, deploy, operate next-generation Internet application Key enabling technology: Statistical machine learning debugging, monitoring, power management, auto-configuration, performance prediction, ... Highly interdisciplinary faculty & students PIs: Patterson/Fox/Katz (systems/networks), Jordan (machine learning), Stoica (networks & P2P), Joseph (security), Shenker (networks), Franklin (DB) 2 postdocs, ~30 PhD students, ~6 undergrads Grad/Undergrad teaching integrated with research
4. Nexus: A common substrate for cluster computing Joint work with Benjamin Hindman, Andy Konwinski, Matei Zaharia, Ali Ghodsi, Scott Shenker, and Ion Stoica
5. Recall: Hadoop on HDFS [Architecture diagram: a namenode running the namenode daemon, a job submission node running the jobtracker, and slave nodes each running a tasktracker and a datanode daemon over the local Linux file system] Adapted from slides by Jimmy Lin, Christophe Bisciglia, Aaron Kimball, & Sierra Michels-Slettvet, Google Distributed Computing Seminar, 2007 (licensed under the Creative Commons Attribution 3.0 License)
6. Problem Rapid innovation in cluster computing frameworks No single framework optimal for all applications Energy efficiency means maximizing cluster utilization Want to run multiple frameworks in a single cluster
7. What do we want to run in the cluster? Pregel Apache Hama Dryad Pig
8. Why share the cluster between frameworks? Better utilization and efficiency (e.g., take advantage of diurnal patterns) Better data sharing across frameworks and applications
9. Solution Nexus is an “operating system” for the cluster over which diverse frameworks can run Nexus multiplexes resources between frameworks Frameworks control job execution
10. Goals Scalable Robust (i.e., simple enough to harden) Flexible enough for a variety of different cluster frameworks Extensible enough to encourage innovative future frameworks
11. Question 1: Granularity of Sharing Option: Coarse-grained sharing Give a framework a (slice of a) machine for its entire duration Data locality is compromised if a machine is held for a long time Hard to account for new frameworks and changing demands -> hurts utilization and interactivity [Figure: Hadoop 1, Hadoop 2, and Hadoop 3 each statically assigned their own nodes]
12. Question 1: Granularity of Sharing Nexus: Fine-grained sharing Support frameworks that use smaller tasks (in time and space) by multiplexing them across all available resources Frameworks can take turns accessing data on each node Nexus can resize framework shares to get both utilization and interactivity [Figure: tasks from Hadoop 1, Hadoop 2, and Hadoop 3 interleaved across every node in the cluster]
13. Question 2: Resource Allocation Option: Global scheduler Frameworks express their needs in a specification language; a global scheduler matches resources to frameworks Requires encoding a framework’s semantics in the language, which is complex and can lead to ambiguities Restricts frameworks whose needs the specification language did not anticipate Designing a general-purpose global scheduler is hard
21. Resource Offer Details Min and max task sizes to control fragmentation Filters let framework restrict offers sent to it By machine list By quantity of resources Timeouts can be added to filters Frameworks can signal when to destroy filters, or when they want more offers
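To make the offer/filter mechanism concrete, here is a minimal Python sketch; the class and field names (OfferFilter, allowed_machines, ttl_secs, etc.) are illustrative assumptions, not the actual Nexus API.

```python
# Illustrative sketch (hypothetical names): how a framework might restrict
# the resource offers Nexus sends it. This is not the real Nexus API.
import time

class OfferFilter:
    """Reject offers that don't match this framework's needs."""
    def __init__(self, allowed_machines=None, min_cpus=0, min_mem_gb=0, ttl_secs=None):
        self.allowed_machines = set(allowed_machines or [])
        self.min_cpus = min_cpus
        self.min_mem_gb = min_mem_gb
        # Optional timeout: the filter expires and normal offers resume.
        self.expires_at = time.time() + ttl_secs if ttl_secs else None

    def active(self):
        return self.expires_at is None or time.time() < self.expires_at

    def accepts(self, offer):
        """offer: dict with 'machine', 'cpus', 'mem_gb'."""
        if self.allowed_machines and offer["machine"] not in self.allowed_machines:
            return False  # filter by machine list
        return offer["cpus"] >= self.min_cpus and offer["mem_gb"] >= self.min_mem_gb

# Example: only consider offers from nodes holding our input data,
# with at least 2 CPUs and 4 GB, for the next 30 seconds.
f = OfferFilter(allowed_machines=["node7", "node12"], min_cpus=2, min_mem_gb=4, ttl_secs=30)
print(f.accepts({"machine": "node7", "cpus": 4, "mem_gb": 8}))   # True
print(f.accepts({"machine": "node3", "cpus": 4, "mem_gb": 8}))   # False
```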
22. Using Offers for Data Locality We found that a simple policy called delay scheduling can give very high locality: Framework waits for offers on nodes that have its data If waited longer than a certain delay, starts launching non-local tasks
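A minimal sketch of the delay-scheduling rule just described, with hypothetical names; in practice this logic lives inside the framework's own scheduler.

```python
# Minimal sketch of delay scheduling (hypothetical names): wait a bounded
# time for an offer on a node holding the job's data before accepting a
# non-local offer.
import time

class DelayScheduler:
    def __init__(self, preferred_nodes, max_delay_secs=5.0):
        self.preferred = set(preferred_nodes)   # nodes that hold the job's input data
        self.max_delay = max_delay_secs
        self.waiting_since = None

    def should_launch(self, offered_node):
        now = time.time()
        if offered_node in self.preferred:
            self.waiting_since = None
            return True                          # data-local offer: launch immediately
        if self.waiting_since is None:
            self.waiting_since = now             # start waiting on first non-local offer
        # After waiting longer than the delay, give up on locality.
        return (now - self.waiting_since) >= self.max_delay

# Per the talk's measurements, waiting ~1s already gave ~90% locality, ~5s gave ~95%.
sched = DelayScheduler(preferred_nodes={"node7", "node12"}, max_delay_secs=5.0)
print(sched.should_launch("node3"))   # False at first: hold out for a local node
print(sched.should_launch("node7"))   # True: data-local offer
```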
23. Framework Isolation Isolation mechanism is pluggable due to the inherent performance/isolation tradeoff Current implementation supports Solaris projects and Linux containers Both isolate CPU, memory and network bandwidth Linux developers working on disk IO isolation Other options: VMs, Solaris zones, policing
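As a rough illustration of the kernel mechanism that Linux containers build on, the sketch below caps a task's memory via the Linux cgroup-v1 filesystem. This is not the Nexus isolation module; it assumes cgroups mounted at /sys/fs/cgroup and root privileges.

```python
# Rough illustration only (not the Nexus isolation module): cap a task's
# memory using the Linux cgroup-v1 filesystem. Assumes the memory controller
# is mounted at /sys/fs/cgroup/memory and the script runs as root.
import os

def run_with_memory_limit(cmd_argv, mem_limit_bytes, group="nexus_task_demo"):
    cg = f"/sys/fs/cgroup/memory/{group}"
    os.makedirs(cg, exist_ok=True)
    with open(os.path.join(cg, "memory.limit_in_bytes"), "w") as f:
        f.write(str(mem_limit_bytes))            # set the memory cap
    pid = os.fork()
    if pid == 0:                                 # child: join the cgroup, then exec the task
        with open(os.path.join(cg, "tasks"), "w") as f:
            f.write(str(os.getpid()))
        os.execvp(cmd_argv[0], cmd_argv)
    os.waitpid(pid, 0)                           # parent: wait for the task to finish

# Example (requires root): run a task with a 512 MB memory limit.
# run_with_memory_limit(["/bin/sleep", "10"], 512 * 1024 * 1024)
```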
25. Allocation Policies Nexus picks framework to offer resources to, and hence controls how many resources each framework can get (but not which) Allocation policies are pluggable to suit organization needs, through allocation modules
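A sketch of what a pluggable allocation module could look like; the interface and the simple fair-share policy below are assumptions for illustration, not the actual Nexus module API.

```python
# Sketch of a pluggable allocation module (hypothetical interface): Nexus
# decides WHICH framework to offer free resources to; the framework itself
# decides which offers to use.
class AllocationModule:
    def pick_framework(self, frameworks, free_resources):
        raise NotImplementedError

class FairShareAllocator(AllocationModule):
    """Offer to the framework currently holding the smallest share."""
    def pick_framework(self, frameworks, free_resources):
        return min(frameworks, key=lambda fw: fw["allocated_cpus"])

frameworks = [
    {"name": "hadoop-prod", "allocated_cpus": 120},
    {"name": "spark-dev",   "allocated_cpus": 15},
]
allocator = FairShareAllocator()
print(allocator.pick_framework(frameworks, {"cpus": 8, "mem_gb": 32})["name"])  # spark-dev
```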
27. Revocation Killing tasks to make room for other users Not the normal case, because fine-grained tasks enable quick reallocation of resources Sometimes necessary: Long-running tasks that never relinquish resources Buggy job running forever Greedy user who decides to make their tasks long-running
28. Revocation Mechanism Allocation policy defines a safe share for each user Users will get at least safe share within specified time Revoke only if a user is below its safe share and is interested in offers Revoke tasks from users farthest above their safe share Framework warned before its task is killed
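The sketch below illustrates the victim-selection rule just described (revoke from the users farthest above their safe share, after a warning); all names are hypothetical.

```python
# Sketch of the revocation rule above (hypothetical names): revoke only on
# behalf of a user below their safe share, taking tasks from the users
# farthest above their safe shares.
def pick_revocation_victims(users, needed_cpus):
    """users: list of dicts with 'name', 'allocated_cpus', 'safe_share_cpus'."""
    victims = []
    # Sort users by how far they are above their safe share, largest excess first.
    over = sorted(users, key=lambda u: u["allocated_cpus"] - u["safe_share_cpus"], reverse=True)
    reclaimed = 0
    for u in over:
        excess = u["allocated_cpus"] - u["safe_share_cpus"]
        if excess <= 0 or reclaimed >= needed_cpus:
            break
        take = min(excess, needed_cpus - reclaimed)
        victims.append((u["name"], take))        # the framework is warned before tasks are killed
        reclaimed += take
    return victims

users = [
    {"name": "alice", "allocated_cpus": 60, "safe_share_cpus": 30},
    {"name": "bob",   "allocated_cpus": 25, "safe_share_cpus": 30},
]
print(pick_revocation_victims(users, needed_cpus=10))   # [('alice', 10)]
```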
29. How Do We Run MPI? Users are always told their safe share Avoid revocation by staying below it Giving each user a small safe share may not be enough if jobs need many machines Can run a traditional grid or HPC scheduler as a user with a larger safe share of the cluster, and have MPI jobs queue up on it E.g., Torque gets 40% of the cluster
32. What is Fair? Goal: define a fair allocation of resources in the cluster between multiple users Example: suppose we have: 30 CPUs and 30 GB RAM Two users with equal shares User 1 needs <1 CPU, 1 GB RAM> per task User 2 needs <1 CPU, 3 GB RAM> per task What is a fair allocation?
33. Definition 1: Asset Fairness Idea: give weights to resources (e.g., 1 CPU = 1 GB) and equalize the total value of resources given to each user Algorithm: when resources are free, offer them to whoever has the least total value Result: U1: 12 tasks (12 CPUs, 12 GB, “$24”); U2: 6 tasks (6 CPUs, 18 GB, “$24”) PROBLEM: User 1 ends up with less than 50% of both CPUs and RAM [Bar chart: each user’s CPU and RAM share]
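A short worked check of these numbers, assuming the 1 CPU = 1 GB weighting above:

```python
# Worked check of the asset-fairness result: with 1 CPU valued the same as
# 1 GB, both users get equal "asset value", yet user 1 holds less than half
# of each resource.
def asset_value(cpus, gb, cpu_weight=1.0, gb_weight=1.0):
    return cpu_weight * cpus + gb_weight * gb

u1_tasks, u2_tasks = 12, 6
u1 = (u1_tasks * 1, u1_tasks * 1)   # user 1: <1 CPU, 1 GB> per task
u2 = (u2_tasks * 1, u2_tasks * 3)   # user 2: <1 CPU, 3 GB> per task
print(asset_value(*u1), asset_value(*u2))        # 24.0 24.0 -> equal asset value
print(u1[0] + u2[0], u1[1] + u2[1])              # 18 CPUs, 30 GB used (of 30 / 30)
print(u1[0] / 30, u1[1] / 30)                    # 0.4 0.4 -> user 1 below 50% of both
```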
34. Lessons from Definition 1 “You shouldn’t do worse than if you ran a smaller, private cluster equal in size to your share” Thus, given N users, each user should get ≥ 1/N of their dominant resource (i.e., the resource they consume most of)
35. Def. 2: Dominant Resource Fairness Idea: give every user an equal share of their dominant resource (i.e., the resource they consume most of) Algorithm: when resources are free, offer them to the user with the smallest dominant share (i.e., the fractional share of their dominant resource) Result: U1: 15 tasks (15 CPUs, 15 GB); U2: 5 tasks (5 CPUs, 15 GB) [Bar chart: each user’s CPU and RAM share]
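A short progressive-filling sketch that reproduces the DRF result above; this is an illustration of the idea, not the Nexus allocation module.

```python
# Worked check of the DRF allocation: repeatedly give a task to the user
# with the smallest dominant share, reproducing the 15-task / 5-task split.
def drf(capacity, demands):
    alloc = [[0.0] * len(capacity) for _ in demands]
    tasks = [0] * len(demands)
    while True:
        # Dominant share = largest fractional share across resources.
        shares = [max(a[r] / capacity[r] for r in range(len(capacity))) for a in alloc]
        user = shares.index(min(shares))         # user with the smallest dominant share
        want = demands[user]
        free = [capacity[r] - sum(a[r] for a in alloc) for r in range(len(capacity))]
        if any(want[r] > free[r] for r in range(len(capacity))):
            return tasks                         # no room for that user's next task
        for r in range(len(capacity)):
            alloc[user][r] += want[r]
        tasks[user] += 1

# 30 CPUs, 30 GB; user 1 needs <1 CPU, 1 GB>, user 2 needs <1 CPU, 3 GB>.
print(drf([30, 30], [[1, 1], [1, 3]]))   # [15, 5]
```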
38. Implementation Stats 7000 lines of C++ APIs in C, C++, Java, Python, Ruby Executor isolation using Linux containers and Solaris projects
39. Frameworks Ported frameworks: Hadoop (900-line patch) MPI (160-line wrapper scripts) New frameworks: Spark, a Scala framework for iterative jobs (1,300 lines) Apache + haproxy, an elastic web server farm (200 lines)
49. Future Work Experiment with parallel programming models Further explore low-latency services on Nexus (web applications, etc) Shared services (e.g. BigTable, GFS) Deploy to users and open source
55. Open Cirrus Organization Central Management Office, oversees Open Cirrus Currently owned by HP Governance model Research team Technical team New site additions Support (legal (export, privacy), IT, etc.) Each site Runs its own research and technical teams Contributes individual technologies Operates some of the global services E.g. HP site supports portal and PRS Intel site developing and supporting Tashi Yahoo! contributes to Hadoop
56. Intel BigData Open Cirrus Site http://opencirrus.intel-research.net [Cluster diagram; recoverable details follow] Networking: a 45 Mb/s T3 link to the Internet; 1 Gb/s links fan out from switches with 24-48 Gb/s backplanes to the nodes; PDUs provide per-port power monitoring and control Mobile rack: 8 1U nodes, each with 2 quad-core Xeon E5440 [Harpertown/Core 2], 16 GB DRAM, 2x 1 TB disks Storage rack (3U): 5 storage nodes, each with 12x 1 TB disks Compute racks: two blade racks of 40 nodes and three 1U/2U racks of 15 nodes, with node configurations including: 20 nodes with 1 single-core Xeon [Irwindale/Pentium 4], 6 GB DRAM, 366 GB disk (36 + 300 GB); 10 nodes with 2 dual-core Xeon 5160 [Woodcrest/Core], 4 GB RAM, 2x 75 GB disks; 10 nodes with 2 quad-core Xeon E5345 [Clovertown/Core], 8 GB DRAM, 2x 150 GB disks; plus nodes with 2 quad-core Xeon E5345 [Clovertown/Core], 8 GB DRAM, 2x 150 GB disks; 2 quad-core Xeon E5420 [Harpertown/Core 2], 8 GB DRAM, 2x 1 TB disks; 2 quad-core Xeon E5440 [Harpertown/Core 2], 8 GB DRAM, 6x 1 TB disks; and 2 quad-core Xeon E5520 [Nehalem-EP/Core i7], 16 GB DRAM, 6x 1 TB disks
59. Open Cirrus Stack Compute + network + storage resources Management and control subsystem Power + cooling Physical Resource set (Zoni) service Credit: John Wilkes (HP)
60. Open Cirrus Stack [Stack diagram] The Zoni service provisions PRS clients, each with their own “physical data center”; research use, Tashi, and the NFS and HDFS storage services run on top
61. Open Cirrus Stack [Stack diagram] Virtual clusters (e.g., managed by Tashi) are carved out on top of the Zoni service
62. Open Cirrus Stack [Stack diagram] An application (e.g., a BigData app on Hadoop) can run on a Tashi virtual cluster, on a PRS, or on real hardware
63. Open Cirrus Stack [Stack diagram] Experiments can be saved and restored (e.g., a BigData app on Hadoop inside a virtual cluster)
64. Open Cirrus Stack [Stack diagram] Platform services are added above the virtual clusters and storage services
65. Open Cirrus Stack [Stack diagram] User services are added above the platform services
66. Open Cirrus Stack [Stack diagram] The complete stack: Zoni service, virtual clusters, NFS/HDFS storage services, platform services, user services, and applications
67. System Organization Compute nodes are divided into dynamically-allocated, VLAN-isolated PRS subdomains Apps switch back and forth between virtual and physical Open service research Apps running in a VM mgmt infrastructure (e.g., Tashi) Tashi development Production storage service Proprietary service research Open workload monitoring and trace collection
68. Open Cirrus stack - Zoni Zoni service goals Provide mini-datacenters to researchers Isolate experiments from each other Stable base for other research Zoni service approach Allocate sets of physical co-located nodes, isolated inside VLANs. Zoni code from HP being merged into Tashi Apache project and extended by Intel Running on HP site Being ported to Intel site Will eventually run on all sites
69. Open Cirrus Stack - Tashi An open source Apache Software Foundation project sponsored by Intel (with CMU, Yahoo, HP) Infrastructure for cloud computing on Big Data http://incubator.apache.org/projects/tashi Research focus: Location-aware co-scheduling of VMs, storage, and power. Seamless physical/virtual migration. Joint with Greg Ganger (CMU), Mor Harchol-Balter (CMU), Milan Milenkovic (CTG)
70. Tashi High-Level Design [Architecture diagram: a Cluster Manager (CM) in front of the cluster nodes, with Scheduler, Virtualization Service, and Storage Service components] Cluster nodes are assumed to be commodity machines Services are instantiated through virtual machines The storage service aggregates the capacity of the commodity nodes to house Big Data repositories The CM maintains databases and routes messages; decision logic is limited Most decisions happen in the scheduler, which manages compute, storage, and power in concert Data location and power information is exposed to the scheduler and services
72. Open Cirrus Stack - Hadoop An open-source Apache Software Foundation project sponsored by Yahoo! http://wiki.apache.org/hadoop/ProjectDescription Provides a parallel programming model (MapReduce), a distributed file system (HDFS), and a parallel database
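To make the MapReduce programming model concrete, here is a toy, single-process imitation of word count; real Hadoop jobs are written against the Hadoop Java API or Hadoop Streaming, not this sketch.

```python
# Toy, single-process imitation of the MapReduce model (word count).
from collections import defaultdict

def map_phase(doc):
    for word in doc.split():
        yield (word.lower(), 1)                  # emit <word, 1> pairs

def reduce_phase(word, counts):
    return (word, sum(counts))                   # sum the counts for each word

def mapreduce(docs):
    shuffled = defaultdict(list)
    for doc in docs:                             # "map" over input splits
        for k, v in map_phase(doc):
            shuffled[k].append(v)                # shuffle: group values by key
    return dict(reduce_phase(k, vs) for k, vs in shuffled.items())

print(mapreduce(["the cloud runs the cluster", "the cluster runs Hadoop"]))
# {'the': 3, 'cloud': 1, 'runs': 2, 'cluster': 2, 'hadoop': 1}
```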
73. What kinds of research projects are Open Cirrus sites looking for? Open Cirrus is seeking research in the following areas (different centers will weight these differently): Datacenter federation Datacenter management Web services Data-intensive applications and systems The following kinds of projects are generally not of interest: Traditional HPC application development Production applications that just need lots of cycles Closed source system development
74. How do users get access to Open Cirrus sites? Project PIs apply to each site separately. Contact names, email addresses, and web links for applications to each site will be available on the Open Cirrus Web site (which goes live Q2 09) http://opencirrus.org Each Open Cirrus site decides which users and projects get access to its site. Developing a global sign-on for all sites (Q2 09) Users will be able to log in to each Open Cirrus site for which they are authorized using the same login and password.
75. Summary and Lessons Intel is collaborating with HP and Yahoo! to provide a cloud computing testbed for the research community Using the cloud as an accelerator for interactive streaming/big data apps is an important usage model Primary goals are to Foster new systems research around cloud computing Catalyze open-source reference stack and APIs for the cloud Access model, Local and global services, Application frameworks Explore location-aware and power-aware workload scheduling Develop integrated physical/virtual allocations to combat cluster squatting Design cloud storage models GFS-style storage systems not mature, impact of SSDs unknown Investigate new application framework alternatives to map-reduce/Hadoop
77. Heterogeneity in Virtualized Environments VM technology isolates CPU and memory, but disk and network are shared Full bandwidth when there is no contention Equal shares when there is contention A 2.5x performance difference was observed across EC2 small instances
78. Isolation Research Predictable performance (low variance) matters more than raw performance Some resources that people have run into problems with: power, disk space, disk I/O rate (drive, bus), memory space (user/kernel), memory bus, cache at all levels (TLB, etc.), hyperthreading, CPU rate, interrupts Network: NIC (Rx/Tx), switch, cross-datacenter, cross-country OS resources: file descriptors, ports, sockets
79. Datacenter Energy EPA, 8/2007: 1.5% of total U.S. energy consumption Growing from 60 to 100 billion kWh in 5 years 48% of a typical IT budget spent on energy 75 MW of new DC deployments in PG&E’s service area – that they know about! (expect another 2x) Microsoft: $500M new Chicago facility Three substations with a capacity of 198 MW 200+ shipping containers with 2,000 servers each Overall growth of 20,000 servers/month
81. First Milestone: DC Energy Conservation DCs limited by power For each dollar spent on servers, add $0.48 (2005)/$0.71 (2010) for power/cooling $26B spent to power and cool servers in 2005 grows to $45B in 2010 Within DC racks, network equipment often the “hottest” components in the hot spot
82. Thermal Image of Typical Cluster [Thermal image of a rack, with the rack and the rack switch labeled] M. K. Patterson, A. Pratt, P. Kumar, “From UPS to Silicon: an end-to-end evaluation of datacenter efficiency”, Intel Corporation
83. DC Networking and Power Selectively power down ports/portions of net elements Enhanced power-awareness in the network stack Power-aware routing and support for system virtualization Support for datacenter “slice” power down and restart Application and power-aware media access/control Dynamic selection of full/half duplex Directional asymmetry to save power, e.g., 10Gb/s send, 100Mb/s receive Power-awareness in applications and protocols Hard state (proxying), soft state (caching), protocol/data “streamlining” for power as well as b/w reduction Power implications for topology design Tradeoffs in redundancy/high-availability vs. power consumption VLANs support for power-aware system virtualization
84. Summary Many areas for research into Cloud Computing! Datacenter design, languages, scheduling, isolation, energy efficiency (at all levels) Opportunities to try out research at scale! Amazon EC2, Open Cirrus, …
Just mention briefly that there are things MR and Dryad can’t do, and that there are competing implementations; perhaps also note the need to share resources with other data center services here? The excitement surrounding cluster computing frameworks like Hadoop continues to accelerate (e.g., EC2 Hadoop and Dryad in Azure). Startups, enterprises, and us researchers are bursting with ideas to improve these already existing frameworks. But more importantly, as we encounter the limitations of MR, we’re making a shopping list of what we want in next-generation frameworks: new abstractions, programming models, even new implementations of existing models (e.g., the Erlang MR called Disco). We believe that no single framework can best facilitate this innovation, but instead that people will want to run existing and new frameworks on the same physical clusters at the same time.
Useful even if you only use one framework: run isolated framework instances (production vs. test), or run multiple versions of a framework together.
Global scheduler needs to make guesses about a lot more (job running times, etc.). Talk about adaptive frameworks that may not know how many tasks they need in advance. Talk about irregular-parallelism jobs that don’t even know their DAG in advance. **We are exploring resource offers but don’t yet know the limits; they seem to work OK for jobs with data locality needs though**
…multiple frameworks to run concurrently! Here we see a new framework, Dryad being run side by side with Hadoop, and Nexus is multiplexing the slaves between both. Some are running Hadoop tasks, some Dryad, and some both.
Waiting 1s gives 90% locality, 5s gives 95%
Linux containers can actually be both “application” containers where an app shares the filesystem with the host (similar to Solaris projects), or “system” containers where each container has its own filesystem (similar to Solaris zones); both types also prevent processes in a container from seeing those outside it
Transition to next slide: when you have policy == SLAs
What to do with the rest of the resources?
Mentioned shared HDFS!
16 Hadoop instances doing a synthetic filter job; 100 nodes, 4 slots per node; delay scheduling improves performance by 1.7x