Leveraging Endpoint Flexibility in Data-Intensive Clusters

Part of the Apache Spark and Mesos projects.
Based on the paper by Mosharaf Chowdhury, Srikanth Kandula, and Ion Stoica (UC Berkeley and Microsoft Research).
The paper was presented at SIGCOMM 2013 in Hong Kong.

Speaker Notes

  • Good afternoon. I’m … Today, I’m going to talk about network transfers that do not have fixed destinations. This is joint work with … The paper was written at Berkeley, published at SIGCOMM, and presented last August in Hong Kong.
  • How it started: internet companies …. The main motivations, beyond the usual ones: lower cost, less time, greater flexibility, and linear scalability. But how? And what makes it possible?
  • Open source; it started at Google…
  • Gordon Moore 1965
  • Capacity has increased while price has decreased
  • They analyzed data from Facebook and Bing and found the 33% figure. Many data-intensive jobs depend on communication for faster end-to-end completion times. For example, in one of our earlier works, we found that typical jobs at Facebook spend up to a third of their running time in shuffle, or intermediate data transfers. As in-memory systems proliferate and disks are removed from the I/O pipeline, the network is likely to be the primary bottleneck. But what does the network usage of data-intensive clusters look like, and where does it come from? To better understand the problem, we analyzed traces from two data-intensive production clusters at Facebook and Microsoft. (1. Managing Data Transfers in Computer Clusters with Orchestra, SIGCOMM 2011.)
  • We have found something very interesting. While a LOT of attention has gone into decreasing reads over the network and managing intermediate communication, DFS replication creates almost half of all cross-rack traffic. Note that this doesn’t mean everyone was wrong; communication of intermediate data, or shuffle, is still a major source of job-level communication. But the sources of these writes are the ingestion of new data into the cluster and the preprocessing of existing data for later use, neither of which shows up when someone looks only at the jobs. Only a very small amount is actually created by typical jobs. We’ve also found that during ingestion many writers spend up to 90% of their time writing. Well, that is their job. What is this DFS?
  • Distributed file systems are ubiquitous in data-intensive clusters and form their narrow waist: diverse computing frameworks read from and write to the same DFS. Examples include GFS, HDFS, Cosmos, etc. Typically, a distributed file system stores data as files, and each file is divided into large blocks; a typical block size is 256 MB. Each block of a file is then replicated to three different machines for fault tolerance. These three machines are located in two different fault domains, typically racks, for partition tolerance. Finally, replicas are placed uniformly at random throughout the cluster to avoid storage imbalance. Writes to a DFS are typically synchronous.
  • Today, the traffic of distributed file systems in modern clusters is handled like any other elephant flows in the network: the endpoints are assumed to be fixed. All the existing work balances the network after the locations of the sources and destinations have already been decided. Because sources and destinations are fixed, these approaches try to find different paths between them or to change the rates on those paths. But we can do more. Let us revisit the requirements of replica placement.
  • Notice that, as long as these constraints are met, the DFS does not care where the replicas are actually placed. This means we can effectively change the destinations of all replication traffic as long as we satisfy the constraints. We refer to such transfers as constrained anycast: replicas can go anywhere, but their destinations are constrained. (A small constraint-check sketch appears after these notes.)
  • In this work, we present Sinbad. By steering replication traffic away from congested hotspots in the network, Sinbad can improve the performance of the network. However, this is useful only if there is significant hotspot activity in the cluster.
  • We refer to this as the distributed writing problem. Given blocks of different sizes and links of different capacities, Sinbad must place the replicas so as to minimize the average block write time as well as the average file write time. Note that blocks can have different sizes because blocks are not padded in a DFS. Now, if for each block we consider a job of that length, and for each link a machine of that capacity, we see that the distributed writing problem is similar to the job-shop scheduling problem, and it is NP-hard.
  • Let’s take an example. We have the same network as before, and we are going to assume that the three core-to-rack links are the possible bottlenecks. Replica placement requests from two different files come online: the black file has two blocks and the orange one has three. Now, let us assume that time is divided into discrete intervals and that we must decide on the three requests during time interval T. We are also going to assume that intervals are independent, i.e., placement decisions during T will not affect the ones during T+1. Finally, we are going to assume that link utilizations are stable for the duration of replication, or during T, and that all blocks have the same size. It is clear that we should pick the least-loaded link because it will finish the fastest. Because all blocks are of the same size, it doesn’t matter which block we choose for minimizing the average block write time. If we also care about minimizing the file write times, we should always pick the smallest file (the one with the fewest remaining blocks) to go through the fattest link. Under these assumptions, greedy placement is optimal.
  • We propose a simple two-step greedy placement policy: at any point, we pick the least-loaded link and then send a block from the file with the fewest remaining blocks through it. (A minimal sketch of this policy appears after these notes.)
  • That brings us to Sinbad. Sinbad performs network-aware replica placement for the DFS. (Mention the master-slave architecture, etc.)
  • Given a replica placement request, the Sinbad master greedily places it and returns the locations. It also adds some hysteresis to avoid placing too many replicas in the same rack. Further details on the process can be found in the paper. One thing to note is that the interface is incredibly simple, which makes it all the more deployable. All in all, we needed to change only a couple hundred lines of code to implement the whole thing.
  • We have implemented Sinbad in HDFS, the de facto open-source DFS used by traditional frameworks like Hadoop as well as upcoming systems like Spark. We have also performed a flow-level simulation of the 3000-node Facebook cluster. The three high-level questions one might ask are: Does it improve performance? Does it improve network balance? Will the storage remain balanced? The short answer to all three is YES.
  • We consider performance from the perspective of the user (i.e., job performance) and that of the system (DFS performance). We have found that if we applied a similar technique to in-memory storage systems like Tachyon, the improvements could be even higher, because disks are never the bottleneck. So, network balance improved and performance improved as well. Upper bound: 1.89X.
  • (Axis: the coefficient of variation of the utilization.) We’ve found that the network is highly imbalanced in both clusters. We are looking at a CDF of imbalance in core-to-rack downlinks in the Facebook cluster. On the x-axis we have imbalance, measured by the coefficient of variation of link utilizations. The coefficient of variation is the ratio of the standard deviation to the mean of some samples; it is zero when all samples are the same, i.e., when there is NO imbalance. In general, a smaller CoV means less imbalance. We’ve measured link utilization as the average over 10-second bins. We see that it is almost never zero, and more than 50% of the time it is above 1, which is a typical threshold for high imbalance. The same is true for the Bing cluster. Given that a large fraction of traffic allows flexibility in endpoint placement and the network indeed has hotspots, we can now formally define the problem Sinbad is trying to address. (A small CoV computation appears after these notes.) The network became more balanced as well: notice that in both the EC2 experiments and the trace-based simulations, the orange curve moved toward the left, which indicates decreased network imbalance.
  • Sinbad optimizes for the network: it checks link utilization every 10 seconds and decides where to put replicas based on the network, not only on storage. Short term = 1 hour.
  • There has been a LOT of work on better optimizing the network, and the solutions largely fall into three categories. The first approach is to increase the capacity of the network. This includes moving from 1GigE to 10GigE links and increasing the bisection bandwidth of datacenter networks; in fact, there have been many proposals for full-bisection-bandwidth networks. However, full bisection bandwidth does not mean infinite bandwidth, the size of the workload is always increasing, and in practice many clusters still have some amount of oversubscription in their core-to-rack links. The next approach is decreasing the amount of network traffic. All the work on data locality, and there has been a lot of it, tries to decrease network communication by moving computation closer to its input. Recently, many researchers have looked into static analysis of data-intensive applications to decrease communication. These are all best-effort mechanisms, and there is always some data that must traverse the network. This brings us to the third approach: load balancing the network. Typically it focuses on managing large flows and optimizing the communication of intermediate data. Our recent work on Orchestra and Coflow also falls into this category. This work is about going one step further in this direction.
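The notes above describe the DFS replica placement constraints (three copies, spread over at least two fault domains) that make replication traffic a constrained anycast. Below is a minimal Python sketch of such a constraint check; the function name and the rack-lookup mapping are illustrative assumptions, not part of HDFS or Sinbad.

```python
# Minimal sketch of the replica placement constraints described in the notes:
# a replica set is acceptable if it has the required number of distinct machines
# and spans at least two fault domains (racks). Any destination set passing this
# check is equally valid to the DFS, which is what makes the traffic
# "constrained anycast". Names here are illustrative, not the HDFS API.

def satisfies_placement_constraints(machines, rack_of, replication=3):
    """Return True if `machines` is a valid replica set under the constraints."""
    if len(set(machines)) != replication:      # need `replication` distinct machines
        return False
    racks = {rack_of[m] for m in machines}     # fault domains actually covered
    return len(racks) >= 2                     # at least two racks for partition tolerance

# Example with a made-up cluster layout.
rack_of = {"a1": "rack1", "a2": "rack1", "b1": "rack2", "c1": "rack3"}
print(satisfies_placement_constraints(["a1", "a2", "b1"], rack_of))  # True
print(satisfies_placement_constraints(["a1", "a2", "a2"], rack_of))  # False: only two distinct machines
```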
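The two-step greedy policy from the notes (pick the least-loaded link, then serve the file with the fewest remaining blocks) can be sketched as follows. This is a simplified illustration under the stated assumptions (equal-size blocks, stable utilizations, one block placed per decision); the data structures are hypothetical, and it omits Sinbad's hysteresis and the actual DFS interface.

```python
# A sketch of the two-step greedy replica placement policy described above.
# It is not the Sinbad implementation; it only illustrates the decision rule.

def greedy_place(pending_files, link_load):
    """Place one block.

    pending_files: {file_id: number_of_remaining_blocks}
    link_load:     {rack_id: current utilization of its core-to-rack link}
    Returns (file_id, rack_id) for the block just placed, or None if idle.
    """
    if not pending_files:
        return None
    # Step 1: the least-loaded link finishes a block the fastest.
    rack = min(link_load, key=link_load.get)
    # Step 2: the file with the fewest remaining blocks finishes, as a file, soonest.
    file_id = min(pending_files, key=pending_files.get)
    pending_files[file_id] -= 1
    if pending_files[file_id] == 0:
        del pending_files[file_id]
    return file_id, rack

# The example from the notes: a two-block "black" file and a three-block "orange" file.
pending = {"black": 2, "orange": 3}
load = {"rack1": 0.2, "rack2": 0.7, "rack3": 0.5}
print(greedy_place(pending, load))  # ('black', 'rack1')
```

In the full system, the master would also update the chosen link's load after each placement and apply the hysteresis mentioned in the notes so that too many replicas do not land in the same rack.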
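The imbalance metric used in the notes is the coefficient of variation (CoV) of link utilizations: standard deviation divided by mean, zero when all links are equally loaded and above 1 for heavily skewed loads. A small illustration with made-up utilization samples:

```python
from statistics import mean, pstdev

def coeff_of_variation(utilizations):
    """CoV of a set of link utilization samples: standard deviation / mean."""
    mu = mean(utilizations)
    return pstdev(utilizations) / mu if mu > 0 else 0.0

balanced   = [0.40, 0.41, 0.39, 0.40]   # nearly identical links: CoV close to 0
imbalanced = [0.05, 0.90, 0.10, 0.02]   # one hotspot link: CoV above 1
print(round(coeff_of_variation(balanced), 2))    # ~0.02
print(round(coeff_of_variation(imbalanced), 2))  # ~1.37
```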

Leveraging Endpoint Flexibility in Data-Intensive Clusters Presentation Transcript

  • 1. Leveraging Endpoint Flexibility in Data-Intensive Clusters Mosharaf Chowdhury Srikanth Kandula Ion Stoica Presented by Ran Ziv UC Berkeley Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 1
  • 2. What’s Ahead? • Intro - Data Intensive Cluster • Proposed solution • Evaluation • Conclusion Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 2
  • 3. What is a Data-Intensive Cluster? • Scalable data storage and processing • “Core” consists of two main parts • Distributed File System (DFS) • Processing (MapReduce) Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 3
  • 4. Motivation Store and analyze PBs of information Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 4
  • 5. How Did it Originate? • Heavily influenced by Google’s architecture • Other Web companies quickly saw the benefits Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 5
  • 6. DFS: How does it work? • Moore’s law… and not Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 6
  • 7. Disk Capacity and Price • We’re generating more data than ever before • Fortunately, the size and cost of storage has kept pace Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 7
  • 8. Disk Capacity and Performance • Disk performance has also increased in the last 15 years • Unfortunately, transfer rates haven’t kept pace with capacity Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 8
  • 9. Architecture of a Typical HPC System Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 9
  • 10. Architecture of a Typical HPC System Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 10
  • 11. Architecture of a Typical HPC System Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 11
  • 12. Architecture of a Typical HPC System Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 12
  • 13. You Don’t Just Need Speed… • The problem is that we have way more data than code Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 13
  • 14. You Need Speed At Scale Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 14
  • 15. DISTRIBUTED FILESYSTEM Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 15
  • 16. Benefits of DFS • Previously impossible/impractical to do this analysis • Analysis conducted at lower cost • Analysis conducted in less time • Linear scalability Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 16
  • 17. Collocated Storage and Processing • Solution: store and process data on the same nodes • Data Locality: “Bring the computation to the data” • Reduces I/O and boosts performance Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 17
  • 18. DFS High-Level Architecture • DFS follows a master-slave architecture • Master: NameNode • Responsible for namespace and metadata • Namespace: file hierarchy • Metadata: ownership, permissions, block locations, etc. • Slave: DataNode • Responsible for storing actual datablocks Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 19
  • 19. DFS Blocks • When a file is added to DFS, it’s split into blocks • DFS uses a much larger block size (>= 64MB), for performance Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 20
  • 20. DFS Replication • Those blocks are then replicated across machines • The first block might be replicated to A, C and D Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 21
  • 21. DFS Replication • The next block might be replicated to B, D and E Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 22
  • 22. DFS Replication • The last block might be replicated to A, C and E Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 23
  • 23. DFS Reliability • Replication helps to achieve reliability • Even when a node fails, two copies of the block remain • These will be re-replicated to other nodes automatically Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 24
  • 24. Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 25
  • 25. MapReduce High-Level Architecture Like DFS, MapReduce has a master-slave Architecture • Master: JobTracker • Responsible for dividing, scheduling and monitoring work • Slave: TaskTracker • Responsible for actual processing Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 26
  • 26. Gentle Introduction to MapReduce • MapReduce is conceptually like a UNIX pipeline • One function (Map) processes data • That output is ultimately input to another function (Reduce) (a minimal sketch of this flow appears after the transcript) Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 27
  • 27. The Map Function • Operates on each record individually • Typical uses include filtering, parsing, or transforming Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 28
  • 28. Intermediate Processing • The Map function’s output is grouped and sorted • This is the automatic “sort and shuffle” process Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 29
  • 29. The Reduce Function • Operates on all records in a group • Often used for sum, average or other aggregate functions Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 30
  • 30. MapReduce Flow (diagram: Job Tracker coordinating Mapper tasks, which read Input Files and write Intermediate Files, and Reducer tasks, which write Output Files) Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 31
  • 31. Communication is Crucial • Performance: Facebook analytics jobs spend 33% of their runtime in communication Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 32
  • 32. Cross-Rack Traffic • Facebook: DFS Reads 14%, Intermediate 46%, DFS Writes 40% • Bing: DFS Reads 31%, Intermediate 15%, DFS Writes 54% Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 33
  • 33. DFS (diagram: a file’s blocks replicated across Core, Rack 1, Rack 2, Rack 3) • Files are divided into blocks, 64MB to 1GB in size • Each block is replicated to 3 machines for fault tolerance, in 2 fault domains for partition tolerance • Synchronous operations Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 34
  • 34. DFS • Files are divided into blocks, 64MB to 1GB in size • Each block is replicated to 3 machines for fault tolerance, in 2 fault domains for partition tolerance • Synchronous operations • Fixed: Sources, Destinations; Flexible: Paths, Rates • How to handle DFS flows? Hedera, VLB, Orchestra, Coflow, MicroTE, DevoFlow, … (diagram: Core, Rack 1, Rack 2, Rack 3) Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 35
  • 35. DFS • Files are divided into blocks, 64MB to 1GB in size • Each block is replicated to 3 machines for fault tolerance, in 2 fault domains for partition tolerance • Synchronous operations • Replica locations don’t matter as long as constraints are met • Flexible: Sources, Destinations (as well as Paths and Rates) • How to handle DFS flows? Hedera, VLB, Orchestra, Coflow, MicroTE, DevoFlow, … (diagram: Core, Rack 1, Rack 2, Rack 3) Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 36
  • 36. Sinbad Steers flexible replication traffic away from hotspots • Improve write rates • More balanced network Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 37
  • 37. The Distributed Writing Problem • Given blocks of different size and links of different capacities, place blocks to minimize the average block write time and the average file write time • Analogy: given jobs of different length and machines of different speed, schedule jobs to minimize the average job completion time (Job Shop Scheduling) • Job Shop Scheduling is NP-Hard (diagram: Core and Racks 1-3 on one side; Machines 1-3 on the other) Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 38
  • 38. How to Make it Easy? Assumptions: • All blocks have the same size • Link utilizations are stable Theorem: Greedy placement minimizes average block/file write times Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 39
  • 39. How to Make it Easy? – In Practice • Link utilizations are stable – In Reality: average link utilizations are temporally stable (1, 2) • All blocks have the same size – In Reality: fixed-size large blocks write 93% of all bytes 1. Utilization is considered stable if its average over the next x seconds remains within ±5% of the initial value (see the stability sketch after the transcript) 2. Typically, x ranges from 5 to 10 seconds. Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 40
  • 40. Greedy Algorithm Two-step greedy replica placement: 1. Pick the least-loaded link 2. Send a block from the file with the least-remaining blocks through the selected link (diagram: example placement over time intervals T and T+1) Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 42
  • 41. Sinbad Overview Sinbad follows a master-slave architecture • Master: collocated with the DFS master; decides where to place each block • Slave: periodically reports information (diagram: the Sinbad Master beside the DFS Master; a Sinbad Slave collocated with the DFS Slave on each machine) Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 43
  • 42. Evaluation A 3000-node trace-driven simulation matched against a 100-node EC2 deployment 1. Does it improve performance? 2. Does it balance the network? 3. Does the storage remain balanced? YES Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 47
  • 43. Faster • Job improvement: 1.39X in simulation, 1.26X in experiment • DFS improvement: 1.79X in simulation, 1.60X in experiment Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 49
  • 44. More Balanced (CDF plots of imbalance, measured as the coefficient of variation of link utilization: EC2 Deployment, load across rack-to-host links; Facebook Trace Simulation, load across core-to-rack links; each comparing Default vs. Network-Aware; y-axis: Fraction of Time) Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 50
  • 45. What About Storage Balance? Imbalanced in the short term But, in the long term, hotspots are uniformly distributed Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 51
  • 46. Conclusions Three Approaches Toward Contention Mitigation • #1 Increase Capacity: fatter links/interfaces, increase bisection B/W (Fat tree, VL2, DCell, BCube, F10, …) • #2 Decrease Load: data locality, static optimization (Fair scheduling, Delay scheduling, Mantri, Quincy, PeriSCOPE, RoPE, Rhea, …) • #3 Balance Usage: manage elephant flows, optimize intermediate comm. (Valiant load balancing (VLB), Hedera, Orchestra, Coflow, MicroTE, DevoFlow, …) Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 54
  • 47. Sinbad: Greedily steers replication traffic away from hotspots • Improves job performance by making the network more balanced • Improves DFS write performance while keeping the storage balanced • Sinbad will become increasingly important as storage becomes faster • Planning to deploy Sinbad at Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 55
  • 48. Questions? Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 56
  • 49. Leveraging Endpoint Flexibility in Data-Intensive Clusters Ran Ziv© 2013 57
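Slides 26-29 describe MapReduce as a UNIX-pipeline-like flow: a Map function processes each record, the framework sorts and shuffles the output by key, and a Reduce function aggregates each group. Below is a minimal single-process Python sketch of that flow using word count; it only illustrates the data movement, not the Hadoop API or its distributed execution.

```python
# Word count as map -> sort/shuffle -> reduce, in one process, for illustration only.
from itertools import groupby
from operator import itemgetter

def map_phase(lines):
    # Map: operate on each record individually (here: emit (word, 1) pairs).
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(grouped):
    # Reduce: operate on all records sharing a key (here: sum the counts).
    for word, pairs in grouped:
        yield (word, sum(count for _, count in pairs))

lines = ["the quick brown fox", "the lazy dog", "the fox"]
# "Sort and shuffle": group the map output by key.
shuffled = groupby(sorted(map_phase(lines), key=itemgetter(0)), key=itemgetter(0))
print(dict(reduce_phase(shuffled)))  # {'brown': 1, 'dog': 1, 'fox': 2, 'lazy': 1, 'quick': 1, 'the': 3}
```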
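The footnote on slide 39 defines stability operationally: a link's utilization is considered stable if its average over the next x seconds (x between 5 and 10) stays within ±5% of the value seen at decision time. A small sketch of that check, with a hypothetical sampling interface:

```python
from statistics import mean

def is_stable(initial_utilization, future_samples, tolerance=0.05):
    """future_samples: utilization samples covering the next x (5 to 10) seconds."""
    avg = mean(future_samples)
    return abs(avg - initial_utilization) <= tolerance * initial_utilization

print(is_stable(0.60, [0.59, 0.61, 0.62, 0.58]))  # True: the average stays within +/-5%
print(is_stable(0.60, [0.80, 0.85, 0.90, 0.75]))  # False: the link got much busier
```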