Leveraging Endpoint Flexibility
in Data-Intensive Clusters
Mosharaf Chowdhury
Srikanth Kandula
Ion Stoica
Presented by Ran Ziv
What’s Ahead?
• Intro: Data-Intensive Clusters
• Proposed solution
• Evaluation
• Conclusion
What is a Data-Intensive Cluster?
• Scalable data storage and processing
• “Core” consists of two main parts
• Distributed File System (DFS)
• Processing (MapReduce)
Motivation
Store and analyze PBs of information
How Did It Originate?
• Heavily influenced by Google’s architecture
• Other Web companies quickly saw the benefits
DFS: How does it work?
• Moore’s law… and not
Disk Capacity and Price
• We’re generating more data than ever before
• Fortunately, the size and cost of storage have kept pace
Disk Capacity and Performance
• Disk performance has also increased in the last 15
years
• Unfortunately, transfer rates haven’t kept pace with capacity (a rough calculation below illustrates the gap)
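To see why transfer rates matter, here is a back-of-the-envelope calculation; the disk capacities and sequential rates below are illustrative assumptions, not figures from the slides.

    # Rough illustration (assumed numbers): scanning a whole disk sequentially.
    def hours_to_read(capacity_gb, rate_mb_per_s):
        """Time to read an entire disk end to end, in hours."""
        return capacity_gb * 1024 / rate_mb_per_s / 3600

    print(hours_to_read(40, 30))     # ~0.4 h for an older 40 GB disk at 30 MB/s
    print(hours_to_read(3000, 150))  # ~5.7 h for a 3 TB disk at 150 MB/s

Capacity has grown far faster than sequential bandwidth, so a single disk now takes hours to scan; spreading the data over many machines is what restores the balance.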
Architecture of a Typical HPC System
[Diagram slides; the same architecture figure shown in four build-up steps]
You Don’t Just Need Speed…
• The problem is that we have way more data than
code
You Need Speed At Scale
DISTRIBUTED FILESYSTEM
Benefits of DFS
• Previously impossible/impractical to do this analysis
• Analysis conducted at lower cost
• Analysis conducted in less time
• Linear scalability
Collocated Storage and Processing
• Solution: store and process data on the same nodes
• Data Locality: “Bring the computation to the data”
• Reduces I/O and boosts performance
DFS High-Level Architecture
• DFS follows a master-slave architecture
• Master: NameNode
• Responsible for namespace and metadata
• Namespace: file hierarchy
• Metadata: ownership, permissions, block locations, etc.
• Slave: DataNode
• Responsible for storing actual data blocks
DFS Blocks
• When a file is added to DFS, it’s split into blocks
• DFS uses a much larger block size (>= 64MB) for performance (see the sketch below)
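As a minimal sketch of the block abstraction (a hypothetical helper, not HDFS code; only the >= 64 MB block size comes from the slide):

    BLOCK_SIZE = 64 * 1024 * 1024  # 64 MB, the minimum size mentioned above

    def split_into_blocks(path, block_size=BLOCK_SIZE):
        """Yield fixed-size chunks of a local file, the way a DFS client
        carves a file into blocks before writing them out."""
        with open(path, "rb") as f:
            while True:
                chunk = f.read(block_size)
                if not chunk:
                    break
                yield chunk

    # Example (hypothetical path): blocks = list(split_into_blocks("/path/to/file"))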
DFS Replication
• Those blocks are then replicated across machines
• The first block might be replicated to A, C and D
DFS Replication
• The next block might be replicated to B, D and E
DFS Replication
• The last block might be replicated to A, C and E
DFS Reliability
• Replication helps to achieve reliability
• Even when a node fails, two copies of the block remain
• These will be re-replicated to other nodes automatically (the default rack-aware placement is sketched below)
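A minimal sketch of a rack-aware default placement, assuming the policy described on these slides (three replicas spread over two fault domains, chosen randomly); the data structures and names are illustrative, not the HDFS implementation.

    import random

    def place_replicas(racks, replication=3):
        """racks: dict mapping rack id -> list of node ids.
        Returns `replication` distinct nodes spanning two racks."""
        rack_a, rack_b = random.sample(list(racks), k=2)         # two fault domains
        first = random.choice(racks[rack_a])                     # one replica in rack A
        rest = random.sample(racks[rack_b], k=replication - 1)   # the rest in rack B
        return [first] + rest

    racks = {"rack1": ["A", "B"], "rack2": ["C", "D"], "rack3": ["E", "F"]}
    print(place_replicas(racks))   # e.g. ['A', 'C', 'D'], like the examples above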
MapReduce High-Level Architecture
Like DFS, MapReduce has a master-slave architecture
• Master: JobTracker
• Responsible for dividing, scheduling and monitoring work
• Slave: TaskTracker
• Responsible for actual processing
Gentle Introduction to MapReduce
• MapReduce is conceptually like a UNIX pipeline
• One function (Map) processes data
• That output is ultimately input to another function (Reduce)
The Map Function
• Operates on each record individually
• Typical uses include filtering, parsing, or transforming
Intermediate Processing
• The Map function’s output is grouped and sorted
• This is the automatic “sort and shuffle” process
The Reduce Function
• Operates on all records in a group
• Often used for sum, average or other aggregate functions
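A toy word-count example (not from the slides) ties the three stages together: Map emits a key-value pair per word, the framework sorts and groups by key, and Reduce aggregates each group.

    from itertools import groupby
    from operator import itemgetter

    def map_fn(record):                # one input record -> (key, value) pairs
        for word in record.split():
            yield (word.lower(), 1)

    def reduce_fn(key, values):        # one group of values -> an aggregate
        return (key, sum(values))

    records = ["the quick brown fox", "the lazy dog"]
    pairs = [kv for r in records for kv in map_fn(r)]
    pairs.sort(key=itemgetter(0))      # the automatic "sort and shuffle" step
    counts = [reduce_fn(k, [v for _, v in grp])
              for k, grp in groupby(pairs, key=itemgetter(0))]
    print(counts)                      # ('the', 2) plus a count of 1 per other word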
MapReduce Flow
[Flow diagram: Input Files → Mapper tasks → Intermediate Files → Reducer tasks → Output Files, with the JobTracker coordinating tasks across machines]
Communication is Crucial to Performance
• Facebook analytics jobs spend 33% of their runtime in communication
Cross-Rack Traffic
[Pie charts of cross-rack traffic by source]
• Facebook: DFS reads 14%, intermediate 46%, DFS writes 40%
• Bing: DFS reads 31%, intermediate 15%, DFS writes 54%
DFS
[Diagram: core switch connecting Rack 1, Rack 2 and Rack 3, with file blocks replicated across racks]
• Files are divided into blocks, 64MB to 1GB in size
• Each block is replicated to 3 machines for fault tolerance, in 2 fault domains for partition tolerance
• Write operations are synchronous
DFS
• The same blocks and replication rules as above, but note what is fixed and what is flexible
• Today, sources and destinations are treated as fixed; only paths and rates are flexible (Hedera, VLB, Orchestra, Coflow, MicroTE, DevoFlow, …)
• Replica locations don’t matter as long as the constraints are met, so the destinations can be flexible too (see the sketch below)
Sinbad
Steers flexible replication traffic away from hotspots
• Improve write rates
• More balanced network
The Distributed Writing Problem
[Diagram: core switch connecting Rack 1, Rack 2 and Rack 3]
Given
• Blocks of different sizes
• Links of different capacities
Place blocks to minimize
• The average block write time
• The average file write time
Analogy: given jobs of different lengths and machines of different speeds, schedule jobs to minimize the average job completion time. This is job shop scheduling, which is NP-hard.
How to Make it Easy?
Assumptions:
• All blocks have the same size
• Link utilizations are stable
Theorem:
Greedy placement minimizes average block/file write times
How to Make it Easy? – In Practice
• Link utilizations are stable
In reality: average link utilizations are temporarily stable (1, 2)
• All blocks have the same size
In reality: fixed-size large blocks write 93% of all bytes
1. Utilization is considered stable if its average over the next x seconds remains within ±5% of the initial value (a small check is sketched below)
2. Typically, x ranges from 5 to 10 seconds
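The stability definition in the footnote can be written down directly; a small sketch, with the ±5% interpreted relative to the initial value (an assumption):

    def is_stable(initial, future_samples, tolerance=0.05):
        """True if the average utilization over the next x seconds stays
        within +/- tolerance (here 5%) of the initial value."""
        avg = sum(future_samples) / len(future_samples)
        return abs(avg - initial) <= tolerance * initial

    print(is_stable(0.60, [0.58, 0.61, 0.62]))  # True: average 0.603 is within 5%
    print(is_stable(0.60, [0.30, 0.35, 0.40]))  # False: the link drained quickly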
Greedy Algorithm
Two-step greedy replica placement (sketched in code below):
1. Pick the least-loaded link
2. Send a block from the file with the least remaining blocks through the selected link
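A minimal sketch of the two-step policy (the data structures are assumptions; the real system works on measured utilizations of the potentially congested links):

    def greedy_place(link_load, remaining_blocks):
        """link_load: dict link/rack id -> current utilization estimate.
        remaining_blocks: dict file id -> blocks still to be placed.
        Returns (link, file) for the next block to write."""
        # Step 1: the least-loaded link finishes a block the fastest.
        link = min(link_load, key=link_load.get)
        # Step 2: among files with blocks left, pick the one with the fewest
        # remaining blocks, so whole files complete as early as possible.
        file_id = min((f for f, n in remaining_blocks.items() if n > 0),
                      key=lambda f: remaining_blocks[f])
        return link, file_id

    print(greedy_place({"rack1": 0.8, "rack2": 0.3, "rack3": 0.5},
                       {"black": 2, "orange": 3}))   # ('rack2', 'black')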
Sinbad Overview
Sinbad follows a master-slave architecture
• Master:
• Collocated with the CFS (DFS) master
• Decides where to place each block
• Slave:
• Periodically reports information (e.g., link utilization)
A rough sketch of this split follows below.
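A rough sketch of how the two roles interact (hypothetical class and method names; in reality the master piggybacks on the DFS master's block-placement path):

    class SinbadMaster:
        """Collects periodic utilization reports from slaves and answers
        the DFS master's placement queries (illustrative only)."""
        def __init__(self):
            self.link_load = {}               # link/rack id -> last report

        def report(self, link_id, utilization):
            """Called periodically by a Sinbad slave on each machine."""
            self.link_load[link_id] = utilization

        def place_block(self):
            """Return the rack behind the currently least-loaded link."""
            return min(self.link_load, key=self.link_load.get)

    master = SinbadMaster()
    for rack, load in {"rack1": 0.7, "rack2": 0.2, "rack3": 0.5}.items():
        master.report(rack, load)
    print(master.place_block())   # -> 'rack2'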
Evaluation
A 3000-node trace-driven simulation matched against a
100-node EC2 deployment
1. Does it improve performance?
2. Does it balance the network?
3. Does the storage remain balanced?
The short answer to all three: YES
Faster
• Simulation: 1.39X job improvement, 1.79X DFS improvement
• Experiment (EC2): 1.26X job improvement, 1.60X DFS improvement
More Balanced
[CDF plots: fraction of time vs. imbalance (coefficient of variation of link utilization) for rack-to-host links in the EC2 deployment and core-to-rack links in the Facebook trace simulation, comparing Default vs. Network-Aware placement; the Network-Aware curves shift left, i.e. less imbalance. The metric is sketched below.]
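The imbalance metric on these plots is the coefficient of variation of link utilization, i.e. the standard deviation divided by the mean: zero means perfectly balanced, and values above 1 are treated as heavy imbalance. A quick sketch with made-up utilization samples:

    from statistics import mean, pstdev

    def coeff_of_variation(utilizations):
        """Ratio of standard deviation to mean; 0 when all samples are equal."""
        return pstdev(utilizations) / mean(utilizations)

    print(coeff_of_variation([0.5, 0.5, 0.5]))   # 0.0  -> perfectly balanced links
    print(coeff_of_variation([0.9, 0.1, 0.2]))   # ~0.89 -> heavily skewed load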
What About Storage Balance?
Imbalanced in the short term
But, in the long term,
hotspots are uniformly distributed
Conclusions
Three approaches toward contention mitigation:
• #1 Increase capacity: fatter links/interfaces, more bisection bandwidth (Fat tree, VL2, DCell, BCube, F10, …)
• #2 Decrease load: data locality, static optimization (Fair scheduling, Delay scheduling, Mantri, Quincy, PeriSCOPE, RoPE, Rhea, …)
• #3 Balance usage: manage elephant flows, optimize intermediate communication (Valiant load balancing (VLB), Hedera, Orchestra, Coflow, MicroTE, DevoFlow, …)
Sinbad greedily steers replication traffic away from hotspots
• Improves job performance by making the network more balanced
• Improves DFS write performance while keeping the storage balanced
• Sinbad will become increasingly more important as storage becomes faster
Questions?
Part of the Apache Spark and Mesos project presentations.
Based on a paper by Mosharaf Chowdhury, Srikanth Kandula and Ion Stoica from the University of California, Berkeley.
The paper was presented at SIGCOMM 2013 in Hong Kong.

Speaker notes
  • Good afternoon.
    I’m …
    Today, I’m going to talk about network transfers that do not have fixed destinations.
    This is a joint work with …
    Written at Berkeley, published at SIGCOMM, and presented last August in Hong Kong
  • How it started: internet companies ….
    Main motivation in addition to regular stuff:
    Lower cost
    Less time
    Greater flexibility
    Linear scalability
    But How? And what happens that allows it?
  • Open source
    Started in Google…
  • Gordon Moore 1965
  • Capacity has increased while price has decreased
  • They analyzed traces from Facebook and Bing.
    Found that about a third (33%) of job runtime goes to communication.
    -----------------


    Many data-intensive jobs depend on communication for faster end-to-end completion time.
    For example, in one of our earlier works, we found that typical jobs at Facebook spend up to a third of their running time in shuffle or intermediate data transfers.

    As in-memory systems proliferate and disks are removed from the I/O pipeline, the network is likely to be the primary bottleneck.

    But what does the network usage of data-intensive clusters look like, and where does it come from?
    To better understand the problem, we have analyzed traces from two data-intensive production clusters at Facebook and Microsoft.

    1. Managing Data Transfers in Computer Clusters with Orchestra, SIGCOMM’2011
  • We have found something very interesting.
    While there has been a LOT of attention to decreasing reads over the network or managing intermediate communication, DFS replication creates almost half of all cross-rack traffic.
    Note that this doesn’t mean everyone was wrong; communication of intermediate data or shuffle is still a major source of job-level communication.

    But, the sources of these writes are ingestion of new data into the cluster and preprocessing of existing data for later use.
    Both of which do not show up when someone looks only at the jobs.
    Only a very small amount is actually created by typical jobs.

    We’ve also found that during ingestion many writers spend up to 90% of their time in writing. Well, that is their job.

    What is this DFS?
  • Distributed file systems are ubiquitous in data-intensive clusters and form the narrow waist.
    Diverse computing frameworks read from and write to the same DFS.
    Examples include GFS, HDFS, Cosmos etc.

    Typically, distributed file systems store data as files.
    Each file is divided into large blocks.
    Typical size of a block would be 256MB.
    Each block of a file is then replicated to three different machines for fault tolerance.
    These three machines are located in two different fault domains, typically racks, for partition tolerance.
    Finally, replicas are placed uniformly randomly throughout the cluster to avoid storage imbalance.
    Writes to a DFS are typically synchronous.
  • Traditionally, the traffic of distributed file systems in modern clusters is treated like any other elephant flows in the network.
    The endpoints are assumed to be fixed.
    All the existing work balances the network after the locations of the sources and destinations have already been decided.
    Because sources and destinations are fixed, they try to find different paths between them or try to change rates in different paths.

    But we can do more.
    Let us revisit the requirements of replica placement.

  • Notice that, as long as these constraints are met, the DFS does not care where actually the replicas have been placed.
    This means, we can effectively change the destinations of all replication traffic if we satisfy the constraints.
    We refer to such transfers as constrained anycast in that replicas can go anywhere, but they are constrained.
  • In this work, we present Sinbad.
    By steering replication traffic away from congested hotspots in the network Sinbad can improve the performance of the network.

    However, this is useful only if there is significant hotspot activity in the cluster.
  • We refer to this as the distributed writing problem.
    Given blocks of different size and links of different capacities, Sinbad must place the replicas in a way to minimize the average block write time as well as the average file write time.
    Note that blocks can have different sizes because blocks are not padded in a DFS.

    Now, if for each block we consider a job of that length, and for each link a machine of that capacity, we see that the distributed writing problem is similar to the job shop scheduling problem.
    And it is NP-hard.
  • Let’s take an example.
    We have the same network as before.
    We are going to assume that the three core-to-rack links are the possible bottlenecks.

    Replica placement requests from two different files come online.
    The black file has two blocks and the orange one has three.
    Now, let us assume that time is divided into discrete intervals.
    We must decide on the three requests during time interval T.
    We are also going to assume that intervals are independent; i.e., placement decisions during T will not affect the ones during T+1.
    Finally, we are going to assume that link utilizations are stable for the duration of replication or during T, and all blocks have the same size.

    It is clear that we should pick the least-loaded link because that will finish the fastest.
    Because all blocks are of the same size, it doesn’t matter which block we choose for minimizing the average block write time.
    If we also care about minimizing the file write times, we should always pick the smallest file (the one with the least remaining blocks) to go through the fattest link.

    Under these assumptions, greedy placement is optimal.
  • We propose a simple two-step greedy placement policy.
    At any point, we pick the least-loaded link and then send a block from the file with the fewest remaining blocks through it.
  • That brings us to Sinbad.
    Sinbad performs network-aware replica placement for DFS.
    <EXPLAIN>
    // Mention master-slave architecture etc.
  • That brings us to Sinbad.
    Given a replica placement request, the master greedily places it and returns the locations.
    It also adds some hysteresis to avoid placing too many replicas in the same rack.
    Further details on the process can be found in the paper.

    One thing to note is that the interface is incredibly simple, which makes it all the more deployable.
    All in all, we needed to change only a couple hundred lines of code to implement the whole thing.
    We have implemented Sinbad in HDFS, which is the de facto open-source DFS used by traditional frameworks like Hadoop as well as upcoming systems like Spark.
    We have also performed flow-level simulation of the 3000-node Facebook cluster.

    The three high-level questions one might ask are:
    Does it improve performance?
    Does it improve network balance?
    Will the storage remain balanced?

    The short answer to all three is YES.
  • We consider performance from the perspective of the user (i.e., job performance) and that of the system (DFS performance)
    <EXPLAIN results>
    We have found that if we applied similar technique to in-memory storage systems like Tachyon, the improvements can be even higher because disks are never the bottlenecks.
    So, network-balance improved and performance improved as well.

    Upper bound: 1.89X
  • Axis: the coefficient of variation of the UTILIZATION

    We’ve found that network is highly imbalanced in both clusters.
    We are looking at a CDF of imbalance in core-to-rack downlinks in the facebook cluster.
    In the x-axis we have imbalance measured by the coefficient of variation of link utilizations.
    Coefficient of variation is the ratio of standard deviation to the mean of some samples, which is zero when all samples are the same, i.e., there is NO imbalance.
    In general, smaller CoV means smaller imbalance.
    We’ve measured link utilization as the average of 10s bins.
    We see that it is almost never zero and more than 50% of the time it is more than 1 (which is a typical threshold for high imbalance)

    Same is true for the Bing cluster as well.

    Given that a large fraction of traffic allow flexibility in endpoint placement and the network indeed has hotspots, we can now formally define the problem Sinbad is trying to address.

    ------------------------------------------------------------------------------------------------------------------------------
    The network became more balanced as well.
    Notice that in both EC2 experiments and trace-based simulations, the orange moved toward the left, which indicate decreased network imbalance.
  • Sinbad optimizes for the network: it checks link utilization roughly every 10 seconds and decides where to put replicas based on the network, not only on storage.
    Short term here means roughly one hour.
  • There has been a LOT of work on better optimizing the network.
    And the solutions largely fall into three categories.
    The first approach is to increase the capacity of the network.
    This includes moving from 1GigE to 10GigE links and increasing bisection bandwidth of datacenter networks.
    In fact, there have been a lot of proposals on designing full bisection bandwidth networks.
    However, full bisection bandwidth does not mean infinite bandwidth, and the size of workload is always increasing.
    In practice, many clusters still have some amount of oversubscription in their core-to-rack links.

    The next approach is decreasing the amount of network traffic.
    All the work on data locality, and there have been many, try to decrease network communication by moving computation closer to its input.
    Recently, many researchers have looked into static analysis of data-intensive applications to decrease communication.
    These are all best effort mechanisms, and there is always some data that must traverse the network.

    This brings us to the third approach, that is load balancing the network.
    Typically it focuses on managing large flows and optimizing communication of intermediate data.
    Our recent work on Orchestra and Coflow also fall in this category.
    This work is about going one step further in this direction.