
TritonSort: A Balanced Large-Scale Sorting System (NSDI 2011)


We present TritonSort, a highly efficient, scalable sorting system. It is designed to process large datasets, and has been evaluated against as much as 100 TB of input data spread across 832 disks in 52 nodes at a rate of 0.916 TB/min. When evaluated against the annual Indy GraySort sorting benchmark, TritonSort is 60% better in absolute performance and has over six times the per-node efficiency of the previous record holder. In this paper, we describe the hardware and software architecture necessary to operate TritonSort at this level of efficiency. Through careful management of system resources to ensure cross-resource balance, we are able to sort data at approximately 80% of the disks' aggregate sequential write speed. We believe the work holds a number of lessons for balanced system design and for scale-out architectures in general. While many interesting systems are able to scale linearly with additional servers, per-server performance can lag behind per-server capacity by more than an order of magnitude. Bridging the gap between high scalability and high performance would enable either significantly cheaper systems that are able to do the same work or provide the ability to address significantly larger problem sets with the same infrastructure.



  1. TritonSort: A Balanced Large-Scale Sorting System
     Alex Rasmussen, George Porter, Michael Conley, Radhika Niranjan Mysore, Amin Vahdat (UCSD); Harsha V. Madhyastha (UC Riverside); Alexander Pucher (Vienna University of Technology)
  2. The Rise of Big Data Workloads
     • Very high I/O and storage requirements
       – Large-scale web and social graph mining
       – Business analytics: “you may also like …”
       – Large-scale “data science”
     • Recent new approaches to the “data deluge”: data-intensive scalable computing (DISC) systems
       – MapReduce, Hadoop, Dryad, …
  3. Performance via Scalability
     • 10,000+ node MapReduce clusters deployed
       – With impressive performance
     • Example: Yahoo! Hadoop cluster sort
       – 3,452 nodes sorting 100 TB in less than 3 hours
     • But…
       – Less than 3 MB/sec per node
       – Single disk: ~100 MB/sec
     • Not an isolated case
       – See “Efficiency Matters!”, SIGOPS 2010
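As a quick sanity check on the “less than 3 MB/sec per node” figure, here is a minimal back-of-the-envelope calculation (decimal units; the 3-hour figure is an upper bound on time, so the result is a lower bound on throughput):

```python
# Back-of-the-envelope check of the per-node throughput claim on this slide.
nodes = 3452           # Yahoo! Hadoop cluster size
data_tb = 100          # 100 TB sorted
hours = 3              # "less than 3 hours", so this is an upper bound on time

data_mb = data_tb * 1_000_000              # TB -> MB, decimal units
seconds = hours * 3600
per_node = data_mb / (nodes * seconds)
print(f"{per_node:.2f} MB/sec per node")   # ~2.68 MB/sec, i.e. < 3 MB/sec
```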
  4. Overcoming Inefficiency with Brute Force
     • Just add more machines!
       – But that means expensive, power-hungry mega-datacenters
     • What if we could go from 3 MBps per node to 30?
       – 10x fewer machines accomplishing the same task,
       – or 10x higher throughput
  5. TritonSort Goals
     • Build a highly efficient DISC system that improves per-node efficiency by an order of magnitude vs. existing systems
       – Through balanced hardware and software
     • Secondary goals:
       – Completely “off-the-shelf” components
       – Focus on I/O-driven workloads (“Big Data”)
       – Problems that don’t come close to fitting in RAM
       – Initially sorting, but since generalized
  6. Outline
     • Define hardware and software balance
     • TritonSort design, highlighting tradeoffs made to achieve balance
     • Evaluation with sorting as a case study
  7. Building a “Balanced” System
     • Balanced hardware drives all resources as close to 100% utilization as possible
       – Removing any resource slows us down
       – Limited by commodity configuration choices
     • Balanced software fully exploits the hardware resources
  8. Hardware Selection
     • Designed for I/O-heavy workloads, not just sorting
     • Static selection of resources:
       – Network/disk balance: 10 Gbps / 80 MBps ≈ 16 disks
       – CPU/disk balance: 2 disks per core = 8 cores
       – CPU/memory: originally ~1.5 GB/core, later 3 GB/core
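These ratios follow from simple unit conversion. A minimal sketch, assuming the slide's ~80 MB/s sequential disk bandwidth and 2-disks-per-core ratio:

```python
# Reproduce the hardware-balance arithmetic from this slide (decimal units).
nic_gbps = 10
disk_seq_mbps = 80                      # sustained sequential MB/s per disk

nic_mbps = nic_gbps * 1000 / 8          # 10 Gbps ~= 1250 MB/s
disks_per_node = nic_mbps / disk_seq_mbps
print(f"disks to match the NIC: ~{disks_per_node:.0f}")   # ~16

disks_per_core = 2                      # CPU/disk balance assumed on the slide
cores_needed = 16 / disks_per_core
print(f"cores to match 16 disks: {cores_needed:.0f}")     # 8
```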
  9. Resulting Hardware Platform
     52 nodes, each with:
     • Xeon E5520, 8 cores (16 with hyperthreading)
     • 24 GB RAM
     • 16 x 7,200 RPM hard drives
     • 10 Gbps NIC
     Interconnect: Cisco Nexus 5020 10 Gbps switch
  10. Software Architecture
     • Staged, pipeline-oriented dataflow system
     • Program expressed as a digraph of stages
       – Data stored in buffers that move along edges
       – A stage’s work is performed by worker threads
     • Platform for experimentation; easy to vary:
       – Stage implementation
       – Size and quantity of buffers
       – Worker threads per stage
       – CPU and memory allocation to each stage
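To make the staged-dataflow idea concrete, here is a minimal sketch (not TritonSort's actual code or API) of stages connected by bounded queues, each driven by a configurable number of worker threads:

```python
# Minimal sketch of a staged, pipeline-oriented dataflow: stages are nodes in a
# graph, buffers move along edges (bounded queues), and each stage is driven by
# a pool of worker threads. Names and sizes are illustrative.
import queue
import threading
import time

class Stage:
    def __init__(self, name, work_fn, downstream=None, num_workers=1, queue_depth=64):
        self.name = name
        self.work_fn = work_fn                         # transforms one buffer
        self.downstream = downstream                   # next stage, or None for a sink
        self.inbox = queue.Queue(maxsize=queue_depth)  # bounded: gives backpressure
        self.workers = [threading.Thread(target=self._run, daemon=True)
                        for _ in range(num_workers)]

    def start(self):
        for w in self.workers:
            w.start()

    def _run(self):
        while True:
            buf = self.inbox.get()
            out = self.work_fn(buf)
            if self.downstream is not None and out is not None:
                self.downstream.inbox.put(out)

# Example wiring with different worker counts per stage.
writer = Stage("writer", lambda buf: print("write", buf), num_workers=2)
sorter = Stage("sorter", lambda buf: sorted(buf), downstream=writer, num_workers=3)
reader = Stage("reader", lambda buf: buf, downstream=sorter, num_workers=1)
for s in (writer, sorter, reader):
    s.start()

reader.inbox.put([3, 1, 2])
time.sleep(0.1)     # let the daemon workers drain the pipeline before exiting
```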
  11. Why Sorting?
     • Easy to describe
     • Industrially applicable
     • Uses all cluster resources
  12. Current TritonSort Architecture
     • External sort: two reads, two writes*
       – Don’t read from and write to the same disk at the same time
     • Partition disks into input and output sets
     • Two phases:
       – Phase one: route tuples to the appropriate on-disk partition (called a “logical disk”) on the appropriate node
       – Phase two: sort all logical disks in parallel
     * A. Aggarwal and J. S. Vitter. The Input/Output Complexity of Sorting and Related Problems. CACM, 1988.
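A toy, in-memory sketch of this two-phase structure; Python lists stand in for the on-disk logical disks, and the range-partitioning function is illustrative rather than TritonSort's actual routing logic:

```python
# Toy two-phase external sort: phase one routes tuples to "logical disks"
# (lists here, on-disk partitions in the real system); phase two sorts each
# logical disk independently, which yields a globally sorted result because
# the partitions are key-range ordered.
NUM_LOGICAL_DISKS = 4

def partition_id(key: bytes) -> int:
    # Illustrative range partitioning on the first key byte.
    return key[0] * NUM_LOGICAL_DISKS // 256

def phase_one(records):
    logical_disks = [[] for _ in range(NUM_LOGICAL_DISKS)]
    for key, value in records:
        logical_disks[partition_id(key)].append((key, value))
    return logical_disks

def phase_two(logical_disks):
    return [sorted(ld) for ld in logical_disks]

records = [(bytes([b]), b"value") for b in (200, 5, 130, 66)]
print(phase_two(phase_one(records)))
```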
  13. Architecture: Phase One
      [Diagram: Input Disks → Reader → NodeDistributor → Sender]
  14. Architecture: Phase One (continued)
      [Diagram: Receiver → LD Distributor → Coalescer → Writer → Output Disks (Disk 1 through Disk 8), with a linked list of buffers per partition]
  15. Reader
      • 100 MBps/disk × 8 disks = 800 MBps
      • No computation; entirely I/O and memory operations
        – Expect most time to be spent in iowait
        – 8 reader workers, one per input disk → all reader workers co-scheduled on a single core
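A minimal sketch of the one-worker-per-input-disk pattern; the read size, queue depth, and mount paths are assumptions for illustration, not values from the slides:

```python
# One reader worker per input disk, each pushing fixed-size buffers onto a
# bounded queue that feeds the NodeDistributor. Paths and sizes are hypothetical.
import queue
import threading

READ_BUFFER_BYTES = 8 * 1024 * 1024     # assumed read size

def reader_worker(path, out_queue):
    with open(path, "rb") as f:
        while True:
            buf = f.read(READ_BUFFER_BYTES)
            if not buf:
                break
            out_queue.put(buf)          # blocks if the downstream stage falls behind

out_q = queue.Queue(maxsize=32)
readers = [threading.Thread(target=reader_worker,
                            args=(f"/mnt/disk{i}/input", out_q), daemon=True)
           for i in range(8)]           # 8 input disks -> 8 reader workers
# (In the real pipeline these workers would now be started and co-scheduled.)
```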
  16. NodeDistributor
      • Appends tuples onto a buffer per destination node
      • Memory scan + hash per tuple
      • ~300 MBps per worker
        – Need three workers to keep up with the readers
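A sketch of the per-destination-node buffering this stage performs; the 52-node count comes from the cluster description, while the flush threshold is an assumption:

```python
# NodeDistributor sketch: hash each tuple's key to a destination node, append
# the tuple to that node's buffer, and hand full buffers to the Sender.
NUM_NODES = 52
FLUSH_TUPLES = 1024          # assumed per-buffer flush threshold

def node_distribute(tuples, send_to_node):
    buffers = [[] for _ in range(NUM_NODES)]
    for key, value in tuples:             # one memory scan + hash per tuple
        node = hash(key) % NUM_NODES
        buffers[node].append((key, value))
        if len(buffers[node]) >= FLUSH_TUPLES:
            send_to_node(node, buffers[node])
            buffers[node] = []
    return buffers                        # leftover partial buffers
```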
  17. Sender & Receiver
      • 800 MBps (from the Reader) is 6.4 Gbps
        – All-to-all traffic
      • Must keep downstream disks busy
        – Don’t let the receive buffer run empty
        – Implies a strict bound on socket send time
      • Multiplex all senders on one core (single-threaded tight loop)
        – Visit every socket every 20 µs
        – Didn’t need epoll()/select()
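A rough sketch of such a single-threaded, non-blocking send loop; the per-visit send cap and data structures are illustrative, and this is not the actual TritonSort sender:

```python
# Single-threaded sender: round-robin over all destination sockets, sending a
# bounded amount per visit so no socket (and hence no remote disk) starves.
MAX_SEND_PER_VISIT = 64 * 1024   # assumed per-visit cap

def sender_loop(socks, pending):
    # socks:   dict of node_id -> connected TCP socket
    # pending: dict of node_id -> bytearray of data queued for that node
    for s in socks.values():
        s.setblocking(False)
    while True:
        for node, s in socks.items():
            buf = pending[node]
            if not buf:
                continue
            try:
                sent = s.send(buf[:MAX_SEND_PER_VISIT])
            except BlockingIOError:
                continue                 # kernel buffer full; revisit next pass
            del buf[:sent]
```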
  18. Balancing at Scale
  19. Logical Disk Distributor
      [Diagram: incoming tuples t0, t1, t2 are hashed to one of N logical-disk buffers, e.g. H(t0) = 1 and H(t1) = N; each buffer is 12.8 KB]
  20. Logical Disk Distributor
      • Data is non-uniform and bursty at short timescales
        – Big buffers + burstiness = head-of-line blocking
        – Need to use all of your memory all of the time
      • Solution: read incoming data into the smallest buffer possible, and form chains
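A sketch of this small-buffer chaining idea; the 12.8 KB buffer size comes from the previous slide, while the chain hand-off threshold is an assumption:

```python
# LD Distributor sketch: data for each logical disk accumulates in a chain of
# small buffers; a chain is handed to the Coalescer once it is long enough to
# justify a large sequential write.
from collections import defaultdict

LD_BUFFER_BYTES = 12800                 # 12.8 KB small buffers (previous slide)
CHAIN_HANDOFF_BYTES = 4 * 1024 * 1024   # assumed hand-off threshold

chains = defaultdict(list)              # logical disk id -> list of small buffers
chain_bytes = defaultdict(int)

def ld_distribute(ld_id, data, emit_chain):
    chain = chains[ld_id]
    if not chain or len(chain[-1]) + len(data) > LD_BUFFER_BYTES:
        chain.append(bytearray())       # start a new small buffer for this chain
    chain[-1].extend(data)
    chain_bytes[ld_id] += len(data)
    if chain_bytes[ld_id] >= CHAIN_HANDOFF_BYTES:
        emit_chain(ld_id, chain)        # Coalescer + Writer take it from here
        chains[ld_id] = []
        chain_bytes[ld_id] = 0
```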
  21. Coalescer & Writer
      • Copies tuples from LDBuffer chains into a single, sequential block of memory
      • Longer chains = larger writes before seeking = faster writes
        – But also more memory needed for LDBuffers
      • Buffer size limits the maximum chain length
        – How big should this buffer be?
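In sketch form, the Coalescer flattens a chain into one contiguous block so the Writer can issue a single large sequential write; the append-mode file handling here is illustrative:

```python
# Coalescer: copy a chain of small LDBuffers into one contiguous block.
def coalesce(chain):
    block = bytearray()
    for buf in chain:
        block.extend(buf)
    return bytes(block)

# Writer: one large sequential write per coalesced chain (path is hypothetical).
def write_block(ld_path, block):
    with open(ld_path, "ab") as f:
        f.write(block)
```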
  22. Writer
      [Diagram-only slide: the Writer stage in the phase-one pipeline]
  23. Architecture: Phase Two
      [Diagram: Input Disks → Reader → Sorter → Writer → Output Disks]
  24. Sort Benchmark Challenge
      • Started in the 1980s by Jim Gray, now run by a committee of volunteers
      • Annual competition with many categories
        – GraySort: sort 100 TB
      • “Indy” variant
        – 10-byte key, 90-byte value
        – Uniform key distribution
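For concreteness, a minimal stand-in for generating Indy-style records; the real benchmark generates input with the gensort tool, and this sketch only mimics the 10-byte key / 90-byte value layout:

```python
# Stand-in for Indy GraySort input: 100-byte records with a 10-byte uniformly
# random key and a 90-byte value.
import os

KEY_BYTES, VALUE_BYTES = 10, 90

def make_records(n):
    for _ in range(n):
        key = os.urandom(KEY_BYTES)            # uniform key distribution
        value = b"\x00" * VALUE_BYTES          # payload is irrelevant to sorting
        yield key + value

records = list(make_records(3))
print(len(records[0]))                         # 100 bytes per record
records.sort(key=lambda r: r[:KEY_BYTES])      # sorting compares only the key
```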
  25. How balanced are we?

      Phase one stages:
      Worker Type        Workers   Total Throughput (MBps)   % Over Bottleneck Stage
      Reader             8         683                       13%
      NodeDistributor    3         932                       55%
      LD Distributor     1         683                       13%
      Coalescer          8         18,593                    30,000%
      Writer             8         601                       0%

      Phase two stages:
      Worker Type        Workers   Total Throughput (MBps)   % Over Bottleneck Stage
      Reader             8         740                       3.2%
      Sorter             4         1,089                     52%
      Writer             8         717                       0%
  26. How balanced are we?

      Resource utilization:
      Phase       CPU    Memory   Network   Disk
      Phase One   25%    100%     50%       82%
      Phase Two   50%    100%     0%        100%
  27. Scalability
  28. Raw 100 TB “Indy” Performance
      [Chart: performance per node (TB per minute). TritonSort: 0.938 TB/min across 52 nodes; previous record holder: 0.564 TB/min across 195 nodes; roughly 6x higher per-node throughput.]
  29. Impact of Faster Disks
      • 7.2K RPM → 15K RPM drives
      • Smaller capacity means fewer LDs
      • Examined the effect of disk speed and number of LDs
      • Removing a bottleneck moves the bottleneck somewhere else

      Intermediate Disk   Logical Disks Per   Phase One           Phase One          Average Write
      Speed (RPM)         Physical Disk       Throughput (MBps)   Bottleneck Stage   Size (MB)
      7,200               315                 69.81               Writer             12.6
      7,200               158                 77.89               Writer             14.0
      15,000              158                 79.73               LD Distributor     5.02
  30. Impact of Increased RAM
      • Hypothesis: more memory allows longer chains, and thus faster writes
      • Doubling memory indeed increases chain length, but the effect on performance was minimal
      • Increasing a non-bottleneck resource made it faster, but not by much

      RAM Per Node (GB)   Phase One Throughput (MBps)   Average Write Size (MB)
      24                  73.53                         12.43
      48                  76.43                         19.21
  31. Future Work
      • Generalization
        – We have a fast MapReduce implementation
        – Considering other applications and programming paradigms
      • Automatic Tuning
        – Determine appropriate buffer size & count, and # of workers per stage, for reasonable performance
      • Different hardware
      • Different workloads
  32. TritonSort – Questions?
      • Proof-of-concept balanced sorting system
      • 6x improvement in per-node efficiency vs. the previous record holder
      • Current top speed: 938 GB per minute
      • Future work: generalization, automation
      http://tritonsort.eng.ucsd.edu/
