The Infrastructure
• SuperMicro half-width motherboards
• 2 x Intel L5630 (40W TDP) (16 hardware threads total)
• 48GB RAM
• Commodity disks (consumer-grade SATA, 7200rpm)
  • 1 x 2TB per “thin node” (4-in-2U) (web/app servers, gearman, etc.)
  • 6 x 2TB per “thick node” (2-in-2U) (Hadoop/HBase, elasticsearch, etc.) (86 nodes = 1PB; see the sanity check below)
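Assuming the 1PB refers to raw (pre-replication) disk across the 86 thick nodes, the figure checks out:

```python
# Raw capacity of the thick-node fleet: 86 nodes x 6 disks x 2TB per disk.
nodes, disks_per_node, tb_per_disk = 86, 6, 2
print(nodes * disks_per_node * tb_per_disk)  # 1032 TB, i.e. roughly 1PB raw
```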
The Infrastructure
• No virtualization
• No oversubscription
• Rack locality doesn’t matter much (sub-100µs RTT across racks)
• cgroups / Linux containers to keep MapReduce under control (see the sketch after this list)

Two production HBase clusters per colo:
• Low-latency (user-facing services)
• Batch (analytics, scheduled jobs...)
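The deck doesn’t show how the cgroup confinement is wired up; here is a minimal sketch of the general technique, capping the memory available to a MapReduce task tree via the cgroup v1 memory controller. The cgroup path and the 4GB limit are assumptions for illustration, not values from the talk.

```python
import os

# Assumed cgroup v1 memory controller mount point and cgroup name (hypothetical).
CGROUP = "/sys/fs/cgroup/memory/mapreduce"
MEM_LIMIT = 4 * 1024**3  # hypothetical 4GB cap per MapReduce task tree

def confine(pid):
    """Place a process under the memory cgroup so the kernel enforces the cap."""
    os.makedirs(CGROUP, exist_ok=True)
    with open(os.path.join(CGROUP, "memory.limit_in_bytes"), "w") as f:
        f.write(str(MEM_LIMIT))
    with open(os.path.join(CGROUP, "tasks"), "w") as f:
        f.write(str(pid))  # children forked after this inherit the cgroup

confine(os.getpid())  # requires root (or delegated cgroup ownership)
```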
Low-Latency Cluster
• Workload mostly driven by HBase
• Very few scheduled MR jobs
• HBase replication to the batch cluster
• Most queries from PHP over Thrift (see the sketch after this list)

Challenges:
• Tuning Hadoop for low latency
• Taming the long latency tail
• Quickly recovering from failures
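The PHP client code isn’t shown in the deck; as a stand-in for the PHP-over-Thrift path, here is a minimal sketch using happybase, a Python client that talks to HBase through the same Thrift gateway. The host, table name, row key, and column are all hypothetical.

```python
import happybase

# Hypothetical Thrift gateway host; HBase's Thrift server listens on 9090 by default.
connection = happybase.Connection(host="thrift-gw.example.com", port=9090)

table = connection.table("users")   # hypothetical table
row = table.row(b"user:12345")      # single-row point get over Thrift
print(row.get(b"info:email"))       # column family "info", qualifier "email"

connection.close()
```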
Batch Cluster
• 2x more capacity
• Wildly changing workload (e.g. 40K → 14M QPS)
• Lots of scheduled MR jobs
• Frequent ad-hoc jobs (MR/Hive)
• OpenTSDB’s data (written as sketched after this list):
  • >800M data points added per day
  • 133B data points total

Challenges:
• Resource isolation
• Tuning for larger scale
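For scale context, each of those data points enters OpenTSDB through a small write API. Here is a minimal sketch posting a single point to the standard /api/put HTTP endpoint (OpenTSDB 2.x); the host, metric name, and tag are hypothetical.

```python
import time
import requests

# Hypothetical TSD instance; 4242 is OpenTSDB's default port.
TSD = "http://tsd.example.com:4242"

point = {
    "metric": "sys.cpu.user",        # hypothetical metric
    "timestamp": int(time.time()),   # Unix epoch seconds
    "value": 42.5,
    "tags": {"host": "web42"},       # at least one tag is required
}

resp = requests.post(f"{TSD}/api/put", json=point)
resp.raise_for_status()  # OpenTSDB replies 204 No Content on success
```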