HBaseCon 2015: DeathStar - Easy, Dynamic, Multi-tenant HBase via YARN

  1. DeathStar: Easy, Dynamic, Multi-tenant HBase via YARN. Ishan Chhabra, Nitin Aggarwal (Rocketfuel Inc.)
  2. In a not so distant past…
  3. 1000 node cluster
  4. Rogue Applications
  5. Cannot customize per application
  6. Hard to capacity plan or support new applications
  7. The Common Solution: Separate Clusters
  8. Problems with separate clusters: non-uniform network usage; different DFSs, leading to a lot of data copying; low cluster utilization; high lead time for new applications
  9. Solution: DeathStar. Run HBase on YARN, built on top of Slider
  10. Provisioning Model: a shared Hangar cluster plus dedicated App Clusters 1, 2, and 3
  11. As applications mature from the hangar to their own cluster: hold a capacity planning and configuration discussion, create a simple JSON config, and commit it ((grid/deathstar): $ git commit)
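The "simple JSON config" is, in Slider terms, a resources.json describing the HBase components to run. A minimal sketch, assuming the component names from the standard Slider HBase app package; the instance counts and memory sizes here are illustrative, not Rocketfuel's actual values:

```json
{
  "schema": "http://example.org/specification/v2.0.0",
  "components": {
    "HBASE_MASTER": {
      "yarn.role.priority": "1",
      "yarn.component.instances": "1",
      "yarn.memory": "1024"
    },
    "HBASE_REGIONSERVER": {
      "yarn.role.priority": "2",
      "yarn.component.instances": "10",
      "yarn.memory": "8192"
    }
  }
}
```

Scaling the cluster later is then largely a change to `yarn.component.instances`.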
  12. Static Cluster: good to go. Dynamic Cluster: make API calls to start, stop, and scale the cluster
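The start/stop/scale calls map onto the Slider client CLI; a sketch against a live YARN cluster, with the cluster name `app1` and the config file paths made up for illustration:

```shell
# Create (start) the cluster from the committed JSON configs
slider create app1 --template appConfig.json --resources resources.json

# Stop the cluster; start it again later
slider stop app1
slider start app1

# Scale: flex the RegionServer component to 20 instances
slider flex app1 --component HBASE_REGIONSERVER 20
```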
  13. Strict Isolation
  14. Common HDFS layer: bulk load; MapReduce over snapshots
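Sharing one HDFS layer means data can move between app clusters without copying across filesystems. The two access paths named above look roughly like this with stock HBase tooling (table, snapshot, and path names are made up; these commands need a running HBase/HDFS deployment):

```shell
# Take a snapshot of a table; snapshot files live in the common HDFS layer
echo "snapshot 'events', 'events_snap'" | hbase shell -n

# MapReduce directly over the snapshot files, bypassing the RegionServers,
# e.g. to copy the data to another location
hbase org.apache.hadoop.hbase.mapreduce.ExportSnapshot \
  -snapshot events_snap -copy-to hdfs://backup-nn/hbase

# Bulk load pre-built HFiles into a running table
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
  /user/etl/hfiles events
```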
  15. Dynamic config and cluster size changes
  16. Fits into organization’s capacity planning model
  17. Clusters out of thin air
  18. “Dynamic” enables interesting use cases: hot-swap a new cluster (human error / corruption); easier HBase version upgrades and testing; temporary scale-up for backfill
  19. Key Challenges and Solutions
  20. Another failure mode: taken care of by auto restarts; RM HA in the works
  21. Early Days: Bugs. Slider did not acknowledge container allocations correctly; fix scheduled for 0.8: SLIDER-828
  22. Early Days: Bugs. Zombie RegionServers: not easily reproducible, still debugging
  23. Long-running apps are a secondary use case for YARN, so logging is an unsolved problem: store logs on local disks; considering ELK
  24. YARN/Slider lack certain scheduling constraints, e.g. at most x instances per node for spread and availability; handled by a custom in-house patch
  25. Rolling restarts for config changes: unsolved (SLIDER-226)
  26. Data Locality: locality vs. anti-locality
  27. Metrics Reporting: custom Hadoop metrics OpenTSDB reporter; app name passed via config
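Hadoop's metrics2 framework wires in custom sinks via a properties file, which is presumably how the OpenTSDB reporter and per-app name are plugged in. A sketch: the sink class and the `appname` property are hypothetical stand-ins for the in-house reporter; only the `[prefix].sink.[instance].*` pattern is standard metrics2 configuration:

```properties
# hadoop-metrics2-hbase.properties
hbase.sink.opentsdb.class=com.rocketfuel.metrics.OpenTsdbSink
hbase.sink.opentsdb.period=60
# App name passed via config, so metrics are tagged per DeathStar cluster
hbase.sink.opentsdb.appname=app1
```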
  28. Conclusion: Is it for me?
  29. Conclusion: Is it worth it?
  30. Thank you! Questions? Reach us at:
  31. Are you sure you want to scroll down?
  32. Really? Dungeons and Dragons await…
  33. Key Insight: HBase Multi-Tenancy and Access Patterns
  34. HBase behind a service: online operational store
  35. HBase as a mutable materialized view, fed by Data Pipelines 1-3 and Streams 1-2
  36. HBase as a transient cache: a prep stage followed by Stages 1-3