
HBaseCon 2012 | You’ve got HBase! How AOL Mail Handles Big Data


The AOL Mail Team will discuss our implementation of HBase for two large-scale applications: an anti-abuse mechanism and a user-visible API. We will provide an overview of how and why HBase and Hadoop were incorporated into the massive and diverse technology stack that is the nearly 20-year-old AOL Mail system, and the history of how we took our HBase/Hadoop apps through our traditional lifecycle from design, through development and QA, and into production. We will explain how our practical approach to HBase has evolved over time, and we will discuss our lessons learned and some of the techniques and tools we developed through our iterative dev/QA and operational processes. We will also explain the pain points we have experienced with erratic usage and edge cases, and how we address problems when we run across them.



  1. You’ve got HBase: How AOL Mail handles Big Data
     Presented at HBaseCon, May 22, 2012
  2. The AOL Mail System
     - Over 15 years old
     - Constantly evolving
     - 10,000+ hosts
     - 70+ million mailboxes
     - 50+ billion emails
     - A technology stack that runs the gamut
  3. What that means…
     - Lots of data
     - Lots of moving parts
     - Tight SLAs
     - Mature system + young software = a tough marriage:
       - We don’t buy “commodity” hardware
       - Ingrained Dev/QA/Prod product lifecycle
       - Somewhat “version locked” to tried-and-true platforms
       - Service outages are expected to be quickly mitigated by our NOC without waiting for an on-call
  4. So where does HBase fit?
     - It’s a component, not the foundation
     - Currently used in two places
     - Being evaluated for more
     - It will remain a tool in our diverse Big Data arsenal
  5. An Activity Profiler
  6. An “Activity Profiler”
     - Watches for particular behaviors
     - Designed and built in June 2010
     - Originally “vanilla” Hadoop 0.20.2 + HBase 0.90.2; currently CDH3
     - 1.4+ million events/min
     - 60x 24 TB (raw) DataNodes with local RegionServers
     - 15x application hosts
     - An internal-only tool:
       - Used by automated anti-abuse systems
       - Leveraged by data analysts for ad-hoc queries and MapReduce jobs
  7. An “Activity Profiler”
  8. Why the “Event Catcher” layer?
     - Has to “speak the language” of our existing systems:
       - Easy to plug an HBase translator into existing data feeds (see the sketch below)
       - Hard to modify the infrastructure to speak HBase
     - Flume was too young at the time
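The deck doesn’t show what an “HBase translator” in the Event Catcher layer looks like, so here is a minimal, hypothetical sketch in Java. The tab-delimited record layout, the activity_events table, and the e column family are assumptions for illustration only; the HBase client calls match the 0.90-era API mentioned on slide 6.

```java
// Hypothetical "event catcher" translator: consume one record from an existing
// feed and translate it into an HBase Put. Table and family names are made up.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class EventCatcher {
    private final HTable table;

    public EventCatcher() throws Exception {
        Configuration conf = HBaseConfiguration.create();
        table = new HTable(conf, "activity_events");   // hypothetical table name
    }

    /** Translate one feed record ("user \t eventType \t payload") into a Put. */
    public void handleRecord(String record, long timestamp) throws Exception {
        String[] fields = record.split("\t", 3);
        // Row key: account id plus timestamp, so one account's events sort together.
        byte[] rowKey = Bytes.toBytes(fields[0] + ":" + timestamp);
        Put put = new Put(rowKey);
        put.add(Bytes.toBytes("e"), Bytes.toBytes(fields[1]), Bytes.toBytes(fields[2]));
        table.put(put);
    }
}
```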
  9. Why batch load via MapReduce?
     - Real time is not currently a requirement
     - Allows filtering at different points
     - Allows us to “trigger” events (designed before coprocessors)
     - Early data integrity issues necessitated “replaying”:
       - Missing append support early on
       - Holes in the .META. table
       - Long splits and GC pauses caused client timeouts
     - Can sample data into a “sandbox” for job development
     - Makes Pig, Hive, and other MapReduce jobs easy and stable
     - We keep the raw data around as well (a minimal loader is sketched below)
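As a rough illustration of the batch-load path, below is a minimal map-only MapReduce job that reads raw event files from HDFS and writes Puts to an HBase table through TableOutputFormat. The table name, column family, and record layout carry over from the previous sketch and are assumptions, not details from the talk.

```java
// Hypothetical map-only batch loader: raw tab-delimited event files in HDFS in,
// Puts out to an HBase table via TableOutputFormat.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class BatchEventLoader {

    static class LoadMapper extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
        @Override
        protected void map(LongWritable key, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] f = line.toString().split("\t", 3);   // user, event type, payload
            Put put = new Put(Bytes.toBytes(f[0]));
            put.add(Bytes.toBytes("e"), Bytes.toBytes(f[1]), Bytes.toBytes(f[2]));
            ctx.write(new ImmutableBytesWritable(put.getRow()), put);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set(TableOutputFormat.OUTPUT_TABLE, "activity_events"); // hypothetical table
        Job job = new Job(conf, "batch-event-load");
        job.setJarByClass(BatchEventLoader.class);
        job.setMapperClass(LoadMapper.class);
        job.setOutputKeyClass(ImmutableBytesWritable.class);
        job.setOutputValueClass(Put.class);
        job.setOutputFormatClass(TableOutputFormat.class);
        job.setNumReduceTasks(0);                          // map-only load
        FileInputFormat.addInputPath(job, new Path(args[0]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

For heavier loads, writing HFiles with HFileOutputFormat and importing them with the bulk-load tool is the usual alternative to pushing Puts through the RegionServers.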
  10. HBase and MapReduce can live in harmony
     - Bigger-than-“average” hardware: 36+ GB RAM, 8+ cores
     - Proper system tuning is essential:
       - Good information on tuning Hadoop is plentiful, but…
       - XFS > EXT
       - JBOD > RAID
     - As far as HBase is concerned… just go buy Lars’ book
     - Careful job development; optimization is key (see the scan-tuning sketch below)
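The “careful job development” point is largely about not letting MapReduce scans hammer the RegionServers. A small sketch of the job-level knobs involved follows; the table, family, and mapper names are assumed, not from the talk.

```java
// Scan tuning for a MapReduce job over HBase: fetch rows in bigger batches per RPC
// and skip the block cache so a full-table scan doesn't evict hot data.
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;

public class ScanTuning {

    /** A do-nothing mapper standing in for the real per-row work. */
    static class RowMapper extends TableMapper<ImmutableBytesWritable, NullWritable> {
        @Override
        protected void map(ImmutableBytesWritable row, Result value, Context ctx)
                throws IOException, InterruptedException {
            // per-row processing goes here
        }
    }

    public static void configureScan(Job job) throws IOException {
        Scan scan = new Scan();
        scan.setCaching(500);            // rows fetched per RPC during the scan
        scan.setCacheBlocks(false);      // full scans shouldn't churn the block cache
        scan.addFamily(Bytes.toBytes("e"));
        TableMapReduceUtil.initTableMapperJob(
            "activity_events",           // hypothetical table name
            scan, RowMapper.class,
            ImmutableBytesWritable.class, NullWritable.class,
            job);
    }
}
```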
  11. Contact History API
  12. Contact History API
     - Services a member-facing API
     - Designed and built in October 2010
     - Modeled after the previous application:
       - Built by a different engineering team
       - Used to solve a very different problem
     - 250K+ inserts/min; 3+ million inserts/min during MapReduce jobs
     - 20x 24 TB (raw) DataNodes with local RegionServers
     - 14x application hosts
     - Leverages Memcached to reduce query load on HBase (see the read-through sketch below)
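The slide only says that Memcached sits in front of HBase to absorb read load. A read-through sketch of that arrangement is below; the client library (spymemcached), table and column names, key scheme, and TTL are all assumptions for illustration, not details from the talk.

```java
// Hypothetical read-through cache: check Memcached first, fall back to an HBase Get,
// then populate the cache so repeat reads skip HBase entirely.
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class ContactHistoryDao {
    private static final int TTL_SECONDS = 300;   // assumed cache TTL

    private final MemcachedClient cache;
    private final HTable table;

    public ContactHistoryDao() throws Exception {
        cache = new MemcachedClient(new InetSocketAddress("memcached.local", 11211));
        table = new HTable(HBaseConfiguration.create(), "contact_history"); // hypothetical table
    }

    public String getHistory(String userId) throws Exception {
        String cached = (String) cache.get(userId);
        if (cached != null) {
            return cached;                          // cache hit: no HBase query
        }
        Result row = table.get(new Get(Bytes.toBytes(userId)));
        if (row.isEmpty()) {
            return null;
        }
        String value = Bytes.toString(row.getValue(Bytes.toBytes("h"), Bytes.toBytes("latest")));
        cache.set(userId, TTL_SECONDS, value);      // populate for subsequent reads
        return value;
    }
}
```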
  13. Contact History API
  14. Where we go from here
  15. Amusing mistakes to learn from
     - Exploding regions:
       - Batch inserts via MapReduce result in fast, symmetrical key-space growth
       - Attempting to split every region at the same time is a bad idea
       - Turning off region splitting and using a custom “rolling region splitter” is a good idea (sketched below)
       - Take time and load into consideration when selecting regions to split
     - Backups, backups, backups! You can never have too many
     - Large, non-splittable regions tell you things:
       - Our key space maps to accounts
       - Excessively large keys equal excessively “active” accounts
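The deck doesn’t show the “rolling region splitter” itself, but the idea can be sketched: with automatic splitting effectively disabled (for example by setting hbase.hregion.max.filesize very high), a small tool walks a table’s regions and splits them one at a time, pacing the splits so the whole key space doesn’t split at once. Everything below, including the selection-policy stub and the pause interval, is an assumed illustration against the HBase 0.90-era client API.

```java
// Hypothetical rolling region splitter: split regions one at a time instead of
// letting every region in the table split simultaneously.
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.HServerAddress;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;

public class RollingRegionSplitter {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTable table = new HTable(conf, args[0]);           // table name on the command line

        Map<HRegionInfo, HServerAddress> regions = table.getRegionsInfo();
        for (HRegionInfo region : regions.keySet()) {
            if (!shouldSplit(region)) {
                continue;                                   // skip regions that don't need it yet
            }
            admin.split(region.getRegionName());            // split just this one region
            Thread.sleep(60 * 1000L);                       // pace the splits; avoid a split storm
        }
    }

    /** Placeholder for the real policy: weigh region size, cluster load, time of day. */
    private static boolean shouldSplit(HRegionInfo region) {
        return true;
    }
}
```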
  16. Next-generation model
  17. Thanks!
