Storage Infrastructure Behind Facebook Messages

1. Storage Infrastructure Behind Facebook Messages
   HBase/HDFS/ZK/MapReduce
   Kannan Muthukkaruppan, Software Engineer, Facebook
   Big Data Experiences & Scars, HPTS 2011, Oct 24th, 2011
2. The New Facebook Messages
   Messages, Chats, Emails, SMS
3. Why we chose HBase
   ▪ High write throughput
   ▪ Good random read performance
   ▪ Horizontal scalability
   ▪ Automatic failover
   ▪ Strong consistency
   ▪ Benefits of HDFS
     ▪ Fault tolerant, scalable, checksums, MapReduce
     ▪ Internal dev & ops expertise
4. What do we store in HBase
   ▪ HBase
     ▪ Small messages
     ▪ Message metadata (thread/message indices)
     ▪ Search index
   ▪ Haystack (our photo store)
     ▪ Attachments
     ▪ Large messages
5. HBase-HDFS System Overview
   [Diagram: Database layer (HBase master, backup master, region servers),
   storage layer (HDFS namenode, secondary namenode, datanodes), and
   coordination service (ZooKeeper quorum of ZK peers)]
6. Facebook Messages Architecture
   [Diagram: Clients (front end, MTA, etc.) ask the User Directory Service
   "What's the cell for this user?" and are routed to one of several cells.
   Each cell has application servers backed by an HBase/HDFS/ZK cluster holding
   messages, metadata, and the search index; attachments and large messages go
   to Haystack]
7. Typical Cluster Layout
   ▪ Multiple clusters/cells for messaging
   ▪ 20 servers/rack; 5 or more racks per cluster
   ▪ Controllers (master/ZooKeeper) spread across racks
   [Diagram: 5 racks; each rack runs a ZooKeeper peer plus one controller
   (HDFS namenode, backup namenode, job tracker, HBase master, or backup HBase
   master) and 19 nodes each running a region server, datanode, and task tracker]
8. Facebook Messages: Quick Stats
   ▪ 6B+ messages/day
   ▪ Traffic to HBase
     ▪ 75+ billion R+W ops/day
     ▪ At peak: 1.5M ops/sec
     ▪ ~55% read vs. 45% write ops
     ▪ Avg write op inserts ~16 records across multiple column families
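As a rough sanity check on these numbers (a back-of-the-envelope calculation, not from the deck):

$$\frac{75 \times 10^{9}\ \text{ops/day}}{86{,}400\ \text{s/day}} \approx 8.7 \times 10^{5}\ \text{ops/s},$$

so the quoted 1.5M ops/sec peak is roughly 1.7x the daily average rate.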
9. Facebook Messages: Quick Stats (contd.)
   ▪ 2PB+ of online data in HBase (6PB+ with replication; excludes backups)
     ▪ Message data, metadata, search index
   ▪ All data LZO compressed
   ▪ Growing at 250TB/month
10. Facebook Messages: Quick Stats (contd.)
   Timeline:
   ▪ Started in Dec 2009
   ▪ Roll out started in Nov 2010
   ▪ Fully rolled out by July 2011 (migrated 1B+ accounts from legacy messages!)
   While in production:
   ▪ Schema changes: not once, but twice!
   ▪ Implemented & rolled out HFile V2 and numerous other optimizations in an upward-compatible manner!
11. Shadow Testing: Before Rollout
   ▪ Both product and infrastructure were changing.
   ▪ Shadowed the old Messages product + Chat while the new one was under development
   [Diagram: The production Messages (MySQL) and Chat (Cassandra ChatLogger)
   frontends mirror production traffic for a percentage of users to shadow
   application servers backed by an HBase/HDFS cluster; data for those users
   is backfilled from V1 MySQL]
12. Shadow Testing: After Rollout
   ▪ Shadows the new version of the Messages product.
   ▪ All backend changes go through the shadow cluster before the prod push
   [Diagram: The new Messages frontend feeds both production application
   servers (production HBase/HDFS cluster) and shadow application servers
   (shadow HBase/HDFS cluster)]
13. Backup/Recovery (V1)
   ▪ During the early phase, we were concerned about potential bugs in HBase.
   ▪ Off-line backups: written to HDFS via Scribe
   ▪ Recovery tools; testing of recovery tools
   ▪ Double log from the app server & HBase to reduce the probability of data loss
   [Diagram: The Messages frontend and application servers log via Scribe to a
   local-datacenter HDFS alongside the HBase/HDFS cluster; backups are merged
   & deduped, stored locally, and copied to a remote-datacenter HDFS]
14. Backups (V2)
   ▪ Now does periodic HFile-level backups.
   ▪ Working on:
     ▪ Moving to HFile + commit-log based backups to be able to recover to finer-grained points in time
     ▪ Avoiding the need to log data to Scribe
     ▪ Zero-copy (hard-link based) fast backups
15. Messages Schema & Evolution
   ▪ "Actions" (data) column family is the source of truth
     ▪ Log of all user actions (addMessage, markAsRead, etc.)
   ▪ Metadata (thread index, message index, search index), etc. in other column families
   ▪ The metadata portion of the schema underwent 3 changes:
     ▪ Coarse-grained snapshots (early development; rollout up to 1M users)
     ▪ Hybrid (up to full rollout: 1B+ accounts; 800M+ active)
     ▪ Fine-grained metadata (after rollout)
   ▪ MapReduce jobs against production clusters!
     ▪ Ran in a throttled way
     ▪ Heavy use of HBase bulk import features (see the sketch below)
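For context on the bulk import path, a minimal MapReduce bulk-load job against the HBase API of that era might look like the sketch below. The table name ("messages"), column family ("actions"), and tab-separated input format are illustrative assumptions, not the actual migration pipeline.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkImportSketch {

  // Turns one input line ("rowkey<TAB>payload") into a Put for the target table.
  static class ImportMapper
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable key, Text line, Context ctx)
        throws java.io.IOException, InterruptedException {
      String[] parts = line.toString().split("\t", 2);   // assumed input format
      byte[] row = Bytes.toBytes(parts[0]);
      Put put = new Put(row);
      put.add(Bytes.toBytes("actions"), Bytes.toBytes("payload"),
              Bytes.toBytes(parts[1]));
      ctx.write(new ImmutableBytesWritable(row), put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "bulk-import-sketch");
    job.setJarByClass(BulkImportSketch.class);
    job.setMapperClass(ImportMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // Configures the reducer/partitioner so output HFiles line up with region boundaries.
    HTable table = new HTable(conf, "messages");
    HFileOutputFormat.configureIncrementalLoad(job, table);

    if (job.waitForCompletion(true)) {
      // Atomically moves the generated HFiles into the table's regions.
      new LoadIncrementalHFiles(conf).doBulkLoad(new Path(args[1]), table);
    }
  }
}
```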
16. Write Path Overview
   [Diagram: A region server hosts many regions; each region has per-column-family
   memstores and HFiles in HDFS. A write is appended/synced to the write-ahead log
   in HDFS, then applied to the memstore]
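To ground the diagram, a client-side write in the HBase API of that era looks roughly like the sketch below; the table name, column families, row key, and values are illustrative assumptions, not the actual Messages schema.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class WritePathSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "messages");   // hypothetical table name

    // One logical write can touch multiple column families; the region server
    // appends it to the WAL (and syncs) before updating the memstores.
    Put put = new Put(Bytes.toBytes("user123"));   // illustrative row key
    put.add(Bytes.toBytes("actions"), Bytes.toBytes("addMessage:42"),
            Bytes.toBytes("hello"));
    put.add(Bytes.toBytes("metadata"), Bytes.toBytes("threadIndex:7"),
            Bytes.toBytes("..."));
    table.put(put);

    table.close();
  }
}
```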
17. Flushes: Memstore -> HFile
   [Diagram: A region's column-family memstore flushes to a new HFile]
   Data in an HFile is sorted; it has a block index for efficient retrieval
18. Read Path Overview
   [Diagram: A Get against a region server is answered by consulting the
   memstore and the HFiles of the relevant column family]
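The corresponding client-side read, again as a hedged sketch with assumed table and column-family names:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadPathSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "messages");   // hypothetical table name

    // A Get is answered by merging the memstore with any HFiles that may
    // contain the row (bloom filters and time-range hints prune HFiles).
    Get get = new Get(Bytes.toBytes("user123"));
    get.addFamily(Bytes.toBytes("metadata"));      // restrict to one column family
    Result result = table.get(get);
    byte[] value = result.getValue(Bytes.toBytes("metadata"),
                                   Bytes.toBytes("threadIndex:7"));
    System.out.println(value == null ? "miss" : Bytes.toString(value));

    table.close();
  }
}
```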
19. Compactions
   [Diagram: Compaction merges a column family's HFiles into fewer, larger files]
20. Reliability: Early Work
   ▪ HDFS sync support for durability of transactions (see the sketch below)
   ▪ Multi-CF transaction atomicity
   ▪ Several bug fixes in log recovery
   ▪ New block placement policy in HDFS
     ▪ To reduce the probability of data loss
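The HDFS sync support referenced above is what lets a region server force WAL appends out to the datanode pipeline before acknowledging a write. At the HDFS client level the primitive looks roughly like this standalone sketch (not HBase's actual HLog code; the path and record format are made up):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WalSyncSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Append a log record, then sync so the bytes are pushed to the datanodes
    // in the write pipeline and survive a region server crash.
    FSDataOutputStream out = fs.create(new Path("/wal/region-1234.log")); // illustrative path
    out.write("addMessage user123 42\n".getBytes("UTF-8"));
    out.sync();   // hflush()/hsync() in later Hadoop releases

    out.close();
    fs.close();
  }
}
```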
21. Availability: Early Work
   ▪ Common reasons for unavailability:
     ▪ S/W upgrades
       ▪ Solution: rolling upgrades
     ▪ Schema changes
       ▪ Applications need new column families
       ▪ Need to change settings for a CF
       ▪ Solution: online "alter table" (see the sketch below)
     ▪ Load balancing or cluster restarts took forever
       ▪ Upon investigation: stuck waiting for compactions to finish
       ▪ Solution: interruptible compactions!
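For a sense of what the online "alter table" amounts to on the client side, here is a sketch against the admin API of that era. The table and column-family names are illustrative assumptions, and whether this runs without disabling the table depends on the online schema-change support this slide describes.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class OnlineAlterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    // Add a new column family to an existing table; with online schema change
    // the goal is to avoid taking the table offline for this.
    HColumnDescriptor newFamily = new HColumnDescriptor("searchIndex"); // illustrative CF name
    admin.addColumn("messages", newFamily);
  }
}
```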
22. Performance: Early Work
   ▪ Read optimizations:
     ▪ Seek optimizations for rows with a large number of cells
     ▪ Bloom filters
       ▪ Minimize HFile lookups
     ▪ Timerange hints on HFiles (great for temporal data)
     ▪ Multigets (see the sketch below)
     ▪ Improved handling of compressed HFiles
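A multiget batches several row lookups into one client call, and a time range on each Get lets the server skip HFiles whose data falls entirely outside that window. A hedged sketch (table, family, and row keys are assumptions):

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class MultigetSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "messages");   // hypothetical table name

    long now = System.currentTimeMillis();
    List<Get> gets = new ArrayList<Get>();
    for (String user : new String[] {"user123", "user456", "user789"}) {
      Get get = new Get(Bytes.toBytes(user));
      get.addFamily(Bytes.toBytes("metadata"));
      // Only interested in the last 7 days; time-range hints on HFiles let the
      // region server skip files that are entirely older than this window.
      get.setTimeRange(now - 7L * 24 * 3600 * 1000, now);
      gets.add(get);
    }

    // One client call fetches all rows (grouped by region server internally).
    Result[] results = table.get(gets);
    for (Result r : results) {
      System.out.println(Bytes.toString(r.getRow()) + " -> " + r.size() + " cells");
    }
    table.close();
  }
}
```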
23. Performance: Compactions
   ▪ Critical for read performance
   ▪ Old algorithm (was):
     #1. Start from the newest file (file 0); include the next file if:
         size[i] < size[i-1] * C   (good!)
     #2. Always compact at least 4 files, even if rule #1 isn't met.
   ▪ Solution (now):
     #1. Compact at least 4 files, but only if eligible files are found.
     #2. Also, new file selection based on summation of sizes:
         size[i] < (size[0] + size[1] + ... + size[i-1]) * C
   (A sketch of the new selection rule follows below.)
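A minimal sketch of the sum-based selection rule described on this slide. It captures only the selection logic, not HBase's actual compaction code; the ratio C and the minimum of 4 files follow the slide, and the example sizes are made up.

```java
import java.util.ArrayList;
import java.util.List;

public class CompactionSelectionSketch {
  /**
   * Selects a contiguous run of files (newest first) to compact.
   * File i is eligible if size[i] < (size[0] + ... + size[i-1]) * c,
   * i.e. it is not much larger than everything newer than it combined.
   * Returns an empty list unless at least minFiles are eligible.
   */
  static List<Integer> selectFiles(long[] sizes, double c, int minFiles) {
    List<Integer> selected = new ArrayList<Integer>();
    long sumOfNewer = 0;
    for (int i = 0; i < sizes.length; i++) {
      if (i > 0 && sizes[i] >= sumOfNewer * c) {
        break;                        // the run of eligible files ends here
      }
      selected.add(i);
      sumOfNewer += sizes[i];
    }
    return selected.size() >= minFiles ? selected : new ArrayList<Integer>();
  }

  public static void main(String[] args) {
    // Sizes ordered newest (index 0) to oldest, e.g. in MB.
    long[] sizes = {10, 11, 20, 40, 500};
    // With C = 1.2 the 500MB file fails the sum test ((10+11+20+40)*1.2 = 97.2),
    // so only the four small files are selected for compaction.
    System.out.println(selectFiles(sizes, 1.2, 4));   // prints [0, 1, 2, 3]
  }
}
```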
24. Performance: Compactions
   ▪ More problems!
     ▪ Read performance dips during peak
     ▪ Major compaction storms
     ▪ Large compactions bottleneck
   ▪ Enhancements/fixes:
     ▪ Staggered major compactions
     ▪ Multi-threaded compactions; separate queues for small & big compactions
     ▪ Aggressive off-peak compactions
   [Diagram: Separate small and big compaction queues, each holding pending compactions]
25. Metrics, metrics, metrics...
   ▪ Initially, only had coarse-level overall metrics (get/put latency/ops; block cache counters).
   ▪ Slow query logging
   ▪ Added per-column-family stats for:
     ▪ Op counts, latency
     ▪ Block cache usage & hit ratio
     ▪ Memstore usage
     ▪ On-disk file sizes
     ▪ File counts
     ▪ Bytes returned, bytes flushed, compaction statistics
     ▪ Stats by block type (data blocks vs. index blocks vs. bloom blocks, etc.)
     ▪ Bloom filter stats
26. Metrics (contd.)
   ▪ HBase master statistics:
     ▪ Number of region servers alive
     ▪ Number of regions
     ▪ Load balancing statistics
     ▪ ...
   ▪ All stats stored in Facebook's Operational Data Store (ODS).
   ▪ Lots of ODS dashboards for debugging issues
     ▪ Side note: ODS is planning to use HBase for storage pretty soon!
27. Need to keep up as data grows on you!
   ▪ Rapidly iterated on several new features while in production:
     ▪ Block indexes up to 6GB per server! Cluster startup taking longer and longer; block cache hit ratio on the decline.
       ▪ Solution: HFile V2
       ▪ Multi-level block index, sharded bloom filters
     ▪ Network pegged after restarts
       ▪ Solution: locality on full & rolling restart
     ▪ High disk utilization during peak
       ▪ Solution: several "seek" optimizations to reduce disk IOPS
       ▪ Lazy seeks (use time hints to avoid seeking into older HFiles)
       ▪ Special bloom filter for deletes to avoid an additional seek
       ▪ Utilize off-peak IOPS to do more aggressive compactions during off-peak hours
  28. 28. Scares & Scars!▪ Not without our share of scares and incidents: ▪ s/w bugs. (e.g., deadlocks, incompatible LZO used for bulk imported data, etc.) ▪ found a edge case bug in log recovery as recently as last week! ▪ performance spikes every 6 hours (even off-peak)! ▪ cleanup of HDFS’s Recycle bin was sub-optimal! Needed code and config fix. ▪ transient rack switch failures ▪ Zookeeper leader election took than 10 minutes when one member of the quorum died. Fixed in more recent version of ZK. ▪ HDFS Namenode – SPOF ▪ flapping servers (repeated failures)
  29. 29. Scares & Scars! (contd.)▪ Sometimes, tried things which hadn’t been tested in dark launch! ▪ Added a rack of servers to help with performance issue ▪ Pegged top of the rack network bandwidth! ▪ Had to add the servers at much slower pace. Very manual . ▪ Intelligent load balancing needed to make this more automated.▪ A high % of issues caught in shadow/stress testing▪ Lots of alerting mechanisms in place to detect failures cases ▪ Automate recovery for a lots of common ones ▪ Treat alerts on shadow cluster as hi-pri too!▪ Sharding service across multiple HBase cells also paid off
30. Future Work
   ▪ Reliability, Availability, Scalability!
   ▪ Lots of new use cases on top of HBase in the works.
   ▪ HDFS namenode HA
   ▪ Recovering gracefully from transient issues
   ▪ Fast hot backups
   ▪ Delta encoding in the block cache
   ▪ Replication
   ▪ Performance (HBase and HDFS)
   ▪ HBase as a service; multi-tenancy
   ▪ Features: coprocessors, secondary indices
31. Acknowledgements
   Work of a lot of people spanning HBase, HDFS, migrations, the backup/recovery pipeline, and Ops!
   Karthik Ranganathan, Mikhail Bautin, Nicolas Spiegelberg, Pritam Damania, Aravind Menon, Prakash Khemani, Dhruba Borthakur, Doug Porter, Hairong Kuang, Alex Laslavic, Jonathan Gray, Kannan Muthukkaruppan, Rodrigo Schmidt, Amitanand Aiyer, Guoqiang (Jerry) Chen, Liyin Tang, Madhu Vaidya
   + Apache HBase
   + Interns
   + Messages FrontEnd Team
   + Messages AppServer Team
   + many more folks...
32. Thanks! Questions?
   facebook.com/engineering