Facebook Messages & HBase
Nicolas Spiegelberg
Software Engineer, Facebook




April 8, 2011
Talk Outline

▪   About Facebook Messages
▪   Intro to HBase
▪   Why HBase
▪   HBase Contributions
▪   MySQL -> HBase Migration
▪   Future Plans
▪   Q&A
Facebook Messages
The New Facebook Messages

 Messages   Chats    Emails   SMS
Facebook Messages & HBase
Monthly data volume prior to launch

▪   15B x 1,024 bytes = 14TB
▪   120B x 100 bytes = 11TB
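
As a rough sanity check on those figures (assuming B means billions of items per month and the totals are binary terabytes):

\[ 15 \times 10^{9} \times 1024\ \text{bytes} \approx 1.5 \times 10^{13}\ \text{bytes} \approx 14\ \text{TB} \]

\[ 120 \times 10^{9} \times 100\ \text{bytes} = 1.2 \times 10^{13}\ \text{bytes} \approx 11\ \text{TB} \]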
Messaging Data
▪   Small/medium sized data  -->  HBase
    ▪   Message metadata & indices
    ▪   Search index
    ▪   Small message bodies

▪   Attachments and large messages  -->  Haystack
    ▪   Used for our existing photo/video store
Open Source Stack
▪   Memcached   -->   App Server Cache
▪   ZooKeeper   -->   Small Data Coordination Service
▪   HBase       -->   Database Storage Engine
▪   HDFS        -->   Distributed FileSystem
▪   Hadoop      -->   Asynchronous Map-Reduce Jobs
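
A minimal sketch of how these pieces are wired together from a client's point of view, using the HBase client API of that era; the hostnames and the table name are made up for illustration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class StackWiring {
    public static void main(String[] args) throws Exception {
        // Start from hbase-default.xml / hbase-site.xml defaults
        Configuration conf = HBaseConfiguration.create();

        // ZooKeeper: the small-data coordination service clients use to find the cluster
        conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com");

        // HDFS: the distributed filesystem that holds HFiles and the write-ahead log
        conf.set("hbase.rootdir", "hdfs://namenode.example.com:8020/hbase");

        // HBase: the database storage engine the application server talks to
        HTable table = new HTable(conf, "messages");   // hypothetical table name
        // ... reads and writes go here ...
        table.close();
    }
}
```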
Our architecture

▪   Clients (Front End, MTA, etc.) ask the User Directory Service: "What's the cell for this user?"
▪   Users are partitioned across multiple cells (Cell 1, Cell 2, Cell 3, ...)
▪   Each cell runs Application Servers backed by an HBase/HDFS/ZK cluster storing messages, metadata, and the search index
▪   Attachments are stored in Haystack
About HBase
HBase in a nutshell

• distributed, large-scale data store

• efficient at random reads/writes

• initially modeled after Google’s BigTable

• open source project (Apache)
When to use HBase?

▪ storing large amounts of data

▪ need high write throughput

▪ need efficient random access within large data sets

▪ need to scale gracefully with data

▪ for structured and semi-structured data

▪ don’t need full RDBMS capabilities (cross-table transactions, joins, etc.)
HBase Data Model
• An HBase table is:
 •   a sparse, three-dimensional array of cells, indexed by:
      RowKey, ColumnKey, Timestamp/Version
 •   sharded into regions along an ordered RowKey space

• Within each region:
 •   Data is grouped into column families
 ▪   Sort order within each column family:

      Row Key (asc), Column Key (asc), Timestamp (desc)
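
A short sketch of how a single cell is addressed through the client API of that era; the table name, column family, and values are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class CellAddressing {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "example");      // hypothetical table

        // A cell is addressed by (RowKey, ColumnFamily:Qualifier, Timestamp/Version).
        // Only cells that are actually written exist -- the "array" is sparse.
        Put put = new Put(Bytes.toBytes("row-42"));                       // RowKey
        put.add(Bytes.toBytes("cf"),                                      // column family
                Bytes.toBytes("some-column"),                             // column qualifier
                1302220800000L,                                           // explicit timestamp/version
                Bytes.toBytes("value"));
        table.put(put);

        // On disk, cells within a column family are kept sorted by
        // RowKey (asc), Column (asc), Timestamp (desc).
        table.close();
    }
}
```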
Example: Inbox Search
• Schema
 •   Key: RowKey: userid, Column: word, Version: MessageID
 •   Value: Auxiliary info (like offset of word in message)

• Data is stored sorted by <userid, word, messageID>:

      User1:hi:17->offset1
      User1:hi:16->offset2
      User1:hello:16->offset3
      User1:hello:2->offset4
      ...
      User2:...
      ...

  Can efficiently handle queries like:
  - Get top N messageIDs for a specific user & word
  - Typeahead query: for a given user, get words that match a prefix
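
The two queries above map naturally onto the client API of that era. A hedged sketch, with hypothetical table and column-family names, assuming the column family is configured to retain many versions:

```java
import java.util.NavigableMap;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class InboxSearchQueries {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable index = new HTable(conf, "inbox_search");   // hypothetical table
        byte[] cf = Bytes.toBytes("w");                     // hypothetical column family
        byte[] user = Bytes.toBytes("User1");               // RowKey: userid

        // Query 1: top N messageIDs for a specific user & word.
        // MessageID is stored as the cell version, so the N newest versions
        // of cell (userid, word) are exactly the top N messageIDs.
        Get topN = new Get(user);
        topN.addColumn(cf, Bytes.toBytes("hello"));
        topN.setMaxVersions(10);
        for (KeyValue kv : index.get(topN).raw()) {
            System.out.println("messageID=" + kv.getTimestamp()
                    + " offset=" + Bytes.toString(kv.getValue()));
        }

        // Query 2: typeahead -- for a given user, find words matching a prefix.
        // Columns within the row are sorted by word, so a prefix maps to a
        // contiguous range of qualifiers. This sketch filters client-side for
        // simplicity; a server-side column-prefix filter could do the same work.
        Get row = new Get(user);
        row.addFamily(cf);
        Result r = index.get(row);
        NavigableMap<byte[], byte[]> words = r.getFamilyMap(cf);
        for (byte[] word : words.keySet()) {
            if (Bytes.toString(word).startsWith("he")) {
                System.out.println("candidate word: " + Bytes.toString(word));
            }
        }
        index.close();
    }
}
```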
HBase System Overview

▪   Database layer: HBASE – one active Master, a Backup Master, and many Region Servers
▪   Coordination service: a ZooKeeper quorum of ZK peers
▪   Storage layer: HDFS – a Namenode, a Secondary Namenode, and many Datanodes
HBase Overview

▪   A Region Server hosts many Regions (Region #1, Region #2, ...)
▪   Each Region holds one or more ColumnFamilies (ColumnFamily #1, #2, ...)
▪   Each ColumnFamily buffers writes in a Memstore (in-memory data structure), which is flushed to HFiles (in HDFS)
▪   Every write is also recorded in a Write Ahead Log (in HDFS)
HBase Overview
•       Very good at random reads/writes

•       Write path
    •    Sequential write/sync to commit log
    •    update memstore

•       Read path
    •    Lookup memstore & persistent HFiles
    •    HFile data is sorted and has a block index for efficient retrieval

•       Background chores
    •    Flushes (memstore -> HFile)
    •    Compactions (group of HFiles merged into one)
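
To make that flow concrete, here is a deliberately simplified toy sketch of the write/read path; it is illustrative pseudologic, not HBase's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Toy store mirroring the write/read path described above (illustration only).
public class ToyStore {
    private final List<String> writeAheadLog = new ArrayList<String>();        // stands in for the WAL in HDFS
    private final TreeMap<String, String> memstore = new TreeMap<String, String>();
    private final List<TreeMap<String, String>> hfiles = new ArrayList<TreeMap<String, String>>();

    // Write path: append/sync to the commit log first, then update the memstore.
    public void put(String key, String value) {
        writeAheadLog.add(key + "=" + value);   // sequential write to the commit log
        memstore.put(key, value);               // sorted, in-memory update
    }

    // Read path: look in the memstore, then in persisted, sorted "HFiles" (newest first).
    public String get(String key) {
        if (memstore.containsKey(key)) {
            return memstore.get(key);
        }
        for (int i = hfiles.size() - 1; i >= 0; i--) {
            TreeMap<String, String> hfile = hfiles.get(i);
            if (hfile.containsKey(key)) {       // real HFiles use a block index, not a scan
                return hfile.get(key);
            }
        }
        return null;
    }

    // Background chore: flush the memstore into a new immutable "HFile".
    public void flush() {
        hfiles.add(new TreeMap<String, String>(memstore));
        memstore.clear();
    }
}
```

The real system layers block indexes, bloom filters, and compactions on top of this skeleton.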
Why HBase?
Performance is great, but what else…
Horizontal scalability
▪   HBase & HDFS are elastic by design
▪   Multiple table shards (regions) per physical server
▪   On node additions
    ▪   Load balancer automatically reassigns shards from overloaded
        nodes to new nodes
    ▪   Because the underlying filesystem is itself distributed, data for
        reassigned regions is instantly servable from the new nodes.
▪   Regions can be dynamically split into smaller regions.
    ▪   Pre-sharding is not necessary
    ▪   Splits are near instantaneous!
Automatic Failover
▪   Node failures automatically detected by HBase Master
▪   Regions on failed node are distributed evenly among surviving nodes.
        ▪   Multiple regions/server model avoids need for substantial
            overprovisioning
▪   HBase Master failover
    ▪   1 active, rest standby
    ▪   When active master fails, a standby automatically takes over
HBase uses HDFS
We get the benefits of HDFS as a storage system for free
▪   Fault tolerance (block level replication for redundancy)
▪   Scalability
▪   End-to-end checksums to detect and recover from corruptions
▪   Map Reduce for large scale data processing
▪   HDFS already battle tested inside Facebook
    ▪   running petabyte scale clusters
    ▪   lots of in-house development and operational experience
Simpler Consistency Model
▪   HBase’s strong consistency model
     ▪   simpler for a wide variety of applications to deal with
     ▪   client gets same answer no matter which replica data is read from


▪   Eventual consistency: tricky for applications fronted by a cache
     ▪   replicas may heal eventually during failures
     ▪   but stale data could remain stuck in cache
Typical Cluster Layout
 ▪   Multiple clusters/cells for messaging
     ▪   20 servers/rack; 5 or more racks per cluster
 ▪   Controllers (masters/ZooKeeper) spread across racks:
     ▪   Rack #1: ZooKeeper Peer + HDFS Namenode
     ▪   Rack #2: ZooKeeper Peer + Backup Namenode
     ▪   Rack #3: ZooKeeper Peer + Job Tracker
     ▪   Rack #4: ZooKeeper Peer + HBase Master
     ▪   Rack #5: ZooKeeper Peer + Backup Master
 ▪   The remaining servers in each rack (19x) each run a Region Server, Data Node, and Task Tracker
HBase Enhancements
Goal: Zero Data Loss
Goal of Zero Data Loss/Correctness
▪   sync support added to hadoop-20 branch
    ▪   for keeping transaction log (WAL) in HDFS
    ▪   to guarantee durability of transactions
▪   Row-level ACID compliance
▪   Enhanced HDFS’s Block Placement Policy:
    ▪   Original: rack aware, but minimally constrained
    ▪   Now: Placement of replicas constrained to configurable node groups
    ▪   Result: Data loss probability reduced by orders of magnitude
Availability/Stability improvements
▪   HBase master rewrite – region assignments using ZK
▪   Rolling Restarts – doing software upgrades without downtime
▪   Interrupt Compactions – prioritize availability over minor perf gains
▪   Timeouts on client-server RPCs
▪   Staggered major compaction to avoid compaction storms
Performance Improvements
▪   Compactions
    ▪   critical for read performance
    ▪   Improved compaction algorithm
    ▪   delete/TTL/overwrite processing in minor compactions
▪   Read optimizations:
    ▪   Seek optimizations for rows with a large number of cells
    ▪   Bloom filters to minimize HFile lookups
    ▪   Timerange hints on HFiles (great for temporal data)
    ▪   Improved handling of compressed HFiles
Operational Experiences
▪   Darklaunch:
    ▪   shadow traffic on test clusters for continuous, at-scale testing
    ▪   experiment/tweak knobs
    ▪   simulate failures, test rolling upgrades
▪   Constant (pre-sharding) region count & controlled rolling splits
▪   Administrative tools and monitoring
    ▪   Alerts (HBCK, memory alerts, perf alerts, health alerts)
    ▪   auto detecting/decommissioning misbehaving machines
    ▪   Dashboards
▪   Application level backup/recovery pipeline
Working within the Apache community
▪   Growing with the community
    ▪   Started with a stable, healthy project
    ▪   In-house expertise in both HDFS and HBase
    ▪   Increasing community involvement
▪   Undertook massive feature improvements with community help
    ▪   HDFS 0.20-append branch
    ▪   HBase Master rewrite
▪   Continually interacting with the community to identify and fix issues
    ▪   e.g., large responses (2GB RPC)
Data migration
Another place we used HBase heavily…
Move messaging data from MySQL to HBase
Move messaging data from MySQL to HBase
▪   In MySQL, inbox data was kept normalized
    ▪   user’s messages are stored across many different machines
▪   Migrating a user is basically one big join across tables spread over
    many different machines
▪   Multiple terabytes of data (for over 500M users)
▪   Cannot pound 1000s of production UDBs to migrate users
How we migrated
▪   Periodically, get a full export of all users’ inbox data from MySQL
▪   Use the bulk loader to import that export into a migration HBase cluster
▪   To migrate users:
    ▪   Since users may continue to receive messages during migration:
        ▪   double-write (to old and new system) during the migration period
    ▪   Get a list of all recent messages (since last MySQL export) for the
        user
        ▪   Load new messages into the migration HBase cluster
        ▪   Perform the join operations to generate the new data
        ▪   Export it and upload into the final cluster
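
A hedged sketch of the double-write step during the migration window; every class and method name here is hypothetical, since the actual migration pipeline is internal:

```java
// Hypothetical illustration: during migration, every new message is written to
// both the old MySQL-backed store and the new HBase-backed store, so neither
// system misses data while users are moved over.
public class DualWriteMessageStore {

    public interface MessageStore {
        void save(long userId, long messageId, byte[] body);
    }

    public interface MigrationState {
        boolean isFullyMigrated(long userId);
    }

    private final MessageStore legacyMySqlStore;   // old system of record
    private final MessageStore hbaseStore;         // new system being migrated to
    private final MigrationState migrationState;   // tracks which users are mid-migration

    public DualWriteMessageStore(MessageStore legacyMySqlStore,
                                 MessageStore hbaseStore,
                                 MigrationState migrationState) {
        this.legacyMySqlStore = legacyMySqlStore;
        this.hbaseStore = hbaseStore;
        this.migrationState = migrationState;
    }

    public void saveMessage(long userId, long messageId, byte[] body) {
        if (migrationState.isFullyMigrated(userId)) {
            hbaseStore.save(userId, messageId, body);          // new system only
        } else {
            legacyMySqlStore.save(userId, messageId, body);    // keep the old system correct
            hbaseStore.save(userId, messageId, body);          // and keep the new one in sync
        }
    }
}
```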
Future Plans
HBase Expands
Facebook Insights Goes Real-Time
▪   Recently launched real-time analytics for social plugins on top of
    HBase
▪   Publishers get real-time distribution/engagement metrics:
        ▪   # of impressions, likes
        ▪   analytics by
            ▪   Domain, URL, demographics
            ▪   Over various time periods (the last hour, day, all-time)
▪   Makes use of HBase capabilities like:
    ▪   Efficient counters (read-modify-write increment operations)
    ▪   TTL for purging old data
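
A minimal sketch of those two capabilities using the client API of that era; the table, column family, and row-key layout are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class InsightsCountersSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // TTL: cells in this family are purged automatically after 30 days.
        HTableDescriptor desc = new HTableDescriptor("insights");   // hypothetical table
        HColumnDescriptor metrics = new HColumnDescriptor("m");
        metrics.setTimeToLive(30 * 24 * 3600);
        desc.addFamily(metrics);
        new HBaseAdmin(conf).createTable(desc);

        // Efficient counters: atomic read-modify-write increments on the server,
        // no client-side get/modify/put round trip needed.
        HTable table = new HTable(conf, "insights");
        byte[] row = Bytes.toBytes("example.com|2011-04-08T12");    // domain + hour bucket
        table.incrementColumnValue(row, Bytes.toBytes("m"), Bytes.toBytes("impressions"), 1L);
        table.incrementColumnValue(row, Bytes.toBytes("m"), Bytes.toBytes("likes"), 1L);
        table.close();
    }
}
```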
Future Work
It is still early days…!
▪   Namenode HA (AvatarNode)
▪   Fast hot-backups (Export/Import)
▪   Online schema & config changes
▪   Running HBase as a service (multi-tenancy)
▪   Features (like secondary indices, batching hybrid mutations)
▪   Cross-DC replication
▪   Lots more performance/availability improvements
Thanks! Questions?
facebook.com/engineering
QCon Hangzhou · October 20-22, 2011
www.qconhangzhou.com (launching in June)

QCon Beijing official website and materials download
www.qconbeijing.com