Hadoop Scalability at Facebook
Dmytro Molkov ([email_address])
YaC, Moscow, September 19, 2011
- How Facebook uses Hadoop
- Hadoop Scalability
- Hadoop High Availability
- HDFS Raid
How Facebook uses Hadoop
Usages of Hadoop at Facebook
- Warehouse
  - Thousands of machines in the cluster
  - Tens of petabytes of data
  - Tens of thousands of jobs/queries a day
  - Over a hundred million files
- Scribe-HDFS
  - Dozens of small clusters
  - Append support
  - High availability
  - High throughput
Usages of Hadoop at Facebook (contd.)
- Realtime Analytics
  - Medium-sized HBase clusters
  - High throughput / low latency
- FB Messages Storage
  - Medium-sized HBase clusters
  - Low latency
  - High data durability
  - High availability
- Misc Storage/Backup clusters
  - Small to medium sized
  - Various availability/performance requirements
Hadoop Scalability
Hadoop Scalability
- Warehouse Cluster - a "Single Cluster" approach
  - Good data locality
  - Ease of data access
  - Operational simplicity
- NameNode is the bottleneck
  - Memory pressure - too many files and blocks
  - CPU pressure - too many metadata operations against a single node
  - Long startup time
- JobTracker is the bottleneck
  - Memory pressure - too many jobs/tasks/counters in memory
  - CPU pressure - scheduling computation is expensive
HDFS Federation Wishlist
- Single cluster
- Preserve data locality
- Keep operations simple
- Distribute both CPU and memory load
Hadoop Federation Design
[Diagram: NameNode #1 through NameNode #N, all sharing the same pool of DataNodes]
HDFS Federation Overview
- Each NameNode holds a part of the namespace
- Hive tables are distributed between namenodes
- Hive Metastore stores full locations of the tables (including the namenode) -> Hive clients know which cluster the data is stored in
- HDFS clients have a mount table to know where the data is (see the sketch after this slide)
- Each namespace uses all datanodes for storage -> the cluster load is fully balanced (storage and I/O)
- A single DataNode process per node ensures good utilization of resources
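A minimal sketch of such a client-side mount table, using Apache Hadoop's ViewFs support rather than Facebook's in-house implementation; the namenode hostnames and mount points are hypothetical, chosen only to show how a client sees one namespace while paths are routed to different NameNodes.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FederatedClient {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Map sub-trees of the logical namespace onto different namenodes.
        conf.set("fs.viewfs.mounttable.warehouse.link./hive/tables_a",
                 "hdfs://nn1.example.com:8020/hive/tables_a");
        conf.set("fs.viewfs.mounttable.warehouse.link./hive/tables_b",
                 "hdfs://nn2.example.com:8020/hive/tables_b");
        // The client sees a single namespace; the mount table routes each
        // path to the namenode that owns it.
        FileSystem fs = FileSystem.get(new URI("viewfs://warehouse/"), conf);
        System.out.println(fs.exists(new Path("/hive/tables_a/part-00000")));
      }
    }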
Map-Reduce Federation
- Backward compatibility with the old code
- Preserve data locality
- Make scheduling faster
- Ease the resource pressure on the JobTracker
Map-Reduce Federation
[Diagram: a central Cluster Resource Manager, a Job Client, and TaskTrackers; resource requests and resource heartbeats go to the Cluster Resource Manager, while job communication flows directly between the Job Client and its TaskTrackers]
MapReduce Federation Overview
- Cluster Manager only allocates resources
- JobTracker per user -> few tasks per JobTracker -> more responsive scheduling
- ClusterManager is stateless -> shorter restart times -> better availability
(a hypothetical sketch of this split of responsibilities follows below)
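To illustrate the division of labor, a purely hypothetical Java sketch; these interface and type names are invented for illustration and are not Facebook's actual APIs. The cluster-wide manager only hands out resource grants, while a lightweight per-user JobTracker decides which task runs in which grant.

    import java.util.List;

    /** Cluster-wide and stateless: tracks free slots and hands out grants. */
    interface ClusterManager {
      List<ResourceGrant> requestResources(String jobId, int numSlots);
      void releaseResources(String jobId, List<ResourceGrant> grants);
    }

    /** Per-user and short-lived: schedules tasks inside its granted slots. */
    interface PerUserJobTracker {
      void submitJob(String jobId);
      /** Called when the ClusterManager grants a slot on some TaskTracker. */
      void onGrant(ResourceGrant grant);
    }

    /** A slot on a specific TaskTracker. */
    class ResourceGrant {
      final String taskTrackerHost;
      final int slotId;
      ResourceGrant(String taskTrackerHost, int slotId) {
        this.taskTrackerHost = taskTrackerHost;
        this.slotId = slotId;
      }
    }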
Hadoop High Availability
Warehouse High Availability
- Full cluster restart takes 90-120 mins
- Software upgrades are 20-30 hrs of downtime/year
- Cluster crashes are 5 hrs of downtime/year
- MapReduce tolerates failures
HDFS High Availability Design
[Diagram: Primary NN and Standby NN share the edits log via NFS; DataNodes send block reports and block-received messages to both NameNodes]
Clients Design
- Use ZooKeeper as a method of name resolution
- Under normal conditions ZooKeeper contains the location of the primary node
- During a failover the ZooKeeper record is empty and clients know to wait for the failover to complete
- On a network failure clients check if the ZooKeeper entry has changed and retry the command against the new Primary NameNode if a failover has occurred
- For large clusters, clients also cache the location of the primary on the local node to ease the load on the ZooKeeper cluster
(a client-side sketch follows below)
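A minimal sketch of the lookup-and-wait pattern described above, assuming a hypothetical znode path (/hdfs/primary-nn) that stores the primary NameNode's address; it uses the standard Apache ZooKeeper client API but is not Facebook's actual implementation.

    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class PrimaryNameNodeResolver {
      // Hypothetical znode holding "host:port" of the current primary NN.
      private static final String ZNODE = "/hdfs/primary-nn";
      private final ZooKeeper zk;

      public PrimaryNameNodeResolver(ZooKeeper zk) {
        this.zk = zk;
      }

      /** Returns the primary NN address, waiting out an in-flight failover. */
      public String resolvePrimary() throws Exception {
        while (true) {
          Stat stat = new Stat();
          byte[] data = zk.getData(ZNODE, false, stat);
          if (data != null && data.length > 0) {
            // Normal case: the znode holds the primary's address.
            return new String(data, "UTF-8");
          }
          // Empty record => a failover is in progress; wait and re-read.
          Thread.sleep(1000);
        }
      }
    }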
HDFS Raid
HDFS Raid
- 3-way replication
  - Data locality - necessary only for the new data
  - Data availability - necessary for all kinds of data
- Erasure codes
  - Data locality is worse than 3-way replication
  - Data availability is at least as good as 3-way replication
HDFS Raid Details
- XOR
  - 10 blocks replicated 3 times = 30 physical blocks; effective replication factor 3.0
  - 10 blocks replicated twice + checksum (XOR) block replicated twice = 22 physical blocks; effective replication factor 2.2
- Reed-Solomon Encoding
  - 10 blocks replicated 3 times = 30 physical blocks; effective replication factor 3.0
  - 10 blocks with replication factor 1 + erasure codes (RS) replicated once = 14 physical blocks; effective replication factor 1.4
(the arithmetic is checked below)
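The effective replication factors above are simply physical blocks divided by logical blocks per 10-block stripe; the 4 Reed-Solomon parity blocks are implied by the slide's 14 physical blocks rather than stated explicitly.

    \[
      r_{\mathrm{3x}}  = \frac{10 \cdot 3}{10} = 3.0, \qquad
      r_{\mathrm{XOR}} = \frac{10 \cdot 2 + 1 \cdot 2}{10} = 2.2, \qquad
      r_{\mathrm{RS}}  = \frac{10 \cdot 1 + 4 \cdot 1}{10} = 1.4
    \]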
HDFS Raid Pros and Cons
- Saves a lot of space
- Provides the same guarantees for data availability
- Worse data locality
- Need to reconstruct blocks instead of replicating (CPU + network cost)
- Block location in the cluster is important and needs to be maintained
facebook.com/dms [email_address] [email_address]
 
