Cross-Site BigTable using HBase

Speakers: Jingcheng Du and Ramkrishna Vasudevan (Intel)

As HBase expands into enterprise and government deployments, there is growing demand for storing data across geographically distributed datacenters for improved availability and disaster recovery. Cross-Site BigTable extends HBase to suit such deployments, providing the ability to create and access HBase tables that are partitioned and asynchronously backed up across a number of distributed datacenters. This talk shows how Cross-Site BigTable manages data access over multiple datacenters and removes the datacenter itself as a single point of failure in geographically distributed HBase deployments.


Transcript

  • 1. Cross-Site Big Table using HBase – Anoop Sam John, Du Jingcheng, Ramkrishna S. Vasudevan – Big Data US Research and Development, Intel
  • 2. Partitioning Rule • A rule to parse row keys and map records to different clusters. A ClusterLocator, recorded in the central ZK, provides this facility – PrefixClusterLocator – SuffixClusterLocator – … • An example of PrefixClusterLocator – if a row key is “clusterA,rowKey1”, the record belongs to clusterA
  • 3. Motivation • Growing demand for storing data across geographically distributed data centers. – Data and data patterns are similar across data centers. – But the data is private to each data center. • Improve data availability and disaster recovery. • An easy way to access this distributed data. • Manage the hierarchical relationship between data centers (grouping of data centers).
  • 4. Use Case
  • 5. Intelligent Transportation System • Monitors traffic movements, traffic patterns, etc. in every city. • The data in each data center is private and holds the traffic pattern of that city • Hierarchy of departments – National Transportation Department / State Transportation Department / City Transportation Department • The National/State Transportation Departments are virtual nodes • Helps to aggregate results/statistics over all the data centers. • Easy access and a single view of all the data centers.
  • 6. Agenda
  • 7. Agenda • Goals of CSBT[1] • Architecture of CSBT – Highly Available Global Zookeeper Quorum – Cross-Site Metadata in Global Zookeeper – Cluster Locator – Hierarchy • Admin Operations on CSBT • Read/Write operations on CrossSiteHTable • Data Replication and FailOver • Future Improvements [1] – CSBT refers to Cross-Site Big Table
  • 8. Goals
  • 9. Goals • A global view for tables across different data centers. • Define and manage the hierarchy relationship between data centers and data. • High availability • Locality – In terms of geography – Each data center holds its own data.
  • 10. Architecture
  • 11. Architecture (architecture diagram)
  • 12. Architecture • A dedicated, distributed ZooKeeper quorum spanning the data centers – the Global Zookeeper • Table partitioning – Each data center holds a specific partition of the table – Every partition of the cross-site HTable is an HTable itself, bearing a table name “<tableName>_<clustername>” – The partitioning rule is set by the user at table creation time using Cluster Locators. • Supports all admin and table operations as supported on the HTable. Apache ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services (Source: http://zookeeper.apache.org/)
  • 13. Architecture • Data center relationship – Allows each data center to configure its peer data center (for replication and for failover on reads) • Peers for a cluster could also be nodes in another cluster. • Master-master, master-slave replication, etc. – Uses the asynchronous replication of Apache HBase • Asynchronously ships the WAL entries to the configured peers – A hierarchy can be defined for the data centers
  • 14. Highly Available Global Zookeeper Quorum • Dedicated Zookeeper cluster. • Split the Zookeeper quorum across data centers. • It is recommended not to reuse the Zookeeper clusters used by the individual HBase setups. • Leverage Zookeeper Observers – They do not impact Zookeeper write performance – Configure in such a way that reads are served locally – Configure the Zookeeper quorum as <local observers>,<leader/followers>,<observers in other DCs> Observers are non-voting members of an ensemble which only hear the results of votes Observers may be used to talk to an Apache ZooKeeper server from another data center Clients of the Observer will see fast reads, as all reads are served locally, and writes result in minimal network traffic as the number of messages required in the absence of the vote protocol is smaller (Source: http://zookeeper.apache.org/)
  • 15. Cross-Site Metadata in Global Zookeeper – znode layout (reconstructed from the slide diagram): /CrossSite contains clusters (cluster1, cluster2, cluster3 – each with address and peers), tables (table1, table2, table3 – each with state, splitkeys, desc, proposed_desc, and locator), and hierarchy
  • 16. Cluster Locator • Sets the data-partitioning logic/rule. • Helps to locate a specific cluster based on the row key. • Users are allowed to create their own cluster locators – PrefixClusterLocator – <clustername>,<row> where “,” is the delimiter – SuffixClusterLocator – <row>,<xxx>,<yyy>,<clustername> where “,” is the delimiter and the cluster name is always the string after the last delimiter. • Note that it is up to the user to include the cluster name in the row key when doing ‘puts’, according to the cluster locator configured at table creation.
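A minimal sketch of the two built-in locators described above. The real implementations are Java classes inside CSBT; these Python stand-ins (class and method names are illustrative, not the actual API) only show the parsing rule:

```python
class PrefixClusterLocator:
    """Cluster name is everything before the first delimiter."""
    def __init__(self, delimiter=","):
        self.delimiter = delimiter

    def get_cluster_name(self, row_key):
        return row_key.split(self.delimiter, 1)[0]


class SuffixClusterLocator:
    """Cluster name is everything after the last delimiter."""
    def __init__(self, delimiter=","):
        self.delimiter = delimiter

    def get_cluster_name(self, row_key):
        return row_key.rsplit(self.delimiter, 1)[-1]


print(PrefixClusterLocator().get_cluster_name("clusterA,rowKey1"))     # clusterA
print(SuffixClusterLocator().get_cluster_name("row,xxx,yyy,clusterA"))  # clusterA
```

A user-supplied locator would plug in the same way: any object that maps a row key to a cluster name.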
  • 17. Hierarchy
  • 18. Hierarchy • A parent-child relationship can be defined for clusters • A node in the hierarchy can be either a physical cluster or a virtual one. A virtual node may represent a logical grouping of a set of physical clusters • The hierarchy is used during scans – if a parent node is specified, all its descendants are included as well
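The descendant expansion used during hierarchical scans can be sketched as follows. The `children`/`is_virtual` dicts and the node names (borrowed from the transportation example) are illustrative; in CSBT the hierarchy lives in the global ZooKeeper:

```python
def physical_clusters(node, children, is_virtual):
    """Return all physical clusters under `node`, including `node`
    itself when it is a physical cluster (virtual nodes only group)."""
    result = []
    stack = [node]
    while stack:
        n = stack.pop()
        if not is_virtual.get(n, False):
            result.append(n)          # physical cluster: scan target
        stack.extend(children.get(n, []))
    return sorted(result)


children = {"California": ["SFO", "LA", "San Diego"]}
is_virtual = {"California": True}
print(physical_clusters("California", children, is_virtual))
# ['LA', 'SFO', 'San Diego']
```

A scan naming the virtual node "California" would then fan out to the three physical clusters.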
  • 19. Admin operations on Cross-site BigTable
  • 20. Admin Operations • Operations are performed using CrossSiteHBaseAdmin • Extends HBaseAdmin.
  • 21. Create Peers • Specifying peers creates them under the ‘peers’ znode • The address of each peer is written as data in its peer znode (diagram: peers → cluster1, cluster2, cluster3, cluster4)
  • 22. Create Table (diagram: CSBTAdmin and Global ZK; clusters a01/a02 with backup tables on Peer1/Peer2) 1. Create the table znode in ZK 2. Create the tables (T1_a01, T1_a02) in the clusters 3. Create the backup tables in the peers, if any 4. Write the table-related data in the table’s znode and update the state in ZK
  • 23. Disable Table 1. Update the state to DISABLING 2. Disable the tables in the clusters 3. Update the state to DISABLED • Do NOT disable the tables in the peers – as the replication is asynchronous, disabling a peer may stop the entire replication and leave unfinished WALs unreplicated
  • 24. Enable Table 1. Update the state to ENABLING 2. Enable the tables in the clusters 3. Handle TableNotDisabledException, as the peers are already ENABLED 4. Update the state to ENABLED
  • 25. Alter Schema 1. Write the new HTD to the PROPOSED_DESC znode 2. Update the state to a MODIFYING/ADDING xxx state 3. Alter the schema in the clusters 4. Add/modify the column in the peers by DISABLING them and ENABLING after completion; if a table is not present, create it with the new HTD 5. Update the table’s HTD znode 6. Update the table state to DISABLED 7. Delete the PROPOSED_DESC znode
  • 26. Delete Table 1. Update the state to DELETING 2. Delete the tables in the clusters 3. Disable and delete the tables from the peers 4. Remove the table from the ZK
  • 27. Failure handling • Failures during create/enable/disable/delete table are handled using ZK states; on any failure, the entire operation has to be retried. • A tool helps to deduce and auto-correct inconsistencies in the CSBT cluster in terms of table state.
  • 28. Read/Write operations on CrossSiteHTable
  • 29. Operations using CrossSiteHTable • Operations like put/get/scan/delete are performed using CrossSiteHTable • Extends HTable
  • 30. Get/Put/Delete • Get/Put/Delete “a01,row1” from table T1 (via CSBTHTable and the Global ZK) 1. Retrieve the cluster locator for table “T1” (cached) 2. Map “a01,row1” to cluster “a01” 3. Find the address for cluster “a01” (cached) 4. Do the get/put/delete(“a01,row1”) on table “T1_a01” in cluster “a01”
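The four routing steps condense into a small sketch. The locator lambda, the address map, and the return shape are hypothetical stand-ins for the cached metadata the real client fetches from the global ZK:

```python
def route(row_key, locator, addresses, table):
    cluster = locator(row_key)             # steps 1-2: locator maps row -> cluster
    address = addresses[cluster]           # step 3: cached cluster address
    return address, f"{table}_{cluster}"   # step 4: op targets the physical table


addr, phys = route(
    "a01,row1",
    lambda key: key.split(",", 1)[0],      # PrefixClusterLocator behaviour
    {"a01": "zk-a01:2181", "a02": "zk-a02:2181"},
    "T1",
)
print(addr, phys)  # zk-a01:2181 T1_a01
```

Because both the locator and the cluster addresses are cached, a single-row operation normally needs no round trip to the global ZK.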
  • 31. Scan with Start/Stop row • New scan APIs added where cluster names can be passed while creating scans • Scan from table T1 [start – “row1”, end – “row6”], clusters – [cluster a01, cluster a02] 1. Retrieve the cluster info for table “T1” (cached) 2. Find the addresses for clusters “a01” and “a02” (cached) 3. Scan from (“a01,row1”) to (“a01,row6”) on table “T1_a01” in cluster “a01” 4. Scan from (“a02,row1”) to (“a02,row6”) on table “T1_a02” in cluster “a02” (cluster a03 / T1_a03 is not contacted)
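Assuming a PrefixClusterLocator, the per-cluster scans above amount to prefixing the start/stop rows with each target cluster's name. This helper is a hypothetical illustration of that rewrite, not the actual scan API:

```python
def per_cluster_scans(table, start, stop, clusters, delimiter=","):
    """Rewrite one logical scan into one physical scan per cluster,
    prefixing the row range with the cluster name."""
    return [
        {
            "table": f"{table}_{c}",
            "start": f"{c}{delimiter}{start}",
            "stop": f"{c}{delimiter}{stop}",
        }
        for c in clusters
    ]


for s in per_cluster_scans("T1", "row1", "row6", ["a01", "a02"]):
    print(s["table"], s["start"], s["stop"])
# T1_a01 a01,row1 a01,row6
# T1_a02 a02,row1 a02,row6
```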
  • 32. Scan with Hierarchy • Scan from table T1 [start – “row1”, end – “row6”], clusters – [California] • California – virtual node; SFO, LA, San Diego – physical nodes
  • 33. Scan • Uses a merge-sort iterator to merge the results from the different clusters (diagram: client ← merge-sort iterator ← clusters A, B, …, Z)
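The merge-sort iterator can be illustrated with Python's `heapq.merge`: each cluster returns its rows already sorted by key, and the client lazily merges the sorted streams into one globally ordered stream. The sample rows below are made up:

```python
import heapq

# Per-cluster scan results, each already sorted by row key.
cluster_a = [("row1", "va1"), ("row3", "va3")]
cluster_b = [("row2", "vb2"), ("row5", "vb5")]
cluster_z = [("row4", "vz4")]

# heapq.merge keeps only one head element per stream in memory,
# mirroring an iterator that pulls from each cluster on demand.
merged = list(heapq.merge(cluster_a, cluster_b, cluster_z,
                          key=lambda kv: kv[0]))
print([k for k, _ in merged])  # ['row1', 'row2', 'row3', 'row4', 'row5']
```

The heap holds one candidate per cluster, so memory stays constant in the number of rows and linear only in the number of clusters.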
  • 34. Operations on CSBT • The admin operations have shell and thrift support. • Also supports MapReduce for operations on CrossSiteBigTable.
  • 35. Data Replication and FailOver
  • 36. Data Center Relationship • Allows data centers to add peers • Apache HBase replication – Asynchronous data replication – Customized replication sink for CSBT • Read-only failover – Automatically redirects the read to the peer data center • When a peer is added dynamically, existing data is not replicated to it (only new writes are).
  • 37. Data Replication (diagram): each cluster hosts its own partition plus a backup of a peer’s partition – cluster “a01”: T1_a01 and T1_a02’ (backup); cluster “a02”: T1_a02 and T1_a03’ (backup); cluster “a03”: T1_a03 and T1_a01’ (backup). Puts from the CSBTHTable go to the owning cluster; each partition replicates to its backup copy.
  • 38. Read-only Failover (diagram, same layout as the replication slide): when a get/scan against a cluster fails, the CSBTHTable fails over to the backup DC and reads from the backup table.
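Read-only failover can be sketched as a try/except around the primary read. The exception type and the callable interface are illustrative assumptions; the real client retries against the peer cluster's backup table:

```python
def get_with_failover(row, primary_get, backup_get):
    """Try the primary cluster's partition; on failure, redirect the
    read to the peer DC's backup table (reads only -- writes are not
    failed over)."""
    try:
        return primary_get(row)
    except ConnectionError:
        return backup_get(row)


def unreachable(_row):
    raise ConnectionError("cluster a01 unreachable")


print(get_with_failover("a01,row1", unreachable, lambda r: ("backup", r)))
# ('backup', 'a01,row1')
```

Because replication is asynchronous, a read served from the backup may lag slightly behind the primary.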
  • 39. Future improvements
  • 40. Future improvements • Security – CSBT security and how user/group authentication interacts with it • MR improvement – currently the MR task runs in one cluster and all the result computation happens in that one cluster; we could improve this by dispatching tasks to each cluster and then collecting the results from them • Full-fledged CSBT HBCK
  • 41. Q &amp; A