Introduction to Apache HBase, MapR Tables and Security
 


This talk will focus on two key aspects of applications that use the HBase APIs. The first part will provide a basic overview of how HBase works, followed by an introduction to the HBase APIs with a simple example. The second part will extend what we've learned to secure the HBase application running on MapR's industry-leading Hadoop.

Keys Botzum is a Senior Principal Technologist with MapR Technologies. He has over 15 years of experience in large-scale distributed system design. At MapR his primary responsibility is working with customers as a consultant, but he also teaches classes, contributes to documentation, and works with MapR engineering. Previously he was a Senior Technical Staff Member with IBM and a respected author of many articles on WebSphere Application Server as well as a book. He holds a Master's degree in Computer Science from Stanford University and a B.S. in Applied Mathematics/Computer Science from Carnegie Mellon University.

  • Let’s take a quick look at the relational database model versus non-relational database models. Most of us are familiar with Relational Database Management Systems (RDBMS). We’ll briefly compare the relational model to the column family oriented model in the context of big data. This will help us fully understand the structure of MapR Tables and their underlying concepts.
  • In the relational model data is normalized: it is split into tables when stored, and then joined back together when queried. We will see that HBase has a different model. Relational databases brought us many benefits: they take care of persistence; they manage concurrency for transactions; SQL has become a de facto standard; relational databases provide lots of tools; they have become very important for integration of applications and for reporting; and many business rules map well to a tabular structure and relationships. Relational databases provide an efficient and robust structure for storing data, a standard model of persistence, and a standard language of data manipulation (SQL). They handle concurrency by controlling all access to data through transactions, and this transactional mechanism has worked well to contain the complexity of concurrency. Shared databases have worked well for integration of applications. Relational databases have succeeded because they provide these benefits in a standard way.
  • • Row-oriented: Each row is indexed by a key that you can use for lookup (for example, the customer with the ID 1234). • Column-family oriented: Each column family groups like data (customer address, order) within rows. You can think of a row as the join of all values in all column families. Grouping the data by key is central to running on a cluster and to sharding; the key acts as the atomic unit for updates.
  • Data stored in the “big table” is located by its “rowkey.” This is like a primary key in a relational database. Records in HBase are stored in sorted order according to rowkey. This is a fundamental tenet of HBase and is also a critical semantic used in HBase schema design.
  • Tables are divided into sequences of rows, by key range, called regions. These regions are then assigned to the data nodes in the cluster, called “RegionServers”. This scales read and write capacity by spreading the load across the cluster.
  • If a cell is empty then it does not consume disk space. Sparseness provides schema flexibility: you can add columns later, with no need to transform the entire schema.
  • Once you have created a table you define column families. Columns may be defined on the fly. You can define them ahead of time but that is not common practice. That’s it: you don’t define rows ahead of time. Table operations are fairly simple: put inserts data into rows (both add and update), get accesses data from one row, and scan accesses data from a range of rows.
  • As we go through the details of the HBase API, you will see that there is a pattern that is followed most of the time for CRUD operations. First you instantiate an object for the operation you’re about to execute: put, get, scan or delete. Then you add details to that object and specify what you need from it. You do this by calling an add method and sometimes a set method. Once your object is specified with these attributes you are ready to execute the operation against a table. To do that you invoke the operation with the object you’ve prepared; for example, for a put operation you call table.put() and pass the put object you created as the parameter. Let’s look at the Put operation now.
  • Here is an example of a single put operation. Let’s look at what all this means.
  • Now that you have an instance of a put object for a specified row key you should provide some details, specifically what value you need to insert or update. In general you add a value for a column that belongs to a column family; that’s the most common case. Just like in the constructor for the Put object itself you don’t have to provide a timestamp, but there is a method that lets you control that if you need to by providing a timestamp argument.
  • This is the same as what we saw earlier except that now we add several values to the same put object. Each call to add() specifies exactly one column or, in combination with an optional timestamp, one single cell. This is just to show you that even though this is a single put operation you typically call add more than once. We saw that one of the add methods takes a KeyValue parameter, so let’s look at the KeyValue class.
  • Everything in HBase is stored as bytes. The Bytes class is a utility class that provides methods to convert Java types to and from byte[] arrays. The native Java types supported are String, boolean, short, int, long, double, and float. The HBase Bytes class is similar to the Java ByteBuffer class, but it performs all of its operations without instantiating new objects (and thus avoids garbage collection). Note to instructor: optionally, show the javadoc to point out what conforms and what doesn’t conform to this pattern. There are other methods that are worth looking at, and we will do that in a later session after we’ve gone through CRUD operations.
  • Here is an example of a single get operation. You can see it follows the pattern we mentioned earlier. The only notable difference is that we call addColumn instead of just add. Let’s look at all this in detail now.
  • You call add to specify what you want returned; this is similar to what we saw for Put except that here you specify the family or column you are interested in. If you want to be more precise then you should call one of the set methods. You can control what timestamp or time range you are interested in and how many versions of the data you want to see. You can even add a filter, and we will talk about filters later as they deserve more than just passing attention.
  • In this get operation we have narrowed things down to a specific column. Once we get the result back we invoke one of the convenience methods from Result, here getValue, to retrieve the value in the Result instance. To see more about the Result class go to http://hbase.apache.org/0.94/apidocs/org/apache/hadoop/hbase/client/Result.html. We’ve added and retrieved data, so now to complete the CRUD cycle we need to look at deleting data.
  • MapR takes things one step further by integrating table storage into MapR-FS, eliminating all JVM layers and interacting directly with disks for both file and table storage. The result is an enterprise-grade datastore for tables with all the reliability of a MapR cluster and no additional administrative burden: fewer layers and a unified namespace. Again, MapR preserves the standard Hadoop and HBase APIs, so all ecosystem components continue to operate without modification. Fewer layers means a single hop to data, no compactions, low I/O amplification, seamless splits, automatic merges, and instant recovery. With the MapR M5 Edition of the Hadoop stack, the company basically pushed HDFS down into a distributed NFS file system, supporting all of the APIs of HDFS. With the MapR M7 Edition, the file system can handle not only small chunks of data but also small pieces of HBase tables. This eliminates some layers of Java virtualization, and because of the way MapR has implemented its code, all of the HBase APIs are supported, so HBase applications don’t know they are using MapR’s file system.
  • In MapR, tables are part of the file system, so access is a single hop: the client talks to MapR-FS, which handles read/write operations against the file system directly. The MapR Filesystem is an integrated system: tables and files in a unified filesystem, based on MapR’s enterprise-grade storage layer. MapR tables use the HBase data model and API. Key differences between MapR tables and Apache HBase: tables are part of the MapR file system; there are no RegionServer and HBaseMaster daemons; write-ahead logs (WALs) are much smaller; there is no manual compaction and no major compaction delays; and region splits are seamless, requiring no manual intervention.
  • MapR Filesystem provides strong consistency for table data and a high level of availability in a distributed environment, while also solving the common problems with other popular NoSQL options, such as compaction delays and manual administration.
  • You can also use the hbase shell: create '/user/keys/e3', 'base', 'salary'
  • hbase shell
  • Use MCS to set ACEs
  • Use MCS to set ACEs
  • ACE on the SSN column. Filtering out responses (coming soon in a fix).
  • Let's review the HBase data model as a quick refresher of terms and concepts.

Introduction to Apache HBase, MapR Tables and Security: Presentation Transcript

  • Introduction to Apache HBase, MapR Tables, and Security (© MapR Technologies, confidential)
  • Agenda  HBase Overview  HBase APIs  MapR Tables  Example  Securing tables
  • What’s HBase?  A NoSQL database – Synonym for ‘non-traditional’ database  A distributed columnar data store – Storage layout implies performance characteristics  The “Hadoop” database  A semi-structured database – No rigid requirements to define columns or even data types in advance – It’s all bytes to HBase  A persistent sorted Map of Maps – Programmer’s view 3
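To make the last bullet concrete, here is a minimal sketch of the "persistent sorted Map of Maps" view in plain Java. The String and Long types are illustrative stand-ins; HBase actually keys everything by byte[] and stores cell values as bytes.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Programmer's mental model of a table: row key -> (column ->
// (version timestamp -> value)), with every level kept sorted.
SortedMap<String, SortedMap<String, SortedMap<Long, byte[]>>> table =
        new TreeMap<String, SortedMap<String, SortedMap<Long, byte[]>>>();
```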
  • Column Oriented  Row is indexed by a key – Data stored sorted by key  Data is stored by columns grouped into column families – Each family is a file of column values laid out in sorted order by row key – Contrast this to a traditional row-oriented database where rows are stored together with fixed space allocated for each row (Diagram: a customer-id row key, with keys axxx and gxxx, indexing column family CF1 holding customer address data and CF2 holding customer order data.)
  • HBase Data Model – Row Keys  Row keys identify the rows in an HBase table. (Diagram: rows R1–R3 with row keys axxx through sxxx spanning column families CF1 and CF2, with values in only some columns.)
  • Rows are Stored in Sorted Order  Sorting of row keys is based upon binary values – Sort is lexicographic at the byte level – Comparison is “left to right”  Example: – Sort order for String 1, 2, 3, …, 99, 100:  1, 10, 100, 11, 12, …, 2, 20, 21, …, 9, 91, 92, …, 98, 99 – Sort order for String 001, 002, 003, …, 099, 100:  001, 002, 003, …, 099, 100 – What if the row keys were numbers converted to fixed-size binary? (See the sketch below.)
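A quick sketch of the closing question, using the HBase Bytes utility: Bytes.toBytes(long) produces a fixed 8-byte big-endian array, so non-negative numbers compare in numeric order under the same left-to-right byte comparison HBase applies to row keys.

```java
import org.apache.hadoop.hbase.util.Bytes;

// As strings, "2" sorts after "100": the byte-level comparison is
// left to right, and '2' > '1' decides it immediately.
int cmpString = Bytes.compareTo(Bytes.toBytes("2"), Bytes.toBytes("100")); // > 0

// As fixed-size 8-byte binary longs, 2L sorts before 100L as expected.
int cmpBinary = Bytes.compareTo(Bytes.toBytes(2L), Bytes.toBytes(100L));   // < 0
```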
  • Tables are split into Regions = contiguous keys (source: diagram from Lars George’s HBase: The Definitive Guide)  Tables are partitioned into key ranges (regions)  Region = contiguous keys, served by nodes (RegionServers)  Regions are spread across the cluster: S1, S2, … (Diagram: Region 1 covers key range axxx–gxxx, Region 2 covers lxxx–zxxx, and a Region Server serves Regions 2 and 3.)
  • HBase Data Model – Cells  The value for each cell is specified by complete coordinates: – RowKey  Column Family  Column  Version: Value – Key:CF:Col:Version:Value (Example: row key smithj plus column key Data:street plus a version timestamp locates the value Main street.)
  • Sparsely-Populated Data  Missing values: cells remain empty and consume no storage (Diagram: the Region 1/Region 2 row-key table from earlier, with many columns left empty across CF1 and CF2.)
  • HBase Data Model Summary  Efficient/Flexible – Storage allocated for columns only as needed on a given row • Great for sparse data • Great for data of widely varying size – Adding columns can be done at any time without impact – Compression and versioning are usually built-in and take advantage of column family storage (like data together)  Highly Scalable – Data is sharded amongst regions based upon key • Regions are distributed in cluster – Grouping by key = related data stored together  Finding data – Key implies region and server, column family implies file – Efficiently get to any data by key
  • Agenda  HBase Overview  HBase APIs  MapR Tables  Example  Securing tables
  • Basic Table Operations  Create Table, define Column Families before data is imported – But not the row keys or number/names of columns  Basic data access operations (CRUD): put Inserts data into rows (both add and update) get Accesses data from one row scan Accesses data from a range of rows delete Delete a row or a range of rows or columns
  • CRUD Operations Follow A Pattern (mostly)  Most common pattern – Instantiate object for an operation: Put put = new Put(key) – Add or Set attributes to specify what you need: put.add(…) – Execute the operation against the table: myTable.put(put) // Insert value1 into rowKey in columnFamily:columnName1 Put put = new Put(rowKey); put.add(columnFamily, columnName1, value1); myTable.put(put); // Retrieve values from rowA in columnFamily:columnName1 Get get = new Get(rowKey); get.addColumn(columnFamily, columnName1); Result result = myTable.get(get);
  • Put Example byte[] invTable = Bytes.toBytes("/path/Inventory"); byte[] stockCF = Bytes.toBytes("stock"); byte[] quantityCol = Bytes.toBytes("quantity"); long amt = 24L; HTableInterface table = new HTable(hbaseConfig, invTable); Put put = new Put(Bytes.toBytes("pens")); put.add(stockCF, quantityCol, Bytes.toBytes(amt)); table.put(put); (Resulting cell in table Inventory: row pens, column stock:quantity, value 24.)
  • Put Operation – Add method  Once a Put instance is created you call an add method on it  Typically you add a value for a specific column in a column family – ("column name" and "qualifier" mean the same thing)  Optionally you can set a timestamp for a cell Put add(byte[] family, byte[] qualifier, long ts, byte[] value) Put add(byte[] family, byte[] qualifier, byte[] value)
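The slide lists the timestamped overload but does not demonstrate it; here is a minimal sketch reusing the stockCF, quantityCol, and table objects from the earlier Put example:

```java
// Pin the cell to an explicit version timestamp. With the
// three-argument add(), HBase assigns the server's current time.
long ts = System.currentTimeMillis();
Put put = new Put(Bytes.toBytes("pens"));
put.add(stockCF, quantityCol, ts, Bytes.toBytes(24L));
table.put(put);
```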
  • Put Operation – Single Put Example adding multiple column values to a row byte[] tableName = Bytes.toBytes("/path/Shopping"); byte[] itemsCF = Bytes.toBytes("items"); byte[] penCol = Bytes.toBytes("pens"); byte[] noteCol = Bytes.toBytes("notes"); byte[] eraserCol = Bytes.toBytes("erasers"); HTableInterface table = new HTable(hbaseConfig, tableName); Put put = new Put(Bytes.toBytes("mike")); put.add(itemsCF, penCol, Bytes.toBytes(5L)); put.add(itemsCF, noteCol, Bytes.toBytes(5L)); put.add(itemsCF, eraserCol, Bytes.toBytes(2L)); table.put(put);
  • Bytes class http://hbase.apache.org/0.94/apidocs/org/apache/hadoop/hbase/util/Bytes.html  org.apache.hadoop.hbase.util.Bytes  Provides methods to convert Java types to and from byte[] arrays  Supports String, boolean, short, int, long, double, and float  Example: byte[] bytesTablePath = Bytes.toBytes("/path/Shopping"); String myTable = Bytes.toString(bytesTablePath); byte[] amountBytes = Bytes.toBytes(1000L); long amount = Bytes.toLong(amountBytes);
  • Get Operation – Single Get Example byte[] tableName = Bytes.toBytes("/path/Shopping"); byte[] itemsCF = Bytes.toBytes("items"); byte[] penCol = Bytes.toBytes("pens"); HTableInterface table = new HTable(hbaseConfig, tableName); Get get = new Get(Bytes.toBytes("mike")); get.addColumn(itemsCF, penCol); Result result = table.get(get); byte[] val = result.getValue(itemsCF, penCol); System.out.println("Value: " + Bytes.toLong(val));
  • Get Operation – Add And Set methods  Using just a get object will return everything for a row.  To narrow down results call add – addFamily: get all columns for a specific family – addColumn: get a specific column  To further narrow down results, specify more details via one or more set calls then call add – setTimeRange: retrieve columns within a specific range of version timestamps – setTimestamp: retrieve columns with a specific timestamp – setMaxVersions: set the number of versions of each column to be returned – setFilter: add a filter get.addColumn(columnFamilyName, columnName1);
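A sketch combining the add and set calls above, assuming the table and itemsCF objects from the single get example (setTimeRange and setMaxVersions declare IOException, so this belongs in a method that throws or handles it):

```java
Get get = new Get(Bytes.toBytes("mike"));
get.addFamily(itemsCF);                           // all columns in this family
get.setMaxVersions(3);                            // up to 3 versions per column
get.setTimeRange(0L, System.currentTimeMillis()); // only versions in this range
Result result = table.get(get);
```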
  • Result – Retrieve A Value From A Result public static final byte[] ITEMS_CF = Bytes.toBytes("items"); public static final byte[] PENS_COL = Bytes.toBytes("pens"); Get g = new Get(Bytes.toBytes("Adam")); g.addColumn(ITEMS_CF, PENS_COL); Result result = table.get(g); byte[] b = result.getValue(ITEMS_CF, PENS_COL); long valueInColumn = Bytes.toLong(b); http://hbase.apache.org/0.94/apidocs/org/apache/hadoop/hbase/client/Result.html (Example row Adam: items:pens = 18, items:notepads = 7, items:erasers = 10.)
  • Other APIs  Not covering append, delete, and scan (though a brief sketch follows below)  Not covering administrative APIs 24
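Although the deck skips them, delete and scan follow the same instantiate/specify/execute pattern; a minimal sketch against the shopping table, with table, itemsCF, and penCol assumed from the earlier examples (Delete, Scan, and ResultScanner come from org.apache.hadoop.hbase.client):

```java
// Delete an entire row by key
Delete delete = new Delete(Bytes.toBytes("mike"));
table.delete(delete);

// Scan the key range [startRow, stopRow), returning one column
Scan scan = new Scan(Bytes.toBytes("a"), Bytes.toBytes("n"));
scan.addColumn(itemsCF, penCol);
ResultScanner scanner = table.getScanner(scan);
try {
    for (Result r : scanner) {
        System.out.println(Bytes.toString(r.getRow()));
    }
} finally {
    scanner.close(); // scanners hold server-side resources; always close
}
```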
  • Agenda  HBase Overview  HBase APIs  MapR Tables  Example  Securing tables
  • Tables and Files in a Unified Storage Layer (Diagram: Apache HBase on Hadoop stacks the HBase JVM over the HDFS JVM over ext3 over disks; Apache HBase on MapR runs the HBase JVM directly over MapR-FS and disks via the HDFS API; with M7, tables are integrated into the filesystem itself, and MapR-FS exposes both the HBase API and the HDFS API.) The MapR Filesystem is an integrated system – tables and files in a unified filesystem, based on MapR’s enterprise-grade storage layer.
  • Portability  MapR tables use the HBase data model and API  Apache HBase applications work as-is on MapR tables – No need to recompile – No vendor lock-in (Diagram: MapR-FS on disks exposing both the HBase API and the HDFS API.)
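As the earlier snippets hint with names like "/path/Inventory" and "/path/Shopping", the one visible difference on MapR is that a table name can be a filesystem path; a minimal sketch, with hbaseConfig assumed from the earlier examples:

```java
// Standard HBase client code; only the table "name" changes,
// because MapR tables live at paths in the filesystem namespace.
HTableInterface table = new HTable(hbaseConfig, "/user/keys/employees");
```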
  • MapR M7 Table Storage  Table regions live inside a MapR container – Served by the MapR fileserver service running on nodes – HBase RegionServer and HBase Master services are not required (Diagram: client nodes talk directly to the nodes whose containers hold the table regions.)
  • MapR Tables vs. HBase  Apache HBase: compaction delays, manual administration, poor reliability, lengthy disaster recovery  MapR Tables: no compaction delays, easy administration, strong consistency, rapid recovery, 2x Cassandra performance, 3x HBase performance
  • MapR M7 vs. CDH – Mixed Load (50-50)
  • Agenda  HBase Overview  HBase APIs  MapR Tables  Example  Securing tables
  • Example: Employee Database  Column Family: Base – lastName – firstName – address – SSN  Column Family: salary – ‘dynamic’ columns – year:salary  Row key – lastName:firstName? Not unique – Unique id? Can’t search easily – lastName:firstName:id? Can’t search by id 32
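One hedged illustration of the trade-off in the last bullet: a composite lastName:firstName:id row key stays unique and keeps rows sorted (and thus scannable) by name, but lookups by id alone would still need a separate, hypothetical index table. The lastName, firstName, and id variables below are assumed:

```java
// Composite row key: sorts by last name, then first name, and the
// id suffix guarantees uniqueness.
String rowkey = lastName + ":" + firstName + ":" + id;
Put put = new Put(Bytes.toBytes(rowkey));
// Searching by id alone would require a second table mapping
// id -> composite key (not shown).
```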
  • Source: “employee class” public class Employee { String key; String lastName, firstName, address; String ssn; Map<Integer, Integer> salary; … } 33
  • Source: “schema” byte[] BASE_CF = Bytes.toBytes("base"); byte[] SALARY_CF = Bytes.toBytes("salary"); byte[] FIRST_COL = Bytes.toBytes("firstName"); byte[] LAST_COL = Bytes.toBytes("lastName"); byte[] ADDRESS_COL = Bytes.toBytes("address"); byte[] SSN_COL = Bytes.toBytes("ssn"); String tableName = userdirectory + "/" + shortName; byte[] TABLE_NAME = Bytes.toBytes(tableName); 34
  • Source: “get table” HTablePool pool = new HTablePool(); table = pool.getTable(TABLE_NAME); return table; 35
  • Source: “get row”  Whole row Get g = new Get(Bytes.toBytes(key)); Result result = getTable().get(g);  Just base column family Get g = new Get(Bytes.toBytes(key)); g.addFamily(BASE_CF); Result result = getTable().get(g); 36
  • Source: “parse row” Employee e = new Employee(); e.setKey(Bytes.toString(r.getRow())); e.setLastName(getString(r, BASE_CF, LAST_COL)); e.setFirstName(getString(r,BASE_CF, FIRST_COL)); e.setAddress(getString(r,BASE_CF, ADDRESS_COL)); e.setSsn(getString(r,BASE_CF, SSN_COL)); String getString(Result r, byte[] cf, byte[] col) { byte[] b = r.getValue(cf, col); if (b != null) return Bytes.toString(b); else return ""; } 37
  • Source: “parse row” //get salary information Map<byte[], byte[]> m = r.getFamilyMap(SALARY_CF); Iterator<Map.Entry<byte[], byte[]>> i = m.entrySet().iterator(); while (i.hasNext()) { Map.Entry<byte[], byte[]> entry = i.next(); Integer year = Integer.parseInt(Bytes.toString(entry.getKey())); Integer amt = Integer.parseInt(Bytes.toString( entry.getValue())); e.getSalary().put(year, amt); } 38
  • Demo  Create a table using MCS  Create a table and column families using maprcli 39 $ maprcli table create -path /user/keys/employees $ maprcli table cf create -path /user/keys/employees -cfname base $ maprcli table cf create -path /user/keys/employees -cfname salary
  • Demo  Populate with sample data using hbase shell 40 hbase> put '/user/keys/employees', 'k1', 'base:lastName', 'William' > put '/user/keys/employees', 'k1', 'base:firstName', 'John' > put '/user/keys/employees', 'k1', 'base:address', '123 street, springfield, VA' > put '/user/keys/empoyees', 'k1', 'base:ssn', '999-99-9999' > put '/user/keys/employees', 'k1', 'salary:2010', '90000’ > put '/user/keys/employees', 'k1', 'salary:2011', '91000’ > put '/user/keys/employees', 'k1', 'salary:2012', '92000’ > put '/user/keys/employees', 'k1', 'salary:2013', '93000’ ….….
  • Demo  Fetch record using java program 41 $ ./run employees get k1 Use command get against table /user/keys/employees Employee record: Employee [key=k1, lastName=William, firstName=John, address=123 first street, springfield, VA, ssn=999-99-9999, salary={2010=90000, 2011=91000, 2012=92000, 2013=93000}]
  • Demo – run script 42 #!/bin/bash export LD_LIBRARY_PATH=/opt/mapr/hadoop/hadoop-0.20.2/lib/native/Linux-amd64-64 java -cp `hbase classpath`:/home/kbotzum/development/exercises/target/exercises.jar person.botzum.hbase.Demo $*
  • What Didn’t I Consider? 43
  • What Didn’t I Consider?  Row Key  Secondary ways of searching – Other tables as indexes?  Long-term data evolution – Avro? – Protobufs?  Security – SSN is sensitive – Salary looks kind of sensitive 44
  • Agenda  HBase Overview  HBase APIs  MapR Tables  Example  Securing tables
  • MapR Tables Security  Access Control Expressions (ACEs) – Boolean logic to control access at table, column family, and column level 46
  • ACE Highlights  Creator of table has all rights by default – Others have none  Can grant admin rights without granting read/write rights  Defaults for column families set at table level  Access to data depends on column family and column access controls  Boolean logic 47
  • MapR Tables Security  Leverages MapR security when enabled – Wire level authentication – Wire level encryption – Trivial to configure • Most reasonable settings by default • No Kerberos required! – Portable • No MapR specific APIs 48
  • Demo  Enable cluster security  Yes, that’s it! – Now all Web UI and CLI access requires authentication – Traffic is now authenticated using encrypted credentials – Most traffic is encrypted and bulk data transfer traffic can be encrypted 49 # configure.sh –C hostname –Z hostname -secure –genkeys
  • Demo  Fetch record using java program when not authenticated 50 $ ./run employees get k1 Use command get against table /user/keys/employees 14/03/14 18:42:39 ERROR fs.MapRFileSystem: Exception while trying to get currentUser java.io.IOException: failure to login: Unable to obtain MapR credentials
  • Demo  Fetch record using java program 51 $ maprlogin password [Password for user 'keys' at cluster 'my.cluster.com': ] MapR credentials of user 'keys' for cluster 'my.cluster.com' are written to '/tmp/maprticket_1000' $ ./run employees get k1 Use command get against table /user/keys/employees Employee record: Employee [key=k1, lastName=William, firstName=John, address=123 first street, springfield, VA, ssn=999-99-9999, salary={2010=90000, 2011=91000, 2012=92000, 2013=93000}]
  • Demo  Fetch record using java program as someone not authorized to table 52 $ maprlogin password [Password for user 'fred' at cluster 'my.cluster.com': ] MapR credentials of user 'fred' for cluster 'my.cluster.com' are written to '/tmp/maprticket_2001' $ ./run /user/keys/employees get k1 Use command get against table /user/keys/employees 2014-03-14 18:49:20,2787 ERROR JniCommon fs/client/fileclient/cc/jni_common.cc:7318 Thread: 139674989631232 Error in DBGetRPC for table /user/keys/employees, error: Permission denied(13) Exception in thread "main" java.io.IOException: Error: Permission denied(13)
  • Demo  Set ACEs to allow read to base information but not salary  Fetch whole record using java program 53 $ ./run /user/keys/employees get k1 Use command get against table /user/keys/employees 2014-03-14 18:53:15,0806 ERROR JniCommon fs/client/fileclient/cc/jni_common.cc:7318 Thread: 139715048077056 Error in DBGetRPC for table /user/keys/employees, error: Permission denied(13) Exception in thread "main" java.io.IOException: Error: Permission denied(13)
  • Demo  Set ACEs to allow read to base information but not salary  Fetch just base record using java program 54 $ ./run employees getbase k1 Use command get against table /user/keys/employees Employee record: Employee [key=k1, lastName=William, firstName=John, address=123 first street, springfield, VA, ssn=999-99-9999, salary={}]
  • What Else Didn’t I Consider?  55
  • References  http://www.mapr.com/blog/getting-started-mapr-security-0  http://www.mapr.com/  http://hadoop.apache.org/  http://hbase.apache.org/  http://tech.flurry.com/2012/06/12/137492485/  http://en.wikipedia.org/wiki/Lexicographical_order  HBase in Action, Nick Dimiduk and Amandeep Khurana  HBase: The Definitive Guide, Lars George  Note: this presentation includes materials from the MapR HBase training classes
  • Questions? (© MapR Technologies, confidential) 57
  • HBase Architecture (© MapR Technologies, confidential) 58
  • What is HBase? (Cluster View)  ZooKeeper (ZK)  HMaster (HM)  Region Servers (RS) For MapR, there is less delineation between Control and Data Nodes. (Diagram: master servers run ZooKeeper, the NameNode, and HMaster; slave servers each run a Region Server alongside a Data Node.)
  • What is a Region?  The basic partitioning/sharding unit of HBase.  Each region is assigned a range of keys it is responsible for.  Region servers serve data for reads and writes. (Diagram: a client consults ZooKeeper and the HMaster, then reads and writes regions hosted on the region servers.)