Chicago Data Summit: Apache HBase: An Introduction

Apache HBase is an open source distributed data-store capable of managing billions of rows of semi-structured data across large clusters of commodity hardware. HBase provides real-time random read-write access as well as integration with Hadoop MapReduce, Hive, and Pig for batch analysis. In this talk, Todd will provide an introduction to the capabilities and characteristics of HBase, comparing and contrasting it with traditional database systems. He will also introduce its architecture and data model, and present some example use cases.


Comments
  • Great stuff! Thanks for sharing!
  • Fantastic deck. Perfect for a brown-bag session on HBase.
Speaker Notes

  • HBase is a project that solves this problem. In a sentence, HBase is an open source, distributed, sorted map modeled after Google’s BigTable. Open source: Apache HBase is an open source project with an Apache 2.0 license. Distributed: HBase is designed to use multiple machines to store and serve data. Sorted map: HBase stores data as a map, and guarantees that adjacent keys are stored next to each other on disk. HBase is modeled after BigTable, a system used for hundreds of applications at Google.
  • Earlier, I said that HBase is a big sorted map. Here is an example of a table. The map key is (row key + column + timestamp); the value is the cell contents. The rows in the map are sorted by key. In this example, Row1 has 3 columns in the "info" column family; Row2 has only a single column. A column can also be empty. Each cell has a timestamp. By default, the timestamp is set to the current time (in milliseconds since the Unix epoch, January 1st, 1970) when the row is inserted. A client can specify a timestamp when inserting or retrieving data, and specify how many versions of each cell should be maintained. Data in HBase is untyped; everything is an array of bytes. Rows are sorted lexicographically, and this order is maintained on disk, so Row1 and Row2 can be read together in just one disk seek.
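    A minimal sketch of this timestamp and versioning behavior using the
    classic HBase Java client (pre-1.0 class names; the "example" table and
    its columns are invented for illustration):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hbase.HBaseConfiguration;
        import org.apache.hadoop.hbase.client.Get;
        import org.apache.hadoop.hbase.client.HTable;
        import org.apache.hadoop.hbase.client.Put;
        import org.apache.hadoop.hbase.client.Result;
        import org.apache.hadoop.hbase.util.Bytes;

        public class VersionDemo {
          public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "example");  // hypothetical table

            // Insert a cell; HBase stamps it with the current time by default.
            Put p1 = new Put(Bytes.toBytes("Row1"));
            p1.add(Bytes.toBytes("info"), Bytes.toBytes("state"),
                   Bytes.toBytes("CA"));
            table.put(p1);

            // Or supply an explicit timestamp (ms since the Unix epoch).
            Put p2 = new Put(Bytes.toBytes("Row1"));
            p2.add(Bytes.toBytes("info"), Bytes.toBytes("state"),
                   1273616297446L, Bytes.toBytes("NY"));
            table.put(p2);

            // Retrieve up to 3 stored versions of the cell.
            Get g = new Get(Bytes.toBytes("Row1"));
            g.addColumn(Bytes.toBytes("info"), Bytes.toBytes("state"));
            g.setMaxVersions(3);
            Result r = table.get(g);
            table.close();
          }
        }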
  • Given that HBase stores a large sorted map, the API looks similar to a map: you can get or put individual rows, or scan a range of rows. There is also a very efficient way of incrementing a particular cell, which can be useful for maintaining high-performance counters or statistics. Lastly, it’s possible to write MapReduce jobs that analyze the data in HBase.
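    As a rough illustration, the core calls in the classic Java client
    (a sketch against the pre-1.0 API; the "users" table and its columns are
    invented names):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hbase.HBaseConfiguration;
        import org.apache.hadoop.hbase.client.*;
        import org.apache.hadoop.hbase.util.Bytes;

        public class BasicOps {
          public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "users");  // hypothetical table

            // put: write columns into one row
            Put put = new Put(Bytes.toBytes("tlipcon"));
            put.add(Bytes.toBytes("info"), Bytes.toBytes("state"),
                    Bytes.toBytes("CA"));
            table.put(put);

            // get: fetch one row by its key
            Result row = table.get(new Get(Bytes.toBytes("tlipcon")));

            // increment: atomically bump a counter cell; returns the new value
            long n = table.incrementColumnValue(Bytes.toBytes("tlipcon"),
                Bytes.toBytes("info"), Bytes.toBytes("logins"), 1L);

            // scan: iterate rows in a key range, in sorted order
            ResultScanner scanner = table.getScanner(
                new Scan(Bytes.toBytes("a"), Bytes.toBytes("n")));
            for (Result r : scanner) {
              System.out.println(Bytes.toString(r.getRow()));
            }
            scanner.close();
            table.close();
          }
        }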
  • One of the interesting things about NoSQL is that the different systems don’t usually compete directly; each has picked different tradeoffs. HBase is a strongly consistent system, so it does not have as good availability as an eventually consistent system like Cassandra, but we find that availability is good in practice. Since HBase is built on top of Hadoop, it has very good integration: for example, a very efficient bulk-load feature, and the ability to run MapReduce into or out of HBase tables. HBase’s partitioning is range-based, and data is sorted by key on disk; this differs from systems that use a hash function to distribute keys, and it can guarantee that, for a given user account, all of that user’s data can be read with just one disk seek. HBase automatically reshards when necessary, and regions are automatically reassigned if servers die. Adding more servers is simple: just turn them on; there is no “reshard” step. HBase is not just a key-value store: like Cassandra, each row has a sparse set of columns which are efficiently stored.
  • Data layout: a traditional RDBMS uses a fixed schema and a row-oriented storage model, which has drawbacks if the number of columns per row varies drastically; a semi-structured, column-oriented store handles this case very well. Transactions: an RDBMS offers strict ACID compliance with full transaction support; HBase currently offers transactions on a per-row basis, and work is being done to expand HBase's transactional support. Query language: RDBMSs support SQL, a full-featured language for filtering, joining, aggregating, sorting, etc.; HBase does not support SQL*, and there are two ways to find rows in HBase: get a row by key, or scan a table. Security: as of version 0.20.4, authentication and authorization are not yet available for HBase. Indexes: in a typical RDBMS, indexes can be created on arbitrary columns; HBase has no traditional indexes**, but rows are stored sorted, with a sparse index of row offsets, so finding a row by its row key is very fast. Max data size: most RDBMS architectures are designed to store GBs or TBs of data; HBase can scale to much larger data sizes. Read/write throughput limits: typical RDBMS deployments can scale to thousands of queries/second; there is virtually no upper bound to the number of reads and writes HBase can handle. (* Hive/HBase integration is being worked on. ** There are contrib packages for building indexes on HBase tables.)
  • People often want to know “the numbers” about a storage system. I recommend that you test it yourself: benchmarks always lie. But here are some general numbers about HBase. The largest cluster I’ve seen is 600 nodes, storing around 600TB. Most clusters are much smaller, only 5-20 nodes, hosting a few hundred gigabytes. Generally, writes take a few milliseconds, and throughput is on the order of thousands of writes per node per second, though of course it depends on the size of the writes. Reads take a few milliseconds if the data is in cache, or 10-30ms if disk seeks are required. We generally don’t recommend storing very large values in HBase; it is not efficient if the values stored are more than a few MB.
  • HBase is currently used in production at a number of companies. Here are a few examples. Facebook is using HBase for a new user-facing product which is going to launch very soon, and also for analytics. StumbleUpon hosts large parts of its website from HBase, and also built an advertising platform on it. Mozilla’s crash-reporting infrastructure is based on HBase: if your browser crashes and you submit the crash report to Mozilla, it is stored in HBase for later analysis by the Firefox developers.
  • So, if you are interested in Hadoop and HBase, here are some resources. The easiest way to install Hadoop is to use Cloudera’s Distribution for Hadoop from cloudera.com. You can also download the Apache source directly from hadoop.apache.org. You can get started on your laptop, in a VM, or running on EC2. I also recommend our free training videos on our website. The book Hadoop: The Definitive Guide is also really great, and is available translated into Japanese.
  • Thanks very much for having me! If you have any questions, please feel free to ask now or send me an email. Also, we’re hiring both in the USA and in Japan, so if you’re interested in working on Hadoop or HBase, please get in touch.

Presentation Transcript

  • Apache HBase: an introduction
    • Todd Lipcon
    • [email_address]
    • @tlipcon @cloudera
    • April 26th, 2011
    • Software Engineer at Cloudera
    • Committer and PMC member on Apache HBase, HDFS, MapReduce, and Thrift
    • Previously: systems programming, operations, large scale data analysis
    • I love data and data systems
    Introductions
  • Outline
    • What is HBase?
    • HBase Architecture 101
    • HBase vs Other Technologies
    • Use Cases
    • Questions
  • Apache HBase: HBase is an open source, distributed, sorted map datastore modeled after Google’s BigTable
  • Open Source
    • Apache 2.0 License
    • Committers and contributors from diverse organizations
      • Cloudera, Facebook, StumbleUpon, Trend Micro, etc.
  • Distributed
    • Store and access data on 1-700 commodity servers
    • Automatic failover based on Apache ZooKeeper
    • Linear scaling of capacity and IOPS by adding servers
  • Sorted Map Datastore
    • Not a relational database (very light “schema”)
    • Tables consist of rows, each of which has a primary key (row key)
    • Each row may have any number of columns, like a Map<byte[], byte[]> (see the sketch after this list)
    • Rows are stored in sorted order
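    One way to picture this data model (an analogy only, not HBase code): a
    table behaves like nested sorted maps, keyed by row, then column, then
    timestamp in descending order:

        import java.util.Collections;
        import java.util.TreeMap;
        import org.apache.hadoop.hbase.util.Bytes;  // byte[] comparator

        public class SortedMapModel {
          // row key -> column key -> timestamp (newest first) -> cell value
          TreeMap<byte[], TreeMap<byte[], TreeMap<Long, byte[]>>> table =
              new TreeMap<byte[], TreeMap<byte[], TreeMap<Long, byte[]>>>(
                  Bytes.BYTES_COMPARATOR);

          TreeMap<Long, byte[]> newVersionMap() {
            // timestamps sort descending, so the newest version comes first
            return new TreeMap<Long, byte[]>(Collections.<Long>reverseOrder());
          }
        }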
  • Sorted Map Datastore (logical view as “records”)

    Row key   Data
    cutting   info:  { ‘height’: ‘9ft’, ‘state’: ‘CA’ }
              roles: { ‘ASF’: ‘director’, ‘Hadoop’: ‘founder’ }
    tlipcon   info:  { ‘height’: ‘5ft7’, ‘state’: ‘CA’ }
              roles: { ‘Hadoop’: ‘committer’@ts=2010,
                       ‘Hadoop’: ‘PMC’@ts=2011,
                       ‘Hive’: ‘contributor’ }

    Slide annotations: the row key is the implicit PRIMARY KEY in RDBMS
    terms; different rows may have different sets of columns (the table is
    sparse); different types of data are separated into different “column
    families”; a single cell might have different values at different
    timestamps; data is all byte[] in HBase.
  • Sorted Map Datastore (physical view as “cells”)

    Row key   Column key    Timestamp      Cell value
    cutting   info:height   1273516197868  9ft
    cutting   info:state    1273871824184  CA
    cutting   roles:ASF     1273871823022  director
    cutting   roles:Hadoop  1273746289103  founder
    tlipcon   info:height   1273878447049  5ft7
    tlipcon   info:state    1273616297446  CA
    tlipcon   roles:Hadoop  1300062064923  PMC
    tlipcon   roles:Hadoop  1293388212294  Committer
    tlipcon   roles:Hive    1273616297446  contributor

    Sorted on disk by row key, column key, and descending timestamp.
    Timestamps are milliseconds since the Unix epoch.
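    To see this physical view from the Java client, a row comes back as a set
    of KeyValue cells sorted exactly this way (a fragment, assuming an open
    HTable named table; pre-1.0 API):

        Result r = table.get(new Get(Bytes.toBytes("cutting")));
        for (KeyValue kv : r.raw()) {   // org.apache.hadoop.hbase.KeyValue
          System.out.printf("%s %s:%s @%d = %s%n",
              Bytes.toString(kv.getRow()),
              Bytes.toString(kv.getFamily()),
              Bytes.toString(kv.getQualifier()),
              kv.getTimestamp(),
              Bytes.toString(kv.getValue()));
        }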
  • Column Families
    • Different sets of columns may have different properties and access patterns
    • Configurable by column family:
      • Compression (none, gzip, LZO)
      • Version retention policies
      • Cache priority
    • CFs stored separately on disk: access one without wasting IO on the other (a creation-time configuration sketch follows below)
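    A sketch of setting these per-family properties when creating a table via
    the admin API (pre-1.0 class names; the "users" table and its families are
    invented; compression is also set per-family, but the enum's location
    varies across versions, so it is omitted here):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hbase.HBaseConfiguration;
        import org.apache.hadoop.hbase.HColumnDescriptor;
        import org.apache.hadoop.hbase.HTableDescriptor;
        import org.apache.hadoop.hbase.client.HBaseAdmin;

        public class CreateTable {
          public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HBaseAdmin admin = new HBaseAdmin(conf);

            HColumnDescriptor info = new HColumnDescriptor("info");
            info.setMaxVersions(3);    // version retention policy
            info.setInMemory(true);    // higher cache priority

            HColumnDescriptor roles = new HColumnDescriptor("roles");
            roles.setMaxVersions(10);

            HTableDescriptor desc = new HTableDescriptor("users");
            desc.addFamily(info);      // each family is stored separately
            desc.addFamily(roles);
            admin.createTable(desc);
          }
        }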
  • HBase API
    • get(row)
    • put(row, Map<column, value>)
    • scan(key range, filter)
    • increment(row, columns)
    • … (checkAndPut, delete, etc.; checkAndPut is sketched after this list)
    • MapReduce/Hive
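    For example, checkAndPut performs an atomic compare-and-set within one row
    (a fragment, assuming an open HTable named table; names are illustrative):

        // Set info:state to "CA" only if its current value is "NY".
        Put put = new Put(Bytes.toBytes("tlipcon"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("state"),
                Bytes.toBytes("CA"));
        boolean applied = table.checkAndPut(
            Bytes.toBytes("tlipcon"),
            Bytes.toBytes("info"), Bytes.toBytes("state"),
            Bytes.toBytes("NY"),   // expected current value
            put);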
  • Accessing HBase
    • Java API (thick client)
    • REST/HTTP
    • Apache Thrift (any language)
    • Hive/Pig for analytics
  • High Level Architecture [diagram: Your Application reaches HBase through the Java Client, MapReduce, Hive/Pig, or a Thrift/REST Gateway; HBase runs on HDFS and coordinates via ZooKeeper]
  • HBase Terms
    • Region
      • A subset of a table's rows, like a range partition
      • Automatically sharded
    • RegionServer (slave)
      • Serves data for reads and writes
    • Master
      • Responsible for coordinating the slaves
      • Assigns regions, detects failures of RegionServers, and controls some admin functions
  • HBase Daemons [diagram: a ZooKeeper quorum of ZK peers, an HMaster (with standby), and multiple RegionServers on top of HDFS]
    • Client finds RegionServer addresses in ZooKeeper
    • Client reads and writes rows by directly accessing the RegionServers
    • Master assigns regions and achieves load balancing
  • HBase Deployment (big cluster) [diagram: HDFS NameNode, Secondary NameNode, and MapReduce JobTracker on dedicated nodes; ZooKeeper on 3 or 5 nodes; HMaster with one standby; 40+ slaves each running RegionServer, DataNode, and TaskTracker]
  • HBase Deployment (small cluster / POC) [diagram: NameNode, SecondaryNameNode, HMaster, JobTracker, and ZooKeeper together on one node, the proverbial basket full of eggs; 5+ slaves each running RegionServer, DataNode, and TaskTracker]
  • HBase vs other systems
  • HBase vs just HDFS

                             Plain HDFS/MR                 HBase
    Write pattern            Append-only                   Random write, bulk incremental
    Read pattern             Full table scan,              Random read, small range scan,
                             partition table scan          or table scan
    Hive (SQL) performance   Very good                     4-5x slower
    Structured storage       Do-it-yourself / TSV /        Sparse columnar data model
                             SequenceFile / Avro / ?
    Max data size            30+ PB                        ~1PB

    If you have neither random write nor random read, stick to HDFS!
  • HBase vs RDBMS

                                   RDBMS                    HBase
    Data layout                    Row-oriented             Column-oriented
    Transactions                   Multi-row ACID           Single row only
    Query language                 SQL                      get/put/scan/etc. *
    Security                       Authentication/          Work in progress
                                   Authorization
    Indexes                        On arbitrary columns     Row-key only
    Max data size                  TBs                      ~1PB
    Read/write throughput limits   1000s queries/second     Millions of queries/second
  • HBase vs other “NoSQL”
    • Favors Consistency over Availability (but availability is good in practice!)
    • Great Hadoop integration (very efficient bulk loads, MapReduce analysis)
    • Ordered range partitions (not hash)
    • Automatically shards/scales (just turn on more servers)
    • Sparse column storage (not key-value)
  • HBase in Numbers
    • Largest cluster: 700 nodes, ~700TB
    • Most clusters: 5-20 nodes, 100GB-4TB
    • Writes: 1-3ms, 1k-10k writes/sec per node
    • Reads: 0-3ms cached, 10-30ms disk
      • 10-40k reads/second/node from cache
    • Cell size: 0-3MB preferred
  • Use cases
  • Audit Logging
    • Online service requires per-user audit logs
    • Row key userid_timestamp allows efficient range-scan lookups to fetch per-user history (sketched after this list)
    • Server-side Filter mechanism allows efficient queries
    • MapReduce for analytic questions about user behavior
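    A hypothetical sketch of that lookup: because rows are sorted, all events
    for one user form a contiguous key range (the userid "alice" is invented;
    assumes an open HTable named table):

        // Keys look like "alice_<timestamp>"; '`' sorts immediately after
        // '_', so the range ["alice_", "alice`") covers exactly alice's rows.
        Scan scan = new Scan(Bytes.toBytes("alice_"), Bytes.toBytes("alice`"));
        ResultScanner scanner = table.getScanner(scan);
        for (Result event : scanner) {
          // process one audit event row
        }
        scanner.close();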
  • Facebook Analytics
    • Realtime counters of URLs shared, links “liked”, impressions generated
    • 20 billion events/day (200K events/sec)
    • ~30 second latency from click to count
    • Heavy use of the incrementColumnValue API for consistent counters (sketched after this list)
    • Tried MySQL, Cassandra, settled on HBase
    http://tiny.cloudera.com/hbase-fb-analytics
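    The counter pattern behind this, sketched with invented row/column names
    (incrementColumnValue is atomic per cell, so concurrent events never lose
    updates):

        // Atomically bump the "liked" count for one URL; returns the new total.
        long likes = table.incrementColumnValue(
            Bytes.toBytes("http://example.com/some-page"),  // row key: the URL
            Bytes.toBytes("counts"),                        // column family
            Bytes.toBytes("liked"),                         // column qualifier
            1L);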
  • OpenTSDB
    • Scalable time-series store and metrics collector
    • Thousands of machines each generating hundreds of operational metrics
    • Thousands of writes/second (a row-key sketch follows this list)
    • Web interface to fetch and display graphs of metrics over time for selected hosts
    http://opentsdb.net
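    To illustrate the general idea (this is not OpenTSDB's actual schema, just
    a hypothetical time-series row-key layout): prefixing the row key with the
    metric name and a coarse time bucket keeps each metric's samples in one
    contiguous, scannable range.

        long hourBucket = 1303826400000L;   // hour boundary, ms since epoch
        int offsetMs = 123456;              // offset within the hour
        double value = 0.87;                // the metric sample

        byte[] row = Bytes.add(Bytes.toBytes("cpu.user.host42."),
                               Bytes.toBytes(hourBucket));
        Put p = new Put(row);
        p.add(Bytes.toBytes("t"), Bytes.toBytes(offsetMs),
              Bytes.toBytes(value));
        table.put(p);   // assumes an open HTable named table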
  • Powered By HBase … and others
  • Use HBase if…
      • You need random write, random read, or both (but not neither)
      • You need to do many thousands of operations per second on multiple TB of data
      • Your access patterns are well-known and simple
  • Don’t use HBase if…
      • You only append to your dataset, and tend to read the whole thing
      • You primarily do ad-hoc analytics (ill-defined access patterns)
      • Your data easily fits on one beefy node
  • Resources
    • Download CDH3 (http://cloudera.com/)
    • Cloudera HBase training (1st chapter free online)
    • http://hbase.apache.org/
    • irc.freenode.net #hbase
    • Coming soon: HBase: The Definitive Guide by Lars George
  • Questions?
    • [email_address]
    • (feedback? yes!)
    • (hiring? yes!)