Sf NoSQL MeetUp: Apache Hadoop and HBase

What is Apache Hadoop, what does it do? How does Hadoop work? What is Apache HBase? Hadoop and HBase use cases.

  • Given the nature of this meetup, I imagine that most people already know this. Data is very important, and it is getting more important every day.
  • Google wrote an interesting article last year about this, called “The Unreasonable Effectiveness of Data”. In this paper, they talk about the algorithm for Google Translate, which has done very well in various competitions. They say that it is “unreasonable” that it works so well, since they do not use an advanced algorithm. Instead, they feed a simple algorithm with more data than anyone else, since they collect the entire web. With more data, they can do a better job even without doing anything fancy.
  • For example, if you are a credit card company, you can use transaction data to determine how risky a loan is. If a customer drinks a lot of beer, he is probably risky. If the customer buys equipment for a dentist’s office, he is probably less risky. If a credit card company can do a better job of predicting risk, it will save them billions of dollars per year.
  • One good quote is this, from Hal Varian, Google’s chief economist. He is saying that engineering is important, but what will differentiate businesses is their ability to extract information from data.
  • So, what is Hadoop?
  • Hadoop is an open source project hosted by the Apache Software Foundation that can reliably store and process a lot of data. It does this using commodity computers, like typical servers from Dell, HP, SuperMicro, etc. Here is a screenshot of a Hadoop cluster that has a capacity of 1.5 petabytes. This is not the largest Hadoop cluster! Hadoop can easily store many petabytes of information.
  • Hadoop has two main components. The first is HDFS, the Hadoop Distributed File System, which stores data. The second is MapReduce, a fault tolerant distributed processing system, which processes the data stored in HDFS.
  • The first thing is that Hadoop takes care of the distributed-systems problems for you. As we said earlier, statisticians are the ones who need to be looking at data, but there are not many statisticians who are also systems programmers. Hadoop takes care of the systems problems so that the analysts can look at the data.
  • Hadoop is also different because it harnesses the power of a cluster, while not making users interact separately with a bunch of machines. A user can write one piece of code and submit it to the cluster, and Hadoop will automatically deploy and run the code on all of the machines.
  • Hadoop is also special because it really scales linearly, both in terms of data size and analysis complexity. For example, you may not have a lot of data, but you may want to do something very complicated with it – for example, detecting faces in a lot of images. Or, you may have a huge amount of data and just want to summarize or aggregate some metrics. Hadoop can work for both kinds of applications.
  • Hadoop sounds great – it can make 4000 servers look like one big computer, and it can store and process petabytes of information. Let’s look at how it works.
  • Let’s look at a typical Hadoop cluster. Most production clusters have at least 5 servers, though you can run it on a laptop for development. A typical server probably has 8 cores, 24GB of RAM, 4-12TB of disk, and gigabit ethernet, for example something like a Dell R410 or an HP SL170. On larger clusters, the machines are spread out in multiple racks, with 20 or 40 nodes per rack. The largest Hadoop clusters have about 4000 servers in them.
  • Hadoop has 4 main types of nodes. There are a few special “master” nodes. The NameNode stores metadata about the filesystem – for example the names and permissions of all of the files on HDFS. The JobTracker acts as a scheduler for processing being done on the cluster. Then there are the slave nodes, which run on every machine in the cluster. The DataNodes store the actual file data, and the TaskTrackers run the actual analysis code that a user submits.
  • Let’s look more closely at HDFS. As I said, the NameNode is responsible for metadata storage. Here we see that the NameNode has a file called /logs/weblog.txt. When it is written, it is automatically split into pieces called “blocks”, each of which has a numeric ID. The default block size is 64MB, but if a file is not a multiple of 64MB, a smaller final block is used, so space is not wasted. Each block is then replicated to three DataNodes, so that if any DataNode crashes, the data is not lost.
  • This is a simplified diagram of how data is written to HDFS. First, the client asks the NameNode to create a new file. Then it writes the data directly to the DataNodes, so that the NameNode does not become a bottleneck when loading data. When the data has been completely written, the client informs the NameNode, and the NameNode saves the metadata. A rough sketch of this write path in code follows below.
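    A minimal sketch (not taken from the talk) of this write path using the Hadoop Java FileSystem API. The NameNode address is an assumed example; the path /logs/weblog.txt comes from the earlier slide.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed NameNode address, for illustration only.
        conf.set("fs.default.name", "hdfs://namenode.example.com:8020");
        FileSystem fs = FileSystem.get(conf);

        // create() asks the NameNode for a new file entry...
        FSDataOutputStream out = fs.create(new Path("/logs/weblog.txt"));
        // ...then the file data is streamed directly to DataNodes in blocks.
        out.write("127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] \"GET /userimage/123 HTTP/1.0\" 200 2326\n"
            .getBytes("UTF-8"));
        // Closing the stream tells the NameNode the file is complete; it then saves the metadata.
        out.close();
        fs.close();
      }
    }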
  • So now, we’ve uploaded a file into HDFS. HDFS has split it into chunks for us, and spread those chunks around the cluster on the DataNodes. But we don’t want to just store the data – we want to process it, too.
  • This is where MapReduce comes in. I imagine some of the earlier presenters already covered MapReduce, so I’ll try to move quickly.
  • The basic idea of MapReduce is simple. You provide just two functions, map, and reduce, and the framework takes care of the rest. That means you don’t need to worry about fault tolerance, or figuring out where the data is stored, for example.
  • First, let’s look at map(). Map is a function that takes a key/value pair and outputs a list of keys and values. For this example, we’re going to look at a MapReduce job that tells us how many bytes were transferred for each type of image in our Apache web logs. The input key here is just the offset in the log file; the value is the text of the line itself. Our map function parses the line and outputs simply the image type and the number of bytes transferred. The MapReduce framework will automatically run this function on the same node where the data is stored, on all of the nodes in the cluster at once. A sketch of such a map function follows below.
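    A minimal sketch of the map function described here, using the Hadoop Java MapReduce API; the class name and the crude log parsing are illustrative assumptions, not the exact code from the talk.
    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class ImageBytesMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
      // Input key: byte offset in the log file. Input value: one log line, e.g.
      // 127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /userimage/123 HTTP/1.0" 200 2326
      @Override
      protected void map(LongWritable offset, Text line, Context context)
          throws IOException, InterruptedException {
        String[] fields = line.toString().split(" ");
        if (fields.length < 7) {
          return;                                  // skip malformed lines
        }
        String path = fields[6];                   // e.g. /userimage/123
        String[] parts = path.split("/");
        if (parts.length < 2) {
          return;
        }
        String imageType = parts[1];               // e.g. "userimage"
        try {
          long bytes = Long.parseLong(fields[fields.length - 1]);  // response size
          context.write(new Text(imageType), new LongWritable(bytes));
        } catch (NumberFormatException e) {
          // response size was "-"; ignore this line
        }
      }
    }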
  • But wait! I said earlier that HDFS just stores bytes, but the Map function acts on keys and values. MapReduce uses a class called InputFormat to convert between bytes and key/value pairs. This is very powerful, since it means you don’t need to figure out a schema ahead of time – you can just load data and write an input format that parses it however you need.
  • For each output from the map function, MapReduce will assign it to a “reducer”. So, if two different log files have data for the same image, the byte counts will still end up at the same reducer.
  • The reducer function takes a key, and then a list of all of the values for that key. Here we see that user images were requested 3 times. The reducer calculates a sum. The default output format is TextOutputFormat, which produces a tab separated output file. In this case we found out that 6346 bytes of bandwidth were used for user images.
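    A minimal sketch of the corresponding reduce function (the class name is assumed); it simply sums the byte counts it receives for each image type.
    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class ImageBytesReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
      @Override
      protected void reduce(Text imageType, Iterable<LongWritable> byteCounts, Context context)
          throws IOException, InterruptedException {
        long total = 0;
        for (LongWritable count : byteCounts) {
          total += count.get();                    // e.g. 2326 + 1000 + 3020 = 6346
        }
        // With the default TextOutputFormat this becomes a tab-separated line: userimage\t6346
        context.write(imageType, new LongWritable(total));
      }
    }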
  • Here’s a diagram of MapReduce from beginning to end. On the left you can see the input on HDFS, which has been split into 5 pieces. The pieces get assigned to map functions on three different nodes. Each of these then outputs some keys, which get grouped and sent to two different reducers, which also run on the cluster. The reducers then write their output back to HDFS. If at any point any node fails, the MapReduce framework will reassign the work to a different node.
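    A minimal driver sketch that wires the mapper, reducer, and formats together; the input and output paths and the job name are assumptions for illustration. TextInputFormat is what turns raw HDFS bytes into (byte offset, line) pairs, as discussed above.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

    public class ImageBytesJob {
      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "image-bytes");   // 0.20-era API
        job.setJarByClass(ImageBytesJob.class);
        job.setInputFormatClass(TextInputFormat.class);          // bytes -> (offset, line) pairs
        job.setMapperClass(ImageBytesMapper.class);
        job.setReducerClass(ImageBytesReducer.class);
        job.setOutputFormatClass(TextOutputFormat.class);        // tab-separated output
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.addInputPath(job, new Path("/logs"));        // assumed input directory
        FileOutputFormat.setOutputPath(job, new Path("/logs-out"));  // assumed output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }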
  • Now I have to apologize to you. I am speaking at a NoSQL event, but Hadoop is not NoSQL! There is a project called Hive which adds SQL support to Hadoop. Hive takes a query written in HiveQL, a dialect of SQL, and compiles it into a query plan. The query plan is expressed as MapReduce jobs, which are automatically executed on the cluster. The results can be returned to a client or written directly back to the cluster.
  • Here’s an example Hive query. First, we create a table. Note that Hive’s tables can be stored as text – here it is just tab-separated values: “fields terminated by '\t'”. Next, we load a particular dataset into the table. Then we can create a new summary table by issuing a query over the first table. This is a simple example, but Hive can do most common SQL functions.
  • In addition to Hive, Hadoop has a number of other projects in its ecosystem. For example, Sqoop can import or export data from relational databases, and Pig is another high-level scripting language to help write MapReduce jobs quickly.
  • Hadoop is also heavily used in production. Here are a few examples of companies that use Hadoop. In addition to these very big clusters at companies like Yahoo and Facebook, there are hundreds of smaller companies with clusters between 5 and 40 nodes.
  • So, we just saw how MapReduce can be used to do analysis on a large dataset of Apache logs. MapReduce is a batch system, though – the very fastest MapReduce job takes about 24 seconds to run, even if the dataset is tiny. Also, HDFS is an append-only file system – it doesn’t support editing existing files. So, it is not like a database that will serve a web site.
  • HBase is a project that solves this problem. In a sentence, HBase is an open source, distributed, sorted map modeled after Google’s BigTable. Open source: Apache HBase is an open source project with an Apache 2.0 license. Distributed: HBase is designed to use multiple machines to store and serve data. Sorted map: HBase stores data as a map, and guarantees that adjacent keys will be stored next to each other on disk. Modeled after BigTable: BigTable is a system that is used for hundreds of applications at Google.
  • Earlier, I said that HBase is a big sorted map. Here is an example of a table. The map key is (row key + column + timestamp), and the value is the cell contents. The rows in the map are sorted by key. In this example, Row1 has 3 columns in the "info" column family, and Row2 has only a single column; a column can also be empty. Each cell has a timestamp. By default, the timestamp is set to the current time (in milliseconds since the Unix epoch, January 1st 1970) when the cell is inserted. A client can specify a timestamp when inserting or retrieving data, and can specify how many versions of each cell should be maintained (see the sketch below). Data in HBase is untyped; everything is an array of bytes. Rows are sorted lexicographically, and this order is maintained on disk, so Row1 and Row2 can be read together in just one disk seek.
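    A minimal sketch (assumed table and column names) of writing a cell with explicit timestamps and reading back multiple versions, using the HBase Java client API of that era.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseVersionsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "users");               // assumed table name

        // Write the same cell twice with explicit timestamps (normally the
        // current time in milliseconds since the Unix epoch is used).
        Put put = new Put(Bytes.toBytes("Row1"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("name"), 1000L, Bytes.toBytes("old value"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("name"), 2000L, Bytes.toBytes("new value"));
        table.put(put);

        // Ask for all stored versions of the row's cells.
        Get get = new Get(Bytes.toBytes("Row1"));
        get.setMaxVersions();
        Result result = table.get(get);
        for (KeyValue kv : result.raw()) {
          System.out.println("ts=" + kv.getTimestamp()
              + " value=" + Bytes.toString(kv.getValue()));
        }
        table.close();
      }
    }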
  • Given that HBase stores a large sorted map, the API looks similar to a map. You can get or put individual rows, or scan a range of rows. There is also a very efficient way of incrementing a particular cell – this can be useful for maintaining high-performance counters or statistics. Lastly, it’s possible to write MapReduce jobs that analyze the data in HBase. A sketch of these calls follows below.
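    A minimal sketch of these API calls with the HBase Java client (the table, family, and column names are made up for illustration).
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseApiSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "users");

        // put(row, column -> value)
        Put put = new Put(Bytes.toBytes("Row1"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("example value"));
        table.put(put);

        // get(row)
        Result row = table.get(new Get(Bytes.toBytes("Row1")));
        System.out.println(Bytes.toString(
            row.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));

        // scan(key range): rows come back in sorted key order
        ResultScanner scanner = table.getScanner(
            new Scan(Bytes.toBytes("Row1"), Bytes.toBytes("Row9")));
        for (Result r : scanner) {
          System.out.println(Bytes.toString(r.getRow()));
        }
        scanner.close();

        // increment(row, column): an atomic counter, useful for statistics
        table.incrementColumnValue(Bytes.toBytes("Row1"),
            Bytes.toBytes("info"), Bytes.toBytes("views"), 1L);

        table.close();
      }
    }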
  • One of the interesting things about NoSQL is that the different systems don’t usually compete directly; we have all picked different tradeoffs. HBase is a strongly consistent system, so it does not have as good availability as an eventual-consistency system like Cassandra. But we find that availability is good in practice! Since HBase is built on top of Hadoop, it has very good integration: for example, there is a very efficient bulk load feature, and the ability to run MapReduce into or out of HBase tables. HBase’s partitioning is range based, and data is sorted by key on disk. This is different from systems that use a hash function to distribute keys, and it can be useful for guaranteeing that all of a given user account’s data can be read with just one disk seek. HBase automatically reshards when necessary, and regions are automatically reassigned if servers die. Adding more servers is simple – just turn them on; there is no “reshard” step. Finally, HBase is not just a key-value store – it is similar to Cassandra in that each row has a sparse set of columns which are efficiently stored.
  • Data layout: a traditional RDBMS uses a fixed schema and a row-oriented storage model. This has drawbacks if the number of columns per row varies drastically; a semi-structured, column-oriented store handles this case very well. Transactions: a benefit that an RDBMS offers is strict ACID compliance with full transaction support. HBase currently offers transactions on a per-row basis, and there is work being done to expand HBase’s transactional support. Query language: RDBMSs support SQL, a full-featured language for filtering, joining, aggregating, sorting, etc. HBase does not support SQL*; there are two ways to find rows in HBase: get a row by key, or scan a table. Security: in version 0.20.4, authentication and authorization are not yet available for HBase. Indexes: in a typical RDBMS, indexes can be created on arbitrary columns. HBase does not have any traditional indexes**; the rows are stored sorted, with a sparse index of row offsets, which means it is very fast to find a row by its row key. Max data size: most RDBMS architectures are designed to store GBs or TBs of data; HBase can scale to much larger data sizes. Read/write throughput limits: typical RDBMS deployments can scale to thousands of queries per second, while there is virtually no upper bound to the number of reads and writes HBase can handle. (* Hive/HBase integration is being worked on. ** There are contrib packages for building indexes on HBase tables.)
  • People often want to know “the numbers” about a storage system. I would recommend that you test it yourself – benchmarks always lie. But here are some general numbers about HBase. The largest cluster I’ve seen is 600 nodes, storing around 600TB. Most clusters are much smaller, only 5-20 nodes, hosting a few hundred gigabytes. Generally, writes take a few ms, and throughput is on the order of thousands of writes per node per second, but of course it depends on the size of the writes. Reads are a few milliseconds if the data is in cache, or 10-30ms if disk seeks are required. Generally we don’t recommend that you store very large values in HBase. It is not efficient if the values stored are more than a few MB.
  • HBase is currently used in production at a number of companies. Here are a few examples. Facebook is using HBase for a new user-facing product which is going to launch very soon; they are also using HBase for analytics. StumbleUpon hosts large parts of its website from HBase, and also built an advertising platform based on HBase. Mozilla’s crash reporting infrastructure is based on HBase: if your browser crashes and you submit the crash to Mozilla, it is stored in HBase for later analysis by the Firefox developers.
  • So, if you are interested in Hadoop and HBase, here are some resources. The easiest way to install Hadoop is to use Cloudera’s Distribution for Hadoop from cloudera.com. You can also download the Apache source directly from hadoop.apache.org. You can get started on your laptop, in a VM, or running on EC2. I also recommend our free training videos from our website. The book Hadoop: The Definitive Guide is also really great – it has also been translated into Japanese.
  • Thanks very much for having me! If you have any questions, please feel free to ask now or send me an email. Also, we’re hiring both in the USA and in Japan, so if you’re interested in working on Hadoop or Hbase, please get in touch.

Sf NoSQL MeetUp: Apache Hadoop and HBase Presentation Transcript

  • Apache Hadoop and HBase
    Todd Lipcon
    todd@cloudera.com
    @tlipcon @cloudera
    August 9th, 2011
  • Outline
    Why should you care? (Intro)
    What is Apache Hadoop?
    How does it work?
    What is Apache HBase?
    Use Cases
  • Software Engineer at Cloudera
    Committer and PMC member on Apache HBase, HDFS, MapReduce, and Thrift
    Previously: systems programming, operations, large scale data analysis
    I love data and data systems
    Introductions
  • Data is the difference.
  • “Every two days we create as much information as we did from the dawn of civilization up until 2003.”
    Eric Schmidt
    (Chairman of Google)
  • “I keep saying that the sexy job in the next 10 years will be statisticians. And I’m not kidding.”
    Hal Varian (Google’s chief economist)
  • Are you throwing away data?
    Data comes in many shapes and sizes: relational tuples, log files, semi-structured textual data (e.g., e-mail), and more.
    Are you throwing it away because it doesn’t ‘fit’?
  • So, what’s Hadoop?
    The Little Prince, Antoine de Saint-Exupéry, Irene Testot-Ferry
  • Apache Hadoop is an open-source system to reliably store and process GOBS of data across many commodity computers.
  • Two Core Components
    Store: HDFS (self-healing, high-bandwidth clustered storage)
    Process: MapReduce (fault-tolerant distributed processing)
  • What makes Hadoop special?
  • Falsehood #1: Machines can be reliable…
    Image: MadMan the Mighty CC BY-NC-SA
  • Hadoop separates distributed system fault-tolerance code from application logic.
    Unicorns
    Systems Programmers
    Statisticians
  • Falsehood #2: Machines deserve identities...
    Image: Laughing Squid CC BY-NC-SA
  • Hadoop lets you interact with a cluster, not a bunch of machines.
    Image: Yahoo! Hadoop cluster [OSCON ’07]
  • Falsehood #3: Your analysis fits on one machine…
    Image: Matthew J. Stinson CC-BY-NC
  • Hadoop scales linearly with data size or analysis complexity.
    Data-parallel or compute-parallel. For example:
    Extensive machine learning on <100GB of image data
    Simple SQL queries on >100TB of clickstream data
    Hadoop works for both applications!
  • Hadoop sounds likemagic.How is it possible?
  • A Typical Look...
    5-4000 commodity servers (8-core, 24GB RAM, 4-12 TB, gig-E)
    2-level network architecture
    20-40 nodes per rack
  • Cluster nodes
    Master nodes (1 each)
    NameNode (metadata server and database)
    JobTracker (scheduler)
    Slave nodes (1-4000 each)
    DataNodes (block storage)
    TaskTrackers (task execution)
  • HDFS Data Storage
    [Diagram: the NameNode maps /logs/weblog.txt (158MB) to blocks blk_19231, blk_29232, and blk_329432 (64MB + 64MB + 30MB), which are replicated across the DataNodes DN 1-4.]
  • HDFS Write Path
  • HDFS has split the file into 64MB blocks and stored it on the DataNodes.
    Now, we want to process that data.
  • The MapReduce Programming Model
  • You specify map() and reduce() functions.The framework does the rest.
  • map()
    map: K₁,V₁->list K₂,V₂
    Key: byte offset 193284
    Value: “127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /userimage/123 HTTP/1.0" 200 2326”
    Key: userimage
    Value: 2326 bytes
    The map function runs on the same node where the data is stored!
  • Input Format
    Wait! HDFS is not a Key-Value store!
    InputFormat interprets bytes as a Key and Value
    127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /userimage/123 HTTP/1.0" 200 2326
    Key: log offset 193284
    Value: “127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /userimage/123 HTTP/1.0" 200 2326”
  • The Shuffle
    Each map output is assigned to a “reducer” based on its key
    map output is grouped and sorted by key
  • reduce()
    K₂, iter(V₂)->list(K₃,V₃)
    Key: userimage
    Value: 2326 bytes (from map task 0001)
    Value: 1000 bytes (from map task 0008)
    Value: 3020 bytes (from map task 0120)
    Reducer function
    Key: userimage
    Value: 6346 bytes
    TextOutputFormat
    userimage \t 6346
  • Putting it together...
    Note: not limited to just one reducer. Result set may be many TB!
  • Hadoop is not NoSQL (NoNoSQL? Sorry…)
    Hive project adds SQL support to Hadoop
    HiveQL (SQL dialect) compiles to a query plan
    Query plan executes as MapReduce jobs
  • Hive Example
    CREATE TABLE movie_rating_data (
    userid INT, movieid INT, rating INT, unixtime STRING
    ) ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE;
    LOAD DATA INPATH '/datasets/movielens' INTO TABLE movie_rating_data;
    CREATE TABLE average_ratings AS
    SELECT movieid, AVG(rating) FROM movie_rating_data
    GROUP BY movieid;
  • The Hadoop Ecosystem
    [Ecosystem diagram, including projects such as Hive, Pig, Sqoop, and HBase (column DB)]
  • Hadoop in the Wild (yes, it’s used in production)
    Yahoo! Hadoop Clusters: >82PB, >40k machines (as of Jun ’11)
    Facebook: 15TB new data per day; 1200+ machines, 30PB in one cluster
    Twitter: >1TB per day, ~120 nodes
    Lots of 5-40 node clusters at companies without petabytes of data (web, retail, finance, telecom, research, government)
  • What about real time access?
    MapReduce is a batch system
    The fastest MR job takes 15+ seconds
    HDFS just stores bytes, and is append-only
    Not about to serve data for your next web site.
  • Apache HBase
    HBase is an
    open source, distributed, sorted map
    modeled after Google’s BigTable
  • Open Source
    Apache 2.0 License
    Committers and contributors from diverse organizations
    Cloudera, Facebook, StumbleUpon, Trend Micro, etc.
  • Distributed
    Store and access data on 1-1000 commodity servers
    Automatic failover based on Apache ZooKeeper
    Linear scaling of capacity and IOPS by adding servers
  • Sorted Map Datastore
    Tables consist of rows, each of which has a primary key (row key)
    Each row may have any number of columns -- like a Map<byte[], byte[]>
    Rows are stored in sorted order
  • Sorted Map Datastore (logical view as “records”)
    Implicit PRIMARY KEY in RDBMS terms
    Data is all byte[] in HBase
    Different types of data separated into different “column families”
    Different rows may have different sets of columns (the table is sparse)
    Useful for *-To-Many mappings
    A single cell might have different values at different timestamps
  • Sorted Map Datastore (physical view as “cells”)
    info Column Family
    roles Column Family
    Sorted on disk by Row key, Col key, descending timestamp
    Timestamps are milliseconds since the Unix epoch
  • Cost Transparency
  • Column Families
    Different sets of columns may have different properties and access patterns
    Configurable by column family:
    Block Compression (none, gzip, LZO, Snappy)
    Version retention policies
    Cache priority
    CFs are stored separately on disk: access one without wasting IO on the other (a configuration sketch follows below).
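    A minimal sketch (the table name, family names, and settings are illustrative assumptions) of creating a table whose column families have different compression and version-retention settings, using the era’s Java admin API.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.io.hfile.Compression;

    public class CreateTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        HTableDescriptor users = new HTableDescriptor("users");

        // Frequently read family: compressed, keep a few versions, cache it.
        HColumnDescriptor info = new HColumnDescriptor("info");
        info.setCompressionType(Compression.Algorithm.GZ);
        info.setMaxVersions(3);
        info.setBlockCacheEnabled(true);
        users.addFamily(info);

        // Rarely read family: stored in its own files on disk, so scanning
        // "info" never pays the IO cost of reading "roles".
        HColumnDescriptor roles = new HColumnDescriptor("roles");
        roles.setMaxVersions(1);
        users.addFamily(roles);

        admin.createTable(users);
      }
    }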
  • HBase API
    get(row)
    put(row, Map<column, value>)
    scan(key range, filter)
    increment(row, columns)
    … (checkAndPut, delete, etc…)
    MapReduce/Hive
  • Accessing HBase
    Java API (thick client)
    REST/HTTP
    Apache Thrift (any language)
    Hive/Pig for analytics
  • High Level Architecture
    [Diagram: your PHP application reaches HBase through a Thrift/REST gateway; your Java application uses the Java client directly; MapReduce and Hive/Pig run analytics over HBase; HBase coordinates via ZooKeeper and stores its data in HDFS.]
  • HBase vs other systems
  • HBase vs just HDFS
    If you have neither random write nor random read, stick to HDFS!
  • HBase vs RDBMS
  • HBase vs other “NoSQL”
    Favors Strict Consistency over Availability (but availability is good in practice!)
    Great Hadoop integration (very efficient bulk loads, MapReduce analysis)
    Ordered range partitions (not hash)
    Automatically shards/scales (just turn on more servers, really proven at petabyte scale)
    Sparse column storage (not key-value)
  • HBase in Numbers
    Largest cluster: ~1000 nodes, ~1PB
    Most clusters: 5-20 nodes, 100GB-4TB
    Writes: 1-3ms, 1k-10k writes/sec per node
    Reads: 0-3ms cached, 10-30ms disk
    10-40k reads / second / node from cache
    Cell size: 0-3MB preferred
  • HBase in Production
    Facebook (Messages, Analytics, operational datastore, more on the way) [see SIGMOD paper]
    StumbleUpon / http://su.pr
    Mozilla (receives crash reports)
    Yahoo (stores a copy of the web)
    Twitter (stores users and tweets for analytics)
    … many others
  • Ok, fine, what next?
    Get Hadoop!
    CDH - Cloudera’s Distribution including Apache Hadoop: http://cloudera.com/
    http://hadoop.apache.org/
    Try it out! (Locally, VM, or EC2)
    Watch free training videos on http://cloudera.com/
  • Thanks!
    todd@cloudera.com
    @tlipcon
    (feedback? yes!)
    (hiring? yes!)