HBase in Practice
Lars George – Partner and Co-Founder @ OpenCore
DataWorks Summit 2017 - Munich
NoSQL is no SQL is SQL?
About Me
• Partner & Co-Founder at OpenCore
• Before that
• Lars: EMEA Chief Architect at Cloudera (5+ years)
• Hadoop since 2007
• Apache Committer & Apache Member
• HBase (also in PMC)
• Lars: O’Reilly Author: HBase – The Definitive Guide
• Contact
• lars.george@opencore.com
• @larsgeorge
Website: www.opencore.com
Agenda
• Brief Intro To Core Concepts
• Access Options
• Data Modelling
• Performance Tuning
• Use-Cases
• Summary
Introduction To Core Concepts
HBase Tables
• From a user's perspective, HBase is similar to a database or spreadsheet
• There are rows and columns, storing values
• By default, asking for a specific row/column combination returns the current value (that is, the last value stored there)
HBase Tables
• HBase can have a different schema per row
• Could be called schema-less
• Primary access by the user-given row key and column name
• Sorting of rows and columns by their key (aka names)
HBase Tables
• Each row/column coordinate is tagged with a version number, allowing multi-versioned values
• Version is usually the current time (as epoch)
• API lets user ask for versions (specific, by count, or by ranges)
• Up to 2B versions
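For example, the Java client can request several versions of a cell in one call. A minimal sketch (table, family, and qualifier names are made up; setMaxVersions is the older 1.x call, newer clients offer readVersions):

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class VersionReadExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("metrics"))) {   // hypothetical table
      Get get = new Get(Bytes.toBytes("row-1"));
      get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("temp"));        // hypothetical family/qualifier
      get.setMaxVersions(3);                                            // the three latest versions
      get.setTimeRange(0, System.currentTimeMillis());                  // or bound by a time range
      Result result = table.get(get);
      result.getColumnCells(Bytes.toBytes("cf"), Bytes.toBytes("temp"))
            .forEach(cell -> System.out.println(cell.getTimestamp()));
    }
  }
}
```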
HBase Tables
• Table data is cut into pieces to distribute over the cluster
• Regions split the table into shards at size boundaries
• Families split within regions to group sets of columns together
• At least one of each is needed
Scalability – Regions as Shards
• A region is served by exactly one region server
• Every region server serves many regions
• Table data is spread over servers
• Distribution of I/O
• Assignment is based on configurable logic
• Balancing cluster load
• Clients talk directly to region servers
Column Family-Oriented
• Group multiple columns into physically separated locations
• Apply different properties to each family
• TTL, compression, versions, …
• Useful to separate distinct data sets that are related
• Also useful to separate larger blobs from metadata
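As an illustration only (not from the slides), a table with a small metadata family and a large blob family might be created with different per-family settings like this, using the HBase 1.x admin API; the table, family names, and values are arbitrary:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.io.compress.Compression;

public class CreateTableExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      HTableDescriptor table = new HTableDescriptor(TableName.valueOf("documents")); // hypothetical
      HColumnDescriptor meta = new HColumnDescriptor("meta");
      meta.setMaxVersions(3);                                  // keep some history for small metadata
      meta.setTimeToLive(90 * 24 * 3600);                      // expire after ~90 days
      HColumnDescriptor blob = new HColumnDescriptor("blob");
      blob.setMaxVersions(1);                                  // large payloads, current value only
      blob.setCompressionType(Compression.Algorithm.SNAPPY);   // compress the big cells
      table.addFamily(meta);
      table.addFamily(blob);
      admin.createTable(table);
    }
  }
}
```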
Data Management
• What is available is tracked in three locations
• System catalog table hbase:meta
• Files in HDFS directories
• Open region instances on servers
• System aligns these locations
• Sometimes (very rarely) a repair may be needed using HBase Fsck
• Redundant information is useful to repair corrupt tables
HBase really is…
• A distributed Hash Map
• Imagine a complex, concatenated key including the user-given row key, the column name, and the timestamp (version)
• The complex key points to the actual value, that is, the cell
Fold, Store, and Shift
• Logical rows in tables are really stored as flat key-value pairs
• Each carries full coordinates
• Pertinent information can be freely placed in the cell to improve lookup
• HBase is a column-family grouped key-value store
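This flat layout is visible in the client API as well: every Cell returned in a Result carries its full coordinates. A small sketch, assuming a hypothetical table and row:

```java
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class CellCoordinatesExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("documents"))) {   // hypothetical table
      Result result = table.get(new Get(Bytes.toBytes("row-1")));
      if (result.isEmpty()) return;
      for (Cell cell : result.listCells()) {
        // every stored value repeats row key, family, qualifier, and timestamp
        System.out.printf("%s/%s:%s@%d = %s%n",
            Bytes.toString(CellUtil.cloneRow(cell)),
            Bytes.toString(CellUtil.cloneFamily(cell)),
            Bytes.toString(CellUtil.cloneQualifier(cell)),
            cell.getTimestamp(),
            Bytes.toString(CellUtil.cloneValue(cell)));
      }
    }
  }
}
```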
HFile Format Information
• All data is stored in a custom (open-source) format, called HFile
• Data is stored in blocks (64KB default)
• Trade-off between lookups and I/O throughput
• Compression, encoding applied _after_ limit check
• Index, filter and meta data are stored in separate blocks
• Fixed trailer allows traversal of file structure
• Newer versions introduce multilayered index and filter structures
• Only the master index is loaded up front; partial index blocks are loaded on demand
• Reading data requires deserialization of a block into cells
• A kind of Amdahl's Law applies
HBase Architecture
• One Master and many Worker servers
• Clients mostly communicate with workers
• Workers store actual data
• Memstore for accruing writes
• HFile for persistence
• WAL for fail-safety
• Data provided as regions
• HDFS is the backing store
• But it could be another file system
HBase Architecture (cont.)
• Based on Log-Structured Merge-Trees (LSM-Trees)
• Inserts are done in the write-ahead log first
• Data is stored in memory and flushed to disk at regular intervals or based on size
• Small flushes are merged in the background to keep the number of files small
• Reads check the memory stores first and the disk-based files second
• Deletes are handled with “tombstone” markers
• Atomicity is on the row level, no matter how many columns
• Keeps the locking model simple
Merge Reads
• Read Memstore & StoreFiles using separate scanners
• Merge matching cells into a single row “view”
• Deletes mask existing data
• Bloom filters help skip StoreFiles
• Reads may have to span many files
APIs and Access Options
HBase Clients
• Native Java Client/API
• Non-Java Clients
• REST server
• Thrift server
• Jython, Groovy DSL
• Spark
• TableInputFormat/TableOutputFormat for MapReduce
• HBase as MapReduce source and/or target
• Also available for table snapshots
• HBase Shell
• JRuby shell adding get, put, scan etc. and admin calls
• Phoenix, Impala, Hive, …
Java API
From Wikipedia:
• CRUD: “In computer programming, create, read, update, and delete are the
four basic functions of persistent storage.”
• Other variations of CRUD include
• BREAD (Browse, Read, Edit, Add, Delete)
• MADS (Modify, Add, Delete, Show)
• DAVE (Delete, Add, View, Edit)
• CRAP (Create, Retrieve, Alter, Purge)
Wait, what?
Java API (cont.)
• CRUD
• put: Create and update a row (CU)
• get: Retrieve an entire, or partial row (R)
• delete: Delete a cell, column, columns, or row (D)
• CRUD+SI
• scan: Scan any number of rows (S)
• increment: Increment a column value (I)
• CRUD+SI+CAS
• Atomic compare-and-swap (CAS)
• Combined get, check, and put operation
• Helps to overcome lack of full transactions
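A condensed sketch of these calls against a hypothetical table (HBase 1.x style client; checkAndPut was later superseded by checkAndMutate):

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class CrudExample {
  static final byte[] CF = Bytes.toBytes("cf");               // hypothetical family

  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("users"))) {        // hypothetical table
      // C/U: put creates or overwrites the addressed cell
      Put put = new Put(Bytes.toBytes("user-1"));
      put.addColumn(CF, Bytes.toBytes("name"), Bytes.toBytes("Jane"));
      table.put(put);

      // R: get an entire or partial row
      Result row = table.get(new Get(Bytes.toBytes("user-1")));
      System.out.println(Bytes.toString(row.getValue(CF, Bytes.toBytes("name"))));

      // S: scan any number of rows
      try (ResultScanner scanner = table.getScanner(new Scan())) {
        scanner.forEach(r -> System.out.println(Bytes.toString(r.getRow())));
      }

      // I: server-side atomic increment of a counter column
      table.incrementColumnValue(Bytes.toBytes("user-1"), CF, Bytes.toBytes("logins"), 1);

      // CAS: only apply the put if the current value still matches
      Put update = new Put(Bytes.toBytes("user-1"));
      update.addColumn(CF, Bytes.toBytes("name"), Bytes.toBytes("Janet"));
      table.checkAndPut(Bytes.toBytes("user-1"), CF, Bytes.toBytes("name"),
          Bytes.toBytes("Jane"), update);

      // D: delete a cell, column, or the whole row
      table.delete(new Delete(Bytes.toBytes("user-1")));
    }
  }
}
```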
Java API (cont.)
• Batch Operations
• Support Get, Put, and Delete
• Reduce network round-trips
• If possible, batch operations to the server to gain better overall throughput
• Filters
• Can be used with Get and Scan operations
• Server-side hinting
• Reduce data transferred to the client
• Filters are no guarantee of fast scans
• Still a full table scan in the worst-case scenario
• Might have to implement your own
• Filters can hint next row key
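As a sketch of both ideas, a batched read and a prefix-filtered scan could look roughly like this (table name, row keys, and prefix are invented):

```java
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchAndFilterExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("users"))) {      // hypothetical table
      // batch several gets into one client call to save round-trips
      List<Get> gets = Arrays.asList(
          new Get(Bytes.toBytes("user-1")), new Get(Bytes.toBytes("user-2")));
      Object[] results = new Object[gets.size()];
      table.batch(gets, results);

      // filters run server-side and reduce what is shipped to the client,
      // but the region servers may still have to touch every row
      Scan scan = new Scan();
      scan.setFilter(new PrefixFilter(Bytes.toBytes("user-")));
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result r : scanner) {
          System.out.println(Bytes.toString(r.getRow()));
        }
      }
    }
  }
}
```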
Data Modeling
Where’s your data at?
Key Cardinality
• The best performance is gained from using row keys
• Time-range-bound reads can skip store files
• So can Bloom Filters
• Selecting column families reduces the amount of data to be scanned
• Pure value-based access is a full table scan
• Filters often are too, but reduce network traffic
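For example, a scan restricted to one family and a time range lets the region servers skip store files whose time ranges or Bloom filters cannot match; a sketch with invented names:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class BoundedScanExample {
  public static void main(String[] args) throws Exception {
    long now = System.currentTimeMillis();
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("events"))) {     // hypothetical table
      Scan scan = new Scan();
      scan.addFamily(Bytes.toBytes("meta"));                  // only touch the small family
      scan.setTimeRange(now - 3600_000L, now);                // last hour; lets servers skip old HFiles
      scan.setCaching(500);                                   // fetch rows in larger batches per RPC
      try (ResultScanner scanner = table.getScanner(scan)) {
        scanner.forEach(r -> System.out.println(Bytes.toString(r.getRow())));
      }
    }
  }
}
```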
Key/Table Design
• Crucial to gain best performance
• Why do I need to know? Well, an RDBMS also only works well when columns are indexed and the query plan is OK
• Absence of secondary indexes forces use of row key or column name sorting
• Transfer multiple indexes into one
• Generates a large table -> Good, since it fits the architecture and spreads across the cluster
• DDI
• Stands for Denormalization, Duplication and Intelligent Keys
• Needed to overcome trade-offs of architecture
• Denormalization -> Replacement for JOINs
• Duplication -> Design for reads
• Intelligent Keys -> Implement indexing and sorting, optimize reads
Pre-materialize Everything
• Achieve one read per customer request if possible
• Otherwise keep the number of reads as low as possible
• Reads between 10ms (cache miss) and 1ms (cache hit)
• Use MapReduce or Spark to compute exact results in batch
• Store and merge updates live
• Use increment() methods
Motto: “Design for Reads”
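A small sketch of merging updates live with increment(), here bumping a hypothetical per-day counter column:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class CounterExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("counters"))) {   // hypothetical table
      // one atomic, server-side add; several columns could be bumped in one Increment
      Increment inc = new Increment(Bytes.toBytes("page-42"));
      inc.addColumn(Bytes.toBytes("daily"), Bytes.toBytes("2017-04-05"), 1L);
      Result updated = table.increment(inc);
      long value = Bytes.toLong(
          updated.getValue(Bytes.toBytes("daily"), Bytes.toBytes("2017-04-05")));
      System.out.println("views today: " + value);
    }
  }
}
```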
Tall-Narrow vs. Flat-Wide Tables
• Rows do not split
• Might end up with one row per region
• Same storage footprint
• Put more details into the row key
• Sometimes only a dummy column is needed
• Make use of partial key scans
• Tall with Scans, Wide with Gets
• Atomicity only on row level
• Examples
• Large graphs, stored as adjacency matrix (narrow)
• Message inbox (wide)
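For the tall-narrow case, the details live in the row key and a partial key scan fetches one logical entity; a sketch using an invented <userId>-<messageId> key layout:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class PartialKeyScanExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("inbox"))) {      // hypothetical table
      // tall-narrow: one row per message, keyed "<userId>-<messageId>"
      Put put = new Put(Bytes.toBytes("u12345-000000981"));
      put.addColumn(Bytes.toBytes("m"), Bytes.toBytes("subject"), Bytes.toBytes("Hi"));
      table.put(put);

      // partial key scan: all messages of one user, relying on the sorted row keys
      // stop row "u12345." sorts right after the "u12345-" prefix range
      Scan scan = new Scan(Bytes.toBytes("u12345-"), Bytes.toBytes("u12345."));
      try (ResultScanner scanner = table.getScanner(scan)) {
        scanner.forEach(r -> System.out.println(Bytes.toString(r.getRow())));
      }
    }
  }
}
```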
Sequential Keys
<timestamp><more key>: {CF: {CQ: {TS : Val}}}
• Hotspotting on regions is bad!
• Instead do one of the following:
• Salting
• Prefix <timestamp> with distributed value
• Binning or bucketing rows across regions
• Key field swap/promotion
• Move <more key> before the timestamp (see OpenTSDB)
• Randomization
• Move <timestamp> out of key or prefix with MD5 hash
• Might also be mitigated by overall spread of workloads
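A sketch of the salting approach: derive a bucket from a hash of the stable key part and prefix the timestamp with it. The bucket count and key layout are arbitrary choices; readers have to fan out over all buckets and merge the results.

```java
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.MD5Hash;

public class SaltedKeyExample {
  static final int BUCKETS = 16;    // arbitrary; roughly match the number of regions/servers

  // "<salt>-<timestamp>-<more key>" spreads sequential writes over BUCKETS key ranges
  static byte[] saltedKey(long timestamp, String moreKey) {
    int salt = Math.floorMod(MD5Hash.getMD5AsHex(Bytes.toBytes(moreKey)).hashCode(), BUCKETS);
    return Bytes.toBytes(String.format("%02d-%d-%s", salt, timestamp, moreKey));
  }

  public static void main(String[] args) {
    // the same source key always lands in the same bucket, so point reads stay cheap
    System.out.println(Bytes.toString(saltedKey(System.currentTimeMillis(), "sensor-7")));
  }
}
```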
Key Design Choices
• Based on access pattern, either use sequential or random keys
• Often a combination of both is needed
• Overcome architectural limitations
• Neither is necessarily bad
• Use bulk import for sequential keys and reads
• Random keys are good for random access patterns
Checklist
• Design for Use-Case
• Read, Write, or Both?
• Avoid Hotspotting
• Hash leading key part, or use salting/bucketing
• Use bulk loading where possible
• Monitor your servers!
• Presplit tables (see the sketch after this checklist)
• Try prefix encoding when values are small
• Otherwise use compression (or both)
• For Reads: Restrict yourself
• Specify what you need, i.e. columns, families, time range
• Shift details to appropriate position
• Composite Keys
• Column Qualifiers
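Regarding the presplit item above, a table can be created with explicit split points so writes spread over the cluster from the start; a sketch assuming a two-digit salt prefix as in the salting example:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class PresplitExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      HTableDescriptor table = new HTableDescriptor(TableName.valueOf("events"));  // hypothetical
      table.addFamily(new HColumnDescriptor("d"));
      // split points must match the leading key part, here a two-digit salt "00".."15"
      byte[][] splits = new byte[15][];
      for (int i = 1; i <= 15; i++) {
        splits[i - 1] = Bytes.toBytes(String.format("%02d", i));
      }
      admin.createTable(table, splits);   // 15 split points -> 16 regions from the start
    }
  }
}
```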
Performance Tuning
1000 knobs to turn… 20 are important?
Everything is Pluggable
• Cell
• Memstore
• Flush Policy
• Compaction Policy
• Cache
• WAL
• RPC handling
• …
Cluster Tuning
• First, tune the global settings
• Heap size and GC algorithm
• Memory share for reads and writes
• Enable Block Cache
• Number of RPC handlers
• Load Balancer
• Default flush and compaction strategy
• Thread pools (10+)
• Next, tune the per-table and family settings
• Region sizes
• Block sizes
• Compression and encoding
• Compactions
• …
Region Balancer Tuning
• A background process in the HBase Master tracks load on servers
• The load balancer moves regions occasionally
• Multiple implementations exist
• Simple counts the number of regions
• Stochastic determines cost
• Favored Node pins HDFS block replicas
• Can be tuned further
• Cluster-wide setting!
RPC Tuning
• Default is one queue for all types of requests
• Can be split into separate queues for reads and writes
• Read queue can be further split into reads and scans
-> Stricter resource limits, but may avoid cross-starvation
Key Tuning
• Design keys to match use-case
• Sequential, salted, or random
• Use sorting to convey meaning
• Colocate related data
• Spread load over all servers
• Clever key design can make use of distribution: aging-out regions
Compaction Tuning
• Default compaction settings are aggressive
• Set for the update use-case
• For insert use-cases, Blooms are effective
• Allows compactions to be tuned down
• Saves resources by reducing write amplification
• More store files also enable faster full table scans with time-range-bound scans
• Server can ignore older files
• Large regions may be eligible for advanced compaction strategies
• Stripe or date-tiered compactions
• Reduce rewrites to a fraction of the region size
Use-Cases
What works well, what does not, and what is so-so
Placing the Use-Case
• HBase is designed to work best for random access
• You can optimize a table to prefer scans over gets
• Fewer columns with larger payloads
• Larger HFile block sizes (maybe even duplicate data in two differently configured column families)
• After that is the realm of hybrid systems
• For the fastest scans use brute-force HDFS and a native query engine with a columnar format
Big Data Workloads
(Quadrant chart: latency (low latency vs. batch) against access pattern (random access, short scan, full scan), mapping workloads to HBase, HBase + MR/Spark, HBase + Snapshots -> HDFS + MR/Spark, HDFS + SQL, and HDFS + MR (Hive/Pig))
Big Data Workloads
(Same quadrant chart, annotated with example use-cases: current metrics, simple entities, graph data, messages, entity time series, hybrid entity time series with rollup serving and rollup generation, analytic archive, and index building)
Summary
Wrapping it up…
Optimizations
Mostly Inserts Use-Cases
• Tune down compactions
• Compaction ratio, max store file size
• Use Bloom Filters
• On by default for row keys
Mostly Update Use-Cases
• Batch updates if possible
Mostly Serial Keys
• Use bulk loading or salting
Mostly Random Keys
• Hash key with MD5 prefix
Mostly Random Reads
• Decrease HFile block size
• Use random keys
Mostly Scans
• Increase HFile (and HDFS) block size
• Reduce columns and increase cell sizes
What matters…
• For optimal performance, two things need to be considered:
• Optimize the cluster and table settings
• Choose the matching key schema
• Ensure load is spread over tables and cluster nodes
• HBase works best for random access and bound scans
• HBase can be optimized for larger scans, but its sweet spot is short burst scans (can be parallelized too) and random point gets
• Java heap space limits addressable space
• Play with region sizes, compaction strategies, and key design to maximize result
• Using HBase for a suitable use-case will make for a happy customer…
• Conversely, forcing it into non-suitable use-cases may be cause for trouble
Questions?
Thank You!
@larsgeorge

Editor's Notes

  1. For Developers & End-Users – Apache Phoenix, Spark
  2. Importance of Row Key structure
  3. Time-series Data etc.
  4. Time-series Data etc.