HBase Operations in a Flurry
Rahul Gidwani
Ian Friedman
@Yahoo!
History
● 2008: Flurry Analytics for Mobile Apps; the choice was sharded MySQL or HBase
● Launched on HBase 0.18.1 with a 3-node cluster
● Great community
● Now running 0.98.12 (+patches)
● 2 data centers with 3 clusters each
● Bidirectional replication between all
How we use HBase
(Diagram: the Flurry SDK feeds the processing pipeline, which supports both analytics and mobile advertising)
Our Clusters
In each datacenter we have 3 HBase clusters.
Each machine: 128GB RAM, 4 drives (4TB each), 10GigE, 2 CPUs x 6 cores x 2 HT = 24 processors
Each machine runs: RegionServer, NodeManager, DataNode
● 1400 nodes: 1 table, 60k regions, 1.2PB (LZO compressed); workload: ingestion pipeline, MapReduce jobs, random reads
● 800 nodes: 37 tables, 115k regions, 400TB (LZO compressed); workload: ingestion pipeline, MapReduce jobs, random reads
● 60 nodes: 1 table, 10k regions, 2TB (LZO compressed); workload: ingestion pipeline, low-latency random reads (99% <= 1ms, max throughput 1MM requests/sec)
Data Migration - Attempt #1
1. Start replication to the new datacenter
2. Backfill using CopyTable
(Diagram: replication streams edits from the old DC to the new DC while a CopyTable MR job backfills existing data)
Pros
● Easy from an operational standpoint
● One MapReduce job per table does the job
● No extra copies of the data to keep around
Cons
● Job failure == starting over
● Ships uncompressed data over the wire
● Slow
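The two-step procedure above can be sketched as commands against a live pair of clusters. These are operational command fragments, not runnable standalone; the peer id, ZooKeeper quorum, and table name are illustrative.

```shell
# In the hbase shell on the old cluster, point replication at the new DC
# (peer id and ZK quorum are illustrative):
#   add_peer '1', 'newdc-zk1,newdc-zk2,newdc-zk3:2181:/hbase'

# Backfill pre-replication data with one CopyTable MR job per table:
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --peer.adr=newdc-zk1,newdc-zk2,newdc-zk3:2181:/hbase \
  my_table
```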
Data Migration - Attempt #2
1. Start replication to the new datacenter
2. Snapshot the table
3. Export the snapshot
4. Bulk load into the destination cluster
(Diagram: while replication runs, a snapshot is exported to the new DC's HDFS by an ExportSnapshot MR job, then bulk loaded)
Pros
● Ships compressed data over the wire
● Can easily modify the code so that if the export job fails, you can resume where you left off
● Much faster than CopyTable if your compression rates are good
Cons
● When you compact while snapshots exist, you keep the old HFiles
● Potentially storing 2x the original table on disk
● More operational steps than CopyTable
● Possibility of resurrecting deleted data: snapshot data, delete data, major compact, import snapshot
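The snapshot-based flow above, sketched as commands. Snapshot name, namenode address, mapper count, and paths are illustrative, and the bulk-load step assumes the exported snapshot's HFiles have already been materialized on the destination cluster.

```shell
# In the hbase shell on the old cluster:
#   snapshot 'my_table', 'my_table-snap'

# Ship the snapshot's (compressed) HFiles to the new DC's HDFS:
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot my_table-snap \
  -copy-to hdfs://newdc-nn:8020/hbase \
  -mappers 16

# On the destination, bulk load the materialized HFiles into the
# already-replicating table (path illustrative):
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
  /tmp/my_table-snap-hfiles my_table
```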
Data Migration - Attempt #3
1. Start replication to the new datacenter
2. Take partial snapshots of the table (HBASE-13031 - ability to snapshot based on a key range)
3. Export the snapshots
4. Bulk load into the destination cluster
(Diagram: multiple key-range snapshots are exported by parallel ExportSnapshot MR jobs and bulk loaded in multiple runs)
Pros
● Same as the previous attempt
● If you have large tables and limited space in DFS, you can snapshot a key range, limiting the amount of duplicate storage at any time
Cons
● Adds even more operational overhead
● Still a possibility of resurrecting deleted data
Upgrading our cluster
No downtime: [Hadoop-1.x, HBase-0.94.x] => [Hadoop-2.x, HBase-0.98.x]
Issues we had to iron out:
● How to migrate the data?
● HBase-0.94.x serializes with Writables; HBase-0.98.x uses protobufs
● Hadoop-1.x <-> Hadoop-2.x (can't push data; must pull via HFtp/WebHDFS)
● Snapshots are not compatible between HBase-0.94.x and HBase-0.98.x
● Client code compatibility: must be compatible with both Hadoop versions and both HBase versions for some time
● Migrating our HBase jobs from MapReduce to YARN
● A few patches we had made to Hadoop to protect HBase no longer apply:
○ max_concurrent_map_tasks
○ max_tasks_input_split
The Requirement
Deploy a single code base to either 0.94 or 0.98 clusters
Good News: Most of the API calls are identical, so everything can
resolve properly at runtime!
Bad News: … most, not all.
Migrating Client Code from 0.94 to 0.98
What we did:
● Separated our HBase client code from the rest of the project
● Forked that library into separate 0.94 and 0.98 versions
● Changed our build process to include either version depending on which cluster we're building for
0.94 to 0.98 - Filters
● Serialization changed significantly (Hadoop Writable -> protobufs)
● Input value types changed (KeyValue -> Cell)
Solution: we added an abstract base class to handle these differences
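The abstract-base-class idea can be sketched with hypothetical types. `AbstractPrefixFilter` and `PrefixFilter98` below are invented for illustration; the real classes wrap HBase's 0.94 and 0.98 Filter APIs, but the shape is the same: shared matching logic in the base class, version-specific serialization in each fork.

```java
// Sketch (names hypothetical): the base class carries the shared filtering
// logic; each fork implements its own wire format
// (0.94 -> Writable readFields/write, 0.98 -> protobuf toByteArray/parseFrom).
abstract class AbstractPrefixFilter {
    protected final byte[] prefix;

    protected AbstractPrefixFilter(byte[] prefix) {
        this.prefix = prefix;
    }

    // Shared logic, identical across 0.94 and 0.98.
    public boolean matches(byte[] rowKey) {
        if (rowKey.length < prefix.length) return false;
        for (int i = 0; i < prefix.length; i++) {
            if (rowKey[i] != prefix[i]) return false;
        }
        return true;
    }

    // Version-specific serialization lives in the subclasses.
    public abstract byte[] toByteArray();

    // Simplified stand-in for the 0.98 fork: serializes the prefix directly.
    static class PrefixFilter98 extends AbstractPrefixFilter {
        PrefixFilter98(byte[] prefix) { super(prefix); }

        @Override public byte[] toByteArray() { return prefix.clone(); }

        // Static factory, analogous to what 0.98 finds via reflection.
        static PrefixFilter98 parseFrom(byte[] bytes) {
            return new PrefixFilter98(bytes.clone());
        }
    }
}
```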
0.94 to 0.98 - Filters
(Side-by-side code: 0.94 Filter vs 0.98 Filter)
Instantiation changed too: each Filter now needs its own static factory method, which is found via reflection in the RegionServer code.
Adding this method causes no backwards compatibility issues!
0.94 to 0.98 - Filters
The new "reversed" field on the base Filter class broke our serialization unit tests, which expect fields that aren't used in Java serialization to be marked transient. Why aren't you transient??
See HBASE-12996
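The kind of unit-test check that caught the new field can be sketched with plain JDK reflection. `TransientFieldCheck` and `ExampleFilter` are invented stand-ins, not the real test or the real Filter class.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

// Sketch: walk a class's declared fields and flag any non-transient,
// non-static instance field. A serialization contract like ours expects
// fields unused in Java serialization to be marked transient.
class TransientFieldCheck {
    static java.util.List<String> nonTransientFields(Class<?> clazz) {
        java.util.List<String> offenders = new java.util.ArrayList<>();
        for (Field f : clazz.getDeclaredFields()) {
            int m = f.getModifiers();
            if (!Modifier.isTransient(m) && !Modifier.isStatic(m)) {
                offenders.add(f.getName());
            }
        }
        return offenders;
    }

    // Stand-in for the 0.98 base Filter: "reversed" is not transient.
    static class ExampleFilter {
        boolean reversed;          // flagged by the check above
        transient int cachedHash;  // fine: marked transient
    }
}
```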
0.94 to 0.98 - HTable Caching
● In 0.94, HTables were heavyweight, so we cached them
● We maintained a singleton cache of HTables so we wouldn't have to reinstantiate them repeatedly
(Diagram: HBaseDAOs request HTables from a TableInstanceCache, a lazily-initialized Map<TableName, HTable> of tables configured via setAutoFlush())
● But in 0.98, HTables are lightweight and you're expected to create/destroy them as needed
● We still use the cache because we make heavy use of batched writes via setAutoFlush(false)
● HTableMultiplexer is the 0.98 way to do this, but it has a few issues that prevented us from moving to it:
1. Backwards API compatibility with 0.94
2. Dropping requests (i.e. put() returns false) if the queue is full
3. No flush() sync point to ensure all buffered data is written
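The TableInstanceCache described above can be sketched as a lazily-initialized, thread-safe map. The generic parameter stands in for `HTable`, since a real HTable needs a live cluster; the factory is where the real cache would construct the table and call `setAutoFlush(false)`.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of the singleton-cache pattern (simplified): one handle per table
// name, created on first request and reused so batched writes accumulate.
class TableInstanceCache<T> {
    private final Map<String, T> tables = new ConcurrentHashMap<>();
    private final Function<String, T> factory;

    TableInstanceCache(Function<String, T> factory) {
        this.factory = factory;
    }

    // Lazy initialization: build the handle only when first asked for it.
    T get(String tableName) {
        return tables.computeIfAbsent(tableName, factory);
    }
}
```

Repeated calls for the same table name return the same cached instance, which is what makes client-side write batching via setAutoFlush(false) effective.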
Some Operational Issues and Solutions
Problem
● Adding or removing racks causes the balancer to move far too many regions
Solution
● Patch the balancer to limit the number of regions moved per run
Problem
● Regions not written to for a while, with only 1 store file, were not candidates for major compaction, and thus could retain non-local blocks
Solution
● HBASE-11195 - Potentially improve block locality during major compaction for old regions
Problem
● The balancer does not respect draining nodes
Solution
● HBASE-10528 - DefaultBalancer selects plans to move regions onto draining nodes
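The first fix above, capping how many regions a single balancer run may move, boils down to truncating the plan list. A minimal sketch (`CappedBalancer` is an invented name, not the actual patch):

```java
import java.util.List;

// Sketch of the balancer patch idea: cap the number of region move plans a
// single balancer run may return, so adding or removing racks does not
// trigger a region-movement storm.
class CappedBalancer {
    static <P> List<P> capPlans(List<P> plans, int maxRegionsPerRun) {
        if (plans.size() <= maxRegionsPerRun) return plans;
        // Keep only the first maxRegionsPerRun plans; the rest wait for
        // subsequent balancer runs.
        return plans.subList(0, maxRegionsPerRun);
    }
}
```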
Ian Friedman - ianfriedman@yahoo-inc.com
Rahul Gidwani - rahulgidwani@yahoo-inc.com
Questions?
 
hbaseconasia2017: Ecosystems with HBase and CloudTable service at Huawei
hbaseconasia2017: Ecosystems with HBase and CloudTable service at Huaweihbaseconasia2017: Ecosystems with HBase and CloudTable service at Huawei
hbaseconasia2017: Ecosystems with HBase and CloudTable service at HuaweiHBaseCon
 
hbaseconasia2017: HBase Practice At XiaoMi
hbaseconasia2017: HBase Practice At XiaoMihbaseconasia2017: HBase Practice At XiaoMi
hbaseconasia2017: HBase Practice At XiaoMiHBaseCon
 
hbaseconasia2017: hbase-2.0.0
hbaseconasia2017: hbase-2.0.0hbaseconasia2017: hbase-2.0.0
hbaseconasia2017: hbase-2.0.0HBaseCon
 
HBaseCon2017 Democratizing HBase
HBaseCon2017 Democratizing HBaseHBaseCon2017 Democratizing HBase
HBaseCon2017 Democratizing HBaseHBaseCon
 
HBaseCon2017 Removable singularity: a story of HBase upgrade in Pinterest
HBaseCon2017 Removable singularity: a story of HBase upgrade in PinterestHBaseCon2017 Removable singularity: a story of HBase upgrade in Pinterest
HBaseCon2017 Removable singularity: a story of HBase upgrade in PinterestHBaseCon
 
HBaseCon2017 Quanta: Quora's hierarchical counting system on HBase
HBaseCon2017 Quanta: Quora's hierarchical counting system on HBaseHBaseCon2017 Quanta: Quora's hierarchical counting system on HBase
HBaseCon2017 Quanta: Quora's hierarchical counting system on HBaseHBaseCon
 
HBaseCon2017 Transactions in HBase
HBaseCon2017 Transactions in HBaseHBaseCon2017 Transactions in HBase
HBaseCon2017 Transactions in HBaseHBaseCon
 
HBaseCon2017 Highly-Available HBase
HBaseCon2017 Highly-Available HBaseHBaseCon2017 Highly-Available HBase
HBaseCon2017 Highly-Available HBaseHBaseCon
 
HBaseCon2017 Apache HBase at Didi
HBaseCon2017 Apache HBase at DidiHBaseCon2017 Apache HBase at Didi
HBaseCon2017 Apache HBase at DidiHBaseCon
 
HBaseCon2017 gohbase: Pure Go HBase Client
HBaseCon2017 gohbase: Pure Go HBase ClientHBaseCon2017 gohbase: Pure Go HBase Client
HBaseCon2017 gohbase: Pure Go HBase ClientHBaseCon
 

More from HBaseCon (20)

hbaseconasia2017: Building online HBase cluster of Zhihu based on Kubernetes
hbaseconasia2017: Building online HBase cluster of Zhihu based on Kuberneteshbaseconasia2017: Building online HBase cluster of Zhihu based on Kubernetes
hbaseconasia2017: Building online HBase cluster of Zhihu based on Kubernetes
 
hbaseconasia2017: HBase on Beam
hbaseconasia2017: HBase on Beamhbaseconasia2017: HBase on Beam
hbaseconasia2017: HBase on Beam
 
hbaseconasia2017: HBase Disaster Recovery Solution at Huawei
hbaseconasia2017: HBase Disaster Recovery Solution at Huaweihbaseconasia2017: HBase Disaster Recovery Solution at Huawei
hbaseconasia2017: HBase Disaster Recovery Solution at Huawei
 
hbaseconasia2017: Removable singularity: a story of HBase upgrade in Pinterest
hbaseconasia2017: Removable singularity: a story of HBase upgrade in Pinteresthbaseconasia2017: Removable singularity: a story of HBase upgrade in Pinterest
hbaseconasia2017: Removable singularity: a story of HBase upgrade in Pinterest
 
hbaseconasia2017: HareQL:快速HBase查詢工具的發展過程
hbaseconasia2017: HareQL:快速HBase查詢工具的發展過程hbaseconasia2017: HareQL:快速HBase查詢工具的發展過程
hbaseconasia2017: HareQL:快速HBase查詢工具的發展過程
 
hbaseconasia2017: Apache HBase at Netease
hbaseconasia2017: Apache HBase at Neteasehbaseconasia2017: Apache HBase at Netease
hbaseconasia2017: Apache HBase at Netease
 
hbaseconasia2017: HBase在Hulu的使用和实践
hbaseconasia2017: HBase在Hulu的使用和实践hbaseconasia2017: HBase在Hulu的使用和实践
hbaseconasia2017: HBase在Hulu的使用和实践
 
hbaseconasia2017: 基于HBase的企业级大数据平台
hbaseconasia2017: 基于HBase的企业级大数据平台hbaseconasia2017: 基于HBase的企业级大数据平台
hbaseconasia2017: 基于HBase的企业级大数据平台
 
hbaseconasia2017: HBase at JD.com
hbaseconasia2017: HBase at JD.comhbaseconasia2017: HBase at JD.com
hbaseconasia2017: HBase at JD.com
 
hbaseconasia2017: Large scale data near-line loading method and architecture
hbaseconasia2017: Large scale data near-line loading method and architecturehbaseconasia2017: Large scale data near-line loading method and architecture
hbaseconasia2017: Large scale data near-line loading method and architecture
 
hbaseconasia2017: Ecosystems with HBase and CloudTable service at Huawei
hbaseconasia2017: Ecosystems with HBase and CloudTable service at Huaweihbaseconasia2017: Ecosystems with HBase and CloudTable service at Huawei
hbaseconasia2017: Ecosystems with HBase and CloudTable service at Huawei
 
hbaseconasia2017: HBase Practice At XiaoMi
hbaseconasia2017: HBase Practice At XiaoMihbaseconasia2017: HBase Practice At XiaoMi
hbaseconasia2017: HBase Practice At XiaoMi
 
hbaseconasia2017: hbase-2.0.0
hbaseconasia2017: hbase-2.0.0hbaseconasia2017: hbase-2.0.0
hbaseconasia2017: hbase-2.0.0
 
HBaseCon2017 Democratizing HBase
HBaseCon2017 Democratizing HBaseHBaseCon2017 Democratizing HBase
HBaseCon2017 Democratizing HBase
 
HBaseCon2017 Removable singularity: a story of HBase upgrade in Pinterest
HBaseCon2017 Removable singularity: a story of HBase upgrade in PinterestHBaseCon2017 Removable singularity: a story of HBase upgrade in Pinterest
HBaseCon2017 Removable singularity: a story of HBase upgrade in Pinterest
 
HBaseCon2017 Quanta: Quora's hierarchical counting system on HBase
HBaseCon2017 Quanta: Quora's hierarchical counting system on HBaseHBaseCon2017 Quanta: Quora's hierarchical counting system on HBase
HBaseCon2017 Quanta: Quora's hierarchical counting system on HBase
 
HBaseCon2017 Transactions in HBase
HBaseCon2017 Transactions in HBaseHBaseCon2017 Transactions in HBase
HBaseCon2017 Transactions in HBase
 
HBaseCon2017 Highly-Available HBase
HBaseCon2017 Highly-Available HBaseHBaseCon2017 Highly-Available HBase
HBaseCon2017 Highly-Available HBase
 
HBaseCon2017 Apache HBase at Didi
HBaseCon2017 Apache HBase at DidiHBaseCon2017 Apache HBase at Didi
HBaseCon2017 Apache HBase at Didi
 
HBaseCon2017 gohbase: Pure Go HBase Client
HBaseCon2017 gohbase: Pure Go HBase ClientHBaseCon2017 gohbase: Pure Go HBase Client
HBaseCon2017 gohbase: Pure Go HBase Client
 

Recently uploaded

2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf
2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf
2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdfAndrey Devyatkin
 
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdf
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdfEnhancing Supply Chain Visibility with Cargo Cloud Solutions.pdf
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdfRTS corp
 
SensoDat: Simulation-based Sensor Dataset of Self-driving Cars
SensoDat: Simulation-based Sensor Dataset of Self-driving CarsSensoDat: Simulation-based Sensor Dataset of Self-driving Cars
SensoDat: Simulation-based Sensor Dataset of Self-driving CarsChristian Birchler
 
The Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptx
The Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptxThe Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptx
The Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptxRTS corp
 
VictoriaMetrics Q1 Meet Up '24 - Community & News Update
VictoriaMetrics Q1 Meet Up '24 - Community & News UpdateVictoriaMetrics Q1 Meet Up '24 - Community & News Update
VictoriaMetrics Q1 Meet Up '24 - Community & News UpdateVictoriaMetrics
 
Strategies for using alternative queries to mitigate zero results
Strategies for using alternative queries to mitigate zero resultsStrategies for using alternative queries to mitigate zero results
Strategies for using alternative queries to mitigate zero resultsJean Silva
 
Osi security architecture in network.pptx
Osi security architecture in network.pptxOsi security architecture in network.pptx
Osi security architecture in network.pptxVinzoCenzo
 
Patterns for automating API delivery. API conference
Patterns for automating API delivery. API conferencePatterns for automating API delivery. API conference
Patterns for automating API delivery. API conferencessuser9e7c64
 
Large Language Models for Test Case Evolution and Repair
Large Language Models for Test Case Evolution and RepairLarge Language Models for Test Case Evolution and Repair
Large Language Models for Test Case Evolution and RepairLionel Briand
 
Precise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive GoalPrecise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive GoalLionel Briand
 
Comparing Linux OS Image Update Models - EOSS 2024.pdf
Comparing Linux OS Image Update Models - EOSS 2024.pdfComparing Linux OS Image Update Models - EOSS 2024.pdf
Comparing Linux OS Image Update Models - EOSS 2024.pdfDrew Moseley
 
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...OnePlan Solutions
 
2024 DevNexus Patterns for Resiliency: Shuffle shards
2024 DevNexus Patterns for Resiliency: Shuffle shards2024 DevNexus Patterns for Resiliency: Shuffle shards
2024 DevNexus Patterns for Resiliency: Shuffle shardsChristopher Curtin
 
Introduction to Firebase Workshop Slides
Introduction to Firebase Workshop SlidesIntroduction to Firebase Workshop Slides
Introduction to Firebase Workshop Slidesvaideheekore1
 
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...OnePlan Solutions
 
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...Bert Jan Schrijver
 
Best Angular 17 Classroom & Online training - Naresh IT
Best Angular 17 Classroom & Online training - Naresh ITBest Angular 17 Classroom & Online training - Naresh IT
Best Angular 17 Classroom & Online training - Naresh ITmanoharjgpsolutions
 
Post Quantum Cryptography – The Impact on Identity
Post Quantum Cryptography – The Impact on IdentityPost Quantum Cryptography – The Impact on Identity
Post Quantum Cryptography – The Impact on Identityteam-WIBU
 
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full Recording
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full RecordingOpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full Recording
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full RecordingShane Coughlan
 
Machine Learning Software Engineering Patterns and Their Engineering
Machine Learning Software Engineering Patterns and Their EngineeringMachine Learning Software Engineering Patterns and Their Engineering
Machine Learning Software Engineering Patterns and Their EngineeringHironori Washizaki
 

Recently uploaded (20)

2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf
2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf
2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf
 
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdf
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdfEnhancing Supply Chain Visibility with Cargo Cloud Solutions.pdf
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdf
 
SensoDat: Simulation-based Sensor Dataset of Self-driving Cars
SensoDat: Simulation-based Sensor Dataset of Self-driving CarsSensoDat: Simulation-based Sensor Dataset of Self-driving Cars
SensoDat: Simulation-based Sensor Dataset of Self-driving Cars
 
The Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptx
The Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptxThe Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptx
The Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptx
 
VictoriaMetrics Q1 Meet Up '24 - Community & News Update
VictoriaMetrics Q1 Meet Up '24 - Community & News UpdateVictoriaMetrics Q1 Meet Up '24 - Community & News Update
VictoriaMetrics Q1 Meet Up '24 - Community & News Update
 
Strategies for using alternative queries to mitigate zero results
Strategies for using alternative queries to mitigate zero resultsStrategies for using alternative queries to mitigate zero results
Strategies for using alternative queries to mitigate zero results
 
Osi security architecture in network.pptx
Osi security architecture in network.pptxOsi security architecture in network.pptx
Osi security architecture in network.pptx
 
Patterns for automating API delivery. API conference
Patterns for automating API delivery. API conferencePatterns for automating API delivery. API conference
Patterns for automating API delivery. API conference
 
Large Language Models for Test Case Evolution and Repair
Large Language Models for Test Case Evolution and RepairLarge Language Models for Test Case Evolution and Repair
Large Language Models for Test Case Evolution and Repair
 
Precise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive GoalPrecise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive Goal
 
Comparing Linux OS Image Update Models - EOSS 2024.pdf
Comparing Linux OS Image Update Models - EOSS 2024.pdfComparing Linux OS Image Update Models - EOSS 2024.pdf
Comparing Linux OS Image Update Models - EOSS 2024.pdf
 
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...
 
2024 DevNexus Patterns for Resiliency: Shuffle shards
2024 DevNexus Patterns for Resiliency: Shuffle shards2024 DevNexus Patterns for Resiliency: Shuffle shards
2024 DevNexus Patterns for Resiliency: Shuffle shards
 
Introduction to Firebase Workshop Slides
Introduction to Firebase Workshop SlidesIntroduction to Firebase Workshop Slides
Introduction to Firebase Workshop Slides
 
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...
 
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...
 
Best Angular 17 Classroom & Online training - Naresh IT
Best Angular 17 Classroom & Online training - Naresh ITBest Angular 17 Classroom & Online training - Naresh IT
Best Angular 17 Classroom & Online training - Naresh IT
 
Post Quantum Cryptography – The Impact on Identity
Post Quantum Cryptography – The Impact on IdentityPost Quantum Cryptography – The Impact on Identity
Post Quantum Cryptography – The Impact on Identity
 
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full Recording
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full RecordingOpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full Recording
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full Recording
 
Machine Learning Software Engineering Patterns and Their Engineering
Machine Learning Software Engineering Patterns and Their EngineeringMachine Learning Software Engineering Patterns and Their Engineering
Machine Learning Software Engineering Patterns and Their Engineering
 

HBaseCon 2015: HBase Operations in a Flurry

• 1. HBase Operations in a Flurry. Rahul Gidwani, Ian Friedman, @Yahoo!
• 2. History
● 2008: Flurry Analytics for Mobile Apps; the choice was sharded MySQL or HBase
● Launched on 0.18.1 with a 3-node cluster
● Great community
● Now running 0.98.12 (+patches)
● 2 data centers with 3 clusters each
● Bidirectional replication between all
• 3. How we use HBase (architecture diagram): Flurry SDK, processing pipeline, mobile, advertising.
• 4. Our Clusters
In each datacenter we have 3 HBase clusters.
Each machine: 128GB RAM, 4 drives (4TB each), 10GigE, 2 CPU x 6 core x 2 HT = 24 procs; each runs a RegionServer, NodeManager, and DataNode.
● 1400 nodes: 1 table, 60k regions, 1.2PB (LZO compressed); ingestion pipeline, MapReduce jobs, random reads
● 800 nodes: 37 tables, 115k regions, 400TB (LZO compressed); ingestion pipeline, MapReduce jobs, random reads
● 60 nodes: 1 table, 10k regions, 2TB (LZO compressed); ingestion pipeline, low-latency random reads (99% <= 1ms, max TP: 1MM rps)
• 5. Data Migration, Attempt #1
1. Start replication to the new datacenter
2. Backfill using CopyTable (a CopyTable MR job from the old DC to the new DC, alongside replication)
• 6. Data Migration, Attempt #1
Pros:
● Easy from an operational standpoint
● One MapReduce job per table does the job
● No extra copies of the data to keep around
Cons:
● Job failure == starting over
● Ships uncompressed data over the wire
● Slow
• 7. Data Migration, Attempt #2
1. Start replication to the new datacenter
2. Snapshot the table
• 8. Data Migration, Attempt #2 (continued)
3. Export the snapshot to the destination HDFS (ExportSnapshot MR job)
4. Bulk load into the destination cluster
• 9. Data Migration, Attempt #2
Pros:
● Ships compressed data over the wire
● The code can easily be modified so that a failed export job resumes where it left off
● Much faster than CopyTable if your compression rates are good
Cons:
● Compacting while snapshots exist keeps the old HFiles around
● Potentially stores 2x the original table on disk
● More operational steps than CopyTable
● Possibility of resurrecting deleted data: snapshot data, delete data, major compact, import snapshot
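The resume-on-failure tweak mentioned in the pros can be approximated with a simple idempotent copy: skip any file that already exists at the destination with the same length. A stdlib-only Java sketch of that idea (local paths stand in for HDFS, and `copyResumable` is an illustrative name, not Flurry's actual ExportSnapshot patch):

```java
import java.io.IOException;
import java.nio.file.*;

public class ResumableCopy {
    // Copy every file under src to dst, skipping files that already
    // arrived intact (same name, same length) on a previous attempt.
    public static int copyResumable(Path src, Path dst) throws IOException {
        Files.createDirectories(dst);
        int copied = 0;
        try (DirectoryStream<Path> files = Files.newDirectoryStream(src)) {
            for (Path f : files) {
                Path target = dst.resolve(f.getFileName());
                if (Files.exists(target) && Files.size(target) == Files.size(f)) {
                    continue; // already shipped on an earlier run
                }
                Files.copy(f, target, StandardCopyOption.REPLACE_EXISTING);
                copied++;
            }
        }
        return copied;
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempDirectory("src");
        Path dst = Files.createTempDirectory("dst");
        Files.write(src.resolve("hfile-a"), new byte[]{1, 2, 3});
        Files.write(src.resolve("hfile-b"), new byte[]{4, 5});
        System.out.println(copyResumable(src, dst)); // first run copies both: 2
        System.out.println(copyResumable(src, dst)); // rerun skips both: 0
    }
}
```

Rerunning after a failure then only ships the files that did not land, which is the property that makes a month-long transfer survivable.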
• 10. Data Migration, Attempt #3
1. Start replication to the new datacenter
2. Take partial snapshots of the table (HBASE-13031: ability to snapshot based on a key range)
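Carving a table into key-range batches (what HBASE-13031 enables) is, at bottom, just partitioning the sorted key space into contiguous groups that are shipped one at a time. A toy Java sketch of that batching arithmetic (the region start keys are made up):

```java
import java.util.*;

public class KeyRangeBatches {
    // Partition a sorted list of region start keys into `batches`
    // contiguous groups, so each group can be snapshotted, exported,
    // bulk loaded, and deleted before the next one ships.
    public static List<List<String>> split(List<String> startKeys, int batches) {
        List<List<String>> out = new ArrayList<>();
        int per = (int) Math.ceil(startKeys.size() / (double) batches);
        for (int i = 0; i < startKeys.size(); i += per) {
            out.add(startKeys.subList(i, Math.min(i + per, startKeys.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> keys = Arrays.asList("a", "d", "g", "k", "p", "t", "w");
        System.out.println(split(keys, 4));
        // [[a, d], [g, k], [p, t], [w]]
    }
}
```

Shipping one group at a time is what bounds the duplicate-storage window to a fraction of the table instead of all of it.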
• 11. Data Migration, Attempt #3 (continued)
3. Export the snapshots (multiple ExportSnapshot MR jobs)
4. Bulk load into the destination cluster (multiple bulk load runs)
• 12. Data Migration, Attempt #3
Pros:
● Same as the previous attempt
● With large tables and limited DFS space, snapshotting a key range limits the amount of duplicate storage at any one time
Cons:
● Adds even more operational overhead
● Still a possibility of resurrecting deleted data
• 13. Upgrading our cluster
No downtime: [Hadoop-1.x, HBase-0.94.x] => [Hadoop-2.x, HBase-0.98.x]
Issues we had to iron out:
● How to migrate data? HBase-0.94.x uses Writables, HBase-0.98.x uses protobufs
● Hadoop-1.x <-> Hadoop-2.x: can't push data, must use HFtp/WebHDFS
● Snapshots are not compatible between HBase-0.94.x and HBase-0.98.x
● Client code compatibility: must be compatible with both Hadoop versions and HBase versions for some time
● Migrating our HBase jobs from MapReduce to YARN: we had a few patches to Hadoop which protected HBase that no longer apply (max_concurrent_map_tasks, max_tasks_input_split)
• 14. Migrating Client Code from 0.94 to 0.98
The requirement: deploy a single code base to either 0.94 or 0.98 clusters.
Good news: most of the API calls are identical, so everything can resolve properly at runtime!
Bad news: ... most, not all.
• 15. Migrating Client Code from 0.94 to 0.98
What we did:
● Separated our HBase client code from the rest of the project
● Forked that library into separate 94 and 98 versions
● Changed our build process to include either version depending on which cluster we're building for
• 16. 0.94 to 0.98 - Filters
● Serialization changed significantly (Hadoop Writable -> protobufs)
● Input value types changed (KeyValue -> Cell)
Solution: we added an abstract base class, between the HBase 0.94/0.98 Filter and our own filters, to handle these differences.
• 17. 0.94 to 0.98 - Filters
Instantiation changed too: each Filter now needs its own static factory method, which is found via reflection in the RegionServer code. Adding this method causes no backwards-compatibility issues!
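That reflective factory lookup is straightforward to illustrate: the server resolves the filter class by name and invokes its static `parseFrom(byte[])`. A stdlib-only Java sketch of the mechanism (`ToyFilter` is a hypothetical stand-in, not a real HBase Filter subclass):

```java
import java.lang.reflect.Method;
import java.nio.charset.StandardCharsets;

public class FilterFactoryLookup {
    // Stand-in for a custom Filter: serializes a single prefix argument.
    public static class ToyFilter {
        public final String prefix;
        ToyFilter(String prefix) { this.prefix = prefix; }
        byte[] toByteArray() { return prefix.getBytes(StandardCharsets.UTF_8); }
        // The static factory the server side finds by reflection.
        public static ToyFilter parseFrom(byte[] bytes) {
            return new ToyFilter(new String(bytes, StandardCharsets.UTF_8));
        }
    }

    // What the server effectively does with a class name plus wire bytes.
    public static Object instantiate(String className, byte[] bytes) throws Exception {
        Method factory = Class.forName(className)
                .getDeclaredMethod("parseFrom", byte[].class);
        return factory.invoke(null, (Object) bytes);
    }

    public static void main(String[] args) throws Exception {
        byte[] wire = new ToyFilter("row-").toByteArray();
        ToyFilter f = (ToyFilter) instantiate(ToyFilter.class.getName(), wire);
        System.out.println(f.prefix); // row-
    }
}
```

Because the factory is resolved by name at runtime, adding it to 0.94-era filter classes is purely additive, which is why it causes no compatibility problems.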
• 18. 0.94 to 0.98 - Filters
The new reversed field on the base Filter class broke our serialization unit tests, which expect transient fields to be marked transient even if they aren't used in Java serialization ("Why aren't you transient??"). See HBASE-12996.
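The spirit of that unit test is easy to reproduce: reflect over a class's instance fields and flag any non-transient field that isn't on the expected list, so a newly added superclass field gets noticed immediately. A hedged stdlib Java sketch (class and field names are illustrative, not Flurry's actual test):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.*;

public class TransientFieldCheck {
    // Example class under test: one serialized field, one scratch field.
    public static class Example {
        int serializedValue;          // expected to be serialized
        transient int scratchBuffer;  // correctly marked transient
    }

    // Return the names of instance fields Java serialization would include.
    public static Set<String> serializedFields(Class<?> c) {
        Set<String> out = new TreeSet<>();
        for (Field f : c.getDeclaredFields()) {
            int m = f.getModifiers();
            if (!Modifier.isTransient(m) && !Modifier.isStatic(m)) {
                out.add(f.getName());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Fails loudly if an upstream change adds a non-transient field,
        // which is how a surprise like `reversed` would be caught.
        Set<String> expected = new TreeSet<>(Collections.singleton("serializedValue"));
        System.out.println(serializedFields(Example.class).equals(expected)); // true
    }
}
```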
• 19. 0.94 to 0.98 - HTable Caching
● In 0.94 HTables were heavy, so we cached them
● We maintained a singleton cache of HTables (TableInstanceCache: Map<TableName, HTable>, lazy initialization) so we wouldn't have to reinstantiate them repeatedly
● Our HBase DAOs ask the cache for an HTable and call setAutoFlush() on it
• 20. 0.94 to 0.98 - HTable Caching
● In 0.98 HTables are light and you're expected to create/destroy them as needed
● We still use the cache because we make heavy use of batching writes via setAutoFlush(false)
● HTableMultiplexer is the 0.98 way to do this, but a few issues prevented us from moving to it:
1. Backwards API compatibility with 0.94
2. Dropping requests (i.e. put() returns false) if the queue is full
3. No flush() sync point to ensure all buffered data is written
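The TableInstanceCache pattern described above, a lazily-populated singleton map keyed by table name, can be sketched with plain JDK types (`BufferedTable` is a hypothetical stand-in for an HTable configured with setAutoFlush(false)):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class TableInstanceCache {
    // Stand-in for a write-buffering HTable.
    public static class BufferedTable {
        public final String name;
        BufferedTable(String name) { this.name = name; }
    }

    private static final ConcurrentMap<String, BufferedTable> CACHE =
            new ConcurrentHashMap<>();

    // Lazy initialization: each table is constructed once, on first request,
    // and every DAO thereafter gets the same buffered instance.
    public static BufferedTable get(String tableName) {
        return CACHE.computeIfAbsent(tableName, BufferedTable::new);
    }

    public static void main(String[] args) {
        BufferedTable a = get("events");
        BufferedTable b = get("events");
        System.out.println(a == b); // true: same cached instance
    }
}
```

Sharing one instance per table is what makes client-side write batching effective; with per-call HTables each buffer would flush tiny batches.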
• 21. Some Operational Issues and Solutions
● Problem: adding or removing racks causes the balancer to move far too many regions
  Solution: patch the balancer to limit the number of regions moved per run
● Problem: regions not written to for a while, with only 1 store file, were not candidates for major compaction, thus potentially having non-local blocks
  Solution: HBASE-11195 (potentially improve block locality during major compaction for old regions)
● Problem: the balancer does not respect draining nodes
  Solution: HBASE-10528 (DefaultBalancer selects plans to move regions onto draining nodes)
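The first fix, capping how many regions the balancer moves per run, amounts to truncating the balancer's plan list before executing it. A simplified Java sketch (the `maxMovesPerRun` cap is our illustrative name, not an upstream config key):

```java
import java.util.Arrays;
import java.util.List;

public class BalancerThrottle {
    // Trim the balancer's proposed region moves to a per-run cap, so
    // adding or removing a rack can't trigger a cluster-wide reshuffle
    // in a single balancer pass; the remainder waits for later runs.
    public static List<String> capPlans(List<String> plans, int maxMovesPerRun) {
        return plans.size() <= maxMovesPerRun
                ? plans
                : plans.subList(0, maxMovesPerRun);
    }

    public static void main(String[] args) {
        List<String> plans = Arrays.asList("r1->A", "r2->B", "r3->C", "r4->D");
        System.out.println(capPlans(plans, 2)); // [r1->A, r2->B]
    }
}
```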
• 22. Questions?
Ian Friedman - ianfriedman@yahoo-inc.com
Rahul Gidwani - rahulgidwani@yahoo-inc.com

Editor's Notes

  • 1. Dave Latham had a choice way back in 2008: sharded MySQL or HBase 0.18, and boy are we happy he went with HBase. We have scaled quite a bit since those days, and we don't think that would have been possible if Dave had gone with a sharded MySQL app.
  • 2. We have a Flurry SDK which app developers install -> API boxes -> log processing pipeline (Kafka) -> HBase. Uses HBase: all of our MapReduce jobs, the developer portal (website), and mobile ads.
  • 3. We run with 5 ZooKeeper instances, 1 NameNode, 1 SNN, 1 HMaster, and 1 backup HMaster. We started off as a 3-node cluster, grew to 800 nodes in a single datacenter, then decided we needed redundancy, so we spun up a new datacenter (oh, the joys of replication; things we found at scale): HBASE-8806, row locks are acquired repeatedly in HRegion.doMiniBatchMutation for duplicate rows; HBASE-9208, ReplicationLogCleaner slow at large scale; HBASE-10100, HBase replication cluster can have varying peers under certain conditions (transient DNS errors caused some hosts to get UnknownHostExceptions, which was merely logged). How consumers interact with the cluster: our large cluster holds raw fact data, which we ingest into and run jobs against to create summarized data stored in our 800-node cluster. Our 800-node cluster holds denormalized data from the large cluster's table, which we need for aggregation jobs, serving, and holding secondary indexes. Our 60-node cluster is a low-latency key-value store for which we can hold all of the data in the block cache.
  • 4. Why do we need to do migrations? Adding data centers, merging large tables into one, and major-version Hadoop and HBase upgrades, all with zero downtime.
  • 5. The biggest bottleneck in transferring data is the throughput of your pipe; if you have relatively good compression rates, shipping uncompressed data can be quite inefficient. But the operational complexity is minimal, provided you don't have to babysit your long-running job.
  6. Here we again start replication to the new cluster. Take a snapshot of the table(s) we want to ship.
  7. After the snapshot has been transferred to the new datacenter, we have a patched bulk load which allows us to take that exported snapshot and bulk load it into the tables we specified.
  • 8. If your compression ratio is good this will save time; we also modified the ExportSnapshot job a bit so that it could resume after a job failure. Another worry was that if time_between_major_compactions < time_taken_to_ship_snapshot, you can potentially be storing 2x the data on disk, as the file cleaner will skip files that belong to a snapshot. Another concern is that you can resurrect deleted data: suppose you snapshot your data, run a delete, major compact on your destination side, then import your snapshot. This is something we were willing to live with. You can also run a modified VerifyReplication after shipping your data: run it on the destination cluster, and all rows or keys that are missing on the source cluster but exist on the destination cluster could then be deleted.
  • 9. So our problem was that we have a very large cluster which holds quite a bit of data. Even with a 10Gb/s dedicated link it takes nearly a month to ship the data to a new datacenter. Snapshotting those tables, with writes still going to the original tables, would have put us over the limit in terms of DFS capacity. We have one really large table, 1.2PB un-replicated on our cluster; it was going to be impossible to ship this in one piece. We thought about running a job that exported to sequence files (compressed some more), distcp'ing them over, then running the import job. This was a viable option, but upon running the import we would have had a 1.2PB shuffle, which was not desirable. So we decided to split the table up into 4 smaller batches and ship those. The process went as follows: start replication; take a partial snapshot; export the partial snapshot; bulk load the partial snapshot; delete the partial snapshot; rinse and repeat.
  • 10. This actually provided us the best of both worlds. We were able to ship compressed data over the wire, saving weeks off our data transfer time; we didn't have to babysit a job that would have to start over if it failed; and we weren't sitting awake late at night worried about whether or not we would run out of DFS capacity.
  • 11. Francis at Yahoo had used Thrift to replicate between different versions of HBase. We modified this patch a bit to suit our needs, as we could not afford to take any cluster downtime to make this happen. We used WebHDFS, and because the HFile format did not change this worked out for us; we used shaded jars to run VerifyReplication jobs between clusters. Our client code had to be compatible with both versions of Hadoop/HBase while remaining encapsulated from our developers. Another issue was how to work with YARN. All of our jobs scan our HBase tables and write back to our HBase tables. In MapReduce V1 we had some patches to protect our cluster: a configurable per-job option, max_concurrent_map_tasks, which limited the number of concurrent tasks running for any particular job (this patch was mainly to protect things like our database from having too much load sent to it by some jobs); and a configurable parameter, max_tasks_input_split. When you use the MapReduce framework with HBase, your input splits are based on regions, which you can aggregate to figure out how many tasks are slated for a particular RegionServer. But if some jobs are particularly IO intensive and hammer RegionServers you might want some sort of rate limiting, and until recently RegionServers did not have that capability; this lets you limit the number of concurrent tasks running on any RegionServer per job.
  12. So since we had to maintain multiple data centers running on different versions, that means we would have to maintain a build of our software that could work on Hadoop 1 and HBase 94, and simultaneously maintain a branch that would work on Hadoop 2 and HBase 98, potentially for months as we worked through the datacenter migrations. So just to see what the delta was, we replaced all of the dependency jars in our project and saw what broke at compile time. Thankfully, not too much changed in HBase between 94 and 98! For the most part, everything would resolve properly at runtime, but this didn’t work for everything. Quite a few method signatures changed, new types were introduced, abstract classes gained/lost members, etc. So we had to come up with a plan to handle maintaining compatibility on both sides for potentially a long time.
  13. Our interfaces that touch the HBase client code are mostly encapsulated, securely behind DAOs and other utility classes. So we extracted all of the base classes that actually touch endpoints like HTable and such, and put them into their own project, and set up a separate build chain with Maven and Jenkins and Artifactory to be able to build this library independent from the rest of our code base. We then forked the project and set up one fork to depend on HBase 94, and the other on HBase 98. Then we introduced variables into our build process for the main flurry projects to be able to inject a switch that would either pull the current version of hbaseCommon 94 or 98. So then we could just have a checkbox on our Jenkins build and deploy jobs and everything is handled for us.
  14. So now I’d like to give you a taste of some of the incompatibilities we encountered when updating our client code from 94 to 98. If you’re planning to make an upgrade like this any time soon, this may help you out. Obviously your mileage may vary and our usecases are not the same as everyone else’s, but you might run into some of these hurdles. We heavily use server side filters in our application, and maintain a bunch of them. A few things changed there, some method signatures, but the big deal for me was figuring out the new serialization changes. For those who don’t use Filters or don’t know how this works, filters have to be able to serialize themselves to be sent to the RegionServer along with a Scan. When I say serialize themselves, what that really means is that they capture any arguments or input values and serialize those, and send them in the request. In 94, this was handled via the Hadoop Writable interface, and implementing custom read/write formats. However in 98, because we now use protobufs to communicate between client and server, the Filters need to be wrapped in a protobuf enclosure even though they have to handle their serialization themselves. As you can see in the slide, we added another abstract class in between the base HBase Filter class and our own code that takes care of some of the method signature and name differences.
  15. Now furthermore, in order to instantiate a filter from a serialized buffer on the regionserver side, HBase 98 requires that the filter class contain a static factory parseFrom method, when previously it just used the regular constructor plus the Hadoop Writable readFields method. Luckily this was a purely additive change, since no such method existed in the 94 world, we could just go ahead and add this static method to all of our Filter subclasses directly, with no worries about backwards compatibility with the 94 client code. Sadly we had to put a slightly different copy into each class so it could call the appropriate constructor. Furthermore, since the base Filter class’s parseFrom throws no exceptions, we had to catch exceptions from our deserialization method and rethrow as a RuntimeException.
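The parseFrom pattern described above can be illustrated with a self-contained sketch. The classes here are simplified stand-ins for the real HBase 0.98 filter API (which lives in org.apache.hadoop.hbase.filter), chosen so the shape of the change is visible without the HBase dependency:

```java
public class FilterBridgeSketch {
    // Stand-in for the 0.98-era Filter base class: filters serialize
    // their own arguments via toByteArray().
    abstract static class FilterBase {
        public abstract byte[] toByteArray();
    }

    // A toy filter that captures one argument (a row prefix).
    static class PrefixMatchFilter extends FilterBase {
        private final byte[] prefix;
        PrefixMatchFilter(byte[] prefix) { this.prefix = prefix; }

        @Override
        public byte[] toByteArray() {
            return prefix.clone();   // capture the filter's arguments for the wire
        }

        // The static factory the regionserver side calls to rebuild the
        // filter. Because the signature surfaces no checked exceptions here,
        // deserialization failures are rethrown as RuntimeException, which is
        // the workaround described in the talk.
        public static PrefixMatchFilter parseFrom(byte[] bytes) {
            try {
                if (bytes == null) throw new IllegalArgumentException("no bytes");
                return new PrefixMatchFilter(bytes.clone());
            } catch (Exception e) {
                throw new RuntimeException("Failed to deserialize filter", e);
            }
        }
    }

    // Round-trip check: serialize, rebuild via the static factory, compare.
    static boolean roundTrips(byte[] prefix) {
        byte[] wire = new PrefixMatchFilter(prefix).toByteArray();
        return java.util.Arrays.equals(
            PrefixMatchFilter.parseFrom(wire).prefix, prefix);
    }
}
```

Because parseFrom must call each subclass's own constructor, a slightly different copy of it ends up in every filter class, exactly as the talk notes.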
16. This one really puzzled me… Our serialization methods all have unit tests around them that use reflection to automatically populate a random instance of a serializable class, serialize it, deserialize it, and do a deep comparison of the result. But because the Filter base class gained this new reversed field, those tests broke. After dreading that we’d have to add reversed to the serialization of our filters, I did some reading and realized that reversed only ever gets set on the RegionServer itself, and never actually gets sent as part of a request over the wire. So clearly it should be transient, if it’s not intended to be included in serialization? Well, I guess we’re the only ones who use the Java keyword transient as a hint not just for Java serialization but for any other kind of serialization, since our tests ignore fields marked transient. We implemented a way to exclude individually named fields from our tests, and also opened an HBase JIRA to add the transient keyword there, in case anyone else has similar usecases.
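The transient-as-a-serialization-hint convention is easy to sketch. This is a minimal, self-contained version of the field scan such a reflection-based round-trip test would use (the ToyFilter class and its field names are hypothetical, not from the HBase source):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

public class TransientFieldScan {
    // Collect the fields a reflection-based serialization test should
    // populate and compare, skipping anything marked transient or static:
    // transient signals "not part of the wire format" for any serializer,
    // not just Java serialization.
    public static List<String> serializableFields(Class<?> cls) {
        List<String> names = new ArrayList<>();
        for (Field f : cls.getDeclaredFields()) {
            int mods = f.getModifiers();
            if (Modifier.isTransient(mods) || Modifier.isStatic(mods)) continue;
            names.add(f.getName());
        }
        return names;
    }

    // A toy filter-like class: 'reversed' is only ever set server-side, so
    // it is marked transient and drops out of the wire-format comparison.
    static class ToyFilter {
        byte[] prefix;
        transient boolean reversed;
    }
}
```

Had the base Filter class marked reversed as transient, a scan like this would never have flagged it, which is exactly what the JIRA asked for.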
  17. So another thing we looked at changing was our use of the HTable interface. In 94, the common wisdom was that creating a new HTable was an expensive operation, so we lazily cached them as necessary. We also wrapped the base HTable class with a semaphore to make it thread-safe, so that multiple threads could issue writes at the same time. This let us turn off autoflushing and buffer our writes from multiple threads, and see a big performance gain in write throughput.
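The semaphore-guarded wrapper can be sketched without the HBase dependency. This is a simplified stand-in: the real wrapper delegates to an HTable with autoflush disabled and calls flushCommits(), which here are modeled by a plain in-memory buffer and a flushed list:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Semaphore;

public class GuardedBufferedTable {
    // One permit: puts and flushes from many threads never interleave,
    // which is what makes sharing a single buffered table safe.
    private final Semaphore guard = new Semaphore(1);
    private final List<String> writeBuffer = new ArrayList<>();
    private final List<String> flushed = new ArrayList<>();

    public void put(String row) throws InterruptedException {
        guard.acquire();
        try {
            writeBuffer.add(row);                    // buffered, not yet "sent"
            if (writeBuffer.size() >= 100) flushLocked();
        } finally {
            guard.release();
        }
    }

    public void flush() throws InterruptedException {
        guard.acquire();
        try { flushLocked(); } finally { guard.release(); }
    }

    private void flushLocked() {
        flushed.addAll(writeBuffer);                 // stand-in for flushCommits()
        writeBuffer.clear();
    }

    public int flushedCount() { return flushed.size(); }
}
```

The batching is where the write-throughput win comes from: each "flush" ships 100 buffered rows at once instead of one RPC per put.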
  18. Now in HBase 98, the expectations are different. Since connections to regionservers are pooled and cached behind the scenes, HTable creation and destruction are light operations, and callers are expected to create and destroy them as needed for each set of calls. However doing something like this would break the assumptions of our use cases, as I mentioned in the previous slide we’re making big use of setAutoFlush false to buffer and batch our writes. It seems like the 98 way to do this is to use HTableMultiplexer, but I identified several problems with trying to introduce this to our codebase. the calling API into HTM is very different from our use of HTable, so it would require a lot of modifications. By itself this would not be a dealbreaker, except… Unlike HTable which blocks when its buffer is full and it is trying to flush to HBase, HTableMultiplexer just returns false if a put would overflow the buffer… I decided this would be too much of a difference for how the calling code expects its HBase client to behave. But on top of that... There is no way to ensure all data is flushed like there is with the flush() method in HTable. The best way I could come up with would be to stop all writes and then call getHTableMultiplexerStatus, which seemed like an expensive operation to be doing on a regular basis. So we stuck with the original plan and cached our HTables and so far that seems to work as expected. Now I’ll throw it back over to Rahul
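The behavioral gap that ruled out HTableMultiplexer can be shown with a bounded queue as a stand-in for the write buffer (these are not the real HBase APIs, just an illustration of blocking versus fail-and-return semantics):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class PutBufferBehavior {
    // HTableMultiplexer-style: when the buffer is full, the put simply
    // reports failure and leaves retrying (or dropping data) to the caller.
    public static boolean multiplexerStylePut(ArrayBlockingQueue<String> buf,
                                              String row) {
        return buf.offer(row);       // returns false when the buffer is full
    }

    // HTable-style: when the buffer is full, the caller blocks until the
    // buffer drains, so calling code never has to handle a rejected write.
    public static void htableStylePut(ArrayBlockingQueue<String> buf,
                                      String row) throws InterruptedException {
        buf.put(row);                // blocks until space frees up
    }
}
```

Calling code written against the blocking style silently loses data if dropped into the return-false style without a retry loop, which is the incompatibility the talk describes.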
19. Without good monitoring you can’t pinpoint your problems, and that becomes especially hard on 1000+ node clusters. We store our metrics in TSD. Your cluster is only as good as your poorest performing regionserver; finding it quickly and diagnosing what is wrong is the trick. jstack is your friend. Always profile and figure out where the bottlenecks are happening.