HBaseCon
Tales From the Cloudera Field
Kevin O’Dell, Kate Ting, Aleks Shulman
{kevin, kate, aleks}@cloudera.com
Who Are We?
Kevin O’Dell
- Previously HBase Support Team Lead
- Currently Systems Engineer with a focus on HBase deployments
Kate Ting
- Technical Account Manager of Cloudera’s largest HBase deployments
- Co-author of O’Reilly’s Apache Sqoop Cookbook
Aleks Shulman
- HBase Test Engineer focused on ensuring HBase is enterprise-ready
- Primary focus on building compatibility frameworks for rolling upgrades
Cloudera Internal HBase Metrics
• Cloudera uses HBase internally for the Support Team
• We ingest Tickets, Cluster Stats, and Apache Mailing Lists
• Cloudera has ~20K HBase nodes under management
• Over 60% of my accounts use HBase
Agenda
● Tales Getting Production Started
● Tales Fixing Production Bugs
● Tales Upgrading Production Clusters
HBase Deployment Mistakes
• Cluster Sizing
• Managing Your Regions
• General Recommendations
Why Cluster Sizing Matters
• Jobs Failing
• Writes Blocking
• Performance Issues
Heavy Write Sizing
Calculating Total Available Memstore
java_max_heap: 16GB
memstore_upper: 0.50
java_max_heap * memstore_upper = memstore_total_size

Calculating Max Regions
desired_flush_size: 128MB
repl_factor: 3 (default)
max_file_size: 20GB
memstore_total_size / desired_flush_size = total_regions_per_rs
max_file_size * (total_regions_per_rs * repl_factor) = raw_storage_per_node

[Chart: Region_Count (Y-axis) vs. Flush_Size (X-axis)]
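A quick worked pass over these formulas, using the sample values above (a sketch; the comments map each variable to the standard HBase configuration key it corresponds to):

```java
public class HeavyWriteSizing {
    public static void main(String[] args) {
        long javaMaxHeapMB   = 16 * 1024;  // 16GB RegionServer heap
        double memstoreUpper = 0.50;       // hbase.regionserver.global.memstore.upperLimit
        long memstoreTotalMB = (long) (javaMaxHeapMB * memstoreUpper);   // 8192MB

        long desiredFlushMB  = 128;        // hbase.hregion.memstore.flush.size
        long regionsPerRS    = memstoreTotalMB / desiredFlushMB;         // 64 regions per RS

        long maxFileGB       = 20;         // hbase.hregion.max.filesize
        int  replFactor      = 3;          // HDFS replication (default)
        long rawStorageGB    = maxFileGB * regionsPerRS * replFactor;    // 3840GB (~3.75TB) per node

        System.out.printf("memstore=%dMB regions/RS=%d raw/node=%dGB%n",
                memstoreTotalMB, regionsPerRS, rawStorageGB);
    }
}
```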
Update for Known Writes Sizing
Calculating Force Flushes
write_throughput: 20MB/s
hlog_size: 128MB
number_of_hlogs: 64
hlog_size * number_of_hlogs = amount_of_data_before_flush
(write_throughput * 60 * 60) / amount_of_data_before_flush = number_nodes_before_flush

Calculating Max Regions
total_data_size: 350TB
max_file_size: 20GB
((total_data_size * 1024) / max_file_size) / desired_RS_count = total_regions_per_rs
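The same arithmetic with the numbers above, reading number_nodes_before_flush as the minimum node count so that one node's WALs do not fill (and force a flush) within an hour. A sketch; desired_RS_count is an assumed value, not from the talk:

```java
public class KnownWriteSizing {
    public static void main(String[] args) {
        long hlogSizeMB        = 128;
        int  numberOfHlogs     = 64;
        long dataBeforeFlushMB = hlogSizeMB * numberOfHlogs;             // 8192MB per node

        long writeThroughputMBs = 20;      // aggregate ingest rate, MB/s
        double nodesBeforeFlush =
            (writeThroughputMBs * 60.0 * 60.0) / dataBeforeFlushMB;      // ~8.8 -> plan for 9+ nodes

        long totalDataTB    = 350;
        long maxFileGB      = 20;
        int  desiredRSCount = 20;          // assumed cluster size for illustration
        long regionsPerRS   = ((totalDataTB * 1024) / maxFileGB) / desiredRSCount; // 896 regions/RS

        System.out.printf("nodes before hourly force flush ~ %.1f, regions/RS = %d%n",
                nodesBeforeFlush, regionsPerRS);
    }
}
```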
Why Is Region Management Important?
• Initial loads are failing
• Region Servers are crashing from overload
Region Management Best Practices
Region Split Policy
• ConstantSizeRegionSplitPolicy: splits on max filesize. Use when pre-splitting all tables.
• IncreasingToUpperBoundRegionSplitPolicy: splits on smarter intervals. Use when not able to pre-split all tables.
Balancer Policy
• SimpleLoadBalancer: balances regions without regard to table. Use with lots of tables with a low region count.
• ByTable: balances per table. Use with few tables with a high region count.
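A sketch of where these knobs live in the 0.94-era API (the policy class names are the real HBase classes; the table name is hypothetical, and per-table settings would normally go through HBaseAdmin at create time):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;

public class RegionPolicies {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();

        // Cluster-wide default split policy:
        conf.set("hbase.regionserver.region.split.policy",
            "org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy");

        // Balance per table instead of across all regions at once:
        conf.setBoolean("hbase.master.loadbalance.bytable", true);

        // Or override the split policy on a single table:
        HTableDescriptor desc = new HTableDescriptor("my_presplit_table");
        desc.setValue(HTableDescriptor.SPLIT_POLICY,
            "org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy");
    }
}
```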
General Recommendations
• Short Circuit Reads (SCR): speeds up reads by bypassing the DataNode layer. Enable: always.
• Snappy Compression: speeds up reads and lowers storage consumption. Enable: on heavily accessed tables.
• Bloom Filters: speed up reads when numerous HFiles are present. Enable: Row should always be used; Row+Column is more accurate but higher in memory usage.
• HLog Compression: speeds up writes and recovery times. Enable: always.
• Data Block Encoding: compresses long keys to store more in the block cache. Enable: best for short/tall tables with long, similar keys; scans may be slower.
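The per-column-family calls behind this table, as a hedged 0.94-era sketch (table and family names are hypothetical; HLog compression and short-circuit reads are cluster-level settings, not shown here):

```java
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.io.hfile.Compression;
import org.apache.hadoop.hbase.regionserver.StoreFile.BloomType;

public class TableTuning {
    public static void main(String[] args) {
        HTableDescriptor table = new HTableDescriptor("heavily_read_table");
        HColumnDescriptor cf = new HColumnDescriptor("d");    // keep CF names short

        cf.setCompressionType(Compression.Algorithm.SNAPPY);  // Snappy on hot tables
        cf.setBloomFilterType(BloomType.ROW);                 // ROW blooms as the default choice
        cf.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF); // pack long, similar keys into cache

        table.addFamily(cf);
    }
}
```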
Agenda
● Tales Getting Production Started
● Tales Fixing Production Bugs
● Tales Upgrading Production Clusters
Tales Fixing Production Bugs
● RegionServer Hotspotting
● Faulty Hardware
● Application Bug
Fixing #1: RegionServer Hotspotting - Problem
● Spread rows over all RS by salting the row key
● Hundreds of regions available, but increments only done to tens of regions
● While locks wait to time out, blocked clients hold onto handlers
Fixing #1: RegionServer Hotspotting - Solution
● Option 1: Change the row key to something that scales
○ Reduce contention by reducing connections: each client picks one salt and writes only to one RS
● Option 2: Implement the new coalescing feature in the native Flume HBaseSink, compressing an entire batch of Flume events into a single HBase RPC call:
[row1, colA+=1] [row1, colB+=1] [row1, colB+=1] => [row1, colA+=1, colB+=2]
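A minimal sketch of Option 1's salting (the salt count, key layout, and helper are illustrative, not from the talk; under Option 1 each client would pin a single salt value rather than hashing per key):

```java
import org.apache.hadoop.hbase.util.Bytes;

public class SaltedKeys {
    static final int NUM_SALTS = 100;  // match the number of pre-split regions

    // Derive a stable salt from the natural key so writes spread across regions.
    static byte[] saltedKey(String naturalKey) {
        int salt = (naturalKey.hashCode() & Integer.MAX_VALUE) % NUM_SALTS;
        return Bytes.toBytes(String.format("%02d-%s", salt, naturalKey));
    }

    public static void main(String[] args) {
        System.out.println(Bytes.toString(saltedKey("user1234:2014-05-05")));
    }
}
```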
Tales Fixing Production Bugs
● RegionServer Hotspotting
● Faulty Hardware
● Application Bug
Fixing #2: Faulty Hardware
● Diagnostics run on bad hardware caused HBase failures
● HBase recoverability = RS back online + locality (compaction)
● Stress test with production load before it is needed (e.g., ahead of the holiday season)
● Imagine the financial impact of 7 hours of downtime.
Fixing #2: Faulty Hardware - Solution
● Recover faster by failing fast
○ Too many retries cause the HBase task to exit before it can print the exception identifying the stuck RS
● Decrease the time needed to finish HBase major compaction
○ Run multiple threads during compaction
● Replay in parallel
○ Decrease HLog size to limit the number of edits to be replayed; increase the number of HLogs; constrain WAL file size to minimize the time the corresponding region is unavailable
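The configuration keys behind these three bullets, as a sketch (standard HBase property names; the values are illustrative, not the customer's actual settings):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RecoveryTuning {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();

        // Fail fast: fewer client retries so a stuck RS surfaces quickly.
        conf.setInt("hbase.client.retries.number", 3);

        // More compaction threads to finish major compactions sooner.
        conf.setInt("hbase.regionserver.thread.compaction.large", 2);
        conf.setInt("hbase.regionserver.thread.compaction.small", 4);

        // Smaller, more numerous HLogs: fewer edits replayed per log.
        conf.setLong("hbase.regionserver.hlog.blocksize", 64L * 1024 * 1024);
        conf.setInt("hbase.regionserver.maxlogs", 64);
    }
}
```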
Fixing #2: Faulty Hardware - Solution
● Shorten column family names
○ Reduce scan time, skip bulk loads, reduce memory usage
● Turn off the disk write cache
○ A node crash erases writes held in memory and rebuilds the block with outdated data, causing a corrupt replica
● Turn on checksums
○ Enables the RS to use other replicas from the cluster instead of failing the operation when there's a corrupt HFile
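Turning on HBase-level checksums is a one-line setting (a sketch; available from HBase 0.94 onward, while the write-cache change is an OS/disk-controller setting outside HBase):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ChecksumSetting {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // HBase verifies its own checksums and can fall back rather than
        // failing the read outright on a corrupt local block.
        conf.setBoolean("hbase.regionserver.checksum.verify", true);
    }
}
```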
Tales Fixing Production Bugs
● RegionServer Hotspotting
● Faulty Hardware
● Application Bug
Fixing #3: Application Bug
● HBase timestamps were hardcoded too far in the future, so newly written data went unused
● The bug put the backup system out of commission for one month
○ Leaving the cluster more vulnerable to HBase outages
Fixing #3: Application Bug - Solution
● Detailed knowledge of internals was required to undo the damage
○ Modified the timestamp to a time in the past for all records via custom MR jobs over one month:
■ back up data, generate new HFiles with the correct timestamp, bulk load the data, run MD5 checks
● Don’t muck around with setting the timestamp yourself
● Do use always-increasing timestamps for new puts to a row
● Do use a separate timestamp attribute of the row
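The do's above in 0.94-era client code (a sketch; row, family, and qualifier names are hypothetical):

```java
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class TimestampDos {
    public static void main(String[] args) {
        byte[] row = Bytes.toBytes("user1234");
        byte[] cf  = Bytes.toBytes("d");

        Put put = new Put(row);
        // Do: let the server assign the cell timestamp...
        put.add(cf, Bytes.toBytes("value"), Bytes.toBytes("v1"));
        // ...and keep application time in its own column instead:
        put.add(cf, Bytes.toBytes("event_ts"),
                Bytes.toBytes(System.currentTimeMillis()));
    }
}
```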
Agenda
● Tales Getting Production Started
● Tales Fixing Production Bugs
● Tales Upgrading Production Clusters
Internal Case Study
What Happened?
• CDH4 -> C5 (0.94 -> 0.96) upgrade automation failed
Root Cause
• HBase snapshots vs. HDFS snapshots: HDFS reserved the name ".snapshot", which collided with HBase's /hbase/.snapshot directory
• Fixed by renaming the HBase snapshot directory
Outcome
• All issues resolved before C5b1 shipped

2013-07-12 17:11:42,656 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception on operation MkdirOp [length=0, inodeId=0, path=/hbase/.snapshot, timestamp=1373674083434, permissions=hbase:supergroup:rwxr-xr-x, opCode=OP_MKDIR, txid=614]
org.apache.hadoop.HadoopIllegalArgumentException: ".snapshot" is a reserved name. Please rename it before upgrade.
Automating Upgrades
Testing the Upgrade Lifecycle
What is Important?
The Administrator Experience Matters
● Major version upgrades
● Rolling upgrades
The Developer Experience Matters
● API Compatibility Testing
And Here Is Why It Is Important
Customer Continuity
• Smooth upgrades
• Curated process
• Understanding of customer cluster lifecycle
Developer Continuity
• Forward and backward compatibility
• Binary Compatibility
• Wire Compatibility
Automation
• You can only really make a guarantee about things that are automated
• Product is easier to support
• Confidence is only possible with testing
Cold vs. Rolling Upgrades
Version line: C3u5 -> CDH4.0.x -> CDH4.1.x -> CDH4.2.x -> CDH4.3.x -> CDH4.4.x -> CDH4.5.x -> CDH4.6.x -> C5.0 -> C5.1
Rolling upgrades: within a major line (CDH4.0.x through CDH4.6.x; C5.0 to C5.1)
Cold upgrades: across major lines (C3u5 -> CDH4.x; CDH4.x -> C5.0)
Upgrades from HBase 0.90 -> 0.98
CDH Version   HBase Version
CDH3u5        HBase 0.90.6
CDH4.1.0      HBase 0.92.1
CDH4.2.0      HBase 0.94.2
CDH4.4.0      HBase 0.94.6
CDH4.6.0      HBase 0.94.15
CDH5.0.0      HBase 0.96.1.1
CDH5.1.0      HBase 0.98.1
Upgrade from version A -> version B -> version C (three hops down this table, e.g., CDH3u5 -> a CDH4 release -> CDH5.0.0)
Cold Upgrade Results
● Upgrades work!
● Steps:
○ Start at CDH3u5
○ Upgrade to a version of CDH4
○ Upgrade to CDH5.0.0
● Data Integrity
○ Different bloom filters
○ Different compression formats
● Next Steps
○ CDH 5.1.0 expected to be based on 0.98.1
Rolling Upgrade Results
● What is tested?
○ Ingest via Java API
○ MapReduce over HBase
■ Bulk load
■ RowCount/Export
● Status
○ Rolling upgrade broken (red) in CDH <=4.1.2 due to a region_mover issue
○ Soft failure (yellow) for starting versions <CDH4.1.0 due to a MapReduce JT/TT version mismatch
○ All else green!
How to Read This: pick a column and read down to see for which versions rolling upgrades are advised. [Status matrix not reproduced here.]
Improved Supportability Through Testing
Case Study: Customer Rolling Upgrade Simulation
Large Customer
● Upgrading from CDH4.1.4+patches
● Considered several CDH versions to upgrade to
○ Custom patches
Automation
● Automated testing added to simulate rolling upgrade
○ CM
○ HA+QJM
○ Parcels
● Scales
○ 4 nodes, 20 nodes, 80 nodes
● Subsequently used for other customers with similar upgrade paths
Here’s to Fewer Tales Next Year!
Automated Testing -> Better Cluster Mgmt -> Fewer Tales From the Field
©2014 Cloudera, Inc. All rights reserved.
Kevin O’Dell @kevinrodell
Kate Ting @kate_ting
Aleks Shulman @a_shulman
@clouderaTest
Questions?