HBaseCon 2013: Compaction Improvements in Apache HBase
Jul. 8, 2013
Presented by: Sergey Shelukhin, Hortonworks
Compaction Improvements in Apache HBase
Sergey Shelukhin, sergey@hortonworks.com
© Hortonworks Inc. 2011
About me
• HBase committer since February 2013
• Member of Technical Staff at Hortonworks
• Twitter: @sershe84
Overview
• What are compactions?
• Default algorithm and improvements
• Enabling different implementations
• Algorithms for various scenarios
• Conclusions
What are compactions?
What are compactions?
• HBase writes out immutable files as data is added
  – Each Store (CF + region) consists of these rowkey-ordered files
  – Immutable => more files accumulate over time
  – More files => slower reads
• Compaction rewrites several files into one
  – Fewer files => faster reads
• Major compaction rewrites all files in a Store into one
  – Can drop deleted records, tombstones, and old versions
• In a minor compaction, the files to compact are selected based on a heuristic
Compactions example
• The MemStore fills up and files are flushed
• When enough files accumulate, they are compacted
[Diagram: MemStore flushes write HFiles to HDFS; the accumulated HFiles are then compacted into one]
Reads slow down without compactions
• If too many files accumulate, reads slow down
• Read latency over time without compactions:
[Chart: read latency (ms, 0-25) vs. load test time (sec, 0-14400); latency keeps growing as files accumulate]
But compactions cause slowdowns
• Looks like lots of I/O for no apparent benefit
• Example effect on reads (note the better average):
[Chart: read latency (ms, 0-25) vs. load test time (sec, 0-10800); latency spikes during compactions, with a lower average in between]
Default algorithm and improvements
Compaction tradeoffs
• HBase resolves key conflicts by file age
  – Therefore, it can only compact contiguous files
• Large compactions are more efficient (less total I/O)
  – However, they can cause long slowdowns for clients
• Small compactions have less effect on clients
  – However, in total you do more rewriting
• We want to compact similar files
Default algorithm in 0.94
• Ratio-based selection
  – Look for files at most F times larger than the sum of the following files
  – Also allows limiting file counts and sizes
• Higher ratio => more aggressive (default 1.2)
• Example: 2 files minimum, 3 maximum, ratio 1.2
[Diagram: a row of HFiles of decreasing size; the oldest is "too big!", taking all of them is "too many files!", and a run of similar-sized files is "OK"]
• Usually good for the typical accumulation of flushed files
• Not good for bulk load, where file sizes are unpredictable (see the sketch below)
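A minimal, runnable sketch of this ratio-based selection, under the simplifying assumptions that files are ordered oldest to newest and that only the first file's ratio is checked (a limitation a later slide calls out); the sizes, class, and method names are illustrative, not actual HBase code:

    import java.util.ArrayList;
    import java.util.List;

    /** Simplified ratio-based compaction selection (0.94 style); illustrative only. */
    public class RatioSelection {
      // Files are ordered oldest to newest; sizes are in bytes.
      static List<Long> select(List<Long> sizes, double ratio, int minFiles, int maxFiles) {
        for (int start = 0; start + minFiles <= sizes.size(); start++) {
          List<Long> cand = sizes.subList(start, Math.min(sizes.size(), start + maxFiles));
          long sumNewer = 0;  // total size of files newer than the first candidate
          for (int i = 1; i < cand.size(); i++) sumNewer += cand.get(i);
          // First file must be at most `ratio` times the sum of the following files.
          if (cand.get(0) <= ratio * sumNewer) {
            return new ArrayList<>(cand);  // first valid selection wins
          }
        }
        return new ArrayList<>();  // nothing to compact
      }

      public static void main(String[] args) {
        // A huge (bulk-loaded?) file followed by three similar flush files.
        List<Long> sizes = List.of(1_000_000_000L, 60_000_000L, 50_000_000L, 40_000_000L);
        // Skips the 1 GB file and selects the three similar ones.
        System.out.println(select(sizes, 1.2, 2, 3));
      }
    }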
Off-peak compactions
• Good if your load varies through the day
• HBASE-4463, in 0.94 (present since 2011)
• Compact more aggressively during certain hours of the day, when load is lower
• Set the off-peak period via:
  – hbase.offpeak.start.hour, hbase.offpeak.end.hour (0-23)
• Then set the ratio via:
  – hbase.hstore.compaction.ratio.offpeak (default is 5; example below)
• Only one "off-peak" compaction runs at a time, so the load is not totally prohibitive
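For illustration only, the same keys expressed through the Java Configuration API; in a real deployment these settings belong in hbase-site.xml on the region servers, and the hour values below are made-up examples:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class OffPeakExample {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.offpeak.start.hour", 0);  // example: off-peak from midnight...
        conf.setInt("hbase.offpeak.end.hour", 6);    // ...until 6 AM (hours are 0-23)
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);  // the default
      }
    }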
Inefficiencies in default algorithm
• The first valid selection is chosen
• The ratio is only considered for the first selected file
  – Thus, the other files in the compaction may not be similar
• The solution found may not be the best one
  – especially for bulk load, with unpredictable file sizes
[Diagram: a selection whose first file matches the ratio but which includes dissimilar files; "matches the ratio, but this is a bad selection"]
Exploring compaction selection
• There are usually not that many files, so examining all valid permutations and comparing their quality is viable
• HBASE-7842, the "exploring" compaction selection (sketched below)
  – The ratio is checked for each file, to choose good permutations
  – When the store is OK, try to compact the most files
  – When the store has too many files, try to eliminate some as fast as possible
• On by default in 0.95/0.96
• Works with your old configuration settings
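A rough, self-contained sketch of this idea (not the actual HBASE-7842 code): examine every contiguous window, require every file to be in ratio with the rest, prefer selections with more files, and break ties by less total I/O.

    import java.util.ArrayList;
    import java.util.List;

    /** Simplified "exploring" compaction selection; illustrative only. */
    public class ExploringSelection {
      static List<Long> select(List<Long> sizes, double ratio, int minFiles, int maxFiles) {
        List<Long> best = new ArrayList<>();
        long bestTotal = Long.MAX_VALUE;
        for (int start = 0; start < sizes.size(); start++) {
          int maxEnd = Math.min(sizes.size(), start + maxFiles);
          for (int end = start + minFiles; end <= maxEnd; end++) {
            List<Long> cand = sizes.subList(start, end);
            if (!inRatio(cand, ratio)) continue;
            long total = cand.stream().mapToLong(Long::longValue).sum();
            // Prefer more files; among equal counts, prefer less I/O.
            if (cand.size() > best.size()
                || (cand.size() == best.size() && total < bestTotal)) {
              best = new ArrayList<>(cand);
              bestTotal = total;
            }
          }
        }
        return best;
      }

      // Every file must be at most `ratio` times the combined size of the others.
      static boolean inRatio(List<Long> cand, double ratio) {
        long total = cand.stream().mapToLong(Long::longValue).sum();
        for (long s : cand) {
          if (s > ratio * (total - s)) return false;
        }
        return true;
      }
    }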
Examples and results
• In the previous example:
[Diagram: the exploring selection rejects the dissimilar files ("not in ratio") and instead picks a longer run of similar files ("in ratio, may be valid... but this has more files!")]
• On bulk loads of random size, depending on settings:
  – loses only 0-10% efficiency in reducing file count,
  – while reducing I/O 3-10 times
• Best results with ratio 1.3-1.4 and 4 minimum files
Enabling different implementations
Making compactions pluggable
• To allow further improvements, the code should be easy to replace; that was not the case as of 0.94
• Initial implementation, part of HBASE-7055 and HBASE-7516: make just the selection pluggable
• This is called a "policy" (CompactionPolicy)
• Example usages:
  – the exploring selection, mentioned previously
  – tier-based selection (a port from Facebook)
Making compactions more pluggable
• Other potential improvements are more involved and need other things to change (HBASE-7678)
• The meta-structure of the files (StoreFileManager, HBASE-7603)
  – Group files by some key-, time-, or other scheme
  – In memory/metadata only; filesystem structure or file format changes would be a compatibility nightmare
  – Example: LevelDB-style compactions, stripes
• The Compactor that compacts the files (Compactor)
  – Example: large object store, levels, stripes
• The parts can be replaced together or separately (StoreEngine; see the simplified sketch below)
  – E.g. a level compactor only makes sense with a level-aware store
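As a rough mental model of how these pieces fit together, here is a heavily simplified sketch; the real interfaces in org.apache.hadoop.hbase.regionserver carry much more state and context, so treat the signatures below as illustrative assumptions, not the actual API:

    import java.util.Collection;
    import java.util.List;

    // Heavily simplified shapes of the pluggable pieces; not the real signatures.
    class StoreFile { long size; byte[] firstKey; byte[] lastKey; }

    interface CompactionPolicy {    // which files to compact (the "policy")
      List<StoreFile> selectCompaction(List<StoreFile> candidates, boolean isMajor);
    }

    interface Compactor {           // how the selected files get rewritten
      List<String> compact(List<StoreFile> filesToCompact);
    }

    interface StoreFileManager {    // the in-memory meta-structure of a store's files
      Collection<StoreFile> getFilesForScan(byte[] startRow, byte[] stopRow);
    }

    interface StoreEngine {         // bundles the three so they stay consistent
      CompactionPolicy policy();
      Compactor compactor();
      StoreFileManager fileManager();
    }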
Enabling compaction tuning
• Different tables (or even column families) have different data and access patterns
• Compactions already have a large number of knobs
• Starting with 0.96, they can be configured at the table/CF level (HBASE-7236)
• Example from the shell (a Java equivalent follows):

    alter 'table1', CONFIGURATION =>
      {'hbase.hstore.engine.class' =>
         'org.apache.hadoop.hbase.regionserver.StripeStoreEngine', ... }
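The same override applied programmatically; a sketch against the 0.96-era admin API (assumes a reachable cluster with hbase-site.xml on the classpath; the table name is just an example):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class PerTableCompactionConfig {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
          TableName table = TableName.valueOf("table1");
          HTableDescriptor htd = admin.getTableDescriptor(table);
          // Per-table override (HBASE-7236): use the stripe store engine.
          htd.setConfiguration("hbase.hstore.engine.class",
              "org.apache.hadoop.hbase.regionserver.StripeStoreEngine");
          admin.modifyTable(table, htd);
        } finally {
          admin.close();
        }
      }
    }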
Algorithms for various scenarios
Key ways to improve compactions
• Read from fewer files
  – Separate files by row key, version, time, etc.
  – Allows a large number of files to be present, uncompacted
• Don't compact data you don't need to compact
  – For example, old data in OpenTSDB-like systems
  – Obviously results in less I/O
• Make compactions smaller
  – Without too much I/O amplification or too many files
  – Results in fewer compaction-related outages
• HBase works better with few large regions; however, large compactions cause unavailability
How to avoid large compactions
• LevelDB compactions
  – Files live on multiple levels
  – Files on each level have non-overlapping row-key ranges
  – ...except level 0 (L0), where memstore flushes go
  – Compact overlapping subsets of two levels; data moves up a level
  – Most read requests need only one file per level, plus all of L0 (sketched below)
• Small compactions, few files per read, however...
  – More I/O, as the data moves from level to level
  – No major compactions; dropping deletes is not trivial
  – Messes up file ordering due to file boundary overlaps between levels, so the default store cannot read the files correctly
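A toy model of the read path described above, assuming string keys and made-up file names: every overlapping L0 file may hold the key, while each deeper level contributes at most one file.

    import java.util.ArrayList;
    import java.util.List;

    /** Toy model of a LevelDB-style lookup; illustrative only. */
    public class LeveledReadSketch {
      static class FileRange {
        final String name, startKey, endKey;
        FileRange(String name, String startKey, String endKey) {
          this.name = name; this.startKey = startKey; this.endKey = endKey;
        }
        boolean contains(String key) {
          return startKey.compareTo(key) <= 0 && key.compareTo(endKey) <= 0;
        }
      }

      /** levels.get(0) is L0 (ranges may overlap); deeper levels never overlap. */
      static List<String> filesToRead(List<List<FileRange>> levels, String key) {
        List<String> result = new ArrayList<>();
        // Any number of L0 files can contain the key.
        for (FileRange f : levels.get(0)) {
          if (f.contains(key)) result.add(f.name);
        }
        // On each deeper level, at most one file's range can contain the key.
        for (int level = 1; level < levels.size(); level++) {
          for (FileRange f : levels.get(level)) {
            if (f.contains(key)) { result.add(f.name); break; }
          }
        }
        return result;
      }
    }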
Stripe compactions (HBASE-7667)
• Somewhat like LevelDB: partition the keys inside each region/store
• But only 1 level (plus an optional L0)
• Compared to regions, the partitioning is more flexible
  – The default is a number of ~equal-sized stripes
• To read, just read the relevant stripe(s), plus L0 if present (see the sketch below)
[Diagram: a region spanning row keys ccc to iii, split into stripes at eee and ggg, with L0 files on top; a get of 'hbase' touches only the ggg-iii stripe plus L0]
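A small sketch of that lookup, reusing the boundary keys from the diagram; the stripe and file names are made up:

    import java.util.ArrayList;
    import java.util.List;

    /** Toy model of picking the files for a get in a striped store. */
    public class StripeLookupSketch {
      /**
       * boundaries has N+1 keys delimiting N stripes; stripeFiles.get(i) holds
       * the files of stripe i; l0Files are the unpartitioned flush/bulk-load files.
       */
      static List<String> filesForGet(List<String> boundaries,
                                      List<List<String>> stripeFiles,
                                      List<String> l0Files, String rowKey) {
        // Walk the boundaries to find the single stripe containing the key.
        int stripe = 0;
        while (stripe + 1 < stripeFiles.size()
               && rowKey.compareTo(boundaries.get(stripe + 1)) >= 0) {
          stripe++;
        }
        List<String> result = new ArrayList<>(stripeFiles.get(stripe));
        result.addAll(l0Files);  // L0 is unpartitioned, so it is always read
        return result;
      }

      public static void main(String[] args) {
        List<String> boundaries = List.of("ccc", "eee", "ggg", "iii");  // 3 stripes
        List<List<String>> stripes = List.of(
            List.of("s0-f1"), List.of("s1-f1", "s1-f2"), List.of("s2-f1"));
        // 'hbase' falls into [ggg, iii) -> stripe 2 files plus L0.
        System.out.println(filesForGet(boundaries, stripes, List.of("L0-f1"), "hbase"));
      }
    }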
Stripe compactions: writes
• Data is flushed from the MemStore into several files
• Each stripe compacts separately most of the time
[Diagram: a MemStore flush fanning out into per-stripe HFiles on HDFS]
Stripe compactions: other notes
• Why L0?
  – Bulk-loaded files go to L0
  – Flushes can also go into single L0 files (to avoid tiny files)
  – Several L0 files are then compacted into striped files
• Deletes can be dropped when compacting one entire stripe plus L0
  – No need for major compactions, ever
• Two stripes can be compacted together to rebalance them if unbalanced
  – Very rare, however; unbalanced stripes are not a huge deal
• Boundaries could be used to improve region splits in the future
Stripe compactions: performance
• EC2, c1.xlarge, preload; then measure random-read performance
  – LoadTestTool with deletes and overwrites; measure random reads
[Chart: random gets per second vs. test time (sec), 30-second moving average, default vs. stripe compactions]
Stripe compactions: performance
• At the individual-request level, median latency is the same (1.6 ms)
• However, the 90th percentile improves ~15% (~13 ms to ~11 ms)
• The 99th percentile improves ~20% (~60 ms to ~47 ms)
• While also serving ~18% more reads in ~4% less time
[Chart: latency CDF over 0-20 ms, default vs. stripes (12)]
Other stripe boundary schemes
• For sharded sequential keys (like OpenTSDB), compacting old data again and again is not useful
• What if stripes split dynamically as they grow?
  – If data is sequential, only a subset of stripes will grow
  – Non-growing stripes never need to be compacted
[Diagram: the growing stripe at the end of the rowkey space gets "too big!" and splits; the older stripes will now hardly ever compact]
Others in development: tier-based
• Tier-based compaction selection (HBASE-7055; originally developed at Facebook)
  – Old data may not be read as frequently; new data may all be in cache, so the number of files does not matter; etc.
  – So, during selection, dynamically arrange files into tiers, and apply different rules (ratios, etc.) to each
• Simple example (only 2 tiers; sketched below):
[Diagram: a selection spanning old and new files "looks like a good selection", but if old files are rarely read, it is better to compact the new files first]
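A tiny sketch of the two-tier idea, with an assumed age cutoff; the caller would then run a ratio-based selection separately per tier, with a different ratio for each:

    import java.util.ArrayList;
    import java.util.List;

    /** Toy two-tier split of candidate files by age; illustrative only. */
    public class TierSketch {
      static class HFileInfo {
        final long size, ageMs;
        HFileInfo(long size, long ageMs) { this.size = size; this.ageMs = ageMs; }
      }

      /** Files newer than cutoffMs land in tier 0, older ones in tier 1. */
      static List<List<HFileInfo>> tiers(List<HFileInfo> files, long cutoffMs) {
        List<HFileInfo> newer = new ArrayList<>(), older = new ArrayList<>();
        for (HFileInfo f : files) {
          (f.ageMs < cutoffMs ? newer : older).add(f);
        }
        return List.of(newer, older);
      }
    }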
Others in development, or considered
• Large object store (HBASE-7949)
• Partitioning files based on versions, timestamps, etc.
• LevelDB compactions (HBASE-7519)
• ...more to come?
Resources
• The HBase book's compaction section contains a lot of detail on tuning the default selection
  – http://hbase.apache.org/book.html#compaction
  – There are other knobs that may be poorly documented
• JIRAs tracking the compaction work:
  – https://issues.apache.org/jira/browse/HBASE/component/12319905
• Design and configuration documentation for the new compactions is attached to the JIRAs
  – Tier-based: HBASE-7055; stripe: HBASE-7667
  – The book will be updated as things make it into trunk
Summary
• Compactions are a way to reduce the number of files to read when getting data
• Compactions are expensive, so efficiency is important
• HBase 0.96 compactions:
  – contain automatic improvements to the default algorithm
  – are easier to improve, build upon, and configure
• Work is in progress to improve compactions for Big Data
• Scenario-specific compaction algorithms are also possible, and being worked on
Q & A
Editor's Notes
Example of CF delete processing