10. HBase has a lot of activity.
Opened in last year: ~2500
Fixed in last year: 1638
Total JIRAs, all time: ~8700
("Fixed" = JIRA query: resolved >= 2012-05-23 AND resolved <= 2013-05-24 AND resolution in (Fixed, Implemented))
36. Top 10 "big topics":
Snapshots: 82
Replication: 58
Compaction: 54
Metrics: 53
Assignment: 44
Hadoop 2: 37
Protobufs: 34
Security: 28
Bulk Loading: 23
Modularization: 21
Total: 416 (some overlap; 305 functional, 111 non-functional)
Let's dive into the top 3.
37. Snapshots
The gist: Take advantage of the fact that files in HDFS are already immutable
to get fast "snapshots" of tables that you can roll back to. This is pretty tricky
when you consider that HBase is a distributed system and you want a point-in-time
view. (A usage sketch follows after this slide.)
Main JIRAs:
• HBASE-6055 - Offline Snapshots: Take a snapshot after first disabling
the table
• HBASE-7290 - Online Snapshots: Take a snapshot of a live, running
table by splitting the memstore.
• HBASE-7360 - Backport Snapshots to 0.94
Top contributors: Matteo B, Jonathan H, Ted Y, Jesse Y, Enis S
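To give a flavor of the client API (a minimal sketch against the HBaseAdmin snapshot methods these JIRAs added; the table and snapshot names are made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class SnapshotDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    // Take a snapshot of a (hypothetical) table. With online snapshots
    // (HBASE-7290) the table no longer has to be disabled first.
    admin.snapshot("mytable-snap-20130524", "mytable");
    // Roll back: restoring does require the table to be disabled.
    admin.disableTable("mytable");
    admin.restoreSnapshot("mytable-snap-20130524");
    admin.enableTable("mytable");
    // Or materialize the snapshot as a brand-new table.
    admin.cloneSnapshot("mytable-snap-20130524", "mytable-copy");
    admin.close();
  }
}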
40. Replication
The gist: use asynchronous WAL shipping to replay all edits on a different
(possibly remote) cluster, for Disaster Recovery or other operational purposes.
(A setup sketch follows after this slide.)
Main JIRAs:
• HBASE-1295 - Multi-data-center replication: Top-level issue. The real meat
was actually implemented in 0.90 (Jan 2010), so not a new feature.
• HBASE-8207 - Data loss when machine name contains "-". Doh.
• HBASE-2611 - Handle RS failure while processing failure of another:
This was an ugly issue that took a while to fix. Corner cases matter!
Top contributors: J-D Cryans, Himanshu V, Chris T, Devaraj D, Lars H
Plug: stick around for the next talk, where Chris Trezzo tweets about Replication!
Theme: corner cases!
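To make the gist concrete, here's roughly what wiring up replication looks like from a client (a minimal sketch using ReplicationAdmin; the peer id and hostname are made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;

public class ReplicationDemo {
  public static void main(String[] args) throws Exception {
    // Assumes hbase.replication=true in hbase-site.xml on both clusters,
    // and REPLICATION_SCOPE => 1 on the column families to be shipped.
    Configuration conf = HBaseConfiguration.create();
    ReplicationAdmin repAdmin = new ReplicationAdmin(conf);
    // A peer is identified by the remote cluster's ZK quorum, client
    // port, and znode parent; WAL edits get shipped there asynchronously.
    repAdmin.addPeer("1", "zk1.remote.example.com:2181:/hbase");
    repAdmin.close();
  }
}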
43. Compaction
The gist: In an LSM store, if you don't compact the store files, you end up with
lots of 'em, which makes reads slower. Not a new feature, just improvements.
Main JIRAs:
• HBASE-7516 - Make compaction policy pluggable: allow users to
customize which files are included for compaction. (A toy sketch of such a
policy follows after this slide.)
• HBASE-2231 - Compaction events should be written to HLog: deal with
the case when regions have been reassigned since compaction started.
Corner case!
Look for cool stuff to come in the next year with tiered (aka "leveled")
compaction policies, so you could, for example, put "recent" data into
smaller files that'll be hit frequently, and the older "long tail" data into bigger
files that'll be hit less frequently.
Top contributors: Sergey S, Elliott C, Jimmy X, stack, Matteo B, Jesse Y
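Since HBASE-7516 makes the file-selection logic pluggable, here's a toy illustration of the kind of decision such a policy makes (NOT the real HBase interface, just the classic ratio-based idea in miniature):

import java.util.ArrayList;
import java.util.List;

public class ToyCompactionPolicy {
  // Stand-in for a store file: all this toy policy needs is its size.
  static class FileInfo {
    final String name;
    final long sizeBytes;
    FileInfo(String name, long sizeBytes) { this.name = name; this.sizeBytes = sizeBytes; }
  }

  // Ratio-based selection in miniature: walking oldest-to-newest, pick a
  // file if it's no bigger than (sum of the newer files' sizes) * ratio.
  // Small files get folded together; a huge old file is left alone.
  static List<FileInfo> select(List<FileInfo> files, double ratio) {
    List<FileInfo> picked = new ArrayList<FileInfo>();
    for (int i = 0; i < files.size(); i++) {
      long sumOfNewer = 0;
      for (int j = i + 1; j < files.size(); j++) {
        sumOfNewer += files.get(j).sizeBytes;
      }
      if (files.get(i).sizeBytes <= sumOfNewer * ratio) {
        picked.add(files.get(i));
      }
    }
    return picked;
  }
}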
51. Top 10 "big topics":
Snapshots: (covered above)
Replication: (covered above)
Compaction: (covered above)
Metrics: move to metrics2.
Assignment: it's tricky, yo.
Hadoop 2: support it for HA NN.
Protobufs: wire compatibility!
Security: Kerberos, in the core.
Bulk Loading: pop in an HFile (sketch below).
Modularization: break up the code.
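"Pop in an HFile" in client terms (a minimal sketch using the LoadIncrementalHFiles tool; the table name and HFile directory are made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");
    // Point at a directory of HFiles (e.g. the output of a MapReduce job
    // using HFileOutputFormat). The files are adopted directly into the
    // regions' directories -- no WAL, no memstore, no flush.
    new LoadIncrementalHFiles(conf).doBulkLoad(new Path("/tmp/hfiles"), table);
    table.close();
  }
}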
66. Reenactment ...
Feb 2012:
• Max Lapan: Hey guys, here's a cool patch!
• Nicolas S: This should be an app detail, not in core.
• Ted Yu: I fixed your typos while you were asleep!
• Nick: Not enough utest coverage to put this in core.
• Max: Agree, but I can't find any other way to do this.
• Kannan: Why don't you try 2-phase w/ multiget?
• Max: OK, ok, I'll try it.
71. Reenactment ...
May 2012:
• Max: Ran in prod w/ 160-node 300TB cluster. Runs like
a champ, 20x the 2-phase approach. Boom.
• Ted: Holy guacamole that's a big patch.
July 2012:
• Max: Anybody there? Here's a perf test.
• Ted: Cool!
Oct 2012:
• Anoop: A coprocessor would make this faster.
• Max: We're on 0.90 and can't use CP.
• Stack: -1, FB guys are right about needing more tests.
74. Reenactment ...
Dec 2012:
• Sergey: I'm on it guys. Rebased on trunk, added the
ability to configure, and integration tests.
• Stack: Still not enough tests. Some new code even
when disabled? Who's reviewing? Go easy lads.
• Ram: I'm on it. Couple improvements, but looks good.
78. Reenactment ...
Dec 31st, 2012 (while everyone else is partying):
• Lars: Ooh, let's pull this into 0.94! I made a patch.
• Lars: ... hold the phone! This slows down a tight loop
case (even when disabled) by 10-20%.
• Ted: I optimized the disabled path.
• Lars: Sweet.
80. Reenactment ...
Jan, 2013:
• Ram: +1, let's commit.
• Ted: Committed to trunk
• Lars: Committed to 0.94.
And there was much rejoi....
87. Reenactment ...
Feb, 2013:
• Dave Latham: Stop the presses! This breaks rolling
upgrade for me b/c I directly implement Filter.
• All: Crapface.
• Stack: We should back this out. SOMA pride!!
Also, Dave is running world's biggest HBase
cluster, FYI.
• Lars: Filter is internal. Extend FilterBase maybe?
• Ted: If we take it OUT now, it's also a regression.
• Dave: Chill dudes, we can fix by changing our client.
• All: Uhh ... change it? Keep it? Change it?
Resolution: Change it (HBASE-7920)
88. Moral of the story?
• JIRA comments are a great way to learn.
• Do the work to keep new features from
destabilizing core code paths.
• Be careful when changing interfaces.
90. HBASE-4676
Interesting because: most watched (42 watchers), and biggest patch.
Prefix Compression - Trie data block encoding
What? An optimization to compress what we store for key/value prefixes.
(A toy sketch of the core idea follows after this slide.)
How? ~8000 new lines added! (Originally written in a separate git repo.)
At SFDC, James Taylor reported seeing 5-15x improvement in
Phoenix, with no degradation in scan performance. Woot!
By: Matt Corgan
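The core idea in miniature (a toy sketch, NOT Matt Corgan's trie implementation): sorted keys share long prefixes, so store each key as (length of prefix shared with the previous key, remaining suffix):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ToyPrefixEncoder {
  static List<String> encode(List<String> sortedKeys) {
    List<String> out = new ArrayList<String>();
    String prev = "";
    for (String key : sortedKeys) {
      // Count how many leading chars this key shares with the previous one.
      int shared = 0;
      int max = Math.min(prev.length(), key.length());
      while (shared < max && prev.charAt(shared) == key.charAt(shared)) {
        shared++;
      }
      out.add(shared + ":" + key.substring(shared));
      prev = key;
    }
    return out;
  }

  public static void main(String[] args) {
    // "row-000123" right after "row-000122" encodes as just "9:3".
    // Prints: [0:row-000122, 9:3, 6:1000]
    System.out.println(encode(Arrays.asList("row-000122", "row-000123", "row-001000")));
  }
}

The trie encoding goes further (it can seek and compare without fully decoding), but the prefix sharing above is where the space savings come from.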
92. HBASE-7403
Interesting because: It's a cool feature. And went through 33 revisions!
Online Merge
What? The ability to merge regions online and transactionally, just like we
do with splitting regions.
How? The master moves the regions together (onto the same regionserver)
and sends a MERGE RPC to the regionserver. The merge happens in a transaction.
Example:
RegionMergeTransaction mt =
    new RegionMergeTransaction(conf, parent, midKey);
if (!mt.prepare(services)) return;
try {
  mt.execute(server, services);
} catch (IOException ioe) {
  try {
    mt.rollback(server, services);
    return;
  } catch (RuntimeException e) {
    myAbortable.abort("Failed merge, abort");
  }
}
By: Chunhui Shen
94. HBASE-1212
Interesting because: Oldest issue (Feb, 2009) resolved w/ patch this year.
Merge tool expects regions to have diff seq ids
What? With the aggregated hfile format, the sequence id is written into the file,
not alongside it. In the rare case where two store files have the same sequence
id and we want to merge the regions, it wouldn't work.
How? In conjunction with HBASE-7287, removes the code that did this:
--- HRegion.java
  List<StoreFile> srcFiles = es.getValue();
- if (srcFiles.size() == 2) {
-   long seqA = srcFiles.get(0).getMaxSequenceId();
-   long seqB = srcFiles.get(1).getMaxSequenceId();
-   if (seqA == seqB) {
-     // Can't have same sequenceid since on open store, this is what
-     // distinguishes the files (see the map of stores how its keyed by
-     // sequenceid).
-     throw new IOException("Files have same sequenceid: " + seqA);
-   }
- }
By: Jean-Marc Spaggiari
98. HBASE-7801
Interesting because: has durability implications worth blogging about.
Allow a deferred sync option per Mutation
What? Previously, you could only turn WAL writing off completely, per table
or edit. Now you can choose "none", "async", "sync" or "fsync".
How?
+++ Mutation.java
+ public void setDurability(Durability d) {
+   setAttribute(DURABILITY_ID_ATTR, Bytes.toBytes(d.ordinal()));
+   this.writeToWAL = d != Durability.SKIP_WAL;
+ }
+++ HRegion.java
+ private void syncOrDefer(long txid, Durability durability) {
+   switch (durability) { ...
+     case SKIP_WAL: // nothing to do
+       break;
+     case ASYNC_WAL: // defer the sync, unless we globally can't
+       if (this.deferredLogSyncDisabled) { this.log.sync(txid); }
+       break;
+     case SYNC_WAL:
+     case FSYNC_WAL:
+       // sync the WAL edit (SYNC and FSYNC treated the same for now)
+       this.log.sync(txid);
+       break;
+   }
By: Lars Hofhansl
Wha ... ? Oh. See HADOOP-6313.
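What this looks like from the client side (a minimal sketch; the table, row, and column names are made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class DurabilityDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");
    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    // Per-mutation durability: write the WAL edit, but defer the sync
    // to HDFS (trades a little durability for latency).
    put.setDurability(Durability.ASYNC_WAL);
    table.put(put);
    table.close();
  }
}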
100. HBASE-4072
Interesting because: Biggest facepalm.
Disable reading zoo.cfg files
What? Used to be, if two systems both used ZK and one needed to override
values, the zoo.cfg values would always win. Caused a lot of goofy bugs in
HBase utils like import/export, and in integration with other systems like Flume.
How? Put reading it behind a config that defaults to false.
+ if (conf.getBoolean(HBASE_CONFIG_READ_ZOOKEEPER_CONFIG, false)) {
+   LOG.warn(
+     "Parsing zoo.cfg is deprecated. Place all ZK related HBase " +
+     "configuration under the hbase-site.xml");
By: Harsh J
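Where those values live instead (a small sketch; hostnames are made up): ZK settings go in HBase's own configuration, and raw ZooKeeper properties pass through with the "hbase.zookeeper.property." prefix:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ZkConfigDemo {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Normally set in hbase-site.xml; shown in code for brevity.
    conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com");
    // Equivalent of clientPort=2181 in the old zoo.cfg:
    conf.set("hbase.zookeeper.property.clientPort", "2181");
  }
}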
103. HBASE-3171
Interesting because: Only HBase JIRA with a Downfall parody.
Drop ROOT, store META location in ZooKeeper
What? The ROOT table just tells you where the META table is. That's silly.
How? Pretty big patch (59 files changed, 580 insertions(+), 1749 deletions(-))
By: J-D Cryans
http://www.youtube.com/watch?v=tuM9MYDssvg
105. HBASE-6868
Interesting because: tiny fix, but marked as a blocker, and sunk 0.94.2 RC1.
Avoid double checksumming blocks
What? Since HBASE-5074 (checksums), sometimes we double-checksum.
How? A 3-line patch to default to skipping checksums if not on a local fs.
+++ HFileSystem.java
  // Incorrect data is read and HFileBlocks won't be able to read
  // their header magic numbers. See HBASE-5885
  if (useHBaseChecksum && !(fs instanceof LocalFileSystem)) {
+   conf = new Configuration(conf);
+   conf.setBoolean("dfs.client.read.shortcircuit.skip.checksum", true);
    this.noChecksumFs = newInstanceFileSystem(conf); ...
+++ HRegionServer.java
  // If hbase checksum verification enabled, automatically
  // switch off hdfs checksum verification.
  this.useHBaseChecksum = conf.getBoolean(
-   HConstants.HBASE_CHECKSUM_VERIFICATION, true);
+   HConstants.HBASE_CHECKSUM_VERIFICATION, false);
By: Lars Hofhansl
106. What's it all mean?
Active codebase. Good!
Complexity increasing. Bad!
Credit: https://www.ohloh.net/p/hbase