HBase for Architects

HBase can be an intimidating beast for someone considering its adoption. For what kinds of workloads is it well suited? How does it integrate into the rest of my application infrastructure? What are the data semantics upon which applications can be built? What are the deployment and operational concerns? In this talk, I'll address each of these questions in turn. As supporting evidence, both high-level application architecture and internal details will be discussed. This is an interactive talk: bring your questions and your use-cases!

  1. Apache HBase for Architects
     Nick Dimiduk, Member of Technical Staff, HBase
     Seattle Technical Forum, 2013-05-15
     © Hortonworks Inc. 2011
  2. Architecting the Future of Big Data
  3. Agenda
     • Background (how did we get here?)
     • High-level Architecture (where are we?)
     • Anatomy of a RegionServer (how does this thing work?)
     • TL;DR (what did we learn?)
     • Resources (where do we go from here?)
  4. Background
  5. Apache Hadoop in Review
     • Apache Hadoop Distributed Filesystem (HDFS)
       – Distributed, fault-tolerant, throughput-optimized data storage
       – Uses a filesystem analogy, not structured tables
       – “The Google File System,” 2003, Ghemawat et al.
         http://research.google.com/archive/gfs.html
     • Apache Hadoop MapReduce (MR)
       – Distributed, fault-tolerant, batch-oriented data processing
       – Line- or record-oriented processing of the entire dataset *
       – “[Application] schema on read”
       – “MapReduce: Simplified Data Processing on Large Clusters,” 2004, Dean and Ghemawat
         http://research.google.com/archive/mapreduce.html
     * For more on writing MapReduce applications, see “MapReduce Patterns, Algorithms, and Use Cases,”
       http://highlyscalable.wordpress.com/2012/02/01/mapreduce-patterns/
  6. So what is HBase anyway?
     • BigTable paper from Google, 2006, Chang et al.
       – “Bigtable is a sparse, distributed, persistent multi-dimensional sorted map.”
       – http://research.google.com/archive/bigtable.html
     • Key Features:
       – Distributed storage across a cluster of machines
       – Random, online read and write data access
       – Schemaless data model (“NoSQL”)
       – Self-managed data partitions
  7. High-level Architecture
  8. Logical Architecture
     Distributed, persistent partitions of a BigTable
     [Diagram: Table A partitioned into Regions 1–4, distributed across Region Servers alongside regions of other tables]
     Legend:
     - A single table is partitioned into Regions of roughly equal size.
     - Regions are assigned to Region Servers across the cluster.
     - Region Servers host roughly the same number of Regions.
  9. Physical Architecture
     Distribution and Data Path
     [Diagram: HBase clients (Java apps, HBase Shell, REST/Thrift gateway) talking to RegionServers collocated with HDFS DataNodes; HMaster beside the NameNode; a ZooKeeper ensemble]
     Legend:
     - An HBase RegionServer is collocated with an HDFS DataNode.
     - HBase clients communicate directly with Region Servers for sending and receiving data.
     - HMaster manages Region assignment and handles DDL operations.
     - Online configuration state is maintained in ZooKeeper.
     - HMaster and ZooKeeper are NOT involved in the data path.
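
To make the client's view of this layout concrete, here is a minimal sketch that bootstraps a connection from the ZooKeeper quorum and prints which RegionServer hosts each region of a table. It uses the current Java client API (ConnectionFactory and friends), which postdates the 0.94-era client this deck was written against; the quorum hosts and the table name "A" are placeholders.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionLayoutSketch {
      public static void main(String[] args) throws IOException {
        // The client only needs the ZooKeeper quorum to bootstrap; region
        // locations are discovered from there and cached client-side.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "zk1,zk2,zk3");  // placeholder hosts

        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("A"))) {
          // Each region is hosted by exactly one RegionServer; reads and
          // writes go straight to that server, not through HMaster.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getRegionNameAsString()
                + " -> " + loc.getServerName());
          }
        }
      }
    }
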
  10. Logical Data Model
      A sparse, multi-dimensional, sorted map
      [Diagram: example Table A with values addressed by rowkey, column family, column qualifier, and timestamp]
      Legend:
      - Rows are sorted by rowkey.
      - Within a row, values are located by column family and qualifier.
      - Values also carry a timestamp; there can be multiple versions of a value.
      - Within a column family, data is schemaless. Qualifiers and values are treated as arbitrary bytes.
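
A minimal sketch of this addressing model with the modern Java client, assuming the Connection from the previous sketch and a table "A" that already has column families cf1 and cf2; the rowkey, qualifiers, values, and explicit timestamp are placeholders.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    class DataModelSketch {
      static void writeAndRead(Connection conn) throws IOException {
        try (Table table = conn.getTable(TableName.valueOf("A"))) {
          byte[] row = Bytes.toBytes("row1");   // placeholder rowkey
          byte[] cf1 = Bytes.toBytes("cf1");
          byte[] q1  = Bytes.toBytes("q1");     // arbitrary qualifier

          // A cell is addressed by (rowkey, column family, qualifier, timestamp).
          // With no explicit timestamp, the server assigns the current time.
          table.put(new Put(row)
              .addColumn(cf1, q1, Bytes.toBytes("hello"))
              .addColumn(cf1, Bytes.toBytes("q2"), 1368394583000L, Bytes.toBytes("world")));

          // Qualifiers and values are plain bytes; interpretation is up to the client.
          Result result = table.get(new Get(row).addColumn(cf1, q1));
          System.out.println(Bytes.toString(result.getValue(cf1, q1)));
        }
      }
    }
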
  11. Anatomy of a RegionServer
  12. Storage Machinery
      Implementing the data model
      [Diagram: a RegionServer containing one HLog (WAL), one BlockCache, and multiple HRegions; each HRegion holds one HStore per column family, each HStore holding a MemStore and multiple StoreFiles (HFiles) on HDFS]
      Legend:
      - A RegionServer contains a single WAL, single BlockCache, and multiple Regions.
      - A Region contains multiple Stores, one for each Column Family.
      - A Store consists of multiple StoreFiles and a MemStore.
      - A StoreFile corresponds to a single HFile.
      - HFiles and the WAL are persisted on HDFS.
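
As a rough illustration of how the data model maps onto this machinery, the sketch below (again using the modern Admin API; the table name and family names are placeholders) declares two column families, each of which becomes its own Store in every region, and then forces a flush so MemStore contents are written out as new HFiles.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    class StorageMachinerySketch {
      static void createAndFlush(Connection conn) throws IOException {
        TableName name = TableName.valueOf("A");  // placeholder table name
        try (Admin admin = conn.getAdmin()) {
          // Each declared column family becomes its own HStore
          // (MemStore + StoreFiles) inside every region of the table.
          admin.createTable(TableDescriptorBuilder.newBuilder(name)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf2"))
              .build());

          // Writes land in the WAL and each Store's MemStore first; an
          // explicit flush persists the MemStores as new HFiles on HDFS.
          admin.flush(name);
        }
      }
    }
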
  13. TL;DR
  14. For what kinds of workloads is it well suited?
      • It depends on how you tune it, but…
      • HBase is good for:
        – Large datasets
        – Sparse datasets
        – Loosely coupled (denormalized) records
        – Lots of concurrent clients
      • Try to avoid:
        – Small datasets (unless you have lots of them)
        – Highly relational records
        – Schema designs requiring transactions *
      * Transactions might not be as necessary as you think; see “Eric Brewer on why banks are BASE not ACID,”
        http://highscalability.com/blog/2013/5/1/myth-eric-brewer-on-why-banks-are-base-not-acid-availability.html
      ** Or maybe not: “We believe it is better to have application programmers deal with performance problems due to overuse of transactions as bottlenecks arise, rather than always coding around the lack of transactions.” – Google Spanner paper, http://research.google.com/archive/spanner.html
  15. How does it integrate with my infrastructure?
      • Horizontally scale application data
        – Highly concurrent, read/write access
        – Consistent, persisted shared state
        – Distributed online data processing via Coprocessors (experimental)
      • Gateway between online services and offline storage/analysis
        – Staging area to receive new data
        – Serve online, indexed “views” on datasets from HDFS
        – Glue between batch (HDFS, MR1) and online (CEP, Storm) systems
  16. What data semantics does it provide?
      • GET, PUT, DELETE key-value operations
      • SCAN for queries
      • INCREMENT, CAS server-side atomic operations
      • Row-level write atomicity
      • MapReduce integration
        – Online API (today)
        – Bulkload (today)
        – Snapshots (coming)
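
Roughly, in the current Java client API these operations look like the sketch below; the table name, family, rowkeys, and qualifiers are placeholders, and checkAndMutate is today's spelling of the CAS primitive the slide mentions.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    class DataSemanticsSketch {
      static void demo(Connection conn) throws IOException {
        byte[] cf = Bytes.toBytes("cf1");  // placeholder family
        try (Table table = conn.getTable(TableName.valueOf("A"))) {
          // SCAN: rows come back in sorted rowkey order within [start, stop).
          Scan scan = new Scan()
              .withStartRow(Bytes.toBytes("row1"))
              .withStopRow(Bytes.toBytes("row9"));
          try (ResultScanner scanner = table.getScanner(scan)) {
            for (Result r : scanner) {
              System.out.println(Bytes.toString(r.getRow()));
            }
          }

          // INCREMENT: server-side atomic counter update, no client-side
          // read-modify-write round trip.
          table.incrementColumnValue(Bytes.toBytes("row1"), cf,
              Bytes.toBytes("hits"), 1L);

          // CAS: write only if the current value matches the expected one.
          table.checkAndMutate(Bytes.toBytes("row1"), cf)
              .qualifier(Bytes.toBytes("state"))
              .ifEquals(Bytes.toBytes("pending"))
              .thenPut(new Put(Bytes.toBytes("row1"))
                  .addColumn(cf, Bytes.toBytes("state"), Bytes.toBytes("done")));
        }
      }
    }
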
  17. What about operational concerns?
      • Provision hardware with more spindles/TB
      • Balance memory and IO for reads
        – Contention between random and sequential access
        – Configure block size, BlockCache, compression, codecs based on access patterns
        – Additional resources
          – “HBase: Performance Tuners,” http://labs.ericsson.com/blog/hbase-performance-tuners
          – “Scanning in HBase,” http://hadoop-hbase.blogspot.com/2012/01/scanning-in-hbase.html
      • Balance IO for writes
        – Configure C1 (compactions, region size, compression, pre-splits, &c.) based on write pattern
        – Balance IO contention between maintaining C1 and serving reads
        – Additional resources
          – “Configuring HBase Memstore: What You Should Know,” http://blog.sematext.com/2012/07/16/hbase-memstore-what-you-should-know/
          – “Visualizing HBase Flushes And Compactions,” http://www.ngdata.com/visualizing-hbase-flushes-and-compactions/
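
Several of these knobs are set per column family at table-creation time. The sketch below, using the current Admin API, is illustrative only and not a tuning recommendation: the block size, compression codec (Snappy must be available on the cluster), version count, and pre-split boundaries are all placeholder choices.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.util.Bytes;

    class TuningSketch {
      static void createTunedTable(Connection conn) throws IOException {
        try (Admin admin = conn.getAdmin()) {
          // Per-family settings drive read IO: smaller blocks favor random
          // Gets, larger blocks favor Scans; compression trades CPU for IO.
          TableDescriptorBuilder table =
              TableDescriptorBuilder.newBuilder(TableName.valueOf("A"))
                  .setColumnFamily(ColumnFamilyDescriptorBuilder
                      .newBuilder(Bytes.toBytes("cf1"))
                      .setBlocksize(16 * 1024)
                      .setCompressionType(Compression.Algorithm.SNAPPY)
                      .setMaxVersions(3)
                      .build());

          // Pre-splitting spreads write load across RegionServers from the
          // start instead of waiting for organic region splits.
          byte[][] splits = {Bytes.toBytes("g"), Bytes.toBytes("n")};
          admin.createTable(table.build(), splits);
        }
      }
    }
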
  18. Resources
  19. Join the Community!
      • hbase.apache.org
        – blogs.apache.org/hbase/
      • Mailing lists
        – hbase.apache.org/mail-lists.html
        – user@hbase.apache.org
      • IRC
        – irc.freenode.net #hbase
      • JIRA
        – issues.apache.org/jira/browse/HBASE
      • Source
        – git clone git://git.apache.org/hbase.git
        – svn checkout http://svn.apache.org/repos/asf/hbase/trunk hbase
      • Conference Season
        – HBaseCon 2013, June 13, hbasecon.com
        – Hadoop Summit, June 26-27, hadoopsummit.org
  20. HBase@Hortonworks
      • Mean Time To Recovery (MTTR)
        – HDFS improvements, faster recovery of META, log replay instead of log splitting, improving failure detection
      • Testing
        – Integration test suite, system tests, destructive testing, ChaosMonkey, load tests, NameNode HA, test coverage and consistency
      • Compaction improvements
        – Pluggable compaction, tier-based compaction, stripe / leveldb compactions, etc.
      • IPC / wire compatibility
        – Migration to Google’s Protocol Buffers
      • HBase MapReduce improvements (Import / Export, etc.)
        – Performance improvements, API uniformity/usability
      • Hardening 0.94
        – Assignment Manager, log splitting, region splits, replication
      • Not to mention:
        – Windows support, Security, Snapshots, Hadoop2, 0.96, LOTS of bug fixes and community reviews
  21. Thanks!
      Nick Dimiduk and Amandeep Khurana, HBase in Action (Manning), foreword by Michael Stack — hbaseinaction.com
      Nick Dimiduk: github.com/ndimiduk, @xefyr, n10k.com
