Building HBase Applications - Ted Dunning

Ted Dunning, Chief Application Architect at MapR, presents at Big Data Everywhere, Tel Aviv, June 2014.

Apache HBase applications can turn in some pretty impressive performance and scalability numbers, but designing these applications is not the same as designing relational databases. There are numerous opportunities for a design to make use of special characteristics of HBase and related systems like MapR's tables.

We will talk about the why and the what. Why is it that HBase allows these opportunities and requires special design? What specific strategies can the HBase application designer use to take advantage of the virtues and avoid the pitfalls?

To do this, we will examine several real applications including a time series database and a data cube drill down.

Notes for slides:
  • See http://hbase.apache.org/book.html#regions.arch
    In-memory column families: their data is given high priority in the block cache (recently-used eviction algorithm).
  • See http://outerthought.org/blog/417-ot.html
    While I don’t agree with the suggestion of using versioning to add a dimension, because I believe it encourages a bad design pattern, the blog is otherwise spot on.
  • Transcript of "Building HBase Applications - Ted Dunning"

    1. Building HBase Applications
       Big Data Everywhere, Tel Aviv, June 2014
    2. Me, Us
       • Ted Dunning, Chief Application Architect, MapR
         – Committer and PMC member: Mahout, ZooKeeper, Drill
         – Bought the beer at the first HUG
       • MapR
         – Distributes more open source components for Hadoop
         – Adds major technology for performance, HA, and industry-standard APIs
       • Info
         – Hash tags: #mapr #DataIsreal
         – See also: @ApacheMahout @ApacheDrill @ted_dunning and @mapR
    3. Topics For Today
       • What is special about HBase applications?
       • Example: Time Series Database
       • Example: Web-fronted Dashboard
       • Questions and Discussion
    4. Disks have gotten slower (time to read a full disk)
       • Then: Fujitsu Eagle – 380 MB / 1.8 MB/s ≈ 211 s
       • Now: WD4001FAEX – 4 TB / 154 MB/s ≈ 26,000 s ≈ 7.2 hours
    5. Memory has gotten smaller (RAM as a fraction of disk)
       • Then: 64 MB / (1 x Fujitsu Eagle) = 0.168
       • Now: 128 GB / (12 x WD4001FAEX) = 2.7 x 10^-3
    6. The Task Has Changed
       The primary job for databases is to discard data (speaking historically)
    7. Modern Database Goals
       • Use modern disks fully
       • Work around lack of memory
       • Retain all the data
    8. Modern Database Methods
       • Use large sequential I/O transfers
       • Use many machines
       • Handle write-mostly work load
       • Store related data elements together
       • Relax constraints (ACID? Schema? Indexes?)
    9. How Does This Work?
       • Split data into tablets
         – Store tablets on many computers
       • Allow many columns
         – Only store data for live columns
         – Allows for innovative data arrangement
       • Allow applications to encode data
       • Buffer data to allow updates to be organized before writing
       • Previously written data may be merged periodically to improve organization, but avoid rewrite storms
    10. MapR, HBase Table Architecture
       • Tables are divided into key ranges (tablets or regions)
       • Tablets are served automatically by MapR FS or region-server
       • Columns are divided into access groups (column families)
       (diagram: rows R1–R4 stored across column families CF1–CF5)
    11. MapR, HBase Tables are Divided into Regions
       • A table is divided into one or more regions
       • Each region is 1-5 GB in size and is bounded by its start and end keys
       • Each region is contained within a single container (MapR)
       • Initially there is one region per table
         – Pre-split tables are supported via the HBase APIs and the HBase shell
         – You can also pre-split when access patterns are known (see the sketch below)
       • It is important to spread regions across all available nodes
       • A region splits into two regions when it becomes too large
         – The split is very quick
       • MapR FS is used to spread data and manage space (MapR)
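       A minimal sketch of pre-splitting at table-creation time with the classic HBase Java admin API; the table name, column family, and split points below are made-up examples, not from the talk:

         import org.apache.hadoop.conf.Configuration;
         import org.apache.hadoop.hbase.HBaseConfiguration;
         import org.apache.hadoop.hbase.HColumnDescriptor;
         import org.apache.hadoop.hbase.HTableDescriptor;
         import org.apache.hadoop.hbase.TableName;
         import org.apache.hadoop.hbase.client.HBaseAdmin;
         import org.apache.hadoop.hbase.util.Bytes;

         public class PreSplitExample {
           public static void main(String[] args) throws Exception {
             Configuration conf = HBaseConfiguration.create();
             HBaseAdmin admin = new HBaseAdmin(conf);

             // Hypothetical table "tsdb" with one column family "data"
             HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("tsdb"));
             desc.addFamily(new HColumnDescriptor("data"));

             // Pre-split into four regions at made-up key boundaries so writes are
             // spread across nodes immediately instead of waiting for organic splits.
             byte[][] splitKeys = {
                 Bytes.toBytes("g"), Bytes.toBytes("n"), Bytes.toBytes("t")
             };
             admin.createTable(desc, splitKeys);
             admin.close();
           }
         }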
    12. RDBMS versus MapR Tables (RDBMS tables | MapR tables)
       • ACID | Row-based ACID
       • Sharding/partitioning | Distributed regions
       • SQL built in | Key lookup / key-range scans
       • No Unix file metadata operations on tables | Unix file metadata operations on tables
       • Indexes (B+Tree, R-Tree) | Row key only, no built-in secondary index
       • Primitive data types | Byte arrays
       • In-place update | Cell versioning
    13. HBase versus MapR Tables (HBase tables | MapR tables)
       • Table/region/column family | Table/region/column family
       • Distributed regions | Distributed regions
       • Wide variation of latency | Consistent latency
       • No Unix file metadata operations on tables | Unix file metadata operations on tables
       • Limited column family count | 64 column families
       • Fuzzy snapshots | Precise snapshots
       • Replication API | Not supported
    14. Column Families
       • Columns are defined per row
       • Columns in HBase and MapR tables are grouped into column families
         – MapR supports up to 64 column families
         – Column families can be marked in-memory (see the sketch below)
       • Grouping should facilitate common access patterns, not just reflect logical connection
         – Columns written or read together make good column families
         – Rarely needed columns should probably be in their own column family
         – Radically different cardinality may suggest separate column families
       • Physically, all column family members are stored together on the file system
         – This makes access fast
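       A sketch of declaring access-pattern-driven column families; the family names and the in-memory setting are illustrative assumptions, building on the same admin API as the previous sketch:

         import org.apache.hadoop.hbase.HColumnDescriptor;
         import org.apache.hadoop.hbase.HTableDescriptor;
         import org.apache.hadoop.hbase.TableName;
         import org.apache.hadoop.hbase.client.HBaseAdmin;

         public class FamilyExample {
           static void createMetricsTable(HBaseAdmin admin) throws Exception {
             HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("metrics"));

             // Values read on nearly every query: keep one version, favor the block cache.
             HColumnDescriptor hot = new HColumnDescriptor("v");
             hot.setMaxVersions(1);
             hot.setInMemory(true);   // an "in-memory" family as mentioned above

             // Rarely-read audit columns get their own family so scans of "v" never touch them.
             HColumnDescriptor cold = new HColumnDescriptor("audit");

             desc.addFamily(hot);
             desc.addFamily(cold);
             admin.createTable(desc);
           }
         }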
    15. Technical Summary
       • Tables are split into tablets or regions
       • Regions contain column families, stored together
       • Columns are only stored where needed
       • Many, many, many columns are allowed
       • Rows are accessed by a single key; filters are allowed on scans
       • Values are byte arrays
    16. Technical Summary
       • Tables are split into tablets or regions
       • Regions contain column families, stored together
       • Columns are only stored where needed
       • Many, many, many columns are allowed
       • Rows are accessed by a single key; filters are allowed on scans
       • Values are byte arrays
       • You get low-level access to control speed and allow scaling
       • This is not your father’s RDBMS!
    17. Cost/Benefits Summary
       • Pro
         – Predictable disk layout
         – Flexibility in key design and data format
         – Allows nested, document, or relational models
         – Superb scalability and speed are possible
       • Con
         – Technically more demanding than a small Postgres instance
         – Hotspot risk requires proper design
         – Latency can be highly variable (for vanilla HBase, not MapR)
    18. (blank divider slide)
    19. Let’s build something!
    20. (blank divider slide)
    21. Time Series Database Example
       • Let’s build a time series database
       • See http://opentsdb.net/
    22. The Problem
       • We have about 100,000 metrics with an average of about 10,000 distinct measurements per second
       • Some things change over seconds, some over hours
       • We want to query over time ranges from seconds to months to produce plots of time-window aggregates
         – What is max latency per hour for the last three weeks on all web-tier machines?
    23. Non-solution
       • Munin, RRDtool, Ganglia, and Graphite all discard data
         – Remember the primary job of a classic database?
       • We want full resolution for historical comparisons
       • Size is no longer an issue; big has gotten quite small
         – 10^12 data points << 10 nodes @ 12 x 4 TB per node
         – We can piggyback on another cluster
    24. Why is This Hard?
       • 10,000 points per second x 86,400 seconds/day x 1,000 days
       • That is nearly a trillion data points! (0.86 x 10^12)
       • Queries require summarizing hundreds of thousands of points in 200 ms
       • We want the solution to be low impact and inexpensive
         – And be ready to scale by several orders of magnitude
    25. Step 1: Compound keys give control over layout
    26. Key Composition #1
        Time  | Metric | Node | Value
        10667 | load1m | n1   | 1.3
        10667 | load5m | n1   | 1.0
        10668 | load1m | n2   | 0.1
        10668 | load5m | n2   | 0.1
        10727 | load1m | n1   | 0.9
        10727 | load5m | n1   | 0.9
        All samples go to a single machine for a long time
    27. Key Composition #2
        Metric | Time  | Node | Value
        load1m | 10667 | n1   | 1.3
        load1m | 10668 | n2   | 0.1
        load1m | 10727 | n1   | 0.9
        load5m | 10667 | n1   | 1.0
        load5m | 10668 | n2   | 0.1
        load5m | 10727 | n1   | 0.9
        All samples for the same metric go to a single machine
        Queries commonly focus on one or a few metrics at a time
    28. Key Composition #3
        Node | Metric | Time  | Value
        n1   | load1m | 10667 | 1.3
        n1   | load1m | 10727 | 0.9
        n1   | load5m | 10667 | 1.0
        n1   | load5m | 10727 | 0.9
        n2   | load1m | 10668 | 0.1
        n2   | load5m | 10668 | 0.1
        All samples for the same node go to a single machine
        Unfortunately, queries commonly require data for a single metric, but many machines
    29. Lesson: Pick door #2 – maximize density of desired data
    30. Protip: Add key-value pairs to the end of the key for additional tags (see the sketch below)
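       A sketch of how the metric-first composite key (option #2) with trailing tag pairs might be built; the NUL separator and the helper name are assumptions for illustration, not the OpenTSDB layout:

         import java.util.Map;
         import java.util.SortedMap;
         import org.apache.hadoop.hbase.util.Bytes;

         public class RowKeys {
           private static final byte[] SEP = new byte[]{0};  // assumed separator; sorts before printable bytes

           /** metric \0 time \0 node \0 tag1=val1 \0 tag2=val2 ... */
           static byte[] sampleKey(String metric, long epochSeconds, String node,
                                   SortedMap<String, String> tags) {
             byte[] key = Bytes.add(Bytes.toBytes(metric), SEP, Bytes.toBytes(epochSeconds));
             key = Bytes.add(key, SEP, Bytes.toBytes(node));
             // Tags ride at the end of the key, so rows for one metric and time range
             // stay contiguous regardless of which optional tags are present.
             for (Map.Entry<String, String> tag : tags.entrySet()) {
               key = Bytes.add(key, SEP, Bytes.toBytes(tag.getKey() + "=" + tag.getValue()));
             }
             return key;
           }
         }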
    31. (blank divider slide)
    32. Step 2: Relational not required
    33. Tall and Skinny? Or Wide and Fat?
        Tall and skinny: one row per sample
        Metric | Time | Node | Value
    34. Tall and Skinny? Or Wide and Fat?
        Wide and fat: one row per metric, window, and node, with one column per second offset
        Metric | Window | Node | +17 | +18 | +77 | +78 | +137
        load1m | 13:00  | n1   | 1.3 |     | 0.9 |     |
        load1m | 13:00  | n2   |     | 0.1 |     | 0.1 |
        load5m | 13:00  | n1   | 1.0 |     | 0.9 |     |
        load5m | 13:00  | n2   |     | 0.1 |     |     |
        Filtering overhead is non-trivial … wide and fat has to filter fewer rows
    35. Or non-relational?
        Metric | Window | Node | Compressed
        load1m | 13:00  | n1   | {t:[17,77], v:[1.3,0.9]}
        load1m | 13:00  | n2   | {t:[18,78], v:[0.1,0.1]}
        load5m | 13:00  | n1   | {t:[17,77], v:[1.0,0.9]}
        load5m | 13:00  | n2   | {t:[18,78], v:[0.1,0.1]}
        Cleanup process can sweep up old values after the hour is finished.
        Blob data can be compressed using fancy tricks.
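       A sketch of writing one such blob row with the classic HBase client; the key layout, family "data", qualifier "blob", and the JSON encoding are illustrative assumptions, not taken from the deck or OpenTSDB:

         import org.apache.hadoop.conf.Configuration;
         import org.apache.hadoop.hbase.HBaseConfiguration;
         import org.apache.hadoop.hbase.client.HTable;
         import org.apache.hadoop.hbase.client.Put;
         import org.apache.hadoop.hbase.util.Bytes;

         public class BlobRowWriter {
           public static void main(String[] args) throws Exception {
             Configuration conf = HBaseConfiguration.create();
             HTable table = new HTable(conf, "tsdb");

             // One row per (metric, hour window, node); the whole hour's samples go in one cell.
             byte[] rowKey = Bytes.toBytes("load1m\u000013:00\u0000n1");
             String blob = "{\"t\":[17,77],\"v\":[1.3,0.9]}";   // second offsets within the hour plus values

             Put put = new Put(rowKey);
             put.add(Bytes.toBytes("data"), Bytes.toBytes("blob"), Bytes.toBytes(blob));
             table.put(put);
             table.close();
           }
         }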
    36. Lesson: Schemas can be very flexible and can even change on the fly
    37. (blank divider slide)
    38. Step 3: Sequential reads hide many sins if density is high
    39. Which Queries? Which Data?
       • Most common is 1-3 metrics for 5-100% of nodes, selected by tags (see the scan sketch below)
         – Which nodes have unusual load?
         – Do any nodes stand out for response latency?
         – Alarm bots
       • Also common to get 5-20 metrics for a single node
         – Render a dashboard for a particular machine
       • Result density should be high for all common queries
       • Most data is never read but is retained as an insurance policy
         – Can’t predict what you will need to diagnose future failure modes
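       With a metric-first key, the common "one metric, a time range, many nodes" query becomes a single contiguous scan. A minimal sketch under the same assumed key layout and family name as the earlier examples:

         import org.apache.hadoop.conf.Configuration;
         import org.apache.hadoop.hbase.HBaseConfiguration;
         import org.apache.hadoop.hbase.client.HTable;
         import org.apache.hadoop.hbase.client.Result;
         import org.apache.hadoop.hbase.client.ResultScanner;
         import org.apache.hadoop.hbase.client.Scan;
         import org.apache.hadoop.hbase.util.Bytes;

         public class MetricRangeScan {
           public static void main(String[] args) throws Exception {
             Configuration conf = HBaseConfiguration.create();
             HTable table = new HTable(conf, "tsdb");

             // All load1m rows from window 13:00 (inclusive) to 14:00 (exclusive), every node.
             Scan scan = new Scan(Bytes.toBytes("load1m\u000013:00"),
                                  Bytes.toBytes("load1m\u000014:00"));
             scan.addFamily(Bytes.toBytes("data"));
             scan.setCaching(1000);   // big sequential reads: pull many rows per round trip

             ResultScanner scanner = table.getScanner(scan);
             for (Result row : scanner) {
               System.out.println(Bytes.toString(row.getRow()));
             }
             scanner.close();
             table.close();
           }
         }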
    40. Lesson: Have to know the queries to design in performance
    41. (blank divider slide)
    42. Step 4: Time to level up!
    43. What About the Major Leagues?
       • Industrial sensors can dwarf current TSDB loads
         – Assume 100 (drill rigs | generators | heating systems | turbines)
         – Each has 10,000 sensors
         – Each is sampled once per second
         – Total sample rate is 10^6 samples/s (100x faster than before)
       • Industrial applications require extensive testing at scale
         – Want to load years of test data in a few days
         – Sample rate for testing is 100 x 10^6 samples/s (10,000x faster)
       • And you thought the first example was extreme
    44. What About the Major Leagues? World Cup?
        (repeats the content of the previous slide with an updated title)
    45. Rough Design Outline
       • Want to record and query 100 M samples/s at full resolution
       • Each MapR node serving tables can do ~20-40 k inserts/s @ 1 kB/record, ~60 k inserts/s @ 100 B/record
       • Each MapR node serving files can insert at ~1 GB/s
       • We can buffer data in the file system until we get >1000 samples per metric
       • Once data is sufficient, we do one insert per metric per hour
         – 3600x fewer inserts
    46. Data Flow – High Speed TSDB
        (diagram) Measurement systems → data catchers → flat files → consolidators → TSDB tables → web tier → browser
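       A sketch of the consolidator step implied above: samples buffered in flat files are grouped by (metric, hour window, node) and written as one put per group, with one column per second offset, so roughly 3600 samples collapse into a single insert. All names here are illustrative assumptions:

         import java.util.List;
         import org.apache.hadoop.hbase.client.HTable;
         import org.apache.hadoop.hbase.client.Put;
         import org.apache.hadoop.hbase.util.Bytes;

         public class Consolidator {
           /** One buffered sample: offset in seconds within the hour window, plus the value. */
           static class Sample {
             final int offset;
             final double value;
             Sample(int offset, double value) { this.offset = offset; this.value = value; }
           }

           /** Write all samples for one (metric, window, node) group as a single put. */
           static void writeGroup(HTable table, byte[] rowKey, List<Sample> samples) throws Exception {
             Put put = new Put(rowKey);
             for (Sample s : samples) {
               // One column per second offset, e.g. qualifier "+17" holding that second's value.
               put.add(Bytes.toBytes("data"), Bytes.toBytes("+" + s.offset), Bytes.toBytes(s.value));
             }
             table.put(put);   // ~3600 samples become one insert instead of 3600
           }
         }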
    47. Quick Results
       • Estimated data volumes
         – 100 M points/s / (3600 points/row) = 28 k rows/s
       • Estimated potential throughput
         – 4 nodes @ 10 k rows/s = 40 k rows/s = 144 M points/s
       • Observed throughput for a 2-day prototype
         – 1 feeder node, 4 table nodes, 10 M points/s
         – Feeder node CPU bound; table nodes < 5% CPU, disk ~ 0
       • Simple prototype is limited by the generator
       • Compare to observed max of 100 k points/s on SQL Server
         – “Only” 100x faster
    48. Lesson: Very high rates look plausible with hybrid design
    49. (blank divider slide)
    50. Quick Example: Xactly Dashboard
    51. Xactly: Sales Performance Management
        Xactly Insights: delivering incentive compensation data for sales operations
        • Objectives
          – Provide cloud-based performance management solutions to sales ops teams to help them design and manage optimal incentive compensation plans
        • Challenges
          – RDBMS-based platform was unable to scale in a cost-effective way
          – Stringent performance and responsiveness expectations of users in a SaaS application
        • Solution and business impact
          – Highly responsive application that scaled to a growing customer base
          – Multi-tenancy capabilities in MapR helped ensure each customer’s data was isolated and separate from other customers in the SaaS application
          – MapR delivered on Xactly’s need for scale and low operational overhead
        MapR’s higher performance solution is far more efficient and cost-effective.
        “I can do something on a 10-node cluster that might require a 20-node cluster from a different Hadoop vendor.” – CTO & SVP of Engineering
    52. Dashboard Problem
       • Hundreds to thousands of customers have hundreds to thousands of sales team members
       • Want to be able to compare stats for each team member, team, and company against relevant roll-ups
       • Prototyped the system in an RDBMS, Mongo, and MapR tables
         – Natural fit to relational cubes
         – Easy extension to Mongo documents with indexes
         – An HBase application architecture has only one index
       • Production solution used a special key design in MapR tables (a plausible sketch follows)
         – Disk-based speed matched the in-memory speed of Mongo
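       The deck does not spell out the "special key design"; one plausible sketch is a customer / team / member composite key so that a single start/stop scan pulls back a whole team or a whole customer. The names and separators below are assumptions, not the production design:

         import org.apache.hadoop.hbase.client.Scan;
         import org.apache.hadoop.hbase.util.Bytes;

         public class DashboardKeys {
           /** customer \0 team \0 member: rows for one team (and one customer) are contiguous. */
           static byte[] memberKey(String customer, String team, String member) {
             return Bytes.toBytes(customer + "\u0000" + team + "\u0000" + member);
           }

           /** Scan every member of one team using a start key and a stop key just past the prefix. */
           static Scan teamScan(String customer, String team) {
             byte[] start = Bytes.toBytes(customer + "\u0000" + team + "\u0000");
             byte[] stop  = Bytes.toBytes(customer + "\u0000" + team + "\u0001");
             return new Scan(start, stop);
           }
         }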
    53. Lesson: Obviously relational problems often have effective non-relational solutions
    54. Summary
       • HBase and MapR tables are conceptually very simple
         – But require careful design
         – Composite key design is crucial
         – Non-relational column usage is often important
       • Practical systems can exceed relational throughput by many orders of magnitude with very small clusters
       • Composite file/table designs can be very powerful
         – The world is not a database
    55. Me, Us
       • Ted Dunning, Chief Application Architect, MapR
         – Committer and PMC member: Mahout, ZooKeeper, Drill
         – Bought the beer at the first HUG
       • MapR
         – Distributes more open source components for Hadoop
         – Adds major technology for performance, HA, and industry-standard APIs
       • Info
         – Hash tag: #mapr
         – See also: @ApacheMahout @ApacheDrill @ted_dunning and @mapR
