1 © Hortonworks Inc. 2011 – 2016. All Rights Reserved
Apache Phoenix + Apache HBase
An Enterprise Grade Data Warehouse
Ankit Singhal, Rajeshbabu, Josh Elser
June 30, 2016
About us
Ankit Singhal
– Committer and member of the Apache Phoenix PMC
– MTS at Hortonworks
Rajeshbabu
– Committer and member of the Apache Phoenix PMC
– Committer on Apache HBase
– MTS at Hortonworks
Josh Elser
– Committer on Apache Phoenix
– Committer and member of the Apache Calcite PMC
– MTS at Hortonworks
Agenda
Phoenix & HBase as an Enterprise Data Warehouse
Use Cases
Optimizations
Phoenix Query Server
Q&A
Data Warehouse
An EDW organizes and aggregates analytical data from various functional domains and serves as a critical repository for an organization's operations.
[Diagram: files, IoT data, and OLTP sources flow through ETL into a staging area, then into the data warehouse and data marts, which feed visualization and BI tools.]
Phoenix Offerings and Interoperability
ETL Data Warehouse Visualization & BI
HBase & Phoenix
HBase: a distributed NoSQL store
Phoenix: provides OLTP and analytics over HBase
[Diagram: the Application embeds the Phoenix client on top of the HBase client, locates regions through ZooKeeper, and talks to RegionServers (each hosting regions such as Table,a,123 and running the Phoenix coprocessor), with HDFS as the storage layer.]
Open Source Data Warehouse
[Quadrant chart: hardware cost (specialized vs. commodity H/W) against software cost (licensing cost vs. no cost). Traditional SMP and MPP warehouses sit in the specialized-hardware, licensed quadrant; open-source MPP engines and HBase + Phoenix run on commodity hardware at no licensing cost.]
Phoenix & HBase as a Data Warehouse
Architecture
– Runs on commodity hardware
– True MPP
– OS and hardware flexibility
– Supports OLTP and ROLAP
Phoenix & HBase as a Data Warehouse
Scalability
– Linear scalability for storage
– Linear scalability for memory
– Open to third-party storage
Phoenix & HBase as a Data Warehouse
Reliability
– Highly available
– Replication for disaster recovery
– Fully ACID for data integrity
Phoenix & HBase as a Data Warehouse
Manageability
– Performance tuning
– Data modeling & schema evolution
– Data pruning
– Online expansion or upgrade
– Data backup and recovery
Agenda
Phoenix & HBase as an Enterprise Data Warehouse
Use cases
Who uses Phoenix!
Analytics Use Case (Web Advertising Company)
 Functional requirements
– Create a single source of truth
– Cross-dimensional queries on 50+ dimensions and 80+ metrics
– Support fast Top-N queries
 Non-functional requirements
– Less than 3-second response time for slice and dice
– 250+ concurrent users
– 100k+ analytics queries/day
– Highly available
– Linear scalability
Data Warehouse Capacity
 Data size (ETL input)
– 24 TB/day of raw data system-wide
– 25 billion impressions
 HBase input (cube)
– 6 billion rows of aggregated data (100 GB/day)
 HBase cluster size
– 65 HBase nodes
– 520 TB of disk
– 4.1 TB of memory
Use Case Architecture
[Diagram: AdServer and click-tracking events land in Apache Kafka (data ingestion). A real-time path runs ETL (filter, aggregate) into an in-memory store. A batch path uses Camus to copy Kafka data onto HDFS, runs ETL there, and a data uploader loads the results into HBase views (batch processing). A Data API in front of HBase serves the Analytics UI (analytics).]
Analytics Data Warehouse Architecture
[Diagram: ETL output on HDFS is bulk-loaded into HBase, where the generated cubes are stored. A Data API converts slice-and-dice requests from the Analytics UI into SQL queries against HBase. Backup and recovery runs against HDFS.]
Time Series Use Case (Apache Ambari)
 Functional requirements
– Store all cluster metrics, collected every second (10k to 100k metrics/second)
– Optimize storage/access for time series data
 Non-functional requirements
– Near-real-time response time
– Scalable
– Real-time ingestion
Ambari Metrics System (AMS)
AMS architecture
[Diagram: metric monitors on each host and Hadoop sinks send metrics to the Metric Collector, which writes them through Phoenix into HBase; the Ambari Server queries the collector.]
Agenda
Phoenix & HBase as an Enterprise Data Warehouse
Use Cases
Optimizations
Schema Design
Primary key design
 The most important criterion driving overall query performance on a table
 The primary key should be composed of the most-used predicate columns in your queries
 In most cases, the leading part of the primary key should turn queries into point lookups or range scans in HBase
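As an illustrative sketch (the table and column names are hypothetical), a table whose leading primary-key columns match the most common predicates lets the query below run as a range scan over a single key range instead of a full scan:

```sql
-- Hypothetical events table: queries usually filter on customer, then a date range.
CREATE TABLE WEB_EVENTS (
    CUSTOMER_ID VARCHAR NOT NULL,
    EVENT_DATE  DATE    NOT NULL,
    EVENT_ID    VARCHAR NOT NULL,
    PAGE_URL    VARCHAR,
    LATENCY_MS  INTEGER
    CONSTRAINT PK PRIMARY KEY (CUSTOMER_ID, EVENT_DATE, EVENT_ID)
);

-- Leading PK columns in the WHERE clause => range scan, not a full table scan.
SELECT PAGE_URL, LATENCY_MS
FROM WEB_EVENTS
WHERE CUSTOMER_ID = 'acme'
  AND EVENT_DATE >= TO_DATE('2016-06-01', 'yyyy-MM-dd');
```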
Schema Design
Salting vs. pre-split
 Use salting to alleviate write hot-spotting
CREATE TABLE …(
…
) SALT_BUCKETS = N
– The number of buckets should equal the number of RegionServers
 Otherwise, pre-split the table if you know the row-key data set
CREATE TABLE …(
…
) SPLIT ON (…)
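For example (hypothetical tables; an 8-RegionServer cluster and illustrative split points are assumed), the two approaches look like:

```sql
-- Salted: Phoenix prepends a one-byte hash, spreading writes across 8 buckets.
CREATE TABLE METRICS_SALTED (
    HOST VARCHAR NOT NULL,
    TS   DATE    NOT NULL,
    VAL  DOUBLE
    CONSTRAINT PK PRIMARY KEY (HOST, TS)
) SALT_BUCKETS = 8;

-- Pre-split: region boundaries chosen from the known row-key distribution.
CREATE TABLE METRICS_SPLIT (
    HOST VARCHAR NOT NULL,
    TS   DATE    NOT NULL,
    VAL  DOUBLE
    CONSTRAINT PK PRIMARY KEY (HOST, TS)
) SPLIT ON ('host-100', 'host-200', 'host-300');
```

Salting spreads load but makes every range scan fan out across all buckets; pre-splitting preserves key ordering, so it is usually preferable when the key distribution is known.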
Schema Design
Table properties
 Use block encoding and/or compression for better performance
CREATE TABLE …(
…
) DATA_BLOCK_ENCODING = 'FAST_DIFF', COMPRESSION = 'SNAPPY'
 Use region replication for read high availability
CREATE TABLE …(
…
) "REGION_REPLICATION" = "2"
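Combined into one sketch (the table is hypothetical; the property syntax follows this slide, with encoding and compression passed through to HBase):

```sql
CREATE TABLE EVENTS (
    ID  VARCHAR PRIMARY KEY,
    VAL VARCHAR
) DATA_BLOCK_ENCODING = 'FAST_DIFF',
  COMPRESSION = 'SNAPPY',
  "REGION_REPLICATION" = "2";
```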
Schema Design
Table properties
 Set UPDATE_CACHE_FREQUENCY to a larger value to avoid frequently contacting the server for metadata updates
CREATE TABLE …(
…
) UPDATE_CACHE_FREQUENCY = 300000
Schema Design
 Divide columns into multiple column families if some columns are rarely accessed
– HBase reads only the store files of the column families referenced in the query, reducing I/O
[Table layout: row key (pk1, pk2); CF1 holds frequently accessed columns Col1–Col4; CF2 holds rarely accessed columns Col5–Col7.]
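A minimal sketch of this layout (hypothetical table; Phoenix lets you prefix a column with its family name):

```sql
-- Hot columns in family A; rarely read payload columns in family B.
CREATE TABLE ORDERS (
    ORDER_ID VARCHAR PRIMARY KEY,
    A.STATUS VARCHAR,
    A.TOTAL  DECIMAL,
    B.RAW_REQUEST VARCHAR,
    B.DEBUG_NOTES VARCHAR
);

-- Touches only family A's store files:
SELECT STATUS, TOTAL FROM ORDERS WHERE ORDER_ID = '42';
```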
Secondary Indexes
 Global indexes
– Optimized for read-heavy use cases
CREATE INDEX idx ON table(…)
 Local indexes
– Optimized for write-heavy and space-constrained use cases
CREATE LOCAL INDEX idx ON table(…)
 Functional indexes
– Allow you to create indexes on arbitrary expressions
CREATE INDEX UPPER_NAME_INDEX ON EMP(UPPER(FIRSTNAME || ' ' || LASTNAME))
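For instance, the functional index above could serve case-insensitive name lookups (EMP_ID is a hypothetical column on EMP):

```sql
SELECT EMP_ID
FROM EMP
WHERE UPPER(FIRSTNAME || ' ' || LASTNAME) = 'JOHN DOE';
```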
Secondary Indexes
 Use covered indexes to scan the index table efficiently instead of the primary table
CREATE INDEX idx ON table(…) INCLUDE(…)
 Pass an index hint to guide the query optimizer to the right index for a query
SELECT /*+ INDEX(<table> <index>) */ …
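A concrete sketch (table, index, and column names are hypothetical):

```sql
-- Covered index: V2 is stored in the index rows, so the query below
-- is answered entirely from the index table.
CREATE INDEX IDX_V1 ON MY_TABLE (V1) INCLUDE (V2);

SELECT V1, V2 FROM MY_TABLE WHERE V1 = 'foo';

-- Force a specific index if the optimizer does not pick it:
SELECT /*+ INDEX(MY_TABLE IDX_V1) */ V1, V2
FROM MY_TABLE
WHERE V1 = 'foo';
```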
Row Timestamp Column
 Maps the native HBase row timestamp to a Phoenix column
 Leverages HBase optimizations, such as setting the minimum and maximum time range on scans to skip store files that fall entirely outside that range
 Perfect for time series use cases
 Syntax
CREATE TABLE …(CREATED_DATE DATE NOT NULL,
…
CONSTRAINT PK PRIMARY KEY (CREATED_DATE ROW_TIMESTAMP, …)
)
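A fuller sketch of the syntax (hypothetical table; CREATED_DATE is mapped onto the HBase cell timestamp):

```sql
CREATE TABLE METRIC_HISTORY (
    CREATED_DATE DATE    NOT NULL,
    METRIC_NAME  VARCHAR NOT NULL,
    VAL          DOUBLE
    CONSTRAINT PK PRIMARY KEY (CREATED_DATE ROW_TIMESTAMP, METRIC_NAME)
);

-- A time-range predicate on CREATED_DATE lets HBase skip store files
-- whose [min, max] timestamp range falls outside the query window.
SELECT METRIC_NAME, VAL
FROM METRIC_HISTORY
WHERE CREATED_DATE > CURRENT_DATE() - 1;
```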
Use of Statistics
[Diagram: without statistics, a client can parallelize a scan only along region boundaries (regions A, F, L, R); with statistics collected, each region is subdivided into smaller, evenly sized chunks (A, C, F, I, L, O, R, U), so clients can run more scans in parallel over better-balanced units of work.]
Skip Scan
 Phoenix supports skip scan, jumping directly to matching keys when the query's predicate contains sets of keys
SELECT * FROM METRIC_RECORD
WHERE METRIC_NAME LIKE 'abc%'
AND HOSTNAME IN ('host1', 'host2');
CLIENT 1-CHUNK PARALLEL 1-WAY SKIP SCAN
ON 2 RANGES OVER METRIC_RECORD
['abc','host1'] - ['abd','host2']
[Diagram: the client issues skip scans that jump between the two matching key ranges across regions on the RegionServers, rather than scanning every row.]
Join Optimizations
 Hash join
– Outperforms other join algorithms when one of the relations is small enough that the records matching the predicate fit in memory
 Sort-merge join
– Use when both relations are very large
 NO_STAR_JOIN hint
– For multi-way inner joins, Phoenix applies a star-join optimization by default; use this hint when the overall size of all right-hand-side tables would exceed the memory limit
 NO_CHILD_PARENT_OPTIMIZATION hint
– Prevents use of the child-parent join optimization
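As a sketch of hint usage (tables and join keys are hypothetical; USE_SORT_MERGE_JOIN is the Phoenix hint for forcing the sort-merge algorithm):

```sql
-- Force sort-merge join when both relations are large:
SELECT /*+ USE_SORT_MERGE_JOIN */ o.ORDER_ID, c.NAME
FROM ORDERS o
JOIN CUSTOMERS c ON o.CUST_ID = c.CUST_ID;

-- Disable the default star-join optimization when the combined
-- right-hand-side tables would not fit in memory:
SELECT /*+ NO_STAR_JOIN */ f.TXN_ID
FROM FACT f
JOIN DIM_TIME t ON f.TIME_ID = t.TIME_ID
JOIN DIM_GEO  g ON f.GEO_ID  = g.GEO_ID;
```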
Optimize Writes
 UPSERT VALUES
– Execute it multiple times before committing, to batch mutations
– Use a prepared statement when you run the same statement repeatedly
 UPSERT SELECT
– Configure phoenix.mutate.batchSize based on row size
– Set auto-commit to true to write scan results directly to HBase
– Set auto-commit to true when running UPSERT SELECT on the same table so that writes happen on the server
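A sketch of both patterns (hypothetical METRICS table; the commit comes from the client, e.g. Connection.commit() over JDBC):

```sql
-- Batched writes: many UPSERT VALUES, then a single commit (auto-commit off).
UPSERT INTO METRICS VALUES ('host1', CURRENT_DATE(), 1.0);
UPSERT INTO METRICS VALUES ('host2', CURRENT_DATE(), 2.0);
-- … repeat, then commit once from the client.

-- Server-side write: with auto-commit on, an UPSERT SELECT over the
-- same table runs on the RegionServers without round-tripping rows
-- through the client.
UPSERT INTO METRICS
SELECT HOST, TS, VAL * 2 FROM METRICS;
```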
Hints
Some important hints:
 SERIAL SCAN, RANGE SCAN
 SERIAL
 SMALL SCAN
Additional References
 For more optimizations, refer to these documents:
– http://phoenix.apache.org/tuning.html
– https://hbase.apache.org/book.html#performance
Agenda
Phoenix & HBase as an Enterprise Data Warehouse
Use Cases
Optimizations
Phoenix Query Server
Apache Phoenix Query Server
 A standalone service that proxies user requests to HBase/Phoenix
– Optional
 Reference client implementation via JDBC
– "Thick" versus "thin"
 First introduced in Apache Phoenix 4.4.0
 Built on Apache Calcite's Avatica
– "A framework for building database drivers"
Traditional Apache Phoenix RPC Model
[Diagram: the Application embeds the Phoenix and HBase clients, which talk directly to ZooKeeper and to the RegionServers (each hosting regions such as Table,a,123 and running the Phoenix coprocessor) on top of HDFS.]
Query Server Model
[Diagram: the Application now speaks only to the Query Server, which embeds the Phoenix and HBase clients and performs the ZooKeeper and RegionServer communication on the application's behalf.]
Query Server Technology
 HTTP Server and wire API definition
 Pluggable serialization
– Google Protocol Buffers
 “Thin” JDBC Driver (over HTTP)
 Other goodies!
– Pluggable metrics system
– TCK (technology compatibility kit)
– SPNEGO for Kerberos authentication
– Horizontally scalable with load balancing
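As a sketch, a thin client connects over HTTP like this (the hostname is illustrative; 8765 is the Query Server's default port):

```
# Thin JDBC driver URL (Avatica over HTTP, protobuf serialization)
jdbc:phoenix:thin:url=http://pqs.example.com:8765;serialization=PROTOBUF

# Or via the sqlline-thin wrapper shipped with Phoenix
sqlline-thin.py http://pqs.example.com:8765
```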
Query Server Clients
Client enablement
 Go language database/sql/driver
– https://github.com/Boostport/avatica
 .NET driver
– https://github.com/Azure/hdinsight-phoenix-sharp
– https://www.nuget.org/packages/Microsoft.Phoenix.Client/1.0.0-preview
 ODBC
– Built by Simba (http://www.simba.com/); also available from Hortonworks
 Python DB API v2.0 (not yet "battle tested")
– https://bitbucket.org/lalinsky/python-phoenixdb
Agenda
Phoenix & HBase as an Enterprise Data Warehouse
Use Cases
Optimizations
Phoenix Query Server
Q&A
Phoenix & HBase
We hope to see you migrating to Phoenix & HBase, and we look forward to your questions on the user mailing lists.
Get involved on the mailing lists:
user@phoenix.apache.org
user@hbase.apache.org
You can reach us at:
ankit@apache.org
rajeshbabu@apache.org
elserj@apache.org
Thank You

More Related Content

What's hot

Hadoop REST API Security with Apache Knox Gateway
Hadoop REST API Security with Apache Knox GatewayHadoop REST API Security with Apache Knox Gateway
Hadoop REST API Security with Apache Knox Gateway
DataWorks Summit
 

What's hot (20)

Apache Tez: Accelerating Hadoop Query Processing
Apache Tez: Accelerating Hadoop Query ProcessingApache Tez: Accelerating Hadoop Query Processing
Apache Tez: Accelerating Hadoop Query Processing
 
Time-Series Apache HBase
Time-Series Apache HBaseTime-Series Apache HBase
Time-Series Apache HBase
 
Apache HBase Performance Tuning
Apache HBase Performance TuningApache HBase Performance Tuning
Apache HBase Performance Tuning
 
Hive 3 - a new horizon
Hive 3 - a new horizonHive 3 - a new horizon
Hive 3 - a new horizon
 
Facebook Messages & HBase
Facebook Messages & HBaseFacebook Messages & HBase
Facebook Messages & HBase
 
Scaling HBase for Big Data
Scaling HBase for Big DataScaling HBase for Big Data
Scaling HBase for Big Data
 
Apache Impalaパフォーマンスチューニング #dbts2018
Apache Impalaパフォーマンスチューニング #dbts2018Apache Impalaパフォーマンスチューニング #dbts2018
Apache Impalaパフォーマンスチューニング #dbts2018
 
Hadoop REST API Security with Apache Knox Gateway
Hadoop REST API Security with Apache Knox GatewayHadoop REST API Security with Apache Knox Gateway
Hadoop REST API Security with Apache Knox Gateway
 
Apache Hudi: The Path Forward
Apache Hudi: The Path ForwardApache Hudi: The Path Forward
Apache Hudi: The Path Forward
 
HBase in Practice
HBase in PracticeHBase in Practice
HBase in Practice
 
LLAP: long-lived execution in Hive
LLAP: long-lived execution in HiveLLAP: long-lived execution in Hive
LLAP: long-lived execution in Hive
 
Running Apache Spark on Kubernetes: Best Practices and Pitfalls
Running Apache Spark on Kubernetes: Best Practices and PitfallsRunning Apache Spark on Kubernetes: Best Practices and Pitfalls
Running Apache Spark on Kubernetes: Best Practices and Pitfalls
 
How to build a streaming Lakehouse with Flink, Kafka, and Hudi
How to build a streaming Lakehouse with Flink, Kafka, and HudiHow to build a streaming Lakehouse with Flink, Kafka, and Hudi
How to build a streaming Lakehouse with Flink, Kafka, and Hudi
 
HBaseCon 2012 | Lessons learned from OpenTSDB - Benoit Sigoure, StumbleUpon
HBaseCon 2012 | Lessons learned from OpenTSDB - Benoit Sigoure, StumbleUponHBaseCon 2012 | Lessons learned from OpenTSDB - Benoit Sigoure, StumbleUpon
HBaseCon 2012 | Lessons learned from OpenTSDB - Benoit Sigoure, StumbleUpon
 
Hadoop World 2011: Advanced HBase Schema Design - Lars George, Cloudera
Hadoop World 2011: Advanced HBase Schema Design - Lars George, ClouderaHadoop World 2011: Advanced HBase Schema Design - Lars George, Cloudera
Hadoop World 2011: Advanced HBase Schema Design - Lars George, Cloudera
 
Enabling Vectorized Engine in Apache Spark
Enabling Vectorized Engine in Apache SparkEnabling Vectorized Engine in Apache Spark
Enabling Vectorized Engine in Apache Spark
 
HBase Low Latency
HBase Low LatencyHBase Low Latency
HBase Low Latency
 
Chicago Data Summit: Apache HBase: An Introduction
Chicago Data Summit: Apache HBase: An IntroductionChicago Data Summit: Apache HBase: An Introduction
Chicago Data Summit: Apache HBase: An Introduction
 
Securing Hadoop with Apache Ranger
Securing Hadoop with Apache RangerSecuring Hadoop with Apache Ranger
Securing Hadoop with Apache Ranger
 
Transactional operations in Apache Hive: present and future
Transactional operations in Apache Hive: present and futureTransactional operations in Apache Hive: present and future
Transactional operations in Apache Hive: present and future
 

Similar to Apache Phoenix + Apache HBase

HBase Read High Availability Using Timeline Consistent Region Replicas
HBase  Read High Availability Using Timeline Consistent Region ReplicasHBase  Read High Availability Using Timeline Consistent Region Replicas
HBase Read High Availability Using Timeline Consistent Region Replicas
enissoz
 
HBase Read High Availabilty using Timeline Consistent Region Replicas
HBase Read High Availabilty using Timeline Consistent Region ReplicasHBase Read High Availabilty using Timeline Consistent Region Replicas
HBase Read High Availabilty using Timeline Consistent Region Replicas
DataWorks Summit
 
MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...
MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...
MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...
Spark Summit
 

Similar to Apache Phoenix + Apache HBase (20)

HBase Read High Availability Using Timeline Consistent Region Replicas
HBase  Read High Availability Using Timeline Consistent Region ReplicasHBase  Read High Availability Using Timeline Consistent Region Replicas
HBase Read High Availability Using Timeline Consistent Region Replicas
 
Meet HBase 2.0 and Phoenix 5.0
Meet HBase 2.0 and Phoenix 5.0Meet HBase 2.0 and Phoenix 5.0
Meet HBase 2.0 and Phoenix 5.0
 
Apache Phoenix and HBase: Past, Present and Future of SQL over HBase
Apache Phoenix and HBase: Past, Present and Future of SQL over HBaseApache Phoenix and HBase: Past, Present and Future of SQL over HBase
Apache Phoenix and HBase: Past, Present and Future of SQL over HBase
 
HBase Read High Availabilty using Timeline Consistent Region Replicas
HBase Read High Availabilty using Timeline Consistent Region ReplicasHBase Read High Availabilty using Timeline Consistent Region Replicas
HBase Read High Availabilty using Timeline Consistent Region Replicas
 
Big data processing engines, Atlanta Meetup 4/30
Big data processing engines, Atlanta Meetup 4/30Big data processing engines, Atlanta Meetup 4/30
Big data processing engines, Atlanta Meetup 4/30
 
Apache Phoenix and HBase - Hadoop Summit Tokyo, Japan
Apache Phoenix and HBase - Hadoop Summit Tokyo, JapanApache Phoenix and HBase - Hadoop Summit Tokyo, Japan
Apache Phoenix and HBase - Hadoop Summit Tokyo, Japan
 
An Overview on Optimization in Apache Hive: Past, Present Future
An Overview on Optimization in Apache Hive: Past, Present FutureAn Overview on Optimization in Apache Hive: Past, Present Future
An Overview on Optimization in Apache Hive: Past, Present Future
 
HBaseCon 2013: Integration of Apache Hive and HBase
HBaseCon 2013: Integration of Apache Hive and HBaseHBaseCon 2013: Integration of Apache Hive and HBase
HBaseCon 2013: Integration of Apache Hive and HBase
 
Apache HBase Internals you hoped you Never Needed to Understand
Apache HBase Internals you hoped you Never Needed to UnderstandApache HBase Internals you hoped you Never Needed to Understand
Apache HBase Internals you hoped you Never Needed to Understand
 
Enterprise data science at scale
Enterprise data science at scaleEnterprise data science at scale
Enterprise data science at scale
 
MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...
MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...
MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...
 
Driving Enterprise Data Governance for Big Data Systems through Apache Falcon
Driving Enterprise Data Governance for Big Data Systems through Apache FalconDriving Enterprise Data Governance for Big Data Systems through Apache Falcon
Driving Enterprise Data Governance for Big Data Systems through Apache Falcon
 
Hive present-and-feature-shanghai
Hive present-and-feature-shanghaiHive present-and-feature-shanghai
Hive present-and-feature-shanghai
 
Data Governance in Apache Falcon - Hadoop Summit Brussels 2015
Data Governance in Apache Falcon - Hadoop Summit Brussels 2015 Data Governance in Apache Falcon - Hadoop Summit Brussels 2015
Data Governance in Apache Falcon - Hadoop Summit Brussels 2015
 
An Overview on Optimization in Apache Hive: Past, Present, Future
An Overview on Optimization in Apache Hive: Past, Present, FutureAn Overview on Optimization in Apache Hive: Past, Present, Future
An Overview on Optimization in Apache Hive: Past, Present, Future
 
Moving towards enterprise ready Hadoop clusters on the cloud
Moving towards enterprise ready Hadoop clusters on the cloudMoving towards enterprise ready Hadoop clusters on the cloud
Moving towards enterprise ready Hadoop clusters on the cloud
 
Hadoop in adtech
Hadoop in adtechHadoop in adtech
Hadoop in adtech
 
Hadoop & cloud storage object store integration in production (final)
Hadoop & cloud storage  object store integration in production (final)Hadoop & cloud storage  object store integration in production (final)
Hadoop & cloud storage object store integration in production (final)
 
File Format Benchmark - Avro, JSON, ORC & Parquet
File Format Benchmark - Avro, JSON, ORC & ParquetFile Format Benchmark - Avro, JSON, ORC & Parquet
File Format Benchmark - Avro, JSON, ORC & Parquet
 
File Format Benchmark - Avro, JSON, ORC & Parquet
File Format Benchmark - Avro, JSON, ORC & ParquetFile Format Benchmark - Avro, JSON, ORC & Parquet
File Format Benchmark - Avro, JSON, ORC & Parquet
 

More from DataWorks Summit/Hadoop Summit

How Hadoop Makes the Natixis Pack More Efficient
How Hadoop Makes the Natixis Pack More Efficient How Hadoop Makes the Natixis Pack More Efficient
How Hadoop Makes the Natixis Pack More Efficient
DataWorks Summit/Hadoop Summit
 
Breaking the 1 Million OPS/SEC Barrier in HOPS Hadoop
Breaking the 1 Million OPS/SEC Barrier in HOPS HadoopBreaking the 1 Million OPS/SEC Barrier in HOPS Hadoop
Breaking the 1 Million OPS/SEC Barrier in HOPS Hadoop
DataWorks Summit/Hadoop Summit
 

More from DataWorks Summit/Hadoop Summit (20)

Running Apache Spark & Apache Zeppelin in Production
Running Apache Spark & Apache Zeppelin in ProductionRunning Apache Spark & Apache Zeppelin in Production
Running Apache Spark & Apache Zeppelin in Production
 
State of Security: Apache Spark & Apache Zeppelin
State of Security: Apache Spark & Apache ZeppelinState of Security: Apache Spark & Apache Zeppelin
State of Security: Apache Spark & Apache Zeppelin
 
Unleashing the Power of Apache Atlas with Apache Ranger
Unleashing the Power of Apache Atlas with Apache RangerUnleashing the Power of Apache Atlas with Apache Ranger
Unleashing the Power of Apache Atlas with Apache Ranger
 
Enabling Digital Diagnostics with a Data Science Platform
Enabling Digital Diagnostics with a Data Science PlatformEnabling Digital Diagnostics with a Data Science Platform
Enabling Digital Diagnostics with a Data Science Platform
 
Revolutionize Text Mining with Spark and Zeppelin
Revolutionize Text Mining with Spark and ZeppelinRevolutionize Text Mining with Spark and Zeppelin
Revolutionize Text Mining with Spark and Zeppelin
 
Double Your Hadoop Performance with Hortonworks SmartSense
Double Your Hadoop Performance with Hortonworks SmartSenseDouble Your Hadoop Performance with Hortonworks SmartSense
Double Your Hadoop Performance with Hortonworks SmartSense
 
Hadoop Crash Course
Hadoop Crash CourseHadoop Crash Course
Hadoop Crash Course
 
Data Science Crash Course
Data Science Crash CourseData Science Crash Course
Data Science Crash Course
 
Apache Spark Crash Course
Apache Spark Crash CourseApache Spark Crash Course
Apache Spark Crash Course
 
Dataflow with Apache NiFi
Dataflow with Apache NiFiDataflow with Apache NiFi
Dataflow with Apache NiFi
 
Schema Registry - Set you Data Free
Schema Registry - Set you Data FreeSchema Registry - Set you Data Free
Schema Registry - Set you Data Free
 
Building a Large-Scale, Adaptive Recommendation Engine with Apache Flink and ...
Building a Large-Scale, Adaptive Recommendation Engine with Apache Flink and ...Building a Large-Scale, Adaptive Recommendation Engine with Apache Flink and ...
Building a Large-Scale, Adaptive Recommendation Engine with Apache Flink and ...
 
Real-Time Anomaly Detection using LSTM Auto-Encoders with Deep Learning4J on ...
Real-Time Anomaly Detection using LSTM Auto-Encoders with Deep Learning4J on ...Real-Time Anomaly Detection using LSTM Auto-Encoders with Deep Learning4J on ...
Real-Time Anomaly Detection using LSTM Auto-Encoders with Deep Learning4J on ...
 
Mool - Automated Log Analysis using Data Science and ML
Mool - Automated Log Analysis using Data Science and MLMool - Automated Log Analysis using Data Science and ML
Mool - Automated Log Analysis using Data Science and ML
 
How Hadoop Makes the Natixis Pack More Efficient
How Hadoop Makes the Natixis Pack More Efficient How Hadoop Makes the Natixis Pack More Efficient
How Hadoop Makes the Natixis Pack More Efficient
 
HBase in Practice
HBase in Practice HBase in Practice
HBase in Practice
 
The Challenge of Driving Business Value from the Analytics of Things (AOT)
The Challenge of Driving Business Value from the Analytics of Things (AOT)The Challenge of Driving Business Value from the Analytics of Things (AOT)
The Challenge of Driving Business Value from the Analytics of Things (AOT)
 
Breaking the 1 Million OPS/SEC Barrier in HOPS Hadoop
Breaking the 1 Million OPS/SEC Barrier in HOPS HadoopBreaking the 1 Million OPS/SEC Barrier in HOPS Hadoop
Breaking the 1 Million OPS/SEC Barrier in HOPS Hadoop
 
From Regulatory Process Verification to Predictive Maintenance and Beyond wit...
From Regulatory Process Verification to Predictive Maintenance and Beyond wit...From Regulatory Process Verification to Predictive Maintenance and Beyond wit...
From Regulatory Process Verification to Predictive Maintenance and Beyond wit...
 
Backup and Disaster Recovery in Hadoop
Backup and Disaster Recovery in Hadoop Backup and Disaster Recovery in Hadoop
Backup and Disaster Recovery in Hadoop
 

Recently uploaded

Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
WSO2
 
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Victor Rentea
 

Recently uploaded (20)

Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Exploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with MilvusExploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with Milvus
 
Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...
Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...
Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfRising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
 
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamDEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
 
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
 
Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptx
 
WSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering DevelopersWSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering Developers
 
[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf
 
Six Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal OntologySix Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal Ontology
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor Presentation
 
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
 
Understanding the FAA Part 107 License ..
Understanding the FAA Part 107 License ..Understanding the FAA Part 107 License ..
Understanding the FAA Part 107 License ..
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challenges
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 

Apache Phoenix + Apache HBase

  • 1. 1 © Hortonworks Inc. 2011 – 2016. All Rights Reserved Apache Phoenix + Apache HBase An Enterprise Grade Data Warehouse Ankit Singhal , Rajeshbabu , Josh Elser June, 30 2016
  • 2. 2 © Hortonworks Inc. 2011 – 2016. All Rights Reserved About us!! – Committer and member of Apache Phoenix PMC – MTS at Hortonworks. Ankit Singhal – Committer and member of Apache Phoenix PMC – Committer in Apache HBase – MTS at Hortonworks. RajeshBabu – Committer in Apache Phoenix – Committer and Member of Apache Calcite PMC – MTS at Hortonworks. Josh Elser
  • 3. 3 © Hortonworks Inc. 2011 – 2016. All Rights Reserved Agenda Phoenix & HBase as an Enterprise Data Warehouse Use Cases Optimizations Phoenix Query server Q&A
  • 4. Data Warehouse. An EDW helps organize and aggregate analytical data from various functional domains and serves as a critical repository for an organization's operations. (Diagram: sources such as files, IoT data, and OLTP systems feed an ETL pipeline into a staging area, then the data warehouse and its marts, which serve visualization and BI tools.)
  • 5. Phoenix offerings and interoperability: ETL, Data Warehouse, Visualization & BI
  • 6. HBase & Phoenix. HBase, a distributed NoSQL store; Phoenix provides OLTP and analytics over HBase. (Architecture diagram: the application embeds the Phoenix client on top of the HBase client, which consults ZooKeeper and talks to RegionServers, each hosting table regions with the Phoenix coprocessor loaded, all backed by HDFS.)
  • 7. Open Source Data Warehouse. (Chart: hardware cost, commodity vs. specialized H/W, plotted against software/licensing cost, with SMP vs. MPP architectures; HBase + Phoenix sits with open-source MPP systems in the commodity-hardware, no-license-cost quadrant.)
  • 8. Phoenix & HBase as a Data Warehouse: Architecture. Runs on commodity H/W; true MPP; O/S and H/W flexibility; supports OLTP and ROLAP.
  • 9. Phoenix & HBase as a Data Warehouse: Scalability. Linear scalability for storage; linear scalability for memory; open to third-party storage.
  • 10. Phoenix & HBase as a Data Warehouse: Reliability. Highly available; replication for disaster recovery; fully ACID for data integrity.
  • 11. Phoenix & HBase as a Data Warehouse: Manageability. Performance tuning; data modeling & schema evolution; data pruning; online expansion or upgrade; data backup and recovery.
  • 12. Agenda: Phoenix & HBase as an Enterprise Data Warehouse; Use Cases
  • 13. Who uses Phoenix?
  • 14. Analytics use case (web advertising company). Functional requirements: create a single source of truth; cross-dimensional queries on 50+ dimensions and 80+ metrics; support fast Top-N queries. Non-functional requirements: less than 3-second response time for slice and dice; 250+ concurrent users; 100k+ analytics queries/day; highly available; linear scalability.
  • 15. Data warehouse capacity. Data size (ETL input): 24 TB/day of raw data system-wide; 25 billion impressions. HBase input (cube): 6 billion rows of aggregated data (100 GB/day). HBase cluster size: 65 nodes of HBase; 520 TB of disk; 4.1 TB of memory.
  • 16. Use case architecture. (Diagram: AdServer and click-tracking events are ingested through Apache Kafka; a real-time ETL path filters and aggregates into an in-memory store, while a batch path lands data in HDFS via Camus, runs ETL, and uploads the result into HBase; a Data API over HBase views serves the analytics UI.)
  • 17. Analytics data warehouse architecture. (Diagram: cubes are generated from HDFS via ETL and bulk-loaded into HBase, where they are stored, with backup and recovery; the Data API converts slice-and-dice requests from the analytics UI into SQL queries.)
  • 18. Time-series use case (Apache Ambari): the Ambari Metrics System (AMS). Functional requirements: store all cluster metrics collected every second (10k to 100k metrics/second); optimize storage and access for time-series data. Non-functional requirements: near-real-time response time; scalable; real-time ingestion.
  • 19. AMS architecture. (Diagram: Hadoop sinks and metric monitors on each host report to the Metric Collector, which stores metrics in HBase via Phoenix; the Ambari Server queries the collector.)
  • 20. Agenda: Phoenix & HBase as an Enterprise Data Warehouse; Use Cases; Optimizations
  • 21. Schema design: primary key design. The most important criterion driving overall query performance on a table. The primary key should be composed of the most-used predicate columns in queries. In most cases the leading part of the primary key should allow queries to be converted into point lookups or range scans in HBase.
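  The guidance above can be sketched as DDL; the table and column names here are hypothetical:

  ```sql
  -- Hypothetical metrics table: queries usually filter on metric_name and
  -- hostname, so they lead the composite primary key. A query such as
  --   SELECT * FROM metric_record WHERE metric_name = 'cpu' AND hostname = 'h1'
  -- then becomes a range scan (a point lookup if server_time is also given).
  CREATE TABLE metric_record (
      metric_name  VARCHAR NOT NULL,
      hostname     VARCHAR NOT NULL,
      server_time  DATE    NOT NULL,
      metric_sum   DOUBLE,
      metric_count UNSIGNED_INT
      CONSTRAINT pk PRIMARY KEY (metric_name, hostname, server_time)
  );
  ```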
  • 22. Schema design: salting vs. pre-splitting. Use salting to alleviate write hot-spotting: CREATE TABLE …( … ) SALT_BUCKETS = N, where the number of buckets should roughly equal the number of RegionServers. Otherwise, pre-split the table if you know the row-key data set: CREATE TABLE …( … ) SPLIT ON (…)
  • 23. Schema design: table properties. Use block encoding and/or compression for better performance: CREATE TABLE …( … ) DATA_BLOCK_ENCODING = 'FAST_DIFF', COMPRESSION = 'SNAPPY'. Use region replication for read high availability: CREATE TABLE …( … ) "REGION_REPLICATION" = "2"
  • 24. Schema design: table properties. Set UPDATE_CACHE_FREQUENCY to a larger value to avoid frequent round trips to the server for metadata updates: CREATE TABLE …( … ) UPDATE_CACHE_FREQUENCY = 300000
  • 25. Schema design: column families. Divide columns into multiple column families if some columns are rarely accessed; HBase reads only the files of the column families referenced in the query, reducing I/O. (Diagram: row key pk1, pk2; CF1 holds the frequently accessed columns, CF2 the rarely accessed ones, Col1 through Col7.)
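  A minimal sketch of the idea, with hypothetical names; in Phoenix, a column's family is chosen by prefixing the column name:

  ```sql
  -- Family 'a' holds frequently accessed columns; 'b' holds rarely
  -- accessed ones. A query touching only a.total and a.status then
  -- reads only the store files of family 'a'.
  CREATE TABLE orders (
      order_id BIGINT NOT NULL PRIMARY KEY,
      a.total     DECIMAL,
      a.status    VARCHAR,
      b.audit_log VARCHAR
  );
  -- SELECT a.total FROM orders WHERE order_id = 42;  -- reads family 'a' only
  ```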
  • 26. Secondary indexes. Global indexes, optimized for read-heavy use cases: CREATE INDEX idx ON table(…). Local indexes, optimized for write-heavy and space-constrained use cases: CREATE LOCAL INDEX idx ON table(…). Functional indexes, which let you index arbitrary expressions: CREATE INDEX UPPER_NAME_INDEX ON EMP(UPPER(FIRSTNAME || ' ' || LASTNAME))
  • 27. Secondary indexes. Use covered indexes to scan the index table efficiently instead of the primary table: CREATE INDEX idx ON table(…) INCLUDE(…). Pass an index hint to guide the query optimizer to the right index for a query: SELECT /*+ INDEX(<table> <index>) */ …
  • 28. Row timestamp column. Maps the native HBase row timestamp to a Phoenix column. Leverages HBase optimizations such as setting the minimum and maximum time range on scans, entirely skipping store files that fall outside that range. Perfect for time-series use cases. Syntax: CREATE TABLE …(CREATED_DATE DATE NOT NULL, … CONSTRAINT PK PRIMARY KEY(CREATED_DATE ROW_TIMESTAMP, …))
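  Filled out, the syntax above might look like this (table and column names are hypothetical):

  ```sql
  -- created_date is mapped onto the HBase cell timestamp; scans with a
  -- predicate on it can skip whole store files outside the time range.
  CREATE TABLE server_metrics (
      created_date DATE    NOT NULL,
      hostname     VARCHAR NOT NULL,
      cpu_pct      DOUBLE
      CONSTRAINT pk PRIMARY KEY (created_date ROW_TIMESTAMP, hostname)
  );
  ```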
  • 29. Use of statistics. (Diagram: without statistics, the client parallelizes one scan per region, A, F, L, R; with statistics, each region is further divided into guidepost chunks, A, C, F, I, L, O, R, U, so the client can run many more scans in parallel.)
  • 30. Skip scan. Phoenix supports a skip scan to jump directly to matching keys when the query has key sets in the predicate: SELECT * FROM METRIC_RECORD WHERE METRIC_NAME LIKE 'abc%' AND HOSTNAME IN ('host1', 'host2'); Explain plan: CLIENT 1-CHUNK PARALLEL 1-WAY SKIP SCAN ON 2 RANGES OVER METRIC_RECORD ['abc','host1'] - ['abd','host2']. (Diagram: the client skip-scans across regions 1–4 on RegionServers RS1–RS3.)
  • 31. Join optimizations. Hash join: outperforms other join algorithms when one of the relations is small enough that the records matching the predicate fit in memory. Sort-merge join: use when the relations are very large. NO_STAR_JOIN hint: for multi-way inner joins Phoenix applies a star-join optimization by default; use this hint when the overall size of all right-hand-side tables would exceed the memory size limit. NO_CHILD_PARENT_OPTIMIZATION hint: prevents use of the child/parent join optimization.
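  As a sketch, join hints are embedded as comments after SELECT (the table names below are hypothetical); USE_SORT_MERGE_JOIN is the hint Phoenix provides to force the sort-merge algorithm:

  ```sql
  -- Force a sort-merge join when both relations are too big to hash in memory
  SELECT /*+ USE_SORT_MERGE_JOIN */ o.order_id, c.name
  FROM orders o JOIN customers c ON o.customer_id = c.customer_id;

  -- Disable the default star-join plan when the combined right-hand-side
  -- tables would exceed the memory limit
  SELECT /*+ NO_STAR_JOIN */ f.metric, d1.label, d2.label
  FROM facts f
  JOIN dim1 d1 ON f.d1_id = d1.id
  JOIN dim2 d2 ON f.d2_id = d2.id;
  ```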
  • 32. Optimize writes. UPSERT VALUES: call it multiple times before commit to batch mutations; use a prepared statement when you run the same statement repeatedly. UPSERT SELECT: configure phoenix.mutate.batchSize based on row size; set auto-commit to true to write scan results directly to HBase; set auto-commit to true when running UPSERT SELECT on the same table so that writes happen on the server.
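  In sqlline terms, the UPSERT VALUES batching above might look like this (the table and values are hypothetical); with auto-commit off, the upserts are buffered client-side and flushed as one batch at commit:

  ```sql
  -- !autocommit off   (sqlline; connection.setAutoCommit(false) in JDBC)
  UPSERT INTO events (id, ts, payload) VALUES (1, CURRENT_DATE(), 'started');
  UPSERT INTO events (id, ts, payload) VALUES (2, CURRENT_DATE(), 'running');
  UPSERT INTO events (id, ts, payload) VALUES (3, CURRENT_DATE(), 'done');
  -- !commit           (one batched round trip to the RegionServers)
  ```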
  • 33. Some important hints: SERIAL SCAN, RANGE SCAN; SERIAL; SMALL SCAN
  • 34. Additional references. For more optimizations, see http://phoenix.apache.org/tuning.html and https://hbase.apache.org/book.html#performance
  • 35. Agenda: Phoenix & HBase as an Enterprise Data Warehouse; Use Cases; Optimizations; Phoenix Query Server
  • 36. Apache Phoenix Query Server. A standalone, optional service that proxies user requests to HBase/Phoenix. Reference client implementation via JDBC ("thick" versus "thin"). First introduced in Apache Phoenix 4.4.0. Built on Apache Calcite's Avatica, "a framework for building database drivers".
  • 37. Traditional Apache Phoenix RPC model. (Diagram: the application embeds the thick Phoenix/HBase client, which talks directly to ZooKeeper and to the RegionServers backed by HDFS.)
  • 38. Query Server model. (Diagram: the application talks to the Query Server, which hosts the Phoenix/HBase client and communicates with ZooKeeper and the RegionServers on the application's behalf.)
  • 39. Query Server technology. HTTP server and wire API definition. Pluggable serialization (Google Protocol Buffers). "Thin" JDBC driver (over HTTP). Other goodies: pluggable metrics system; TCK (technology compatibility kit); SPNEGO for Kerberos authentication; horizontally scalable with load balancing.
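  A thin-client connection string, as a sketch (the hostname is a placeholder; 8765 is the Query Server's default port):

  ```
  jdbc:phoenix:thin:url=http://queryserver.example.com:8765;serialization=PROTOBUF
  ```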
  • 40. Query Server clients: client enablement. Go database/sql/driver: https://github.com/Boostport/avatica. .NET driver: https://github.com/Azure/hdinsight-phoenix-sharp and https://www.nuget.org/packages/Microsoft.Phoenix.Client/1.0.0-preview. ODBC: built by http://www.simba.com/, also available from Hortonworks. Python DB API v2.0 (not yet "battle tested"): https://bitbucket.org/lalinsky/python-phoenixdb
  • 41. Agenda: Phoenix & HBase as an Enterprise Data Warehouse; Use Cases; Optimizations; Phoenix Query Server; Q&A
  • 42. Phoenix & HBase. We hope to see you all migrating to Phoenix & HBase, and we look forward to your questions on the user mailing lists. Get involved: user@phoenix.apache.org, user@hbase.apache.org. You can reach us at ankit@apache.org, rajeshbabu@apache.org, elserj@apache.org
  • 43. Thank You