Hive & HBase For Transaction Processing
Page 1
Alan Gates
@alanfgates
Agenda
• Our goal
– Combine Apache Hive, HBase, Phoenix, and Calcite to build a single data store that can be used for analytics and transaction processing
• But before we get to that we need to consider
– Some things happening in Hive
– Some things happening in Phoenix
A Brief History of Hive
• Initial goal was to make it easy to execute MapReduce using a familiar language: SQL
– Most queries took minutes or hours
– Primarily used for batch ETL jobs
• Since 0.11, much has been done to support interactive and ad hoc queries
– Many new features focused on improving performance: ORC and Parquet, Tez and Spark, vectorization
– As of Hive 0.14 (November 2014), TPC-DS query 3 (star-join, group, order, limit) using ORC, Tez, and vectorization finishes in 9s at 200GB scale and 32s at 30TB scale
– Still a ~2–5 second minimum latency for all queries
• Ongoing performance work with goal of reaching sub-second response time
– Continued investment in vectorization
– LLAP
– Using Apache HBase for metastore
LLAP = Live Long And Process
LLAP: Why?
• It is hard to be fast and flexible in Tez
– When a SQL session starts, a Tez AM is spun up (first-query cost)
– For subsequent queries, Tez containers can be either
– pre-allocated: fast but not flexible
– allocated and released per query: flexible, but with start-up cost for every query
• No caching of data between queries
– Even if data is in the OS cache, much of the IO cost is deserialization and vector marshaling, which is not shared
LLAP: What
• LLAP is a node-resident daemon process
– Low latency by reducing setup cost
– Multi-threaded engine that runs smaller tasks for a query, including reads, filters, and some joins
– Regular Tez tasks are still used for larger shuffles and other operators
• LLAP has an in-memory columnar data cache
– High-throughput IO using an async IO elevator, with a dedicated thread and core per disk
– Low latency by serving data from the in-memory (off-heap) cache instead of going to HDFS
– Data stored in columnar format for vectorization, irrespective of the underlying file type
– Security enforced across queries and users
• Uses YARN for resource management
[Diagram: a node running an LLAP process with an in-memory columnar cache; query fragments (LLAP tasks for a query) run inside the LLAP process and read from HDFS]
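The cache idea above can be sketched in a few lines: pay the deserialization cost once per (file, stripe, column), then serve the decoded vector from memory on later queries. This is a toy illustration of the concept, not LLAP's actual API; all names here are hypothetical.

```python
# Toy sketch of LLAP-style caching: the expensive decode step runs only on
# the first access to a (file, stripe, column) key; later queries reuse the
# already-deserialized column vector.

class ColumnarCache:
    def __init__(self):
        self._cache = {}          # (file, stripe, column) -> decoded vector
        self.misses = 0

    def read_column(self, file, stripe, column, deserialize):
        key = (file, stripe, column)
        if key not in self._cache:
            self.misses += 1                  # only on first access
            self._cache[key] = deserialize()  # expensive decode happens once
        return self._cache[key]

# Two queries touching the same column share one decode.
cache = ColumnarCache()
decode = lambda: [1, 2, 3]                    # stand-in for ORC column decoding
v1 = cache.read_column("f.orc", 0, "price", decode)
v2 = cache.read_column("f.orc", 0, "price", decode)
assert v1 is v2 and cache.misses == 1
```

Note that what is cached is the decoded, vectorization-ready form, which is why even an OS page cache hit does not give the same benefit.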
LLAP: What
[Diagram: LLAP processes run on multiple nodes, each with an in-memory columnar cache, running read tasks for a query against HDFS and accelerating the Tez tasks of a Hive query]
LLAP: Is and Is Not
• It is not MPP
– Data is not shuffled between LLAP nodes (except in limited cases)
• It is not a replacement for Tez or Spark
– The configured engine is still used to launch tasks for post-shuffle operations (e.g. hash joins, distributed aggregations)
• It is not required; users can still use Hive without installing LLAP daemons
• It is a map server, or a set of standing map tasks
• It is currently under development on the llap branch
HBase Metastore: Why?
[ER diagram of the current RDBMS-backed metastore schema: roughly 40 normalized tables, each with its own indexes, including DBS, TBLS, PARTITIONS, SDS, CDS, COLUMNS_V2, SERDES, IDXS, ROLES, FUNCS, TYPES, the various *_PARAMS, *_PRIVS, and *_KEYS tables, TAB_COL_STATS / PART_COL_STATS, SKEWED_* tables, MASTER_KEYS, DELEGATION_TOKENS, and VERSION]
HBase Metastore: Why?
More than 700 metastore queries to plan TPC-DS query 27!
HBase Metastore: Why?
• Object-relational modeling is an impedance mismatch
• The need to work across different DBs limits tuning opportunities
• No caching of catalog objects or stats in HiveServer2 or the Hive metastore
• Hadoop nodes cannot contact the RDBMS directly due to scale issues
• Solution: use HBase
– Can store objects directly, no need to normalize
– Already scales, performs, etc.
– Can store additional data not stored today due to RDBMS capacity limitations
– Can access the metadata from the cluster (e.g. LLAP, Tez AM)
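"Store objects directly, no need to normalize" means one get replaces a multi-way join. A minimal sketch of the idea, with a plain dict standing in for an HBase table and JSON standing in for whatever serialization the metastore would actually use; the key format and field names here are illustrative only.

```python
# Denormalized layout sketch: one cell holds the whole serialized table
# object, instead of rows spread across TBLS, COLUMNS_V2, SDS, SERDES, etc.
import json

hbase_tbls = {}   # stand-in for an HBase table: row key -> bytes

table_obj = {
    "dbName": "default", "tableName": "clicks",
    "cols": [{"name": "ts", "type": "bigint"},
             {"name": "url", "type": "string"}],
    "sd": {"location": "/warehouse/clicks", "inputFormat": "orc"},
}

# Write: a single put of the serialized object.
hbase_tbls["default.clicks"] = json.dumps(table_obj).encode("utf-8")

# Read: a single get plus one deserialize -- no joins.
fetched = json.loads(hbase_tbls["default.clicks"])
assert fetched["sd"]["location"] == "/warehouse/clicks"
```

The trade-off is the usual one for denormalization: reads of a whole object are cheap, while ad hoc relational queries over the metadata become harder.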
But...
• HBase does not have transactions, and the metastore needs them
– Tephra, Omid 2 (Yahoo), and others are working on this
• HBase is hard to install and administer
– Yes, we will need to improve this
– We will also need an embedded option for test/POC setups to keep HBase from becoming a barrier to adoption
• Basically, any work we need to do to HBase for this is good, since it benefits all HBase users
HBase Metastore: How
• HBaseStore, a new implementation of RawStore that stores data in HBase
• Not the default; users are still free to use an RDBMS
• Fewer than 10 tables in HBase
– DBS, TBLS, PARTITIONS, ... – basically one per object type
– Common partition data factored out to significantly reduce size
• Layout highly optimized for SELECT and DML queries; longer operations moved into DDL (e.g. grant)
• Extensive caching
– Of catalog objects for the length of a query
– Of aggregated stats across queries and users
• Ongoing work in the hbase-metastore branch
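A layout "optimized for SELECT and DML" relies on HBase keeping rows sorted by key, so that the dominant planning-time read (fetch all partitions of a table) becomes a single range scan. The sketch below imitates that with a sorted list; the `db.table.partition` key format is an illustrative assumption, not the actual HBaseStore layout.

```python
# Scan-friendly row-key sketch: prefixing partition keys with "db.table."
# turns "get all partitions of a table" into one contiguous range scan.
import bisect

rows = sorted([
    "default.clicks.ds=2015-01-01",
    "default.clicks.ds=2015-01-02",
    "default.views.ds=2015-01-01",
])

def scan_partitions(rows, db, table):
    prefix = f"{db}.{table}."
    start = bisect.bisect_left(rows, prefix)  # seek to the first key >= prefix
    out = []
    for key in rows[start:]:
        if not key.startswith(prefix):
            break                             # past the prefix range: stop
        out.append(key)
    return out

assert scan_partitions(rows, "default", "clicks") == [
    "default.clicks.ds=2015-01-01",
    "default.clicks.ds=2015-01-02",
]
```

Compare this with the RDBMS schema, where the same lookup touches PARTITIONS, PARTITION_KEY_VALS, SDS, and more.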
Agenda
• Our goal
– Combine Apache Hive, HBase, Phoenix, and Calcite to build a single data store that can be used for analytics and transaction processing
• But before we get to that we need to consider
– Some things happening in Hive
– Some things happening in Phoenix
Apache Phoenix: Putting SQL Back in NoSQL
• SQL layer on top of HBase
• Originally oriented toward transaction processing
• Moving to add more analytics type operators
– Adding multiple join implementations
– Requests for OLAP functions (PHOENIX-154)
• Working on adding transactions (PHOENIX-1674)
• Moving to Apache Calcite for optimization (PHOENIX-1488)
Agenda
• Our goal
– Combine Apache Hive, HBase, Phoenix, and Calcite to build a single data store that can be used for analytics and transaction processing
• But before we get to that we need to consider
– Some things happening in Hive
– Some things happening in Phoenix
What If?
• We could share one O/JDBC driver?
• We could share one SQL dialect?
• Phoenix could leverage the extensive analytics functionality in Hive without re-inventing it?
• Users could access their transactional and analytics data in single SQL operations?
How?
• Insight #1: LLAP is a storage-plus-operations server for Hive; we can swap it out for other implementations
• Insight #2: Tez and Spark can do post-shuffle operations (hash join, etc.) with LLAP or HBase
• Insight #3: Calcite (used by both Hive and Phoenix) is built specifically to integrate disparate data storage systems
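Insight #1 is essentially a plugin boundary: if LLAP and HBase sit behind one storage interface, the planner can route a table scan to either. A toy sketch of that shape; the class and method names are made up for illustration and are not Hive's actual storage-handler API.

```python
# Pluggable-storage sketch: the planner sees only a common scan() interface
# and does not care whether a table lives in LLAP or HBase.

class LlapStore:
    def scan(self, table):
        return f"llap-scan({table})"       # columnar, analytics-friendly

class HBaseStore:
    def scan(self, table):
        return f"hbase-scan({table})"      # keyed, transaction-friendly

# Which backend a table uses is fixed at create-table time.
CATALOG = {"sales_history": LlapStore(), "orders": HBaseStore()}

def plan_scan(table):
    return CATALOG[table].scan(table)

assert plan_scan("sales_history") == "llap-scan(sales_history)"
assert plan_scan("orders") == "hbase-scan(orders)"
```

This is the same shape the Vision slide describes: one front end, two interchangeable storage backends.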
Vision
• User picks the storage location for a table (LLAP or HBase) at create-table time
• Transactions are more efficient in HBase tables but work in both
• Analytics are more efficient in LLAP tables but work in both
• Queries that require a shuffle use Tez or Spark for post-shuffle operators
[Diagram: a JDBC server accepts queries; Calcite is used for planning and Phoenix for execution; tables live in HBase or LLAP on cluster nodes backed by HDFS]
Hurdles
• Need to integrate types/data representation
• Need to integrate transaction management
• Work to do in Calcite to optimize transactional queries well

Hive 3 New Horizons DataWorks Summit Melbourne February 2019
 
What is New in Apache Hive 3.0?
What is New in Apache Hive 3.0?What is New in Apache Hive 3.0?
What is New in Apache Hive 3.0?
 
Hive acid and_2.x new_features
Hive acid and_2.x new_featuresHive acid and_2.x new_features
Hive acid and_2.x new_features
 
Sub-second-sql-on-hadoop-at-scale
Sub-second-sql-on-hadoop-at-scaleSub-second-sql-on-hadoop-at-scale
Sub-second-sql-on-hadoop-at-scale
 
What is new in Apache Hive 3.0?
What is new in Apache Hive 3.0?What is new in Apache Hive 3.0?
What is new in Apache Hive 3.0?
 
LLAP: Sub-Second Analytical Queries in Hive
LLAP: Sub-Second Analytical Queries in HiveLLAP: Sub-Second Analytical Queries in Hive
LLAP: Sub-Second Analytical Queries in Hive
 
LLAP: Sub-Second Analytical Queries in Hive
LLAP: Sub-Second Analytical Queries in HiveLLAP: Sub-Second Analytical Queries in Hive
LLAP: Sub-Second Analytical Queries in Hive
 

More from DataWorks Summit

Data Science Crash Course
Data Science Crash CourseData Science Crash Course
Data Science Crash Course
DataWorks Summit
 
Floating on a RAFT: HBase Durability with Apache Ratis
Floating on a RAFT: HBase Durability with Apache RatisFloating on a RAFT: HBase Durability with Apache Ratis
Floating on a RAFT: HBase Durability with Apache Ratis
DataWorks Summit
 
Tracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFi
Tracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFiTracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFi
Tracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFi
DataWorks Summit
 
HBase Tales From the Trenches - Short stories about most common HBase operati...
HBase Tales From the Trenches - Short stories about most common HBase operati...HBase Tales From the Trenches - Short stories about most common HBase operati...
HBase Tales From the Trenches - Short stories about most common HBase operati...
DataWorks Summit
 
Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...
Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...
Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...
DataWorks Summit
 
Managing the Dewey Decimal System
Managing the Dewey Decimal SystemManaging the Dewey Decimal System
Managing the Dewey Decimal System
DataWorks Summit
 
Practical NoSQL: Accumulo's dirlist Example
Practical NoSQL: Accumulo's dirlist ExamplePractical NoSQL: Accumulo's dirlist Example
Practical NoSQL: Accumulo's dirlist Example
DataWorks Summit
 
HBase Global Indexing to support large-scale data ingestion at Uber
HBase Global Indexing to support large-scale data ingestion at UberHBase Global Indexing to support large-scale data ingestion at Uber
HBase Global Indexing to support large-scale data ingestion at Uber
DataWorks Summit
 
Scaling Cloud-Scale Translytics Workloads with Omid and Phoenix
Scaling Cloud-Scale Translytics Workloads with Omid and PhoenixScaling Cloud-Scale Translytics Workloads with Omid and Phoenix
Scaling Cloud-Scale Translytics Workloads with Omid and Phoenix
DataWorks Summit
 
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFi
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFiBuilding the High Speed Cybersecurity Data Pipeline Using Apache NiFi
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFi
DataWorks Summit
 
Supporting Apache HBase : Troubleshooting and Supportability Improvements
Supporting Apache HBase : Troubleshooting and Supportability ImprovementsSupporting Apache HBase : Troubleshooting and Supportability Improvements
Supporting Apache HBase : Troubleshooting and Supportability Improvements
DataWorks Summit
 
Security Framework for Multitenant Architecture
Security Framework for Multitenant ArchitectureSecurity Framework for Multitenant Architecture
Security Framework for Multitenant Architecture
DataWorks Summit
 
Presto: Optimizing Performance of SQL-on-Anything Engine
Presto: Optimizing Performance of SQL-on-Anything EnginePresto: Optimizing Performance of SQL-on-Anything Engine
Presto: Optimizing Performance of SQL-on-Anything Engine
DataWorks Summit
 
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...
DataWorks Summit
 
Extending Twitter's Data Platform to Google Cloud
Extending Twitter's Data Platform to Google CloudExtending Twitter's Data Platform to Google Cloud
Extending Twitter's Data Platform to Google Cloud
DataWorks Summit
 
Event-Driven Messaging and Actions using Apache Flink and Apache NiFi
Event-Driven Messaging and Actions using Apache Flink and Apache NiFiEvent-Driven Messaging and Actions using Apache Flink and Apache NiFi
Event-Driven Messaging and Actions using Apache Flink and Apache NiFi
DataWorks Summit
 
Securing Data in Hybrid on-premise and Cloud Environments using Apache Ranger
Securing Data in Hybrid on-premise and Cloud Environments using Apache RangerSecuring Data in Hybrid on-premise and Cloud Environments using Apache Ranger
Securing Data in Hybrid on-premise and Cloud Environments using Apache Ranger
DataWorks Summit
 
Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...
Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...
Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...
DataWorks Summit
 
Computer Vision: Coming to a Store Near You
Computer Vision: Coming to a Store Near YouComputer Vision: Coming to a Store Near You
Computer Vision: Coming to a Store Near You
DataWorks Summit
 
Big Data Genomics: Clustering Billions of DNA Sequences with Apache Spark
Big Data Genomics: Clustering Billions of DNA Sequences with Apache SparkBig Data Genomics: Clustering Billions of DNA Sequences with Apache Spark
Big Data Genomics: Clustering Billions of DNA Sequences with Apache Spark
DataWorks Summit
 

More from DataWorks Summit (20)

Data Science Crash Course
Data Science Crash CourseData Science Crash Course
Data Science Crash Course
 
Floating on a RAFT: HBase Durability with Apache Ratis
Floating on a RAFT: HBase Durability with Apache RatisFloating on a RAFT: HBase Durability with Apache Ratis
Floating on a RAFT: HBase Durability with Apache Ratis
 
Tracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFi
Tracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFiTracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFi
Tracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFi
 
HBase Tales From the Trenches - Short stories about most common HBase operati...
HBase Tales From the Trenches - Short stories about most common HBase operati...HBase Tales From the Trenches - Short stories about most common HBase operati...
HBase Tales From the Trenches - Short stories about most common HBase operati...
 
Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...
Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...
Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...
 
Managing the Dewey Decimal System
Managing the Dewey Decimal SystemManaging the Dewey Decimal System
Managing the Dewey Decimal System
 
Practical NoSQL: Accumulo's dirlist Example
Practical NoSQL: Accumulo's dirlist ExamplePractical NoSQL: Accumulo's dirlist Example
Practical NoSQL: Accumulo's dirlist Example
 
HBase Global Indexing to support large-scale data ingestion at Uber
HBase Global Indexing to support large-scale data ingestion at UberHBase Global Indexing to support large-scale data ingestion at Uber
HBase Global Indexing to support large-scale data ingestion at Uber
 
Scaling Cloud-Scale Translytics Workloads with Omid and Phoenix
Scaling Cloud-Scale Translytics Workloads with Omid and PhoenixScaling Cloud-Scale Translytics Workloads with Omid and Phoenix
Scaling Cloud-Scale Translytics Workloads with Omid and Phoenix
 
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFi
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFiBuilding the High Speed Cybersecurity Data Pipeline Using Apache NiFi
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFi
 
Supporting Apache HBase : Troubleshooting and Supportability Improvements
Supporting Apache HBase : Troubleshooting and Supportability ImprovementsSupporting Apache HBase : Troubleshooting and Supportability Improvements
Supporting Apache HBase : Troubleshooting and Supportability Improvements
 
Security Framework for Multitenant Architecture
Security Framework for Multitenant ArchitectureSecurity Framework for Multitenant Architecture
Security Framework for Multitenant Architecture
 
Presto: Optimizing Performance of SQL-on-Anything Engine
Presto: Optimizing Performance of SQL-on-Anything EnginePresto: Optimizing Performance of SQL-on-Anything Engine
Presto: Optimizing Performance of SQL-on-Anything Engine
 
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...
 
Extending Twitter's Data Platform to Google Cloud
Extending Twitter's Data Platform to Google CloudExtending Twitter's Data Platform to Google Cloud
Extending Twitter's Data Platform to Google Cloud
 
Event-Driven Messaging and Actions using Apache Flink and Apache NiFi
Event-Driven Messaging and Actions using Apache Flink and Apache NiFiEvent-Driven Messaging and Actions using Apache Flink and Apache NiFi
Event-Driven Messaging and Actions using Apache Flink and Apache NiFi
 
Securing Data in Hybrid on-premise and Cloud Environments using Apache Ranger
Securing Data in Hybrid on-premise and Cloud Environments using Apache RangerSecuring Data in Hybrid on-premise and Cloud Environments using Apache Ranger
Securing Data in Hybrid on-premise and Cloud Environments using Apache Ranger
 
Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...
Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...
Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...
 
Computer Vision: Coming to a Store Near You
Computer Vision: Coming to a Store Near YouComputer Vision: Coming to a Store Near You
Computer Vision: Coming to a Store Near You
 
Big Data Genomics: Clustering Billions of DNA Sequences with Apache Spark
Big Data Genomics: Clustering Billions of DNA Sequences with Apache SparkBig Data Genomics: Clustering Billions of DNA Sequences with Apache Spark
Big Data Genomics: Clustering Billions of DNA Sequences with Apache Spark
 

Recently uploaded

Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
Ana-Maria Mihalceanu
 
Quantum Computing: Current Landscape and the Future Role of APIs
Quantum Computing: Current Landscape and the Future Role of APIsQuantum Computing: Current Landscape and the Future Role of APIs
Quantum Computing: Current Landscape and the Future Role of APIs
Vlad Stirbu
 
PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)
Ralf Eggert
 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
James Anderson
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
OnBoard
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
Thijs Feryn
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
91mobiles
 
Pushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 daysPushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 days
Adtran
 
The Metaverse and AI: how can decision-makers harness the Metaverse for their...
The Metaverse and AI: how can decision-makers harness the Metaverse for their...The Metaverse and AI: how can decision-makers harness the Metaverse for their...
The Metaverse and AI: how can decision-makers harness the Metaverse for their...
Jen Stirrup
 
RESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for studentsRESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for students
KAMESHS29
 
Assure Contact Center Experiences for Your Customers With ThousandEyes
Assure Contact Center Experiences for Your Customers With ThousandEyesAssure Contact Center Experiences for Your Customers With ThousandEyes
Assure Contact Center Experiences for Your Customers With ThousandEyes
ThousandEyes
 
Climate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing DaysClimate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing Days
Kari Kakkonen
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
DianaGray10
 
Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?
Nexer Digital
 
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
UiPathCommunity
 
Enhancing Performance with Globus and the Science DMZ
Enhancing Performance with Globus and the Science DMZEnhancing Performance with Globus and the Science DMZ
Enhancing Performance with Globus and the Science DMZ
Globus
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
ControlCase
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
Kari Kakkonen
 

Recently uploaded (20)

Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
 
Quantum Computing: Current Landscape and the Future Role of APIs
Quantum Computing: Current Landscape and the Future Role of APIsQuantum Computing: Current Landscape and the Future Role of APIs
Quantum Computing: Current Landscape and the Future Role of APIs
 
PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)
 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
 
Pushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 daysPushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 days
 
The Metaverse and AI: how can decision-makers harness the Metaverse for their...
The Metaverse and AI: how can decision-makers harness the Metaverse for their...The Metaverse and AI: how can decision-makers harness the Metaverse for their...
The Metaverse and AI: how can decision-makers harness the Metaverse for their...
 
RESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for studentsRESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for students
 
Assure Contact Center Experiences for Your Customers With ThousandEyes
Assure Contact Center Experiences for Your Customers With ThousandEyesAssure Contact Center Experiences for Your Customers With ThousandEyes
Assure Contact Center Experiences for Your Customers With ThousandEyes
 
Climate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing DaysClimate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing Days
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
 
Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?
 
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
 
Enhancing Performance with Globus and the Science DMZ
Enhancing Performance with Globus and the Science DMZEnhancing Performance with Globus and the Science DMZ
Enhancing Performance with Globus and the Science DMZ
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
 

LLAP: What
Page 6Hive & HBase For Transaction Processing
• LLAP is a node-resident daemon process
– Low latency by reducing setup cost
– Multi-threaded engine that runs smaller tasks for a query, including reads, filters, and some joins
– Regular Tez tasks are still used for larger shuffles and other operators
• LLAP has an in-memory columnar data cache
– High-throughput IO using an async IO elevator with a dedicated thread and core per disk
– Low latency by serving data from the in-memory (off-heap) cache instead of going to HDFS
– Data is stored in columnar format for vectorization irrespective of the underlying file type
– Security is enforced across queries and users
• Uses YARN for resource management
[Diagram: an LLAP process on a node runs a query fragment against its in-memory columnar cache, reading from HDFS]
LLAP: What
Page 7Hive & HBase For Transaction Processing
[Diagram: LLAP processes run on multiple nodes, accelerating Tez tasks for a Hive query; each node's LLAP process runs read tasks against its in-memory columnar cache, backed by HDFS]
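The cross-query caching idea above can be sketched as a node-local cache of deserialized column vectors keyed by file, stripe, and column, so a second query touching the same data skips both the HDFS read and the deserialization cost. This is a toy illustration, not LLAP's actual implementation; all names and the loader are invented:

```python
from threading import Lock

class ColumnarCache:
    """Toy node-local cache of deserialized column vectors."""

    def __init__(self, loader):
        self._loader = loader      # stands in for the ORC/HDFS read + deserialize path
        self._cache = {}
        self._lock = Lock()
        self.hits = 0
        self.misses = 0

    def get_column(self, file_path, stripe_id, column_name):
        key = (file_path, stripe_id, column_name)
        with self._lock:
            if key in self._cache:
                self.hits += 1
                return self._cache[key]
        # Miss: pay the IO and deserialization cost once, then share the result.
        vector = self._loader(file_path, stripe_id, column_name)
        with self._lock:
            self.misses += 1
            self._cache[key] = vector
        return vector

# Fake loader standing in for reading a column from an ORC stripe on HDFS.
def fake_loader(file_path, stripe_id, column_name):
    return [f"{column_name}-{i}" for i in range(3)]

cache = ColumnarCache(fake_loader)
cache.get_column("/warehouse/t/part-0", 0, "ss_item_sk")   # first query: miss
cache.get_column("/warehouse/t/part-0", 0, "ss_item_sk")   # second query: hit
```

The point of the sketch is the second call: even when the OS page cache already holds the bytes, only a shared deserialized representation avoids repeating the marshaling cost per query.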
LLAP: Is and Is Not
Page 8Hive & HBase For Transaction Processing
• It is not MPP
– Data is not shuffled between LLAP nodes (except in limited cases)
• It is not a replacement for Tez or Spark
– The configured engine is still used to launch tasks for post-shuffle operations (e.g. hash joins, distributed aggregations)
• It is not required; users can still use Hive without installing LLAP daemons
• It is a map server, or a set of standing map tasks
• It is currently under development on the llap branch
HBase Metastore: Why?
Page 9Hive & HBase For Transaction Processing
HBase Metastore: Why?
Page 10Hive & HBase For Transaction Processing
[Diagram: the metastore's full relational schema, dozens of interlinked tables such as DBS, TBLS, PARTITIONS, SDS, SERDES, COLUMNS_V2, the privilege and role tables, and the column-statistics tables]
HBase Metastore: Why?
Page 11Hive & HBase For Transaction Processing
> 700 metastore queries to plan TPC-DS query 27!!!
HBase Metastore: Why?
Page 12Hive & HBase For Transaction Processing
• Object Relational Mapping is an impedance mismatch
• The need to work across different DBs limits tuning opportunities
• No caching of catalog objects or stats in HiveServer2 or the Hive metastore
• Hadoop nodes cannot contact the RDBMS directly due to scale issues
• Solution: use HBase
– Can store objects directly, no need to normalize
– Already scales, performs, etc.
– Can store additional data not stored today due to RDBMS capacity limitations
– Can access the metadata from the cluster (e.g. LLAP, Tez AM)
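The "store objects directly, no need to normalize" point can be made concrete with a toy sketch. The dictionary, key format, and JSON serialization below are illustrative assumptions only — the real HBaseStore uses protobuf-encoded cells — but they show how a whole table object travels as one row instead of being reassembled by joins:

```python
import json

# Toy key-value store standing in for HBase (hypothetical; the real
# metastore serializes objects into HBase cells, not JSON strings).
store = {}

def put_table(db, table_obj):
    # One row per table, keyed by "db.table"; the whole object --
    # columns, parameters, storage details -- travels together.
    store[f"{db}.{table_obj['name']}"] = json.dumps(table_obj)

def get_table(db, name):
    # A single point lookup replaces the many normalized-table
    # queries an ORM would issue against an RDBMS schema.
    return json.loads(store[f"{db}.{name}"])

put_table("default", {
    "name": "clicks",
    "cols": [("ts", "bigint"), ("url", "string")],
    "params": {"orc.compress": "ZLIB"},
})
t = get_table("default", "clicks")
```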
But...
Page 13Hive & HBase For Transaction Processing
• HBase does not have transactions – the metastore needs them
– Tephra, Omid 2 (Yahoo), and others are working on this
• HBase is hard to install and administer
– Yes, we will need to improve this
– We will also need an embedded option for test/POC setups to keep HBase from becoming a barrier to adoption
• Any work we do on HBase for this is a win, since it benefits all HBase users
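To see why a transaction layer is needed, consider what HBase does give you: atomic check-and-put on a single row. The in-memory sketch below (names and data are invented for illustration) mimics that primitive; it is enough for optimistic updates to one object, but a multi-row metastore operation — say, dropping a table and all of its partitions — still needs something like Tephra or Omid on top:

```python
# Toy single-row check-and-put (optimistic concurrency), standing in
# for HBase's row-level atomicity. Keys and values are illustrative.
rows = {"tbl:default.clicks": {"version": 1, "owner": "alan"}}

def check_and_put(key, expected_version, new_value):
    # Atomic per row in HBase; spanning several rows atomically is
    # exactly what requires a transaction layer (Tephra, Omid, ...).
    row = rows.get(key)
    if row is None or row["version"] != expected_version:
        return False  # lost the race; caller must re-read and retry
    new_value["version"] = expected_version + 1
    rows[key] = new_value
    return True

ok = check_and_put("tbl:default.clicks", 1, {"owner": "gates"})
stale = check_and_put("tbl:default.clicks", 1, {"owner": "late"})
```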
HBase Metastore: How
Page 14Hive & HBase For Transaction Processing
• HBaseStore, a new implementation of RawStore that stores data in HBase
• Not the default; users remain free to use an RDBMS
• Fewer than 10 tables in HBase
– DBS, TBLS, PARTITIONS, ... – basically one for each object type
– Common partition data factored out to significantly reduce size
• Layout highly optimized for SELECT and DML queries; longer operations moved into DDL (e.g. grant)
• Extensive caching
– Of catalog objects for the length of a query
– Of aggregated stats across queries and users
• Ongoing work in the hbase-metastore branch
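One way a layout can be "highly optimized for SELECT" is to choose row keys so related objects sort together. The sketch below is an assumption about the general technique, not the actual HBaseStore encoding: if every partition key starts with its table's prefix, listing a table's partitions becomes one contiguous range scan instead of many lookups.

```python
from bisect import bisect_left, insort

# Sorted keys emulate HBase's lexicographic row-key ordering.
keys, values = [], {}

def put(key, value):
    if key not in values:
        insort(keys, key)
    values[key] = value

def prefix_scan(prefix):
    # One range scan returns every row under the prefix -- e.g. all
    # partitions of one table -- in a single pass.
    i = bisect_left(keys, prefix)
    out = []
    while i < len(keys) and keys[i].startswith(prefix):
        out.append(values[keys[i]])
        i += 1
    return out

put("part:default.clicks:ds=2015-01-01", {"rows": 100})
put("part:default.clicks:ds=2015-01-02", {"rows": 200})
put("part:default.web:ds=2015-01-01", {"rows": 5})
parts = prefix_scan("part:default.clicks:")
```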
Agenda
Page 15Hive & HBase For Transaction Processing
• Our goal
– Combine Apache Hive, HBase, Phoenix, and Calcite to build a single data store that can be used for analytics and transaction processing
• But before we get to that we need to consider
– Some things happening in Hive
– Some things happening in Phoenix
Apache Phoenix: Putting SQL Back in NoSQL
Page 16Hive & HBase For Transaction Processing
• SQL layer on top of HBase
• Originally oriented toward transaction processing
• Moving to add more analytics-type operators
– Adding multiple join implementations
– Requests for OLAP functions (PHOENIX-154)
• Working on adding transactions (PHOENIX-1674)
• Moving to Apache Calcite for optimization (PHOENIX-1488)
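"Multiple join implementations" refers to strategies like the two sketched generically below — this is not Phoenix code, just an illustration of why an engine wants more than one: build-and-probe hash join when one side fits in memory, sort-merge join for large inputs.

```python
def hash_join(left, right, key):
    # Build a hash table on one side, probe with the other:
    # good when the build side fits in memory.
    table = {}
    for row in right:
        table.setdefault(row[key], []).append(row)
    return [{**l, **r} for l in left for r in table.get(l[key], [])]

def sort_merge_join(left, right, key):
    # Sort both sides, then advance two cursors in lockstep:
    # good for large inputs that spill to disk.
    left = sorted(left, key=lambda r: r[key])
    right = sorted(right, key=lambda r: r[key])
    out, j = [], 0
    for l in left:
        while j < len(right) and right[j][key] < l[key]:
            j += 1
        k = j
        while k < len(right) and right[k][key] == l[key]:
            out.append({**l, **right[k]})
            k += 1
    return out

left = [{"id": 1, "a": "x"}, {"id": 2, "a": "y"}]
right = [{"id": 1, "b": "p"}, {"id": 3, "b": "q"}]
```

Both strategies produce the same rows; a cost-based planner (Calcite, in Phoenix's case) picks between them based on input sizes.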
Agenda
Page 17Hive & HBase For Transaction Processing
• Our goal
– Combine Apache Hive, HBase, Phoenix, and Calcite to build a single data store that can be used for analytics and transaction processing
• But before we get to that we need to consider
– Some things happening in Hive
– Some things happening in Phoenix
What If?
Page 18Hive & HBase For Transaction Processing
• We could share one O/JDBC driver?
• We could share one SQL dialect?
• Phoenix could leverage Hive's extensive analytics functionality without re-inventing it?
• Users could access their transactional and analytics data in single SQL operations?
How?
Page 19Hive & HBase For Transaction Processing
• Insight #1: LLAP is a storage-plus-operations server for Hive; we can swap it out for other implementations
• Insight #2: Tez and Spark can do post-shuffle operations (hash join, etc.) with LLAP or HBase
• Insight #3: Calcite (used by both Hive and Phoenix) is built specifically to integrate disparate data storage systems
Vision
Page 20Hive & HBase For Transaction Processing
• User picks the storage location for a table (LLAP or HBase) at create table time
• Transactions are more efficient in HBase tables but work in both
• Analytics are more efficient in LLAP tables but work in both
• Queries that require a shuffle use Tez or Spark for post-shuffle operators
[Diagram: a JDBC server receives queries; Calcite is used for planning and Phoenix for execution, across cluster nodes running LLAP and HBase over HDFS]
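The routing decision implied by this vision can be sketched as a toy rule: run a query where its tables live, and fall back to a shuffle engine only when it spans both stores. Table names and engine labels below are hypothetical.

```python
# Hypothetical per-table storage choice made at CREATE TABLE time.
TABLE_STORAGE = {"orders": "hbase", "clickstream": "llap"}

def choose_engine(tables):
    stores = {TABLE_STORAGE[t] for t in tables}
    if stores == {"hbase"}:
        return "phoenix/hbase"   # transactional point lookups stay local
    if stores == {"llap"}:
        return "llap"            # analytic scans stay local
    return "tez-shuffle"         # cross-store join needs post-shuffle ops

e1 = choose_engine(["orders"])
e2 = choose_engine(["orders", "clickstream"])
```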
Hurdles
Page 21Hive & HBase For Transaction Processing
• Need to integrate types/data representation
• Need to integrate transaction management
• Work to do in Calcite to optimize transactional queries well
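The type-integration hurdle amounts to maintaining (and agreeing on) a mapping between the two systems' type vocabularies. The table below is a hypothetical, deliberately incomplete sketch, not the actual Hive-to-Phoenix mapping:

```python
# Illustrative (not authoritative) Hive -> Phoenix SQL type mapping.
HIVE_TO_PHOENIX = {
    "string": "VARCHAR",
    "int": "INTEGER",
    "bigint": "BIGINT",
    "double": "DOUBLE",
    "timestamp": "TIMESTAMP",
}

def map_type(hive_type):
    try:
        return HIVE_TO_PHOENIX[hive_type]
    except KeyError:
        # Types with no direct counterpart are exactly where the
        # integration work lies: they need explicit conversion rules.
        raise ValueError(f"no Phoenix mapping for Hive type {hive_type!r}")
```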