Page1
Mutable Data in Hive’s Immutable World
Lester Martin – Hortonworks 2015 Hadoop Summit
Page2
Connection before Content
Lester Martin – Hortonworks Professional Services
lmartin@hortonworks.com || lester.martin@gmail.com
http://lester.website (links to blog, twitter, github, LI, FB, etc)
Page3
“Traditional” Hadoop Data
Time-Series Immutable (TSI) Data – Hive’s sweet spot
Going beyond web logs to more exotic data such as:
Vehicle sensors (ground, air, above/below water – space!)
Patient data (to include the atmosphere around them)
Smart phone/watch (TONS of info)
[Diagram of data sources: clickstream, web & social, geolocation, sensor & machine, server logs, unstructured]
Page4
Good TSI Solutions Exist
Hive partitions
•Store as much as you want
•Only read the files you need
Hive Streaming Data Ingest from Flume or Storm
Sqoop’s --incremental mode of append (sketched below)
•Use appropriate --check-column
•“Saved Job” remembering --last-value
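For reference, a saved Sqoop job using append mode might look like this sketch (connection string, credentials, and paths are hypothetical); the saved job updates --last-value automatically after each run:

sqoop job --create bogus_info_append -- import \
  --connect jdbc:mysql://dbhost/erp --username etl_user -P \
  --table bogus_info \
  --incremental append \
  --check-column bogus_id \
  --last-value 0 \
  --target-dir /user/fred/bogus_info_incr

sqoop job --exec bogus_info_append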
Page5
Use Case for an Active Archive
Evolving Domain Data – Hive likes immutable data
Need exact copy of mutating tables refreshed periodically
•Structural replica of multiple RDBMS tables
•The data in these tables are being updated
•Don’t need every change; just “as of” content
[Diagram of existing source systems: ERP, CRM, SCM, eComm]
Page6
Start With a Full Refresh Strategy
The epitome of the KISS principle
•Ingest & load new data
•Drop the existing table
•Rename the newly created table
Certainly not elegant, but it solves the problem until the reload
takes longer than the refresh period
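A minimal HiveQL sketch of the swap, assuming the freshly ingested data has been loaded into a staging table named bogus_info_new:

DROP TABLE bogus_info;
ALTER TABLE bogus_info_new RENAME TO bogus_info;

The window between the DROP and the RENAME is one of the “hazard windows” referenced later; queries issued mid-swap can fail.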
Page7
Then Evolve to a Merge & Replace Strategy
Typically, deltas are…
•Small % of existing data
•Plus, some totally new records
In practice, the difference in size between these two datasets is often much more pronounced
Page8
Requirements for Merge & Replace
An immutable unique key
•To determine if an addition or a change
•The source table’s (natural or surrogate) PK is perfect
A last-updated timestamp to find the deltas
Leverage Sqoop’s --incremental mode of lastmodified to identify the deltas (flags shown below)
•Use appropriate --check-column
•“Saved Job” remembering --last-value
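The saved job looks much like the append example earlier; only the mode, check column, and starting value change (column name and timestamp are hypothetical):

  --incremental lastmodified
  --check-column last_update_ts
  --last-value '2014-09-16 00:00:00'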
Page9
Processing Steps for Merge & Replace
See the blog at http://hortonworks.com/blog/four-step-strategy-incremental-updates-hive/, but note that the merge can be done in multiple technologies, not just Hive
Ingest – bring over the incremental data
Reconcile – perform the merge (sketched in HiveQL below)
Compact – replace the existing data with the newly merged content
Purge – clean up & prepare to repeat
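A minimal HiveQL sketch of the Reconcile step, assuming an id key and a last_update_ts column (table and column names are hypothetical); it keeps only the newest version of each row across the base and incremental data:

CREATE VIEW reconcile_view AS
SELECT t1.* FROM
  (SELECT * FROM base_table
   UNION ALL
   SELECT * FROM incremental_table) t1
JOIN
  (SELECT id, MAX(last_update_ts) AS max_ts
   FROM (SELECT * FROM base_table
         UNION ALL
         SELECT * FROM incremental_table) t2
   GROUP BY id) s
ON t1.id = s.id AND t1.last_update_ts = s.max_ts;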
Page10
Full Merge & Replace Will NOT Scale
The “elephant” eventually gets too big
and merging it with the “mouse” takes
too long!
Example: A Hive structure with 100
billion rows, but only 100,000 delta
records
Page11
What Will? The Classic Hadoop Strategy!
Page12
But… One Size Does NOT Fit All…
Not everything is “big” – in fact, most operational apps’
tables are NOT too big for a simple Full Refresh
Divide & Conquer requires additional per-table research
to ensure the best partitioning strategy is chosen
Page13
Criteria for Active Archive Partition Values
Non-nullable & immutable
Ensures sliding scale growth with new records generally
creating new partitions
Supports delta records being skewed such that the
percentage of partitions needing merge & replace
operations is relatively small
Classic value is (still) “Date Created”
Page14
Work on (FEW!) Partitions in Parallel
Page15
Partition-Level Merge & Replace Steps
Generate the delta file
Create list of affected partitions
Perform merge & replace operations for affected partitions
1. Filter the delta file for the current partition
2. Load the Hive table’s current partition
3. Merge the two datasets
4. Delete the existing partition
5. Recreate the partition with the merged content
Page16
What Does This Approach Look Like?
A Lightning-Fast Review of an Indicative Hybrid Pig-Hive Example
Page17
One-Time: Create the Table
CREATE TABLE bogus_info(
bogus_id int,
field_one string,
field_two string,
field_three string)
PARTITIONED BY (date_created STRING)
STORED AS ORC
TBLPROPERTIES ("orc.compress"="ZLIB");
Page18
One-Time: Get Content from the Source
11,2014-09-17,base,base,base
12,2014-09-17,base,base,base
13,2014-09-17,base,base,base
14,2014-09-18,base,base,base
15,2014-09-18,base,base,base
16,2014-09-18,base,base,base
17,2014-09-19,base,base,base
18,2014-09-19,base,base,base
19,2014-09-19,base,base,base
Page19
One-Time: Read Content from HDFS
as_recd = LOAD '/user/fred/original.txt'
USING PigStorage(',') AS
(
bogus_id:int,
date_created:chararray,
field_one:chararray,
field_two:chararray,
field_three:chararray
);
Page20
One-Time: Sort and Insert into Hive Table
sorted_as_recd = ORDER as_recd BY
date_created, bogus_id;
STORE sorted_as_recd INTO 'bogus_info'
USING
org.apache.hcatalog.pig.HCatStorer();
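Since HCatStorer comes from HCatalog, the script needs to be launched with HCatalog support enabled (script name hypothetical):

pig -useHCatalog load_bogus_info.pig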
Page21
One-Time: Verify Data are Present
hive> select * from bogus_info;
11 base base base 2014-09-17
12 base base base 2014-09-17
13 base base base 2014-09-17
14 base base base 2014-09-18
15 base base base 2014-09-18
16 base base base 2014-09-18
17 base base base 2014-09-19
18 base base base 2014-09-19
19 base base base 2014-09-19
Page22
One-Time: Verify Partitions are Present
hdfs dfs -ls /apps/hive/warehouse/bogus_info
Found 3 items
…
/apps/hive/warehouse/bogus_info/date_created=2014-09-17
…
/apps/hive/warehouse/bogus_info/date_created=2014-09-18
…
/apps/hive/warehouse/bogus_info/date_created=2014-09-19
Page23
Generate the Delta File
20,2014-09-20,base,base,base
21,2014-09-20,base,base,base
22,2014-09-20,base,base,base
12,2014-09-17,base,CHANGED,base
14,2014-09-18,base,CHANGED,base
16,2014-09-18,base,CHANGED,base
Page24
Read Delta File from HDFS
delta_recd = LOAD '/user/fred/delta1.txt'
USING PigStorage(',') AS
(
bogus_id:int,
date_created:chararray,
field_one:chararray,
field_two:chararray,
field_three:chararray
);
Page25
Create List of Affected Partitions
by_grp = GROUP delta_recd BY date_created;
part_names = FOREACH by_grp GENERATE group;
srtd_part_names = ORDER part_names BY group;
STORE srtd_part_names INTO
'/user/fred/affected_partitions';
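Per the speaker notes at the end of this deck, the stored partition list for this delta file works out to:

2014-09-17
2014-09-18
2014-09-20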
Page26
Loop/Multithread Through Affected Partitions
Pig doesn’t really help you with this problem
This indicative example could be implemented as:
•A simple script that loops through the partitions (sketched below)
•A Java program that multi-threads the partition-aligned processing
Multiple “Control Structures” options exist as described at
http://pig.apache.org/docs/r0.14.0/cont.html
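A minimal bash sketch of the looping approach, assuming the per-partition Pig logic shown on the following slides lives in a parameterized script named merge_partition.pig:

hdfs dfs -cat /user/fred/affected_partitions/part-* | \
while read partition_key; do
  pig -useHCatalog -param partition_key="$partition_key" merge_partition.pig
done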
Page27
Loop Step: Filter on the Current Partition
delta_recd = LOAD '/user/fred/delta1.txt'
USING PigStorage(',') AS
( bogus_id:int, date_created:chararray,
field_one:chararray,
field_two:chararray,
field_three:chararray );
deltaP = FILTER delta_recd BY date_created
== '$partition_key';
Page28
Loop Step: Retrieve Hive’s Current Partition
all_bogus_info = LOAD 'bogus_info' USING
org.apache.hcatalog.pig.HCatLoader();
tblP = FILTER all_bogus_info
BY date_created == '$partition_key';
Page29
Loop Step: Merge the Datasets
partJ = JOIN tblP BY bogus_id FULL OUTER,
deltaP BY bogus_id;
combined_part = FOREACH partJ GENERATE
((deltaP::bogus_id is not null) ?
deltaP::bogus_id: tblP::bogus_id) as
bogus_id, /* do for all fields
and end with “;” */
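Expanded per the speaker notes at the end of this deck (with the alias normalized to tblP to match this slide), the full statement is:

combined_part = FOREACH partJ GENERATE
  ((deltaP::bogus_id is not null) ? deltaP::bogus_id : tblP::bogus_id) as bogus_id,
  ((deltaP::date_created is not null) ? deltaP::date_created : tblP::date_created) as date_created,
  ((deltaP::field_one is not null) ? deltaP::field_one : tblP::field_one) as field_one,
  ((deltaP::field_two is not null) ? deltaP::field_two : tblP::field_two) as field_two,
  ((deltaP::field_three is not null) ? deltaP::field_three : tblP::field_three) as field_three;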
Page30
Loop Step: Sort and Save the Merged Data
s_combined_part = ORDER combined_part BY
date_created, bogus_id;
STORE s_combined_part INTO
'/user/fred/temp_$partition_key' USING PigStorage(',');
hdfs dfs -cat temp_2014-09-17/part-r-00000
11,2014-09-17,base,base,base
12,2014-09-17,base,CHANGED,base
13,2014-09-17,base,base,base
Page31
Loop Step: Delete the Partition
ALTER TABLE bogus_info DROP IF EXISTS
PARTITION (date_created='2014-09-17');
Page32
Loop Step: Recreate the Partition
reloaded = LOAD '/user/fred/temp_$partition_key'
USING PigStorage(',') AS
( bogus_id:int, date_created:chararray,
field_one:chararray,
field_two:chararray,
field_three:chararray );
STORE reloaded INTO 'bogus_info' USING
org.apache.hcatalog.pig.HCatStorer();
Page33
Verify the Loop Step Updates
select * from bogus_info
where date_created = '2014-09-17';
11 base base base 2014-09-17
12 base CHANGED base 2014-09-17
13 base base base 2014-09-17
Page34
My Head Hurts, Too!
As Promised, We Flew Through That – Take Another Look Later
Page35
What Does Merge & Replace Miss?
If critical, you have options
•Create a delete table sourced by a trigger
•At some wide frequency, start all over with a Full Refresh
Fortunately, ~most~ enterprises don’t delete anything
Marking items “inactive” is popular
Page36
Hybrid: Partition-Level Refresh
If most of the partition is modified, just replace it entirely
Especially if the changes are only recent (or highly skewed)
Use a configured number of partitions to refresh and
assume the rest of the data is static
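A minimal HiveQL sketch of refreshing a single partition, assuming the replacement rows are staged in a table named staged_bogus_info:

INSERT OVERWRITE TABLE bogus_info
  PARTITION (date_created='2014-09-20')
SELECT bogus_id, field_one, field_two, field_three
FROM staged_bogus_info
WHERE date_created = '2014-09-20';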
Page37
Active Archive Strategy Review
strategy | # of rows | % of chg | chg skew | handles deletes | complexity
Full Refresh | <= millions | any | any | yes | simple
Full Merge & Replace | <= millions | any | any | no | moderate
Partition-Level Merge & Replace | billions+ | < 5% | < 5% | no | complex
Partition-Level Refresh | billions+ | < 5% | < 5% | yes | complex
Page38
Isn’t There Anything Easier?
HIVE-5317 brought us Insert, Update & Delete
•Alan Gates presented Monday
•More tightly-coupled w/o the same “hazard windows”
•“Driver” logic shifts to be delta-only & row-focused
Thoughts & attempts at true DB replication
•Some COTS solutions have been tried
•Ideally, an open-source alternative is best, such as enhancing the Streaming Data Ingest framework
Page39
Considerations for HIVE-5317
On performance & scalability: your mileage may vary
Does NOT make Hive an RDBMS
Available in Hive 0.14 onwards
DDL requirements
•Must utilize partitioning & bucketing
•Initially, only supports ORC (sketched below)
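A minimal sketch of an ACID-ready table and a row-level change, assuming Hive 0.14 with the transaction manager configured server-side (table name and bucket count are hypothetical):

CREATE TABLE bogus_info_acid (
  bogus_id int,
  field_one string,
  field_two string,
  field_three string)
PARTITIONED BY (date_created string)
CLUSTERED BY (bogus_id) INTO 8 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

UPDATE bogus_info_acid SET field_two = 'CHANGED' WHERE bogus_id = 12;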
Page40
Recommendations
Take another look at this topic once back at “your desk”
As with all things Hadoop…
•Know your data & workloads
•Try several approaches & evaluate results in earnest
•Stick with the KISS principle whenever possible
Share your findings via blogs and local user groups
Expect (even more!) great things from Hive
Page41
Questions?
Lester Martin – Hortonworks Professional Services
lmartin@hortonworks.com || lester.martin@gmail.com
http://lester.website (links to blog, twitter, github, LI, FB, etc)
THANKS FOR YOUR TIME!!
Editor's Notes
  1. This diagram shows the typical use case where the deltas represent a small percentage of the existing data as well as the addition of some records that are only present in the delta dataset. In practice, the differences in size of these two circles would be much more pronounced as the existing data includes historical records that have not been modified in a long time and most likely will not be modified again.
  2. If no last-updated timestamp is present, an ALTER TABLE could add such a column, with a DEFAULT value of the current timestamp to populate new records and an “ON UPDATE” trigger to keep the timestamp current as rows change
  3. You can use tools other than just Hive, such as Pig, to perform these operations. A Pig approach could be: ingest the data (as above); in a single Pig script, read the old & new data, merge them, and load the result into a “resolved” table; then drop the old table, rename the new table, and recreate the “resolved” table (for the next run)
  4. When there is a small number of actual changes compared with a large number of unchanged records, the incremental data ingest step limits the amount of data that needs to be transferred across the network as well as the amount of raw data that needs to be persisted in HDFS. There will be a point when the merging of a single delta file with a larger existing records file will take longer than just getting the full copy, or at least, the merge & replace processing will be too lengthy to be useful. An example of this could be a model where the existing data size is around 100 billion rows, but there are only 100,000 delta records.
  5. The classic 80/20 rule applies (maybe even 95/5) for tables that don’t/do need partitioning to benefit from the merge & replace strategy. In fact, nothing would prevent a table that uses Full Refresh or comprehensive Merge & Replace from having partitions.
  6. Bad examples would include a building identifier or the city or zip code that buildings are located within. We could get a wide spread of the data, but delta records would likely cover most, if not all, of the partitions.
  7. The goal is to break down the delta processing at the partition level and to be able to focus only on a subset of the overall partitions. In many ways, the processing is much like the basic merge & replace approach, except that several, much smaller, iterations will occur while leaving the vast majority of the partitions alone.
  8. We’ll go VERY FAST through this during the presentation, but will be useful to review later.
  9. Expected contents of the affected_partitions output: 2014-09-17, 2014-09-18, 2014-09-20
  10. Here it is in its entirety (simple DB metadata reading & template builders could be created to automate this gnarly big FOREACH statement):
combined_part = FOREACH partJ GENERATE
((deltaP::bogus_id is not null) ? deltaP::bogus_id : tblP::bogus_id) as bogus_id,
((deltaP::date_created is not null) ? deltaP::date_created : tblP::date_created) as date_created,
((deltaP::field_one is not null) ? deltaP::field_one : tblP::field_one) as field_one,
((deltaP::field_two is not null) ? deltaP::field_two : tblP::field_two) as field_two,
((deltaP::field_three is not null) ? deltaP::field_three : tblP::field_three) as field_three;
  11. We’ll go VERY FAST through this during the presentation, but will be useful to review later. Also, this is all rather generalizable so individual projects can build a simple framework to drive via metadata.
  12. Not a calculator, rule, or formula – just to drive the conversation of where each strategy might work best
  13. COTS solutions from folks like Oracle, Dell & SAP, but none that I have evaluated
  14. Not picking on the CRUD operations’ performance & scalability opportunities, just pointing out that many variables are at play which could make things better or worse