ORC Deep Dive 2020

Owen O'Malley, Principal Engineer at LinkedIn

ORC DEEP DIVE
Owen O’Malley
omalley@apache.org
January 2020
@owen_omalley
OVERVIEW
© 2019 Cloudera, Inc. All rights reserved.
REQUIREMENTS
• Files had to be completely self-describing
• Schema
• File version
• Tight compression ⇒ Run Length Encoding (RLE) & compression
• Column projection ⇒ segregate column data
• Predicate pushdown ⇒ understand & index user’s types
• Files had to be easy & fast to divide
• Compatible with write-once file systems
FILE STRUCTURE
• The file footer contains:
• Metadata – schema, file statistics
• Stripe information – metadata and location of stripes
• Postscript with the compression, buffer size, & file version
• ORC file data is divided into stripes.
• Stripes are self-contained sets of rows organized by columns.
• Stripes are the smallest unit of work for tasks.
• Default is ~64MB, but often configured larger.
STRIPE STRUCTURE
• Within a stripe, the metadata is in the stripe footer.
• List of streams
• Column encoding information (e.g. direct or dictionary)
• Columns are written as a set of streams. There are 3 kinds:
• Index streams
• Data streams
• Dictionary streams
READ PATH
• The Reader reads the last 16KB of the file, fetching more as needed
• The RowReader reads
• Stripe footer
• Required streams
STREAMS
• Each stream is an independent sequence of bytes
• Serialization into streams depends on column type & encoding
• Optional pipeline stages:
• Run Length Encoding (RLE) – first pass integer
compression
• Generic compression – Zlib, Snappy, LZO, Zstd
• Encryption – AES/CTR
DATA ENCODING
COMPOUND TYPES
• Compound types are serialized as trees of columns.
• struct, list, map, uniontype all have child columns
• Types are numbered in a preorder traversal
• The column reading classes are called TreeReaders
• Example schema:
    a: int,
    b: map<string, struct<c: string, d: double>>,
    e: timestamp
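The preorder numbering can be sketched in Python. The nested-tuple schema representation below is invented purely for illustration (it is not the ORC API); it mirrors the example schema on this slide.

```python
def assign_ids(node, ids, next_id=0):
    """Assign column ids in preorder: a node gets its id before its children."""
    name, children = node
    ids[name] = next_id
    next_id += 1
    for child in children:
        next_id = assign_ids(child, ids, next_id)
    return next_id

# The slide's schema: struct<a:int, b:map<string,struct<c:string,d:double>>, e:timestamp>
schema = ("root:struct", [
    ("a:int", []),
    ("b:map", [
        ("key:string", []),
        ("value:struct", [("c:string", []), ("d:double", [])]),
    ]),
    ("e:timestamp", []),
])

ids = {}
assign_ids(schema, ids)
print(ids)
# {'root:struct': 0, 'a:int': 1, 'b:map': 2, 'key:string': 3,
#  'value:struct': 4, 'c:string': 5, 'd:double': 6, 'e:timestamp': 7}
```

Note that the map's key and value each get their own ids, so the last column (e) ends up as id 7.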
ENCODING COLUMNS
• To interpret a stream, you need three pieces of information:
• Column type
• Column encoding (direct, dictionary)
• Stream kind (present, data, length, etc.)
• Any column that has nulls will have a present stream
• Serialized using a boolean RLE
• Integer columns are serialized with:
• A data stream using integer RLE
ENCODING COLUMNS
• Binary columns are serialized with:
• Length stream of integer RLE
• Data stream of raw sequence of bytes
• String columns may be direct or dictionary encoded
• Direct looks like binary column, but dictionary is different
• Dictionary_data is raw sequence of dictionary bytes
• Length is an integer RLE stream of the dictionary lengths
• Data is an integer RLE stream of indexes into dictionary
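The split into the three dictionary streams can be sketched as follows. This is a toy illustration, not the actual writer: the integer streams would additionally be RLE-compressed, and the sorted dictionary mirrors the classic writer's default behavior.

```python
def dictionary_encode(values):
    """Split a string column into dictionary bytes, entry lengths,
    and per-row indexes into the dictionary."""
    ordered = sorted(set(values))            # dictionary kept in sorted order
    index_of = {v: i for i, v in enumerate(ordered)}
    dictionary_data = "".join(ordered).encode("utf-8")
    lengths = [len(v.encode("utf-8")) for v in ordered]
    data = [index_of[v] for v in values]
    return dictionary_data, lengths, data

dd, lengths, data = dictionary_encode(["hive", "orc", "hive", "orc", "orc"])
print(dd, lengths, data)  # b'hiveorc' [4, 3] [0, 1, 0, 1, 1]
```

The payoff is that each repeated string is stored once in dictionary_data and thereafter costs only a small integer in the data stream.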
ENCODING COLUMNS
• Lists and maps record the number of child elements
• Length is an integer RLE stream
• Structs only have the present stream
• Timestamps need nanosecond resolution (ouch!)
• Data is an integer RLE of seconds from Jan 2015
• Secondary is an integer RLE of nanoseconds with trailing zeros suppressed
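A minimal sketch of the seconds/nanoseconds split. The real writer additionally bit-packs the nanosecond value to suppress trailing zeros; that detail is deliberately omitted here.

```python
from datetime import datetime, timezone

# ORC's timestamp epoch, per the slide: 2015-01-01 00:00:00
ORC_EPOCH = datetime(2015, 1, 1, tzinfo=timezone.utc)

def split_timestamp(ts, nanos):
    """Split a timestamp into (seconds since the 2015 epoch, nanoseconds).
    'seconds' goes in the data stream, 'nanos' in the secondary stream."""
    seconds = int((ts - ORC_EPOCH).total_seconds())
    return seconds, nanos

s, n = split_timestamp(datetime(2015, 1, 2, tzinfo=timezone.utc), 500_000_000)
print(s, n)  # 86400 500000000
```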
RUN LENGTH ENCODING
• Goal is to get some cheap, quick compression
• Handles repeating/incrementing values
• Handles integer byte packing
• Two versions:
• Version 1 – relatively simple repeat/literal encoding
• Version 2 – complex encoding with 4 variants
• Column encoding of *_V2 means use RLE version 2
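The repeat/literal split at the heart of RLE version 1 can be sketched as below. This is a simplification for illustration: the real encoding also stores deltas for incrementing runs and varint-packs the values themselves.

```python
def rle_v1_encode(values):
    """Split a sequence into runs (3+ identical values) and literal groups."""
    out, i = [], 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        if j - i >= 3:
            out.append(("run", j - i, values[i]))   # run: count + one value
        elif out and out[-1][0] == "literal":
            out[-1][1].extend(values[i:j])          # grow current literal group
        else:
            out.append(("literal", list(values[i:j])))
        i = j
    return out

print(rle_v1_encode([7, 7, 7, 7, 1, 2, 3]))
# [('run', 4, 7), ('literal', [1, 2, 3])]
```

Runs collapse to a count plus a single value, which is why sorted or low-cardinality columns compress so well before generic compression even runs.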
COMPRESSION & INDEXES
ROW PRUNING
• Three levels of indexing/row pruning
• File – uses file statistics in file footer
• Stripe – uses stripe statistics before file footer
• Row group (default of 10k rows) – uses index stream
• The index stream for each column includes, for each row group:
• Column statistics (min, max, count, sum)
• The start positions of each stream
SEARCH ARGUMENTS
• Engines can pass Search Arguments (SArgs) to the RowReader.
• Limited set of operations (=, <=>, <, <=, in, between, is null)
• Compare one column to literal(s)
• Can only eliminate entire row groups, stripes, or files.
• Engine must still filter the individual rows afterwards
• For Hive, ensure hive.optimize.index.filter is true.
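The min/max pruning decision for an equality SArg reduces to a range check per row group, sketched here (an illustration of the idea, not the library's evaluator):

```python
def can_skip(row_group_stats, column, literal):
    """A row group can be skipped only when the equality literal falls
    outside the column's recorded [min, max] range. Surviving row groups
    must still be filtered row-by-row by the engine."""
    lo, hi = row_group_stats[column]
    return literal < lo or literal > hi

stats = {"l_orderkey": (1_000, 2_000)}
print(can_skip(stats, "l_orderkey", 5_000))  # True  -> prune the row group
print(can_skip(stats, "l_orderkey", 1_500))  # False -> read and filter rows
```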
COMPRESSION
• All of the generic compression is done in chunks
• Codec is reinitialized at start of chunk
• Each chunk is compressed separately
• Each uncompressed chunk is at most the buffer
size
• Each chunk has a 3 byte header giving:
• Compressed size of chunk
• Whether it is the original or compressed
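The 3-byte chunk header packs both facts into one little-endian integer: the compressed length shifted left by one, with the low bit set when the chunk is stored uncompressed ("original"). A round-trip sketch:

```python
def encode_chunk_header(length, is_original):
    """3-byte little-endian header: (length << 1) | is_original."""
    value = (length << 1) | (1 if is_original else 0)
    return value.to_bytes(3, "little")

def decode_chunk_header(header):
    value = int.from_bytes(header, "little")
    return value >> 1, bool(value & 1)

h = encode_chunk_header(100_000, False)
print(h.hex(), decode_chunk_header(h))  # 400d03 (100000, False)
```

Keeping the chunk uncompressed (low bit set) is how the writer avoids paying for codec output that came out larger than the input.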
INDEXES
• Wanted ability to seek to each row group
• Allows fine grain seeking & row pruning
• Could have flushed stream compression pipeline
• Would have dramatically lowered compression
• Instead treat compression & RLE as gray boxes
• Use our knowledge of compression & RLE
• Always start fresh at beginning of chunk or run
INDEX POSITIONS
• Records information to seek to a given row in all of a column’s streams
• Includes:
• C – Compressed bytes
• U – Uncompressed bytes
• V – RLE values
• C, U, & V jump to RG 4
BLOOM FILTERS
• For use cases where you need to find particular values
• Sorting by that column allows min/max filtering
• But you can only sort on one column effectively
• Bloom filters are probabilistic data structures
• Only useful for equality, not less than or greater than
• Need ~10 bits/distinct value ⇒ opt in
• ORC uses a bloom_filter_utf8 stream to record a bloom filter per row group
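A minimal bloom filter sketch at roughly 10 bits per expected value. The real bloom_filter_utf8 stream uses Murmur3 hashing; SHA-256 is used here only to keep the example dependency-free, and the class itself is illustrative, not ORC's implementation.

```python
import hashlib

class BloomFilter:
    """Set-membership sketch: false positives possible, false negatives not."""
    def __init__(self, expected_values, bits_per_value=10, num_hashes=7):
        self.size = max(1, expected_values * bits_per_value)
        self.num_hashes = num_hashes
        self.bits = bytearray((self.size + 7) // 8)

    def _positions(self, value):
        # Double hashing: derive k bit positions from two 64-bit hashes.
        digest = hashlib.sha256(value.encode("utf-8")).digest()
        h1 = int.from_bytes(digest[:8], "little")
        h2 = int.from_bytes(digest[8:16], "little")
        return [(h1 + i * h2) % self.size for i in range(self.num_hashes)]

    def add(self, value):
        for pos in self._positions(value):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, value):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(value))

bf = BloomFilter(expected_values=1000)
bf.add("order-1212000001")
print(bf.might_contain("order-1212000001"))  # True
```

A negative answer is definitive, which is what lets the reader skip the whole row group without touching its data streams.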
ROW PRUNING EXAMPLE
• TPC-DS
 from tpch1000.lineitem where l_orderkey = 1212000001;
Index   | Rows Read     | Time
Nothing | 5,999,989,709 | 74 sec
Min/Max | 540,000       | 4.5 sec
Bloom   | 10,000        | 1.3 sec
VERSIONING
COMPATIBILITY
• Within a file version, old readers must be able to read all files.
• A few exceptions (e.g. new codecs, types)
• Version 0 (from Hive 0.11)
• Only RLE V1 & string dictionary encoding
• Version 1 (from Hive 0.12 forward)
• Version 2 (under development)
• The library includes the ability to write any file version.
• Enables smooth upgrades across clusters
WRITER VERSION
• When fixes or feature additions are made to the writer, we bump the writer version.
• Allows the reader to work around bugs, especially in the indexes
• Does not affect reader compatibility
• Each minor release should add a new one.
• We also record which writer wrote the file:
• Java, C++, Presto, Go
EXAMPLE WORKAROUND FOR HIVE-8746
• Timestamps suck!
• ORC uses an epoch of 01-01-2015 00:00:00.
• Timestamp columns record seconds offset from
epoch
• Unfortunately, the original code used the local time zone.
• If reader and writer were in time zones with the
same rules, it worked.
• Fix involved writing the writer time zone into file.
• Forwards and backwards compatible
ADDITIONAL FEATURES
SCHEMA EVOLUTION
• User passes desired schema to RecordReader factory.
• SchemaEvolution class maps between file & reader
schemas.
• The mapping can be positional or name based.
• Conversions based on legacy Hive behavior…
• The RecordReader uses the mapping to translate
• Choosing streams uses the file schema column ids
• Type translation is done by ConvertTreeReaderFactory.
• Adds an additional TreeReader that does conversion.
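The positional-vs-name mapping can be sketched as below (a toy model of the idea, not the SchemaEvolution class; the flat two-tuple schema lists are invented for the example):

```python
def map_columns(file_schema, reader_schema, by_name=True):
    """Map each reader column to a file column index.
    None means the column is missing from the file and reads as null."""
    mapping = {}
    if by_name:
        positions = {name: i for i, (name, _) in enumerate(file_schema)}
        for name, _ in reader_schema:
            mapping[name] = positions.get(name)
    else:
        for i, (name, _) in enumerate(reader_schema):
            mapping[name] = i if i < len(file_schema) else None
    return mapping

file_schema = [("a", "int"), ("b", "string")]
reader_schema = [("b", "string"), ("a", "int"), ("c", "double")]
print(map_columns(file_schema, reader_schema))
# {'b': 1, 'a': 0, 'c': None}
```

Name-based mapping survives column reordering; positional mapping matches legacy Hive behavior where only appending columns is safe.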
STRIPE CONCATENATION & FLUSH
• ORC has a special operator to concatenate files
• Requires consistent options & schema
• Concatenates stripes without reserialization
• ORC can flush the current contents, including a file footer, while still writing to the file.
• Writes a side file with the current offset of the file tail
• When the file closes, the intermediate file footers are ignored
COLUMN ENCRYPTION
• Released in ORC 1.6
• Allows consistent column level access control across engines
• Writes two variants of data
• Encrypted original
• Unencrypted statically masked
• Each variant has its own streams & encodings
• Each column has a unique local key, which is encrypted by the KMS
OTHER DEVELOPER TOOLS
• Benchmarks
• Hive & Spark
• Avro, Json, ORC, and Parquet
• Three data sets (taxi, sales, github)
• Docker
• Allows automated builds on all supported Linux variants
• The website source code lives alongside the C++ & Java code
USING ORC
WHICH VERSION IS IT?
Engine       | Version     | ORC Version
Hive         | 0.11 to 2.2 | Hive ORC 0.11 to 2.2
Hive         | 2.3         | ORC 1.3
Hive         | 3.0         | ORC 1.4
Hive         | 3.1         | ORC 1.5
Spark hive   | *           | Hive ORC 1.2
Spark native | 2.3         | ORC 1.4
Spark native | 2.4 to 3.0  | ORC 1.5
FROM SQL
• Hive:
• Add “stored as orc” to table definition
• Table properties override configuration for ORC
• Spark’s “spark.sql.orc.impl” controls the implementation
• native – Use ORC 1.5
• hive – Use ORC from Hive 1.2
FROM JAVA
• Use the ORC project rather than Hive’s ORC.
• Maven group id: org.apache.orc version: 1.6.2
• nohive classifier avoids interfering with Hive’s packages
• Two levels of access
• orc-core – Faster access, but uses Hive’s vectorized API
• orc-mapreduce – Row by row access, simpler OrcStruct API
• MapReduce API implements WritableComparable
• Can be shuffled
• Need to specify type information in configuration for shuffle
or output
FROM C++
• Pure C++ client library
• No JNI or JDK so client can estimate and control memory
• Uses pure C++ HDFS client from HDFS-8707
• Reader and writer are stable and in production use.
• Runs on Linux, Mac OS, and Windows.
• Docker scripts for CentOS 6-8, Debian 8-10, Ubuntu 14-18
• CI builds on Mac OS, Ubuntu, and Windows
FROM COMMAND LINE
• Using hive --orcfiledump from Hive
• -j -p – pretty prints the metadata as JSON
• -d – prints data as JSON
• Using java -jar orc-tools-*-uber.jar from ORC
• meta -j -p – print the metadata as JSON
• data – print data as JSON
• convert – convert CSV, JSON, or ORC to ORC
• json-schema – scan a set of JSON documents to find the schema
DEBUGGING
• Things to look for:
• Stripe size
• Rows/Stripe
• File version
• Writer version
• Width of schema
• Sanity of statistics
• Column encoding
• Size of dictionaries
OPTIMIZATION
STRIPE SIZE
• Makes a huge difference in performance
• orc.stripe.size or hive.exec.orc.default.stripe.size
• Controls the amount of buffer in the writer. Default is 64MB.
• Trade off:
• Large = more efficient reads
• Small = less memory and more granular processing splits
• Multiple files written at the same time will shrink stripes
HDFS BLOCK PADDING
• The stripes don’t align exactly with HDFS
blocks
• Unless orc.write.variable.length.blocks
• HDFS scatters blocks around cluster
• Often want to pad to block boundaries
• Costs space, but improves performance
• orc.default.block.padding
• orc.block.padding.tolerance
SPLIT CALCULATION
• BI
  • Small, fast queries
  • Splits based on HDFS blocks
• ETL
  • Large queries
  • Read file footer and apply SearchArg to stripes
  • Can include footer in splits (hive.orc.splits.include.file.footer)
• Hybrid
  • If small files or lots of files, use BI
CONCLUSION
FOR MORE INFORMATION
• The orc_proto.proto defines the ORC metadata
• Read code and especially OrcConf, which has all of the knobs
• Website on https://orc.apache.org/
• /bugs ⇒ jira repository
• /src ⇒ github repository
• /specification ⇒ format specification
• Apache email list dev@orc.apache.org
THANK YOU
Owen O’Malley
omalley@apache.org
@owen_omalley
1 of 45

Recommended

File Format Benchmark - Avro, JSON, ORC & Parquet by
File Format Benchmark - Avro, JSON, ORC & ParquetFile Format Benchmark - Avro, JSON, ORC & Parquet
File Format Benchmark - Avro, JSON, ORC & ParquetDataWorks Summit/Hadoop Summit
34.3K views38 slides
ORC File - Optimizing Your Big Data by
ORC File - Optimizing Your Big DataORC File - Optimizing Your Big Data
ORC File - Optimizing Your Big DataDataWorks Summit
11.6K views26 slides
ORC File and Vectorization - Hadoop Summit 2013 by
ORC File and Vectorization - Hadoop Summit 2013ORC File and Vectorization - Hadoop Summit 2013
ORC File and Vectorization - Hadoop Summit 2013Owen O'Malley
18.4K views30 slides
ORC Files by
ORC FilesORC Files
ORC FilesOwen O'Malley
51.1K views29 slides
Admission Control in Impala by
Admission Control in ImpalaAdmission Control in Impala
Admission Control in ImpalaCloudera, Inc.
5.6K views26 slides
How to build a streaming Lakehouse with Flink, Kafka, and Hudi by
How to build a streaming Lakehouse with Flink, Kafka, and HudiHow to build a streaming Lakehouse with Flink, Kafka, and Hudi
How to build a streaming Lakehouse with Flink, Kafka, and HudiFlink Forward
488 views16 slides

More Related Content

What's hot

The columnar roadmap: Apache Parquet and Apache Arrow by
The columnar roadmap: Apache Parquet and Apache ArrowThe columnar roadmap: Apache Parquet and Apache Arrow
The columnar roadmap: Apache Parquet and Apache ArrowJulien Le Dem
6.8K views45 slides
Apache Arrow Flight Overview by
Apache Arrow Flight OverviewApache Arrow Flight Overview
Apache Arrow Flight OverviewJacques Nadeau
6K views8 slides
File Format Benchmarks - Avro, JSON, ORC, & Parquet by
File Format Benchmarks - Avro, JSON, ORC, & ParquetFile Format Benchmarks - Avro, JSON, ORC, & Parquet
File Format Benchmarks - Avro, JSON, ORC, & ParquetOwen O'Malley
101.8K views40 slides
HBase and HDFS: Understanding FileSystem Usage in HBase by
HBase and HDFS: Understanding FileSystem Usage in HBaseHBase and HDFS: Understanding FileSystem Usage in HBase
HBase and HDFS: Understanding FileSystem Usage in HBaseenissoz
74K views33 slides
Apache Iceberg: An Architectural Look Under the Covers by
Apache Iceberg: An Architectural Look Under the CoversApache Iceberg: An Architectural Look Under the Covers
Apache Iceberg: An Architectural Look Under the CoversScyllaDB
1.4K views24 slides
An Introduction to Druid by
An Introduction to DruidAn Introduction to Druid
An Introduction to DruidDataWorks Summit
5.3K views55 slides

What's hot(20)

The columnar roadmap: Apache Parquet and Apache Arrow by Julien Le Dem
The columnar roadmap: Apache Parquet and Apache ArrowThe columnar roadmap: Apache Parquet and Apache Arrow
The columnar roadmap: Apache Parquet and Apache Arrow
Julien Le Dem6.8K views
File Format Benchmarks - Avro, JSON, ORC, & Parquet by Owen O'Malley
File Format Benchmarks - Avro, JSON, ORC, & ParquetFile Format Benchmarks - Avro, JSON, ORC, & Parquet
File Format Benchmarks - Avro, JSON, ORC, & Parquet
Owen O'Malley101.8K views
HBase and HDFS: Understanding FileSystem Usage in HBase by enissoz
HBase and HDFS: Understanding FileSystem Usage in HBaseHBase and HDFS: Understanding FileSystem Usage in HBase
HBase and HDFS: Understanding FileSystem Usage in HBase
enissoz74K views
Apache Iceberg: An Architectural Look Under the Covers by ScyllaDB
Apache Iceberg: An Architectural Look Under the CoversApache Iceberg: An Architectural Look Under the Covers
Apache Iceberg: An Architectural Look Under the Covers
ScyllaDB1.4K views
Data Engineer's Lunch #83: Strategies for Migration to Apache Iceberg by Anant Corporation
Data Engineer's Lunch #83: Strategies for Migration to Apache IcebergData Engineer's Lunch #83: Strategies for Migration to Apache Iceberg
Data Engineer's Lunch #83: Strategies for Migration to Apache Iceberg
Anant Corporation219 views
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud by Noritaka Sekiyama
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the CloudAmazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud
Noritaka Sekiyama33.3K views
Vectorized Query Execution in Apache Spark at Facebook by Databricks
Vectorized Query Execution in Apache Spark at FacebookVectorized Query Execution in Apache Spark at Facebook
Vectorized Query Execution in Apache Spark at Facebook
Databricks1.9K views
Hive, Presto, and Spark on TPC-DS benchmark by Dongwon Kim
Hive, Presto, and Spark on TPC-DS benchmarkHive, Presto, and Spark on TPC-DS benchmark
Hive, Presto, and Spark on TPC-DS benchmark
Dongwon Kim9.6K views
Choosing an HDFS data storage format- Avro vs. Parquet and more - StampedeCon... by StampedeCon
Choosing an HDFS data storage format- Avro vs. Parquet and more - StampedeCon...Choosing an HDFS data storage format- Avro vs. Parquet and more - StampedeCon...
Choosing an HDFS data storage format- Avro vs. Parquet and more - StampedeCon...
StampedeCon129.5K views
Designing ETL Pipelines with Structured Streaming and Delta Lake—How to Archi... by Databricks
Designing ETL Pipelines with Structured Streaming and Delta Lake—How to Archi...Designing ETL Pipelines with Structured Streaming and Delta Lake—How to Archi...
Designing ETL Pipelines with Structured Streaming and Delta Lake—How to Archi...
Databricks8.4K views
InfluxDB IOx Tech Talks: Replication, Durability and Subscriptions in InfluxD... by InfluxData
InfluxDB IOx Tech Talks: Replication, Durability and Subscriptions in InfluxD...InfluxDB IOx Tech Talks: Replication, Durability and Subscriptions in InfluxD...
InfluxDB IOx Tech Talks: Replication, Durability and Subscriptions in InfluxD...
InfluxData414 views
Apache HBase Performance Tuning by Lars Hofhansl
Apache HBase Performance TuningApache HBase Performance Tuning
Apache HBase Performance Tuning
Lars Hofhansl39.6K views
Understanding InfluxDB’s New Storage Engine by InfluxData
Understanding InfluxDB’s New Storage EngineUnderstanding InfluxDB’s New Storage Engine
Understanding InfluxDB’s New Storage Engine
InfluxData137 views
The Parquet Format and Performance Optimization Opportunities by Databricks
The Parquet Format and Performance Optimization OpportunitiesThe Parquet Format and Performance Optimization Opportunities
The Parquet Format and Performance Optimization Opportunities
Databricks8.2K views
Premier Inside-Out: Apache Druid by Hortonworks
Premier Inside-Out: Apache DruidPremier Inside-Out: Apache Druid
Premier Inside-Out: Apache Druid
Hortonworks3.4K views
Building Serverless ETL Pipelines with AWS Glue - AWS Summit Sydney 2018 by Amazon Web Services
Building Serverless ETL Pipelines with AWS Glue - AWS Summit Sydney 2018Building Serverless ETL Pipelines with AWS Glue - AWS Summit Sydney 2018
Building Serverless ETL Pipelines with AWS Glue - AWS Summit Sydney 2018
Amazon Web Services1.8K views
ORC improvement in Apache Spark 2.3 by DataWorks Summit
ORC improvement in Apache Spark 2.3ORC improvement in Apache Spark 2.3
ORC improvement in Apache Spark 2.3
DataWorks Summit7.7K views

Similar to ORC Deep Dive 2020

A Closer Look at Apache Kudu by
A Closer Look at Apache KuduA Closer Look at Apache Kudu
A Closer Look at Apache KuduAndriy Zabavskyy
2.1K views63 slides
The Impala Cookbook by
The Impala CookbookThe Impala Cookbook
The Impala CookbookCloudera, Inc.
90.6K views87 slides
Kafka overview v0.1 by
Kafka overview v0.1Kafka overview v0.1
Kafka overview v0.1Mahendran Ponnusamy
134 views43 slides
A brave new world in mutable big data relational storage (Strata NYC 2017) by
A brave new world in mutable big data  relational storage (Strata NYC 2017)A brave new world in mutable big data  relational storage (Strata NYC 2017)
A brave new world in mutable big data relational storage (Strata NYC 2017)Todd Lipcon
7.3K views52 slides
Intro to Apache Kudu (short) - Big Data Application Meetup by
Intro to Apache Kudu (short) - Big Data Application MeetupIntro to Apache Kudu (short) - Big Data Application Meetup
Intro to Apache Kudu (short) - Big Data Application MeetupMike Percy
2.6K views29 slides
Arm architecture chapter2_steve_furber by
Arm architecture chapter2_steve_furberArm architecture chapter2_steve_furber
Arm architecture chapter2_steve_furberasodariyabhavesh
13.7K views27 slides

Similar to ORC Deep Dive 2020(20)

A brave new world in mutable big data relational storage (Strata NYC 2017) by Todd Lipcon
A brave new world in mutable big data  relational storage (Strata NYC 2017)A brave new world in mutable big data  relational storage (Strata NYC 2017)
A brave new world in mutable big data relational storage (Strata NYC 2017)
Todd Lipcon7.3K views
Intro to Apache Kudu (short) - Big Data Application Meetup by Mike Percy
Intro to Apache Kudu (short) - Big Data Application MeetupIntro to Apache Kudu (short) - Big Data Application Meetup
Intro to Apache Kudu (short) - Big Data Application Meetup
Mike Percy2.6K views
Arm architecture chapter2_steve_furber by asodariyabhavesh
Arm architecture chapter2_steve_furberArm architecture chapter2_steve_furber
Arm architecture chapter2_steve_furber
asodariyabhavesh13.7K views
Parquet Hadoop Summit 2013 by Julien Le Dem
Parquet Hadoop Summit 2013Parquet Hadoop Summit 2013
Parquet Hadoop Summit 2013
Julien Le Dem26K views
SYBSC IT SEM IV EMBEDDED SYSTEMS UNIT IV Designing Embedded System with 8051... by Arti Parab Academics
SYBSC IT SEM IV EMBEDDED SYSTEMS UNIT IV  Designing Embedded System with 8051...SYBSC IT SEM IV EMBEDDED SYSTEMS UNIT IV  Designing Embedded System with 8051...
SYBSC IT SEM IV EMBEDDED SYSTEMS UNIT IV Designing Embedded System with 8051...
Pune-Cocoa: Blocks and GCD by Prashant Rane
Pune-Cocoa: Blocks and GCDPune-Cocoa: Blocks and GCD
Pune-Cocoa: Blocks and GCD
Prashant Rane1.3K views
Cloudera Impala technical deep dive by huguk
Cloudera Impala technical deep diveCloudera Impala technical deep dive
Cloudera Impala technical deep dive
huguk12.3K views
HBase Data Modeling and Access Patterns with Kite SDK by HBaseCon
HBase Data Modeling and Access Patterns with Kite SDKHBase Data Modeling and Access Patterns with Kite SDK
HBase Data Modeling and Access Patterns with Kite SDK
HBaseCon4.7K views
Simplifying Hadoop with RecordService, A Secure and Unified Data Access Path ... by Cloudera, Inc.
Simplifying Hadoop with RecordService, A Secure and Unified Data Access Path ...Simplifying Hadoop with RecordService, A Secure and Unified Data Access Path ...
Simplifying Hadoop with RecordService, A Secure and Unified Data Access Path ...
Cloudera, Inc.2.5K views
DataFrames: The Extended Cut by Wes McKinney
DataFrames: The Extended CutDataFrames: The Extended Cut
DataFrames: The Extended Cut
Wes McKinney8.5K views
Why you should care about data layout in the file system with Cheng Lian and ... by Databricks
Why you should care about data layout in the file system with Cheng Lian and ...Why you should care about data layout in the file system with Cheng Lian and ...
Why you should care about data layout in the file system with Cheng Lian and ...
Databricks4.5K views
Simplified instructional computer by Kirby Fabro
Simplified instructional computerSimplified instructional computer
Simplified instructional computer
Kirby Fabro1.9K views

More from Owen O'Malley

Running An Apache Project: 10 Traps and How to Avoid Them by
Running An Apache Project: 10 Traps and How to Avoid ThemRunning An Apache Project: 10 Traps and How to Avoid Them
Running An Apache Project: 10 Traps and How to Avoid ThemOwen O'Malley
237 views20 slides
Big Data's Journey to ACID by
Big Data's Journey to ACIDBig Data's Journey to ACID
Big Data's Journey to ACIDOwen O'Malley
169 views16 slides
Protect your private data with ORC column encryption by
Protect your private data with ORC column encryptionProtect your private data with ORC column encryption
Protect your private data with ORC column encryptionOwen O'Malley
1.1K views35 slides
Fine Grain Access Control for Big Data: ORC Column Encryption by
Fine Grain Access Control for Big Data: ORC Column EncryptionFine Grain Access Control for Big Data: ORC Column Encryption
Fine Grain Access Control for Big Data: ORC Column EncryptionOwen O'Malley
992 views35 slides
Fast Access to Your Data - Avro, JSON, ORC, and Parquet by
Fast Access to Your Data - Avro, JSON, ORC, and ParquetFast Access to Your Data - Avro, JSON, ORC, and Parquet
Fast Access to Your Data - Avro, JSON, ORC, and ParquetOwen O'Malley
1.1K views45 slides
Strata NYC 2018 Iceberg by
Strata NYC 2018  IcebergStrata NYC 2018  Iceberg
Strata NYC 2018 IcebergOwen O'Malley
424 views34 slides

More from Owen O'Malley(19)

Running An Apache Project: 10 Traps and How to Avoid Them by Owen O'Malley
Running An Apache Project: 10 Traps and How to Avoid ThemRunning An Apache Project: 10 Traps and How to Avoid Them
Running An Apache Project: 10 Traps and How to Avoid Them
Owen O'Malley237 views
Big Data's Journey to ACID by Owen O'Malley
Big Data's Journey to ACIDBig Data's Journey to ACID
Big Data's Journey to ACID
Owen O'Malley169 views
Protect your private data with ORC column encryption by Owen O'Malley
Protect your private data with ORC column encryptionProtect your private data with ORC column encryption
Protect your private data with ORC column encryption
Owen O'Malley1.1K views
Fine Grain Access Control for Big Data: ORC Column Encryption by Owen O'Malley
Fine Grain Access Control for Big Data: ORC Column EncryptionFine Grain Access Control for Big Data: ORC Column Encryption
Fine Grain Access Control for Big Data: ORC Column Encryption
Owen O'Malley992 views
Fast Access to Your Data - Avro, JSON, ORC, and Parquet by Owen O'Malley
Fast Access to Your Data - Avro, JSON, ORC, and ParquetFast Access to Your Data - Avro, JSON, ORC, and Parquet
Fast Access to Your Data - Avro, JSON, ORC, and Parquet
Owen O'Malley1.1K views
Fast Spark Access To Your Complex Data - Avro, JSON, ORC, and Parquet by Owen O'Malley
Fast Spark Access To Your Complex Data - Avro, JSON, ORC, and ParquetFast Spark Access To Your Complex Data - Avro, JSON, ORC, and Parquet
Fast Spark Access To Your Complex Data - Avro, JSON, ORC, and Parquet
Owen O'Malley1.6K views
ORC Column Encryption by Owen O'Malley
ORC Column EncryptionORC Column Encryption
ORC Column Encryption
Owen O'Malley1.4K views
Protecting Enterprise Data in Apache Hadoop by Owen O'Malley
Protecting Enterprise Data in Apache HadoopProtecting Enterprise Data in Apache Hadoop
Protecting Enterprise Data in Apache Hadoop
Owen O'Malley2.2K views
Structor - Automated Building of Virtual Hadoop Clusters by Owen O'Malley
Structor - Automated Building of Virtual Hadoop ClustersStructor - Automated Building of Virtual Hadoop Clusters
Structor - Automated Building of Virtual Hadoop Clusters
Owen O'Malley2.7K views
Hadoop Security Architecture by Owen O'Malley
Hadoop Security ArchitectureHadoop Security Architecture
Hadoop Security Architecture
Owen O'Malley30.2K views
Adding ACID Updates to Hive by Owen O'Malley
Adding ACID Updates to HiveAdding ACID Updates to Hive
Adding ACID Updates to Hive
Owen O'Malley3.1K views
ORC File Introduction by Owen O'Malley
ORC File IntroductionORC File Introduction
ORC File Introduction
Owen O'Malley11.8K views
Next Generation Hadoop Operations by Owen O'Malley
Next Generation Hadoop OperationsNext Generation Hadoop Operations
Next Generation Hadoop Operations
Owen O'Malley3.4K views
Next Generation MapReduce by Owen O'Malley
Next Generation MapReduceNext Generation MapReduce
Next Generation MapReduce
Owen O'Malley1.9K views
Bay Area HUG Feb 2011 Intro by Owen O'Malley
Bay Area HUG Feb 2011 IntroBay Area HUG Feb 2011 Intro
Bay Area HUG Feb 2011 Intro
Owen O'Malley1.8K views
Plugging the Holes: Security and Compatability in Hadoop by Owen O'Malley
Plugging the Holes: Security and Compatability in HadoopPlugging the Holes: Security and Compatability in Hadoop
Plugging the Holes: Security and Compatability in Hadoop
Owen O'Malley1.7K views

Recently uploaded

_MAKRIADI-FOTEINI_diploma thesis.pptx by
_MAKRIADI-FOTEINI_diploma thesis.pptx_MAKRIADI-FOTEINI_diploma thesis.pptx
_MAKRIADI-FOTEINI_diploma thesis.pptxfotinimakriadi
8 views32 slides
Renewal Projects in Seismic Construction by
Renewal Projects in Seismic ConstructionRenewal Projects in Seismic Construction
Renewal Projects in Seismic ConstructionEngineering & Seismic Construction
5 views8 slides
GDSC Mikroskil Members Onboarding 2023.pdf by
GDSC Mikroskil Members Onboarding 2023.pdfGDSC Mikroskil Members Onboarding 2023.pdf
GDSC Mikroskil Members Onboarding 2023.pdfgdscmikroskil
58 views62 slides
DevOps-ITverse-2023-IIT-DU.pptx by
DevOps-ITverse-2023-IIT-DU.pptxDevOps-ITverse-2023-IIT-DU.pptx
DevOps-ITverse-2023-IIT-DU.pptxAnowar Hossain
12 views45 slides
Proposal Presentation.pptx by
Proposal Presentation.pptxProposal Presentation.pptx
Proposal Presentation.pptxkeytonallamon
52 views36 slides
BCIC - Manufacturing Conclave - Technology-Driven Manufacturing for Growth by
BCIC - Manufacturing Conclave -  Technology-Driven Manufacturing for GrowthBCIC - Manufacturing Conclave -  Technology-Driven Manufacturing for Growth
BCIC - Manufacturing Conclave - Technology-Driven Manufacturing for GrowthInnomantra
6 views4 slides

Recently uploaded(20)

_MAKRIADI-FOTEINI_diploma thesis.pptx by fotinimakriadi
_MAKRIADI-FOTEINI_diploma thesis.pptx_MAKRIADI-FOTEINI_diploma thesis.pptx
_MAKRIADI-FOTEINI_diploma thesis.pptx
fotinimakriadi8 views
GDSC Mikroskil Members Onboarding 2023.pdf by gdscmikroskil
GDSC Mikroskil Members Onboarding 2023.pdfGDSC Mikroskil Members Onboarding 2023.pdf
GDSC Mikroskil Members Onboarding 2023.pdf

ORC Deep Dive 2020

ORC DEEP DIVE
Owen O’Malley
omalley@apache.org
January 2020
@owen_omalley

REQUIREMENTS
• Files had to be completely self-describing
  • Schema
  • File version
• Tight compression ⇒ Run Length Encoding (RLE) & generic compression
• Column projection ⇒ segregate column data
• Predicate pushdown ⇒ understand & index the user’s types
• Files had to be easy & fast to divide
  • Compatible with write-once file systems

FILE STRUCTURE
• The file footer contains:
  • Metadata – schema, file statistics
  • Stripe information – metadata and location of stripes
  • Postscript with the compression, buffer size, & file version
• ORC file data is divided into stripes.
  • Stripes are self-contained sets of rows organized by columns.
  • Stripes are the smallest unit of work for tasks.
  • Default is ~64MB, but often configured larger.

STRIPE STRUCTURE
• Within a stripe, the metadata is in the stripe footer:
  • List of streams
  • Column encoding information (e.g. direct or dictionary)
• Columns are written as a set of streams. There are 3 kinds:
  • Index streams
  • Data streams
  • Dictionary streams

FILE STRUCTURE
• (diagram slide – the layout figure was not preserved in this transcript)

READ PATH
• The Reader reads the last 16k of the file, more as needed
• The RowReader reads:
  • Stripe footer
  • Required streams

STREAMS
• Streams are independent sequences of bytes
• Serialization into streams depends on column type & encoding
• Optional pipeline stages:
  • Run Length Encoding (RLE) – first-pass integer compression
  • Generic compression – Zlib, Snappy, LZO, Zstd
  • Encryption – AES/CTR

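To make the staging concrete, here is a toy sketch (not ORC's actual wire format) that chains a naive repeat-count RLE pass with zlib standing in for the generic codec; the function names and the "count:value;" serialization are invented for illustration, and the encryption stage is omitted.

```python
import zlib

def rle_encode(values):
    """Naive (count, value) run-length pass -- a stand-in for ORC's integer RLE."""
    runs = []
    for v in values:
        if runs and runs[-1][1] == v:
            runs[-1][0] += 1
        else:
            runs.append([1, v])
    # Serialize each run as "count:value;" -- purely illustrative.
    return "".join(f"{c}:{v};" for c, v in runs).encode()

def encode_stream(values):
    """RLE first, then generic compression (zlib here); encryption would follow."""
    return zlib.compress(rle_encode(values))

def decode_stream(data):
    """Reverse the pipeline: decompress, then expand the runs."""
    out = []
    for run in zlib.decompress(data).decode().split(";")[:-1]:
        count, value = run.split(":")
        out.extend([int(value)] * int(count))
    return out
```

The point of the ordering is that the RLE pass turns repetitive column data into short, regular byte sequences that the generic codec then compresses cheaply.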
COMPOUND TYPES
• Compound types are serialized as trees of columns.
• struct, list, map, uniontype all have child columns
• Types are numbered in a preorder traversal
• The column reading classes are called TreeReaders
• Example schema: a: int, b: map<string, struct<c: string, d: double>>, e: timestamp

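A short sketch of the preorder numbering on the slide's example schema; the tuple-based type representation is invented for illustration (ORC itself defines this in its TypeDescription classes).

```python
def number_columns(type_node, next_id=0):
    """Assign column ids in a preorder traversal: a node gets its id
    before any of its children. Returns (numbered tree, next unused id)."""
    kind = type_node[0]
    children = type_node[1] if len(type_node) > 1 else []
    my_id = next_id
    next_id += 1
    numbered_children = []
    for child in children:
        numbered_child, next_id = number_columns(child, next_id)
        numbered_children.append(numbered_child)
    return (my_id, kind, numbered_children), next_id

# The slide's example schema:
#   struct<a:int, b:map<string,struct<c:string,d:double>>, e:timestamp>
schema = ("struct", [
    ("int",),                                                        # a
    ("map", [("string",), ("struct", [("string",), ("double",)])]),  # b
    ("timestamp",),                                                  # e
])
tree, total = number_columns(schema)
```

The root struct is column 0, `a` is 1, `b` is 2, the map's key and value are 3 and 4, `c` and `d` are 5 and 6, and `e` is 7 -- each parent's id precedes all of its children's.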
ENCODING COLUMNS
• To interpret a stream, you need three pieces of information:
  • Column type
  • Column encoding (direct, dictionary)
  • Stream kind (present, data, length, etc.)
• All columns that have nulls will have a present stream
  • Serialized using a boolean RLE
• Integer columns are serialized with:
  • A data stream using integer RLE

ENCODING COLUMNS
• Binary columns are serialized with:
  • Length stream of integer RLE
  • Data stream of raw bytes
• String columns may be direct or dictionary encoded
  • Direct looks like a binary column, but dictionary is different:
  • Dictionary_data is the raw sequence of dictionary bytes
  • Length is an integer RLE stream of the dictionary entry lengths
  • Data is an integer RLE stream of indexes into the dictionary

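A minimal sketch of how one string column splits into the three dictionary streams named above. A real writer may sort its dictionary; this sketch keeps first-seen order, and the streams are shown before their RLE/compression passes.

```python
def dictionary_encode(strings):
    """Split a string column into dictionary-encoding streams:
    DICTIONARY_DATA (concatenated entry bytes), LENGTH (entry lengths),
    and DATA (per-row index into the dictionary)."""
    dictionary, ids, data = [], {}, []
    for s in strings:
        if s not in ids:
            ids[s] = len(dictionary)
            dictionary.append(s)
        data.append(ids[s])
    dictionary_data = "".join(dictionary).encode()    # DICTIONARY_DATA stream
    lengths = [len(s.encode()) for s in dictionary]   # LENGTH stream (pre-RLE)
    return dictionary_data, lengths, data             # DATA stream (pre-RLE)
```

Repeated values cost one small integer each instead of the full string, which is why dictionary encoding wins on low-cardinality columns.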
ENCODING COLUMNS
• Lists and maps record the number of child elements
  • Length is an integer RLE stream
• Structs only have the present stream
• Timestamps need nanosecond resolution (ouch!)
  • Data is an integer RLE of seconds from 1 January 2015
  • Secondary is an integer RLE of nanoseconds with trailing-zero suppression

RUN LENGTH ENCODING
• Goal is to get some cheap, quick compression
• Handles repeating/incrementing values
• Handles integer byte packing
• Two versions:
  • Version 1 – relatively simple repeat/literal encoding
  • Version 2 – complex encoding with 4 variants
• A column encoding of *_V2 means use RLE version 2

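A minimal encoder in the spirit of RLE version 1's repeat/literal split. The real format adds a delta byte for incrementing runs and caps run and literal group lengths in its control bytes; this sketch only separates runs of three or more from literal groups.

```python
def rle_v1_sketch(values):
    """Split input into ('run', count, value) and ('literal', [values])
    groups -- the two cases RLE version 1 distinguishes. Simplified:
    no delta runs, no length caps, no byte serialization."""
    groups, i = [], 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        if j - i >= 3:                       # long enough to pay for a run
            groups.append(("run", j - i, values[i]))
        elif groups and groups[-1][0] == "literal":
            groups[-1][1].extend(values[i:j])  # grow the open literal group
        else:
            groups.append(("literal", list(values[i:j])))
        i = j
    return groups
```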
ROW PRUNING
• Three levels of indexing/row pruning:
  • File – uses file statistics in the file footer
  • Stripe – uses stripe statistics before the file footer
  • Row group (default of 10k rows) – uses the index stream
• The index stream for each column includes, for each row group:
  • Column statistics (min, max, count, sum)
  • The start positions of each stream

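The min/max statistics make pruning a one-line range check per row group. The statistics format below (a list of (min, max) pairs) is invented for illustration; ORC stores these in protobuf column statistics.

```python
def prune_row_groups(row_group_stats, literal):
    """For an equality predicate, keep only the row groups whose
    [min, max] range could contain the literal."""
    return [i for i, (lo, hi) in enumerate(row_group_stats)
            if lo <= literal <= hi]

# When the data is sorted on the column, one row group survives:
stats = [(0, 9_999), (10_000, 19_999), (20_000, 29_999)]
```

The same check applies at the stripe and file levels, just with coarser statistics.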
SEARCH ARGUMENTS
• Engines can pass Search Arguments (SArgs) to the RowReader.
  • Limited set of operations (=, <=>, <, <=, in, between, is null)
  • Compare one column to literal(s)
• Can only eliminate entire row groups, stripes, or files.
  • Engine must still filter the individual rows afterwards
• For Hive, ensure hive.optimize.index.filter is true.

COMPRESSION
• All of the generic compression is done in chunks
  • Codec is reinitialized at the start of each chunk
  • Each chunk is compressed separately
  • Each uncompressed chunk is at most the buffer size
• Each chunk has a 3-byte header giving:
  • Compressed size of the chunk
  • Whether it is original or compressed

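A sketch of the 3-byte chunk header: the ORC specification stores the chunk length shifted left one bit, with the low bit set when the chunk is kept uncompressed ("original"), as a little-endian value.

```python
def encode_chunk_header(length, is_original):
    """3-byte little-endian header: (length << 1) | isOriginal, where the
    low bit marks a chunk stored without compression."""
    value = (length << 1) | (1 if is_original else 0)
    return value.to_bytes(3, "little")

def decode_chunk_header(header):
    """Recover (chunk length, is_original) from the 3-byte header."""
    value = int.from_bytes(header, "little")
    return value >> 1, bool(value & 1)
```

The "original" flag is what lets the writer skip compression when a chunk does not shrink, without the reader needing any out-of-band signal.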
INDEXES
• Wanted the ability to seek to each row group
  • Allows fine-grained seeking & row pruning
• Could have flushed the stream compression pipeline
  • Would have dramatically lowered compression
• Instead treat compression & RLE as gray boxes
  • Use our knowledge of compression & RLE
  • Always start fresh at the beginning of a chunk or run

INDEX POSITIONS
• Records the information needed to seek to a given row in all of a column’s streams
• Includes:
  • C – Compressed bytes
  • U – Uncompressed bytes
  • V – RLE values
• C, U, & V jump to RG 4

BLOOM FILTERS
• For use cases where you need to find particular values
  • Sorting by that column allows min/max filtering
  • But you can only sort on one column effectively
• Bloom filters are probabilistic data structures
  • Only useful for equality, not less than or greater than
  • Need ~10 bits/distinct value ⇒ opt in
• ORC uses a bloom_filter_utf8 stream to record a bloom filter per row group

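The "~10 bits per distinct value" figure falls out of standard Bloom filter sizing: for a target false positive probability p, the optimal bits per entry is -ln(p)/ln(2)^2, about 9.6 bits at p = 0.01 (at ORC's default rate of 0.05, roughly 6.2 bits).

```python
import math

def bloom_bits_per_value(fpp):
    """Optimal Bloom filter bits per distinct value for a target
    false positive probability fpp."""
    return -math.log(fpp) / (math.log(2) ** 2)

def bloom_num_hashes(fpp):
    """Optimal number of hash functions for the same target."""
    return round(bloom_bits_per_value(fpp) * math.log(2))
```

Either way it is a real per-value cost on top of the column data, which is why bloom filters are opt-in rather than always written.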
ROW PRUNING EXAMPLE
• TPC-DS: from tpch1000.lineitem where l_orderkey = 1212000001;

  Index    Rows Read      Time
  Nothing  5,999,989,709  74 sec
  Min/Max  540,000        4.5 sec
  Bloom    10,000         1.3 sec

COMPATIBILITY
• Within a file version, old readers must be able to read all files.
  • A few exceptions (e.g. new codecs, types)
• Version 0 (from Hive 0.11)
  • Only RLE V1 & string dictionary encoding
• Version 1 (from Hive 0.12 forward)
• Version 2 (under development)
• The library includes the ability to write any file version.
  • Enables smooth upgrades across clusters

WRITER VERSION
• When fixes or feature additions are made to the writer, we bump the writer version.
  • Allows the reader to work around bugs, especially in the indexes
  • Does not affect reader compatibility
• We should require that each minor version adds a new one.
• We also record which writer wrote the file:
  • Java, C++, Presto, Go

EXAMPLE WORKAROUND FOR HIVE-8746
• Timestamps suck!
• ORC uses an epoch of 01-01-2015 00:00:00.
  • Timestamp columns record the seconds offset from the epoch
• Unfortunately, the original code used the local time zone.
  • If reader and writer were in time zones with the same rules, it worked.
• Fix involved writing the writer’s time zone into the file.
  • Forwards and backwards compatible

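The bug is easy to reproduce in miniature: computing "seconds since 2015-01-01 00:00:00" against a local wall clock gives different offsets in different zones, so a reader in another zone reconstructs a shifted timestamp. A sketch with fixed-offset zones standing in for real time zone rules (this mimics the buggy behavior, not ORC's actual code):

```python
from datetime import datetime, timedelta, timezone

def seconds_since_epoch(ts, tz):
    """Seconds from a 2015-01-01 00:00:00 epoch interpreted in the given
    zone -- i.e. the epoch moves with the local clock, as in HIVE-8746."""
    epoch = datetime(2015, 1, 1, tzinfo=tz)
    return int((ts - epoch).total_seconds())

utc = timezone.utc
pst = timezone(timedelta(hours=-8))          # fixed-offset stand-in zone
ts = datetime(2020, 1, 15, 12, 0, tzinfo=utc)

# A writer in UTC and a reader in PST disagree by exactly the zone offset:
delta = seconds_since_epoch(ts, pst) - seconds_since_epoch(ts, utc)
```

Recording the writer's zone in the file lets the reader reverse exactly this discrepancy, which is why the fix is compatible in both directions.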
SCHEMA EVOLUTION
• User passes the desired schema to the RecordReader factory.
• The SchemaEvolution class maps between file & reader schemas.
  • The mapping can be positional or name based.
  • Conversions are based on legacy Hive behavior…
• The RecordReader uses the mapping to translate:
  • Choosing streams uses the file schema column ids
  • Type translation is done by ConvertTreeReaderFactory.
  • Adds an additional TreeReader that does the conversion.

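A toy sketch of the two mapping modes over flat field-name lists (the real SchemaEvolution walks full type trees and also handles conversions); `map_reader_to_file` is an invented name for illustration.

```python
def map_reader_to_file(reader_fields, file_fields, by_name=True):
    """For each reader column, return the file column index it reads
    from, or None when the file lacks the column (read as null)."""
    if by_name:
        position = {name: i for i, name in enumerate(file_fields)}
        return [position.get(name) for name in reader_fields]
    # Positional: the i-th reader column reads the i-th file column.
    return [i if i < len(file_fields) else None
            for i in range(len(reader_fields))]
```

Name-based mapping tolerates reordered or dropped columns; positional mapping matches legacy Hive behavior, where column order in the table definition is what counts.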
STRIPE CONCATENATION & FLUSH
• ORC has a special operator to concatenate files
  • Requires consistent options & schema
  • Concatenates stripes without reserialization
• ORC can flush the current contents, including a file footer, while still writing to the file.
  • Writes a side file with the current offset of the file tail
  • When the file closes, the intermediate file footers are ignored

COLUMN ENCRYPTION
• Released in ORC 1.6
• Allows consistent column-level access control across engines
• Writes two variants of the data:
  • Encrypted original
  • Unencrypted, statically masked
  • Each variant has its own streams & encodings
• Each column has a unique local key, which is encrypted by the KMS

OTHER DEVELOPER TOOLS
• Benchmarks
  • Hive & Spark
  • Avro, JSON, ORC, and Parquet
  • Three data sets (taxi, sales, github)
• Docker
  • Allows automated builds on all supported Linux variants
• Site source code lives alongside the C++ & Java source

WHICH VERSION IS IT?

  Engine        Version      ORC Version
  Hive          0.11 to 2.2  Hive ORC 0.11 to 2.2
                2.3          ORC 1.3
                3.0          ORC 1.4
                3.1          ORC 1.5
  Spark hive    *            Hive ORC 1.2
  Spark native  2.3          ORC 1.4
                2.4 to 3.0   ORC 1.5

FROM SQL
• Hive:
  • Add “stored as orc” to the table definition
  • Table properties override configuration for ORC
• Spark’s “spark.sql.orc.impl” controls the implementation:
  • native – Use ORC 1.5
  • hive – Use ORC from Hive 1.2

FROM JAVA
• Use the ORC project rather than Hive’s ORC.
  • Maven group id: org.apache.orc, version: 1.6.2
  • The nohive classifier avoids interfering with Hive’s packages
• Two levels of access:
  • orc-core – Faster access, but uses Hive’s vectorized API
  • orc-mapreduce – Row-by-row access, simpler OrcStruct API
• The MapReduce API implements WritableComparable
  • Can be shuffled
  • Need to specify type information in the configuration for shuffle or output

FROM C++
• Pure C++ client library
  • No JNI or JDK, so the client can estimate and control memory
  • Uses the pure C++ HDFS client from HDFS-8707
• Reader and writer are stable and in production use.
• Runs on Linux, Mac OS, and Windows.
  • Docker scripts for CentOS 6–8, Debian 8–10, Ubuntu 14–18
  • CI builds on Mac OS, Ubuntu, and Windows

FROM COMMAND LINE
• Using hive --orcfiledump from Hive:
  • -j -p – pretty-prints the metadata as JSON
  • -d – prints data as JSON
• Using java -jar orc-tools-*-uber.jar from ORC:
  • meta -j -p – print the metadata as JSON
  • data – print data as JSON
  • convert – convert CSV, JSON, or ORC to ORC
  • json-schema – scan a set of JSON documents to find the schema

DEBUGGING
• Things to look for:
  • Stripe size
  • Rows/stripe
  • File version
  • Writer version
  • Width of schema
  • Sanity of statistics
  • Column encoding
  • Size of dictionaries

STRIPE SIZE
• Makes a huge difference in performance
  • orc.stripe.size or hive.exec.orc.default.stripe.size
  • Controls the amount of buffer in the writer. Default is 64MB
• Trade-off:
  • Large = more efficient reads
  • Small = less memory and more granular processing splits
• Multiple files written at the same time will shrink stripes

HDFS BLOCK PADDING
• The stripes don’t align exactly with HDFS blocks
  • Unless orc.write.variable.length.blocks
• HDFS scatters blocks around the cluster
• Often want to pad to block boundaries
  • Costs space, but improves performance
  • orc.default.block.padding
  • orc.block.padding.tolerance

SPLIT CALCULATION
• BI
  • Small, fast queries
  • Splits based on HDFS blocks
• ETL
  • Large queries
  • Read the file footer and apply the SearchArg to stripes
  • Can include the footer in splits (hive.orc.splits.include.file.footer)
• Hybrid
  • If small files or lots of files, use BI

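The hybrid rule above can be sketched as a small heuristic: avoid paying for ETL's footer reads when there are many files or the files are small. The thresholds and function name here are invented for illustration, not the values Hive actually uses.

```python
def choose_split_strategy(file_sizes, block_size=128 * 1024 * 1024,
                          many_files=100):
    """Hybrid heuristic: use the cheap BI strategy (split on HDFS blocks,
    no footer reads) for lots of files or small files; otherwise use ETL
    (read footers, prune stripes with the SearchArg)."""
    if len(file_sizes) >= many_files:
        return "BI"                      # too many footer reads to pay for
    if file_sizes and sum(file_sizes) / len(file_sizes) < block_size:
        return "BI"                      # small files: block splits suffice
    return "ETL"
```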
FOR MORE INFORMATION
• The orc_proto.proto file defines the ORC metadata
• Read the code, especially OrcConf, which has all of the knobs
• Website at https://orc.apache.org/
  • /bugs ⇒ jira repository
  • /src ⇒ github repository
  • /specification ⇒ format specification
• Apache email list: dev@orc.apache.org
