Planning with Polyalgebra: Bringing Together Relational, Complex and Machine Learning Algebra

Julian Hyde
Polyalgebra
Hadoop Summit, April 14, 2016
Who
Julian Hyde @julianhyde
Apache Calcite (VP), Drill, Kylin
Mondrian OLAP
Hortonworks

Tomer Shiran @tshiran
Apache Drill
Dremio

Thanks:
Jacques Nadeau @intjesus (Dremio/Drill)
Wes McKinney (Cloudera/Arrow)
Polyalgebra
An extended form of relational algebra that encompasses dynamically-typed data, complex records, streaming and machine learning, and that allows for a single optimization space.
Ecosystem
[Diagram, shown on four slides ("Ecosystem", "Ecosystem - data stores", "Ecosystem - Engines", "Ecosystem - Focus of this talk"): Calcite surrounded by Drill, Arrow, Ibis, Impala, Kudu, Splunk, Cassandra, MongoDB, Spark, Flink and JDBC sources, with the data stores, the engines, and the focus of this talk highlighted in turn.]
Old world, new world

RDBMS:
• Security
• Metadata
• SQL
• Query planning
• Data independence

Hadoop:
• Scale
• Late schema
• Choice of front-end
• Choice of engines
• Workload: batch, interactive, streaming, ML, graph, …
Many front ends, many engines
[Diagram: many front ends (SQL, Spark, Storm, Cascading, HBase, Graph) sit over many engines (MapReduce, Tez with user code in YARN, Spark, MongoDB, external SQL), each pairing with its own planning and execution path on Hadoop.]
Relational algebra

Extension to mathematical set theory
Devised by E.F. Codd (IBM) in 1970
Defines the relational database
Operators: select, filter, join, sort, union, etc.
Intermediate format for query planning/optimization

SQL → Relational algebra → Optimization → Runnable query plan
Relational algebra

select d.name, COUNT(*) as c
from Emps as e
join Depts as d on e.deptno = d.deptno
where e.age < 30
group by d.deptno
having count(*) > 5
order by c desc

Sort [c DESC]
  Project [name, c]
    Filter [c > 5]
      Aggregate [deptno, COUNT(*) AS c]
        Filter [e.age < 30]
          Join [e.deptno = d.deptno]
            Scan [Emps]
            Scan [Depts]

(Column names are simplified. They would usually be ordinals, e.g. $0 is the first column of the left input.)
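A minimal sketch of building this plan programmatically with Calcite's RelBuilder API (covered later in this talk, on the "Algebra builder API" slide). The upper-case table and column names ("EMPS", "DEPTS", "AGE", "DEPTNO") are assumptions about the schema, and the final project keeps deptno rather than the department name, matching the simplified SQL above:

import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.core.JoinRelType;
import org.apache.calcite.sql.fun.SqlStdOperatorTable;
import org.apache.calcite.tools.RelBuilder;

final RelBuilder b = RelBuilder.create(config); // config: see the "Algebra builder API" slide
final RelNode plan = b
    .scan("EMPS")
    .scan("DEPTS")
    .join(JoinRelType.INNER, "DEPTNO")               // Join [e.deptno = d.deptno]
    .filter(b.call(SqlStdOperatorTable.LESS_THAN,    // Filter [e.age < 30]
        b.field("AGE"), b.literal(30)))
    .aggregate(b.groupKey("DEPTNO"),                 // Aggregate [deptno, COUNT(*) AS c]
        b.count(false, "C"))
    .filter(b.call(SqlStdOperatorTable.GREATER_THAN, // Filter [c > 5]
        b.field("C"), b.literal(5)))
    .project(b.field("DEPTNO"), b.field("C"))        // Project
    .sort(b.desc(b.field("C")))                      // Sort [c DESC]
    .build();

RelBuilder keeps a stack of relational expressions: each scan pushes an input, and join pops two and pushes its result, so the two scans combine without explicit wiring.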
Relational algebra - Union and sub-query

select * from (
  select zipcode, state from Emps
  union all
  select zipcode, state from Customers)
where state in ('CA', 'TX')

Filter [state IN ('CA', 'TX')]
  Union [all]
    Project [zipcode, state]
      Scan [Emps]
    Project [zipcode, state]
      Scan [Customers]
Relational algebra - Insert and Values

insert into Facts
values ('Meaning of life', 42),
       ('Clever as clever', 6)

Insert [Facts]
  Values [['Meaning of life', 42],
          ['Clever as clever', 6]]
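For comparison, a hedged sketch of constructing the Values operator with the same RelBuilder API (the field names "FACT" and "N" are assumptions; the Insert on top corresponds to a TableModify operator, whose construction is elided here):

import org.apache.calcite.rel.RelNode;
import org.apache.calcite.tools.RelBuilder;

final RelBuilder b = RelBuilder.create(config); // config as before
final RelNode values = b
    .values(new String[] {"FACT", "N"}, // column names, then row values
        "Meaning of life", 42,
        "Clever as clever", 6)
    .build();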
Expression tree

select p.productName, COUNT(*) as c
from splunk.splunk as s
join mysql.products as p on s.productId = p.productId
where s.action = 'purchase'
group by p.productName
order by c desc

sort [key: c DESC]
  group [key: product_name, agg: count]
    join [key: product_id]
      filter [condition: action = 'purchase']
        scan [table: splunk]     (Splunk)
      scan [table: products]     (MySQL)
Expression tree (optimized)

Same query; the filter is now pushed down so that it executes inside Splunk, while MySQL scans the products table:

sort [key: c DESC]
  group [key: product_name, agg: count]
    join [key: product_id]
      filter [condition: action = 'purchase']   (executed in Splunk)
        scan [table: splunk]                    (executed in Splunk)
      scan [table: products]                    (executed in MySQL)
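The pushdown shown here is itself just another planner rule. A hedged sketch of the shape such a rule takes; Calcite's real Splunk adapter rule (SplunkPushDownRule) is more general, and the SplunkTableScan class and its withFilter method below are hypothetical:

import org.apache.calcite.plan.RelOptRule;
import org.apache.calcite.plan.RelOptRuleCall;
import org.apache.calcite.rel.core.Filter;

public class FilterIntoSplunkRule extends RelOptRule {
  public FilterIntoSplunkRule() {
    // Match a Filter whose input is a scan of a Splunk table.
    super(operand(Filter.class, operand(SplunkTableScan.class, none())));
  }

  @Override public void onMatch(RelOptRuleCall call) {
    final Filter filter = call.rel(0);
    final SplunkTableScan scan = call.rel(1); // hypothetical scan class
    // Fold the condition into the Splunk search and register the new scan
    // as an equivalent, cheaper plan.
    call.transformTo(scan.withFilter(filter.getCondition()));
  }
}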
Demo
{sqlline, apache-calcite, .csv, CsvPushProjectOntoTableRule}
Calcite – APIs and SPIs

SQL parser: SqlParser, SqlNode, SqlValidator

Relational algebra:
RelNode (operator)
• TableScan
• Filter
• Project
• Union
• Aggregate
• …
RelDataType (type)
RexNode (expression)
RelTrait (physical property)
• RelConvention (calling-convention)
• RelCollation (sortedness)
• TBD (bucketedness/distribution)

Transformation rules: RelOptRule
• MergeFilterRule
• PushAggregateThroughUnionRule
• 100+ more

Global transformations:
• Unification (materialized view)
• Column trimming
• De-correlation

Cost, statistics: RelOptCost, RelOptCostFactory, RelMetadataProvider
• RelMdColumnUniqueness
• RelMdDistinctRowCount
• RelMdSelectivity

Metadata: Schema, Table, Function (TableFunction, TableMacro), Lattice

JDBC driver
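Transformation rules all share one shape: match an operator sub-graph, then register an equivalent expression. A minimal sketch, using a hypothetical rule that removes a Filter whose condition is always true (real rules such as MergeFilterRule follow the same pattern):

import org.apache.calcite.plan.RelOptRule;
import org.apache.calcite.plan.RelOptRuleCall;
import org.apache.calcite.rel.core.Filter;

public class RemoveTrivialFilterRule extends RelOptRule {
  public static final RemoveTrivialFilterRule INSTANCE =
      new RemoveTrivialFilterRule();

  private RemoveTrivialFilterRule() {
    super(operand(Filter.class, any())); // match any Filter node
  }

  @Override public void onMatch(RelOptRuleCall call) {
    final Filter filter = call.rel(0);
    if (filter.getCondition().isAlwaysTrue()) {
      // Register the filter's input as an equivalent expression; the new
      // and original sub-plans both remain in contention until costing.
      call.transformTo(filter.getInput());
    }
  }
}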
Calcite Planning Process

SQL → parse tree → Sql-to-Rel Converter (SqlNode → RelNode + RexNode) → Planner → best RelNode graph → translate to runtime (logical plan)

1. Plan Graph
• Node for each node in the input plan
• Each node is a Set of alternate sub-plans
• Sets are further divided into Subsets, based on traits like sortedness

2. Rules
• A rule specifies an operator sub-graph to match and logic to generate an equivalent 'better' sub-graph
• New and original sub-graphs both remain in contention

3. Cost Model
• RelNodes have cost and cumulative cost

4. Metadata Providers
• Used to plug in schema and cost formulas
• Filter selectivity
• Join selectivity
• NDV (number of distinct values) calculations

Rule Match Queue
• Add rule matches to the queue
• Apply rule-match transformations to the plan graph
• Iterate for a fixed number of iterations or until cost doesn't change
• Match importance is based on cost of the RelNode and height

Based on the "Volcano" & "Cascades" papers [G. Graefe]
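A hedged sketch of driving this process end to end through Calcite's Planner API (org.apache.calcite.tools.Planner). Schema and rule-set setup live in the FrameworkConfig, the SQL string is illustrative, and checked exceptions are omitted:

import org.apache.calcite.adapter.enumerable.EnumerableConvention;
import org.apache.calcite.plan.RelTraitSet;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.RelRoot;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.Planner;

Planner planner = Frameworks.getPlanner(config);
SqlNode parsed = planner.parse("select name from emps where age < 30");
SqlNode validated = planner.validate(parsed); // name resolution, type checking
RelRoot root = planner.rel(validated);        // Sql-to-Rel conversion
RelTraitSet desired = root.rel.getTraitSet()
    .replace(EnumerableConvention.INSTANCE);  // request a runnable convention
RelNode best = planner.transform(0, desired, root.rel); // cost-based search over rule set 0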
Algebra builder API

final FrameworkConfig config; // construction elided; see Frameworks.newConfigBuilder()
final RelBuilder builder = RelBuilder.create(config);
final RelNode node = builder
    .scan("EMP")
    .aggregate(builder.groupKey("DEPTNO"),
        builder.count(false, "C"),
        builder.sum(false, "S", builder.field("SAL")))
    .filter(
        builder.call(SqlStdOperatorTable.GREATER_THAN,
            builder.field("C"),
            builder.literal(10)))
    .build();
System.out.println(RelOptUtil.toString(node));

Equivalent SQL:

select deptno, COUNT(*) as c, sum(sal) as s
from Emp
group by deptno
having COUNT(*) > 10

produces:

LogicalFilter(condition=[>($1, 10)])
  LogicalAggregate(group=[{7}], C=[COUNT()], S=[SUM($5)])
    LogicalTableScan(table=[[scott, EMP]])
Non-relational, post-relational
Non-relational stores:
• Document databases — MongoDB
• Key-value stores — HBase, Cassandra
• Graph databases — Neo4J
• Multidimensional OLAP — Microsoft Analysis Services, Mondrian
• Streams — Kafka, Storm
• Text, audio, video
Non-relational operators — data exploration, machine learning
Late or no schema
Complex data

Complex data, also known as nested or document-oriented data. Typically, it can be represented as JSON.

2 new operators are sufficient:
• UNNEST
• COLLECT aggregate function

employees: [
  {
    name: "Bob",
    age: 48,
    pets: [
      {name: "Jim", type: "Dog"},
      {name: "Frank", type: "Cat"}
    ]
  }, {
    name: "Stacy",
    age: 31,
    starSign: "taurus",
    pets: [
      {name: "Jack", type: "Cat"}
    ]
  }, {
    name: "Ken",
    age: 23
  }
]
UNNEST and Flatten

Flatten converts arrays of values to separate rows:
• New record for each list item
• An empty list removes the record

name   age  pets
Bob    48   [{name: Jim, type: dog}, {name: Frank, type: cat}]
Stacy  31   [{name: Jack, type: cat}]
Ken    23   []

select e.name, e.age, flatten(e.pets)
from Employees as e

name   age  pet
Bob    48   {name: Jim, type: dog}
Bob    48   {name: Frank, type: cat}
Stacy  31   {name: Jack, type: cat}

Flatten is actually just syntactic sugar for the UNNEST relational operator:

select e.name, e.age, row(a.name, a.type)
from Employees as e,
  unnest e.pets as a
Optimizing UNNEST

As usual, to optimize, we write planner rules. We can push filters into the non-nested side, so we write FilterUnnestTransposeRule. (There are many other possible rules; a hedged sketch of this one follows below.)

select e.name, a.name
from Employees as e,
  unnest e.pets as a
where e.age < 30

FilterUnnestTransposeRule →

select e.name, a.name
from (
  select *
  from Employees
  where age < 30) as e,
  unnest e.pets as a
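A hedged sketch of FilterUnnestTransposeRule, assuming a hypothetical single-input Unnest operator that exposes the ordinal of the column it flattens (the real rule lives in the engine, e.g. Drill, and also handles splitting mixed conditions):

import com.google.common.collect.ImmutableList;
import org.apache.calcite.plan.RelOptRule;
import org.apache.calcite.plan.RelOptRuleCall;
import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.rel.core.Filter;

public class FilterUnnestTransposeRule extends RelOptRule {
  public FilterUnnestTransposeRule() {
    super(operand(Filter.class, operand(Unnest.class, any())));
  }

  @Override public void onMatch(RelOptRuleCall call) {
    final Filter filter = call.rel(0);
    final Unnest unnest = call.rel(1); // Unnest is hypothetical here
    // Transpose only if the condition does not touch the flattened column:
    if (!RelOptUtil.InputFinder.bits(filter.getCondition())
        .get(unnest.flattenedFieldOrdinal())) {
      // Rebuild as unnest(filter(input)), pushing the filter below:
      call.transformTo(
          unnest.copy(unnest.getTraitSet(),
              ImmutableList.of(
                  filter.copy(filter.getTraitSet(),
                      unnest.getInput(), filter.getCondition()))));
    }
  }
}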
Optimizing UNNEST (2)

We can also optimize projects. If the table is stored in a column-oriented file format, this reduces disk reads significantly.
• Array wildcard projection through flatten
• Non-flattened column inclusion

select e.name, flatten(pets).name
from Employees as e

Original plan:
  project(name, pet.name)
    flatten(pets) as pet
      scan(name, age, pets)

Project through flatten (ProjectFlattenTransposeRule):
  flatten(pets) as pet
    project(name, pets[*].name)
      scan(name, age, pets)

Project into scan, so less data is read (ProjectScanRule):
  flatten(pets) as pet
    scan(name, pets[*].name)
Late schema

Evolution:
• Oracle: Schema before write, strongly typed SQL (like Java)
• Hive: Schema before query, strongly typed SQL
• Drill: Schema on data, weakly typed SQL (like JavaScript)

select *
from Employees

name   age  starSign  pets
Bob    48             [{name: Jim, type: dog}, {name: Frank, type: cat}]
Stacy  31   Taurus    [{name: Jack, type: cat}]
Ken    23             []

select *
from Employees
where age < 30

name   age  pets
Ken    23   []

(no starSign column!)
Implementing schema-on-data

Expanding *
• Early-schema databases expand * at planning time, based on the schema
• Drill expands * during query execution
• Each operator needs to be able to propagate column names/types as well as data

Internally, Drill is strongly typed
• Strong typing means efficient code
• JavaScript engines do this too
• Infer the type for each batch of records
• Throw away generated code if a column changes type in the next batch of records (see the sketch below)

select e.name
from Employees
where e.age < 30

becomes, internally:

select e._map['name'] as name
from Employees
where cast(e._map['age'] as integer) < 30
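A hedged, purely illustrative sketch of that per-batch strategy; RecordBatch, BatchSchema, Evaluator and compileFor are hypothetical stand-ins, not Drill's actual classes. Generated code is cached by the batch's inferred schema and recompiled only when a column's type changes:

import java.util.HashMap;
import java.util.Map;

Map<BatchSchema, Evaluator> cache = new HashMap<>();
for (RecordBatch batch : batches) {
  // Re-use generated code while the inferred schema stays stable;
  // a type change yields a new schema key, hence a fresh compile.
  Evaluator eval = cache.computeIfAbsent(batch.schema(), s -> compileFor(s));
  eval.process(batch);
}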
User-defined operators

A table function is a Java UDF that returns a relation.
• Its arguments may be relations or scalars.
• It appears in the execution plan.
• Annotations indicate whether it is safe to push filters and projects through it.

A table macro is a Java function that takes a parse tree and returns a parse tree.
• Named after Lisp macros.
• It does not appear in the execution plan.
• Views (next slide) are a kind of table macro.

Use a table macro rather than a table function, if possible, to re-use existing optimizations; a registration sketch follows below.

select e.name
from table(
  my_sample(
    select * from Employees,
    0.15))

select e.name
from table(
  my_filter(
    select * from Employees,
    'age', '<', 30))
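A hedged sketch of registering a view-style table macro with Calcite using ViewTable.viewMacro (argument list as of Calcite circa 2016; later releases add optional view-path arguments). The schema contents and the "MANAGERS" name are assumptions:

import com.google.common.collect.ImmutableList;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.schema.impl.ViewTable;
import org.apache.calcite.tools.Frameworks;

SchemaPlus root = Frameworks.createRootSchema(true);
// "MANAGERS" queries like a table but expands to its defining SQL at
// planning time, exactly like the view on the next slide:
root.add("MANAGERS",
    ViewTable.viewMacro(root,
        "select * from emps as e where exists (\n"
        + "  select * from emps as u where u.manager = e.id)",
        ImmutableList.<String>of()));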
Views

create view Managers as
select *
from Emps as e
where exists (
  select *
  from Emps as underling
  where underling.manager = e.id)

select deptno, min(salary)
from Managers
where age >= 50
group by deptno

Query plan, over the view:

Aggregate [deptno, min(salary)]
  Filter [age >= 50]
    Scan [Managers]

The view definition, as algebra:

Project [$0, $1, $2, $3]
  Join [$0, $5]
    Scan [Emps]
    Aggregate [manager]
      Scan [Emps]
Views (after expansion)

(View definition and query as on the previous slide.)

Aggregate [deptno, min(salary)]
  Filter [age >= 50]
    Project [$0, $1, $2, $3]
      Join [$0, $5]
        Scan [Emps]
        Aggregate [manager]
          Scan [Emps]
Views (after pushing down filter)

(View definition and query as before.)

Aggregate [deptno, min(salary)]
  Project [$0, $1, $2, $3]
    Join [$0, $5]
      Filter [age >= 50]
        Scan [Emps]
      Aggregate [manager]
        Scan [Emps]
Materialized view

create materialized view EmpSummary as
select deptno,
  gender,
  count(*) as c,
  sum(sal)
from Emps
group by deptno, gender

This declares:

Scan [EmpSummary]  =
  Aggregate [deptno, gender, COUNT(*), SUM(sal)]
    Scan [Emps]

Query:

select count(*) as c
from Emps
where deptno = 10
and gender = 'M'

Initial plan:

Aggregate [COUNT(*)]
  Filter [deptno = 10 AND gender = 'M']
    Scan [Emps]
Materialized view, step 2: Rewrite query to match

(Materialized view EmpSummary and query as on the previous slide.)

Scan [EmpSummary]  =
  Aggregate [deptno, gender, COUNT(*), SUM(sal)]
    Scan [Emps]

Rewritten plan, whose lower half now matches the materialization:

Project [c]
  Filter [deptno = 10 AND gender = 'M']
    Aggregate [deptno, gender, COUNT(*) AS c, SUM(sal) AS s]
      Scan [Emps]
Materialized view, step 3: substitute table

(Materialized view EmpSummary and query as before.)

Scan [EmpSummary]  =
  Aggregate [deptno, gender, COUNT(*), SUM(sal)]
    Scan [Emps]

Final plan, with the aggregate over Emps replaced by a scan of EmpSummary:

Project [c]
  Filter [deptno = 10 AND gender = 'M']
    Scan [EmpSummary]
Streaming queries

Streaming queries run forever. The stream appears in the FROM clause: Orders. Without the stream keyword, Orders means the history of the stream (a table).

Calcite streaming SQL is implemented in Samza, Storm and Flink.

select stream *
from Orders

select *
from Orders
Combining past and future

• Orders is used as both stream and table
• System determines where to find the records
• Query is invalid if records are not available

select stream *
from Orders as o
where units > (
  select avg(units)
  from Orders as h
  where h.productId = o.productId
  and h.rowtime > o.rowtime - interval '1' year)
Hybrid systems

Hybrid systems combine more than one data source and/or engine.
Examples:
• Splunk join to MySQL
• A user-defined table written in Python, reading from an in-memory temporary table just created by Drill
• A streaming query populating a table summarizing the last hour's activity, used to populate a pie chart in a web dashboard

Two challenges:
• Planning the query to take advantage of each system's strengths
• Efficient interchange of data at run time
Hybrid algebra, hybrid run-time
[Diagram: the same ecosystem as before: Calcite plans across Drill, Arrow, Ibis, Impala, Kudu, Splunk, Cassandra, MongoDB, Spark, Flink and JDBC sources.]
Arrow in a Slide

• New top-level Apache Software Foundation project
  – Announced Feb 17, 2016
• Focused on columnar in-memory analytics
  1. 10-100x speedup on many workloads
  2. Common data layer enables companies to choose best-of-breed systems
  3. Designed to work with any programming language
  4. Support for both relational and complex data as-is
• Developers from 13+ major open source projects involved
  – A significant % of the world's data will be processed through Arrow!

Involved projects: Calcite, Cassandra, Deeplearning4j, Drill, Hadoop, HBase, Ibis, Impala, Kudu, Pandas, Parquet, Phoenix, Spark, Storm, R
Focus on CPU Efficiency
[Diagram: a traditional row-oriented memory buffer vs the Arrow columnar memory buffer.]
• Cache locality
• Super-scalar & vectorized operation
• Minimal structure overhead
• Constant value access
  – With minimal structure overhead
• Operate directly on columnar compressed data
High Performance Sharing & Interchange

Today:
• Each system has its own internal memory format
• 70-80% CPU wasted on serialization and deserialization
• Similar functionality implemented in multiple projects

With Arrow:
• All systems utilize the same memory format
• No overhead for cross-system communication
• Projects can share functionality (e.g., a Parquet-to-Arrow reader)
Summary

• Algebra-centric approach
• Optimize by applying transformation rules
• User-defined operators (table functions, table macros, custom RelNode classes)
• Complex data
• Late-schema queries
• Streaming queries
• Calcite enables planning hybrid queries
• Arrow enables hybrid runtime
@julianhyde
@tshiran
@ApacheCalcite
@ApacheDrill
@ApacheArrow
Get involved:
• http://calcite.apache.org
• http://drill.apache.org
• http://arrow.apache.org
Thanks!