
Planning with Polyalgebra: Bringing Together Relational, Complex and Machine Learning Algebra


A talk given by Julian Hyde and Tomer Shiran at Hadoop Summit, Dublin.

Data scientists and analysts want the best API, DSL or query language possible, not to be limited by what the processing engine can support. Polyalgebra is an extension to relational algebra that separates the user language from the engine, so you can choose the best language and engine for the job. It also allows the system to optimize queries and cache results. We demonstrate how Ibis uses Polyalgebra to execute the same Python-based machine learning queries on Impala, Drill and Spark. And we show how to build Polyalgebra expressions in Calcite and how to define optimization rules and storage handlers.



  1. 1. © Hortonworks Inc. 2016 Polyalgebra Hadoop Summit, April 14, 2016
  2. 2. © Hortonworks Inc. 2016 Who: Julian Hyde @julianhyde (Apache Calcite (VP), Drill, Kylin; Mondrian OLAP; Hortonworks) and Tomer Shiran @tshiran (Apache Drill; Dremio). Thanks: Jacques Nadeau @intjesus (Dremio/Drill) and Wes McKinney (Cloudera/Arrow)
  3. 3. © Hortonworks Inc. 2016 Polyalgebra: an extended form of relational algebra that encompasses work with dynamically-typed data, complex records, streaming and machine learning, and that allows for a single optimization space.
  4. 4. © Hortonworks Inc. 2016 Ecosystem [Diagram: Calcite and Arrow connect engines (Drill, Ibis, Impala, Spark, Flink) with data stores (Kudu, Splunk, Cassandra, MongoDB, JDBC sources)]
  5. 5. © Hortonworks Inc. 2016 Ecosystem - data stores [Same diagram, with the data stores highlighted]
  6. 6. © Hortonworks Inc. 2016 Ecosystem - engines [Same diagram, with the engines highlighted]
  7. 7. © Hortonworks Inc. 2016 Ecosystem - focus of this talk [Same diagram, with Calcite and Arrow highlighted]
  8. 8. © Hortonworks Inc. 2016 Old world, new world. RDBMS: • Security • Metadata • SQL • Query planning • Data independence. Hadoop: • Scale • Late schema • Choice of front-end • Choice of engines • Workload: batch, interactive, streaming, ML, graph, …
  9. 9. © Hortonworks Inc. 2016 Many front ends, many engines [Diagram: front ends such as SQL, Cascading and user code feed planning layers and execution engines, including MapReduce, Tez, user code in YARN, Spark, Storm, MongoDB, HBase, external SQL and graph engines]
  10. 10. © Hortonworks Inc. 2016 Relational algebra • Extension to mathematical set theory • Devised by E.F. Codd (IBM) in 1970 • Defines the relational database • Operators: select, filter, join, sort, union, etc. • Intermediate format for query planning/optimization [Diagram: SQL is translated to relational algebra, optimized, and becomes a runnable query plan]
  11. 11. © Hortonworks Inc. 2016 Relational algebra

select d.name, COUNT(*) as c
from Emps as e
join Depts as d on e.deptno = d.deptno
where e.age < 30
group by d.deptno
having count(*) > 5
order by c desc

Plan:
Sort [c DESC]
  Project [name, c]
    Filter [c > 5]
      Aggregate [deptno, COUNT(*) AS c]
        Filter [e.age < 30]
          Join [e.deptno = d.deptno]
            Scan [Emps]
            Scan [Depts]

(Column names are simplified. They would usually be ordinals, e.g. $0 is the first column of the left input.)
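In Calcite, this SQL-to-algebra translation is available through the Planner API. The following is a minimal sketch, not taken from the talk: it assumes tables named Emps and Depts have already been registered on the root schema, adds d.name to the GROUP BY so the statement validates, and leaves identifier quoting/case rules to the default parser configuration.

import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.rel.RelRoot;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.Planner;

public class SqlToAlgebra {
  public static void main(String[] args) throws Exception {
    SchemaPlus rootSchema = Frameworks.createRootSchema(true);
    // Assumption: Emps and Depts tables are added to rootSchema here.
    // Note: with the default parser config, unquoted identifiers are upper-cased.
    FrameworkConfig config = Frameworks.newConfigBuilder()
        .defaultSchema(rootSchema)
        .build();
    Planner planner = Frameworks.getPlanner(config);
    SqlNode parsed = planner.parse("select d.name, count(*) as c\n"
        + "from Emps as e join Depts as d on e.deptno = d.deptno\n"
        + "where e.age < 30\n"
        + "group by d.deptno, d.name\n"
        + "having count(*) > 5\n"
        + "order by c desc");
    SqlNode validated = planner.validate(parsed);  // resolves names and types
    RelRoot root = planner.rel(validated);         // converts to relational algebra
    System.out.println(RelOptUtil.toString(root.rel));  // prints a tree like the one above
  }
}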
  12. 12. © Hortonworks Inc. 2016 Relational algebra - union and sub-query

select * from (
  select zipcode, state from Emps
  union all
  select zipcode, state from Customers)
where state in ('CA', 'TX')

Plan:
Filter [state IN ('CA', 'TX')]
  Union [all]
    Project [zipcode, state]
      Scan [Emps]
    Project [zipcode, state]
      Scan [Customers]
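The same union-and-filter plan can be assembled programmatically with Calcite's RelBuilder (shown later in this deck). A minimal sketch, assuming the default schema contains EMPS and CUSTOMERS tables with ZIPCODE and STATE columns; the IN predicate is written as an OR of comparisons:

import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.sql.fun.SqlStdOperatorTable;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.RelBuilder;

public class UnionExample {
  public static void main(String[] args) {
    SchemaPlus rootSchema = Frameworks.createRootSchema(true);
    // Assumption: EMPS and CUSTOMERS tables are added to rootSchema here.
    FrameworkConfig config = Frameworks.newConfigBuilder()
        .defaultSchema(rootSchema)
        .build();
    final RelBuilder b = RelBuilder.create(config);
    final RelNode plan = b
        .scan("EMPS").project(b.field("ZIPCODE"), b.field("STATE"))
        .scan("CUSTOMERS").project(b.field("ZIPCODE"), b.field("STATE"))
        .union(true)  // UNION ALL of the two inputs on the stack
        .filter(      // state in ('CA', 'TX')
            b.call(SqlStdOperatorTable.OR,
                b.call(SqlStdOperatorTable.EQUALS,
                    b.field("STATE"), b.literal("CA")),
                b.call(SqlStdOperatorTable.EQUALS,
                    b.field("STATE"), b.literal("TX"))))
        .build();
    System.out.println(RelOptUtil.toString(plan));
  }
}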
  13. 13. © Hortonworks Inc. 2016 Relational algebra - insert and values

insert into Facts
values ('Meaning of life', 42),
  ('Clever as clever', 6)

Plan:
Insert [Facts]
  Values [['Meaning of life', 42], ['Clever as clever', 6]]
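The Values node at the bottom of this plan is easy to build directly. A minimal RelBuilder sketch (the Insert itself is a TableModify node that needs a real target table, so only the Values part is shown here):

import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.RelBuilder;

public class ValuesExample {
  public static void main(String[] args) {
    final RelBuilder b = RelBuilder.create(
        Frameworks.newConfigBuilder()
            .defaultSchema(Frameworks.createRootSchema(true))
            .build());
    final RelNode values = b
        .values(new String[] {"fact", "n"},  // field names
            "Meaning of life", 42,           // row 1
            "Clever as clever", 6)           // row 2
        .build();
    System.out.println(RelOptUtil.toString(values));  // a single LogicalValues node
  }
}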
  14. 14. © Hortonworks Inc. 2016 Expression tree

select p.productName, COUNT(*) as c
from splunk.splunk as s
join mysql.products as p on s.productId = p.productId
where s.action = 'purchase'
group by p.productName
order by c desc

Plan (the splunk scan runs in Splunk, the products scan in MySQL):
sort [key: c DESC]
  group [key: product_name, agg: count]
    filter [condition: action = 'purchase']
      join [key: product_id]
        scan [table: splunk]
        scan [table: products]
  15. 15. © Hortonworks Inc. 2016 Expression tree (optimized)

select p.productName, COUNT(*) as c
from splunk.splunk as s
join mysql.products as p on s.productId = p.productId
where s.action = 'purchase'
group by p.productName
order by c desc

Plan (the filter has been pushed into Splunk, alongside the scan):
sort [key: c DESC]
  group [key: product_name, agg: count]
    join [key: product_id]
      filter [condition: action = 'purchase']
        scan [table: splunk]
      scan [table: products]
  16. 16. © Hortonworks Inc. 2016 Demo {sqlline, apache-calcite, .csv, CsvPushProjectOntoTableRule}
  17. 17. © Hortonworks Inc. 2016 Calcite – APIs and SPIs
• SQL parser: SqlNode, SqlParser, SqlValidator
• Relational algebra: RelNode (operator: TableScan, Filter, Project, Union, Aggregate, …), RelDataType (type), RexNode (expression), RelTrait (physical property: RelConvention (calling convention), RelCollation (sortedness), TBD (bucketedness/distribution))
• Transformation rules: RelOptRule (MergeFilterRule, PushAggregateThroughUnionRule, 100+ more)
• Global transformations: unification (materialized views), column trimming, de-correlation
• Cost, statistics: RelOptCost, RelOptCostFactory, RelMetadataProvider (RelMdColumnUniqueness, RelMdDistinctRowCount, RelMdSelectivity)
• Metadata: Schema, Table, Function (TableFunction, TableMacro), Lattice
• JDBC driver
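To give a feel for the RelOptRule SPI, here is a stripped-down sketch in the spirit of Calcite's built-in FilterProjectTransposeRule; the real rule handles more cases, so treat this as illustrative only:

import com.google.common.collect.ImmutableList;
import org.apache.calcite.plan.RelOptRule;
import org.apache.calcite.plan.RelOptRuleCall;
import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.logical.LogicalFilter;
import org.apache.calcite.rel.logical.LogicalProject;
import org.apache.calcite.rex.RexNode;

/** Rewrites Filter(Project(X)) to Project(Filter(X)). */
public class SimpleFilterProjectTransposeRule extends RelOptRule {
  public static final SimpleFilterProjectTransposeRule INSTANCE =
      new SimpleFilterProjectTransposeRule();

  private SimpleFilterProjectTransposeRule() {
    // Match a LogicalFilter whose input is a LogicalProject.
    super(operand(LogicalFilter.class, operand(LogicalProject.class, any())));
  }

  @Override public void onMatch(RelOptRuleCall call) {
    final LogicalFilter filter = call.rel(0);
    final LogicalProject project = call.rel(1);
    // Rewrite the filter condition in terms of the project's input columns.
    final RexNode newCondition =
        RelOptUtil.pushPastProject(filter.getCondition(), project);
    final RelNode newFilter =
        filter.copy(filter.getTraitSet(), project.getInput(), newCondition);
    // Register the equivalent plan; the original stays in contention.
    call.transformTo(
        project.copy(project.getTraitSet(), ImmutableList.of(newFilter)));
  }
}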
  18. 18. © Hortonworks Inc. 2016 Calcite planning process (based on the "Volcano" and "Cascades" papers [G. Graefe])
SQL parse tree (SqlNode) → Sql-to-Rel converter → RelNode + RexNode plan graph → planner → best RelNode graph → translate to runtime logical plan.
1. Plan graph: a node for each node in the input plan; each node is a set of alternative sub-plans; sets are further divided into subsets based on traits such as sortedness.
2. Rules: a rule specifies an operator sub-graph to match and logic to generate an equivalent 'better' sub-graph; new and original sub-graphs both remain in contention.
3. Cost model: RelNodes have a cost and a cumulative cost.
4. Metadata providers: used to plug in schemas and cost formulas (filter selectivity, join selectivity, NDV calculations).
Rule match queue: add rule matches to the queue; apply rule-match transformations to the plan graph; iterate for a fixed number of iterations or until the cost doesn't change; match importance is based on the cost and height of the RelNode.
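Calcite's cost-based VolcanoPlanner implements the process described above; there is also a simpler heuristic planner, HepPlanner, which applies a fixed program of rules and is handy for experimenting. A minimal sketch of driving it over an existing plan (rule names match Calcite releases contemporary with this talk):

import org.apache.calcite.plan.hep.HepPlanner;
import org.apache.calcite.plan.hep.HepProgram;
import org.apache.calcite.plan.hep.HepProgramBuilder;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.rules.FilterProjectTransposeRule;

public class OptimizeExample {
  /** Applies one transposition rule to a logical plan until it stops changing. */
  static RelNode optimize(RelNode logicalPlan) {
    HepProgram program = new HepProgramBuilder()
        .addRuleInstance(FilterProjectTransposeRule.INSTANCE)  // push filters down
        .build();
    HepPlanner planner = new HepPlanner(program);
    planner.setRoot(logicalPlan);
    return planner.findBestExp();  // returns the transformed plan
  }
}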
  19. 19. © Hortonworks Inc. 2016 Algebra builder API

final FrameworkConfig config;
final RelBuilder builder = RelBuilder.create(config);
final RelNode node = builder
    .scan("EMP")
    .aggregate(builder.groupKey("DEPTNO"),
        builder.count(false, "C"),
        builder.sum(false, "S", builder.field("SAL")))
    .filter(
        builder.call(SqlStdOperatorTable.GREATER_THAN,
            builder.field("C"), builder.literal(10)))
    .build();
System.out.println(RelOptUtil.toString(node));

produces

LogicalFilter(condition=[>($1, 10)])
  LogicalAggregate(group=[{7}], C=[COUNT()], S=[SUM($5)])
    LogicalTableScan(table=[[scott, EMP]])

Equivalent SQL:
select deptno, COUNT(*) as c, sum(sal) as s
from Emp
group by deptno
having COUNT(*) > 10
  20. 20. © Hortonworks Inc. 2016 Non-relational, post-relational • Non-relational stores: document databases (MongoDB), key-value stores (HBase, Cassandra), graph databases (Neo4J), multidimensional OLAP (Microsoft Analysis Services, Mondrian), streams (Kafka, Storm), text, audio, video • Non-relational operators: data exploration, machine learning • Late or no schema
  21. 21. © Hortonworks Inc. 2016 Complex data
Complex data is also known as nested or document-oriented data. Typically it can be represented as JSON. Two new operators are sufficient: • UNNEST • the COLLECT aggregate function

employees: [
  { name: "Bob", age: 48,
    pets: [ {name: "Jim", type: "Dog"}, {name: "Frank", type: "Cat"} ] },
  { name: "Stacy", age: 31, starSign: "taurus",
    pets: [ {name: "Jack", type: "Cat"} ] },
  { name: "Ken", age: 23 }
]
  22. 22. © Hortonworks Inc. 2016 UNNEST and Flatten
Flatten converts arrays of values to separate rows: • a new record for each list item • empty lists remove the record. Flatten is actually just syntactic sugar for the UNNEST relational operator.

name   age  pets
Bob    48   [{name: Jim, type: dog}, {name: Frank, type: cat}]
Stacy  31   [{name: Jack, type: cat}]
Ken    23   []

becomes

name   age  pet
Bob    48   {name: Jim, type: dog}
Bob    48   {name: Frank, type: cat}
Stacy  31   {name: Jack, type: cat}

select e.name, e.age, flatten(e.pets)
from Employees as e

is equivalent to

select e.name, e.age, row(a.name, a.type)
from Employees as e,
  unnest e.pets as a
  23. 23. © Hortonworks Inc. 2016 Optimizing UNNEST
As usual, to optimize, we write planner rules. We can push filters into the non-nested side, so we write FilterUnnestTransposeRule. (There are many other possible rules.)

select e.name, a.name
from Employees as e,
  unnest e.pets as a
where e.age < 30

becomes, after FilterUnnestTransposeRule:

select e.name, a.name
from (
  select * from Employees
  where age < 30) as e,
  unnest e.pets as a
  24. 24. © Hortonworks Inc. 2016 Optimizing UNNEST (2)
We can also optimize projects. If the table is stored in a column-oriented file format, this reduces disk reads significantly. • Array wildcard projection through flatten • Non-flattened column inclusion

select e.name, flatten(pets).name
from Employees as e

Original plan:
project(name, pet.name)
  flatten(pets) as pet
    scan(name, age, pets)

Project through flatten (ProjectFlattenTransposeRule):
flatten(pets) as pet
  project(name, pets[*].name)
    scan(name, age, pets)

Project into scan, so less data is read (ProjectScanRule):
flatten(pets) as pet
  scan(name, pets[*].name)
  25. 25. © Hortonworks Inc. 2016 Late schema
Evolution: • Oracle: schema before write, strongly typed SQL (like Java) • Hive: schema before query, strongly typed SQL • Drill: schema on data, weakly typed SQL (like JavaScript)

select * from Employees

name   age  starSign  pets
Bob    48             [{name: Jim, type: dog}, {name: Frank, type: cat}]
Stacy  31   Taurus    [{name: Jack, type: cat}]
Ken    23             []

select * from Employees where age < 30

name   age  pets
Ken    23   []

(no starSign column!)
  26. 26. © Hortonworks Inc. 2016 Implementing schema-on-data
Expanding *: • Early-schema databases expand * at planning time, based on the schema • Drill expands * during query execution • Each operator needs to be able to propagate column names/types as well as data
Internally, Drill is strongly typed: • Strong typing means efficient code • JavaScript engines do this too • Infer the type for each batch of records • Throw away generated code if a column changes type in the next batch of records

select e.name
from Employees where e.age < 30

is implemented as

select e._map['name'] as name
from Employees where cast(e._map['age'] as integer) < 30
  27. 27. © Hortonworks Inc. 2016 User-defined operators
A table function is a Java UDF that returns a relation. • Its arguments may be relations or scalars. • It appears in the execution plan. • Annotations indicate whether it is safe to push filters and projects through it.
A table macro is a Java function that takes a parse tree and returns a parse tree. • Named after Lisp macros. • It does not appear in the execution plan. • Views (next slide) are a kind of table macro.
Use a table macro rather than a table function if possible, to re-use existing optimizations.

select e.name
from table(my_sample(select * from Employees, 0.15))

select e.name
from table(my_filter(select * from Employees, 'age', '<', 30))
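For the table-macro side, here is a minimal sketch of Calcite's TableMacro SPI as it stood around the time of this talk. ConstantViewMacro is a hypothetical macro that always expands to the same view; a real my_sample or my_filter would inspect its arguments and build the substituted expression accordingly:

import java.util.Collections;
import java.util.List;
import org.apache.calcite.schema.FunctionParameter;
import org.apache.calcite.schema.TableMacro;
import org.apache.calcite.schema.TranslatableTable;

/** Expands to a fixed view at planning time; the macro itself never
 * appears in the execution plan. */
public class ConstantViewMacro implements TableMacro {
  private final TranslatableTable view;

  public ConstantViewMacro(TranslatableTable view) {
    this.view = view;
  }

  @Override public List<FunctionParameter> getParameters() {
    return Collections.emptyList();  // this macro takes no arguments
  }

  // Invoked during SQL-to-Rel conversion, not at run time.
  @Override public TranslatableTable apply(List<Object> arguments) {
    return view;
  }
}

Registered on a SchemaPlus with schema.add("MY_VIEW", new ConstantViewMacro(view)), it can then be invoked as select * from table(MY_VIEW()), and the resulting plan contains only the expansion.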
  28. 28. © Hortonworks Inc. 2016 Views

create view Managers as
select * from Emps as e
where exists (
  select * from Emps as underling
  where underling.manager = e.id)

select deptno, min(salary)
from Managers
where age >= 50
group by deptno

Query plan:
Aggregate [deptno, min(salary)]
  Filter [age >= 50]
    Scan [Managers]

View definition plan:
Project [$0, $1, $2, $3]
  Join [$0, $5]
    Scan [Emps]
    Aggregate [manager]
      Scan [Emps]
  29. 29. © Hortonworks Inc. 2016 Views (after expansion)

create view Managers as
select * from Emps as e
where exists (
  select * from Emps as underling
  where underling.manager = e.id)

select deptno, min(salary)
from Managers
where age >= 50
group by deptno

Plan:
Aggregate [deptno, min(salary)]
  Filter [age >= 50]
    Project [$0, $1, $2, $3]
      Join [$0, $5]
        Scan [Emps]
        Aggregate [manager]
          Scan [Emps]
  30. 30. © Hortonworks Inc. 2016 Views (after pushing down filter)

create view Managers as
select * from Emps as e
where exists (
  select * from Emps as underling
  where underling.manager = e.id)

select deptno, min(salary)
from Managers
where age >= 50
group by deptno

Plan:
Aggregate [deptno, min(salary)]
  Project [$0, $1, $2, $3]
    Join [$0, $5]
      Filter [age >= 50]
        Scan [Emps]
      Aggregate [manager]
        Scan [Emps]
  31. 31. © Hortonworks Inc. 2016 Materialized view

create materialized view EmpSummary as
select deptno, gender, count(*) as c, sum(sal)
from Emps
group by deptno, gender

Scan [EmpSummary] is equivalent to:
Aggregate [deptno, gender, COUNT(*), SUM(sal)]
  Scan [Emps]

select count(*) as c
from Emps
where deptno = 10 and gender = 'M'

Plan:
Aggregate [COUNT(*)]
  Filter [deptno = 10 AND gender = 'M']
    Scan [Emps]
  32. 32. © Hortonworks Inc. 2016 Materialized view, step 2: rewrite query to match

create materialized view EmpSummary as
select deptno, gender, count(*) as c, sum(sal)
from Emps
group by deptno, gender

Scan [EmpSummary] is equivalent to:
Aggregate [deptno, gender, COUNT(*), SUM(sal)]
  Scan [Emps]

select count(*) as c
from Emps
where deptno = 10 and gender = 'M'

Plan:
Project [c]
  Filter [deptno = 10 AND gender = 'M']
    Aggregate [deptno, gender, COUNT(*) AS c, SUM(sal) AS s]
      Scan [Emps]
  33. 33. © Hortonworks Inc. 2016 Materialized view, step 3: substitute table

create materialized view EmpSummary as
select deptno, gender, count(*) as c, sum(sal)
from Emps
group by deptno, gender

Scan [EmpSummary] is equivalent to:
Aggregate [deptno, gender, COUNT(*), SUM(sal)]
  Scan [Emps]

select count(*) as c
from Emps
where deptno = 10 and gender = 'M'

Plan:
Project [c]
  Filter [deptno = 10 AND gender = 'M']
    Scan [EmpSummary]
  34. 34. © Hortonworks Inc. 2016 Streaming queries
Streaming queries run forever. The stream appears in the FROM clause: Orders. Without the stream keyword, Orders means the history of the stream (a table). Calcite streaming SQL is used in Samza, Storm and Flink.

select stream * from Orders

select * from Orders
  35. 35. © Hortonworks Inc. 2016 Combining past and future • Orders is used as both stream and table • The system determines where to find the records • The query is invalid if records are not available

select stream *
from Orders as o
where units > (
  select avg(units)
  from Orders as h
  where h.productId = o.productId
  and h.rowtime > o.rowtime - interval '1' year)
  36. 36. © Hortonworks Inc. 2016 Hybrid systems combine more than one data source and/or engine. Examples: • Splunk join to MySQL • User-defined table written in Python reading from an in-memory temporary table just created by Drill. • Streaming query populating a table summarizing the last hour’s activity that will be used to populate a pie chart in a web dashboard. Two challenges: • Planning the query to take advantage of each system’s strengths. • Efficient interchange of data at run time. Hybrid systems
  37. 37. © Hortonworks Inc. 2016 Hybrid algebra, hybrid run-time [Same ecosystem diagram: Calcite provides the hybrid algebra, Arrow the hybrid run-time]
  38. 38. DREMIO © Hortonworks Inc. 2016 Arrow in a slide
• New top-level Apache Software Foundation project, announced Feb 17, 2016
• Focused on columnar in-memory analytics: 1. 10-100x speedup on many workloads 2. A common data layer enables companies to choose best-of-breed systems 3. Designed to work with any programming language 4. Support for both relational and complex data as-is
• Developers from 13+ major open source projects involved (Calcite, Cassandra, Deeplearning4j, Drill, Hadoop, HBase, Ibis, Impala, Kudu, Pandas, Parquet, Phoenix, Spark, Storm, R); a significant % of the world's data will be processed through Arrow!
  39. 39. DREMIO © Hortonworks Inc. 2016 Focus on CPU efficiency [Diagram: traditional memory buffer vs Arrow memory buffer] • Cache locality • Super-scalar and vectorized operation • Minimal structure overhead • Constant-time value access • Operate directly on columnar compressed data
  40. 40. DREMIO © Hortonworks Inc. 2016 High-performance sharing & interchange
Today: • Each system has its own internal memory format • 70-80% of CPU is wasted on serialization and deserialization • Similar functionality is implemented in multiple projects
With Arrow: • All systems utilize the same memory format • No overhead for cross-system communication • Projects can share functionality (e.g. a Parquet-to-Arrow reader)
  41. 41. © Hortonworks Inc. 2016 • Algebra-centric approach • Optimize by applying transformation rules • User-defined operators (table functions, table macros, custom RelNode classes) • Complex data • Late-schema queries • Streaming queries • Calcite enables planning hybrid queries • Arrow enables hybrid runtime Summary
  42. 42. © Hortonworks Inc. 2016 @julianhyde @tshiran @ApacheCalcite @ApacheDrill @ApacheArrow Get involved: • http://calcite.apache.org • http://drill.apache.org • http://arrow.apache.org Thanks!
