Using Apache Arrow, Calcite, and Parquet to Build a Relational Cache

From DataEngConf 2017 - Everybody wants to get to data faster. As we move from more general solutions to specific optimization techniques, the potential performance impact grows. This talk will discuss how layering in-memory caching, columnar storage and relational caching can combine to provide a substantial improvement in overall data science and analytical workloads. It will include a detailed overview of how you can use Apache Arrow, Calcite and Parquet to achieve performance improvements of multiple orders of magnitude over what is currently possible.

  1. Using Apache Arrow, Calcite and Parquet to Build a Relational Cache
     Halloween 2017, @DataEngConf
     Jacques Nadeau
     © 2017 Dremio Corporation, @DremioHQ
  2. Who? Jacques Nadeau (@intjesus)
     • CTO & co-founder of Dremio
     • Apache member
     • VP, Apache Arrow
     • PMCs: Arrow, Calcite, Incubator, Heron (incubating)
  3. Agenda
     • Tech Backgrounder
     • Caching Techniques
     • Relational Caching In Depth
     • Definition and Matching
     • Dealing with Updates
     • Closing Words
  4. Tech Backgrounder
  5. What is Apache Arrow
     • Columnar in-memory data processing library
     • Designed to work with any programming language
     • Support for both relational and complex data as-is
     • Used by Pandas, Spark and Dremio
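To make the columnar, in-memory idea concrete, here is a minimal sketch using the pyarrow Python bindings; the column names and values are invented purely for illustration:

```python
import pyarrow as pa

# Build a columnar, in-memory Arrow table from plain Python lists.
# Each column lives in contiguous buffers, which is what enables
# vectorized processing and zero-copy sharing between systems.
events = pa.table({
    "region": ["us-east", "us-west", "us-east"],
    "clicks": [12, 7, 30],
})

print(events.schema)            # region: string, clicks: int64
print(events.column("clicks"))  # chunked array backed by contiguous buffers
```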
  6. What is Apache Calcite
     • SQL parser, relational algebra & optimizer
     • Understands materialized views and lattices
     • Used by many projects to add SQL functionality, including Apex, Drill, Hive, Flink, Kylin, Phoenix, Samza, Storm, Cascading & Dremio
  7. What is Apache Parquet
     • OSS implementation of the Google Dremel on-disk format for complex columnar data
     • Supports a high level of data-aware columnar compression and vectorized columnar readback
     • De facto standard for analytical data on disk in the Big Data ecosystem
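A rough illustration of the two properties above, data-aware compression and selective columnar readback, again assuming pyarrow; the file name and columns are hypothetical:

```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "user_id": [1, 2, 3, 4],
    "region":  ["us-east", "us-west", "eu", "eu"],
    "revenue": [10.0, 7.5, 3.2, 9.9],
})

# Write with columnar compression applied per column chunk.
pq.write_table(table, "events.parquet", compression="zstd")

# Read back only the columns a query actually needs; the other
# column chunks are never decoded.
subset = pq.read_table("events.parquet", columns=["region", "revenue"])
print(subset.num_columns)  # 2
```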
  8. Caching Techniques
  9. What does Caching Mean?
     • Caching: reduce the distance to data (DTD)
     • Distance: how much time and how many resources it takes to access the data
       – How fast is the medium? How near is it?
       – Is the data designed for efficient consumption?
       – How similar is the data to what you need to answer a question?
     • (Diagram: ways to reduce DTD: performance & proximity, consumability, relevance)
  10. Types of Caching
     • In-Memory File Pinning
     • Columnar Disk Caching
     • In-Memory Block Caching
     • Near-CPU Data Caching
     • Cube Relational Caching
     • Arbitrary Relational Caching
  11. In-Memory File Pinning
     • Hold a file in memory for frequent retrieval
     • Pros
       – Simple, standard and well-defined interface
       – Improves the performance of the medium
       – If your performance is primarily bound by disk IO, this might be a good option
     • Cons
       – The file structure is not necessarily the best in-memory structure
       – Data manipulation almost always requires a copy of the data to also be held in memory (because the file format is not directly consumable)
  12. Columnar Disk Caching
     • Store the data in an optimized columnar format
     • Pros
       – Better compression reduces IO
       – Good structure improves processing
       – Benefits selective workloads (which need only a subset of all columns)
     • Cons
       – Requires duplicating data
       – Typically manual or semi-automated (e.g. a MapReduce/Spark ETL job to persist/update)
  13. In-Memory Block Caching
     • Maintain portions of on-disk data in memory (e.g. Linux page cache, HBase block cache)
     • Pros
       – Very mature and usually comes for free
     • Cons
       – Not easy to control or influence
       – Very disconnected from workloads
  14. Near-CPU Data Caching (memory or disk)
     • Hold the data directly in a representation that can be processed without restructuring (e.g. the Arrow format)
     • Pros
       – Processing can be done without interpretation of the format
       – Very efficient to consume
       – Multiple consumers can share the data without duplicating memory
     • Cons
       – Larger than compressed formats
       – Requires applications to agree on the format
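One hedged example of what near-CPU caching can look like: persist an Arrow table in the Arrow IPC file format and memory-map it back, so the on-disk bytes are consumed directly without a deserialization step (pyarrow assumed; the file name is illustrative):

```python
import pyarrow as pa

table = pa.table({"key": ["a", "b", "c"], "value": [1, 2, 3]})

# Persist in the Arrow IPC file format: the bytes on disk already use
# the in-memory representation.
with pa.OSFile("cache.arrow", "wb") as sink:
    with pa.ipc.new_file(sink, table.schema) as writer:
        writer.write_table(table)

# Memory-map and read back: the buffers reference the mapped file, so
# no copy or format interpretation is needed before processing.
with pa.memory_map("cache.arrow", "r") as source:
    cached = pa.ipc.open_file(source).read_all()

print(cached.column("value"))
```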
  15. Cube-Based Relational Caching
     • Create several partially aggregated cuboids that can satisfy a range of aggregation queries
     • Pros
       – Low-latency performance for common aggregate query patterns
       – Cube storage requirements can be a small fraction of the original dataset size
     • Cons
       – Analysis latency is bi-modal: a cube hit is great, but a miss is either unserved or served slowly
       – Difficult or impossible to satisfy arbitrary queries
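A minimal sketch of a single cuboid, assuming pyarrow 7+ for Table.group_by; the dimensions (region, day) and measure (revenue) are made up. A real cube layer would persist several such partially aggregated tables and route each query to the smallest one that can answer it:

```python
import pyarrow as pa

detail = pa.table({
    "region":  ["us", "us", "eu", "eu", "eu"],
    "day":     ["d1", "d1", "d1", "d2", "d2"],
    "revenue": [5.0, 7.0, 3.0, 4.0, 6.0],
})

# One partially aggregated cuboid over (region, day). Queries grouping
# by region, by day, or by both can be answered from it without touching
# the detail data; a query on a dimension it lacks is a cube miss.
cuboid = detail.group_by(["region", "day"]).aggregate([("revenue", "sum")])
print(cuboid)  # one row per (region, day) pair, with a revenue_sum column
```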
  16. Arbitrary Relational Caching
     • Create arbitrary data fragments, combined with partitioning and sorting schemes, to speed up any query
     • Pros
       – The base case is easy to understand
       – Can improve the performance of any query
     • Cons
       – Complex to match against arbitrary queries
       – Can be large, depending on needs
  17. Types of Caching: The Combination We Found Useful
     • In-Memory File Pinning
       – Too non-specific given memory scarcity
     • ✔ Columnar Disk Caching
       – Make sure everything is in Parquet (for any non-ephemeral data)
     • ✔ In-Memory Block Caching
       – Leverage the existing page cache; avoid additional memory cache layers
     • ✔ Near-CPU Data Caching
       – Used primarily for ephemeral/short-term persistence to avoid overhead
     • ✔ Cube Relational Caching
       – Useful for aggregation patterns
     • ✔ Arbitrary Relational Caching
       – Useful for unusual aggregation and non-aggregation needs
  18. Relational Caching In Depth
  19. Relational Algebra Refresher
     • Relations: sources of data (tables)
     • Operators: define a set of transformations
       – Join, Project, Scan, Filter, Aggregate, Window, etc.
     • Properties: defining traits of the data at a particular relation
       – Sorted by X, hash distributed by Y, etc.
     • Rules: define equivalence conditions between collections of operations
       – Project > Filter can be changed to Filter > Project; a scan doesn't need to project columns that aren't used later; etc.
     • Graph/Tree: a collection of operators that defines a particular dataset as a DAG
     • (Diagram: two equivalent plan trees, Project over Filter over Scan and Filter over Project over Scan)
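The toy sketch below (plain Python, not Calcite) shows operators as a small tree plus one rewrite rule, the Project/Filter transposition mentioned above; the class names and plan are invented purely to illustrate the vocabulary:

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Scan:
    table: str

@dataclass
class Filter:
    predicate: str
    child: Any

@dataclass
class Project:
    columns: List[str]
    child: Any

def swap_project_filter(node: Any) -> Any:
    """Rule sketch: Project > Filter is equivalent to Filter > Project when
    the filter only references projected columns (that check is omitted here)."""
    if isinstance(node, Project) and isinstance(node.child, Filter):
        f = node.child
        return Filter(f.predicate, Project(node.columns, f.child))
    return node

plan = Project(["a", "c"], Filter("c < 10", Scan("t1")))
print(swap_project_filter(plan))
# Filter(predicate='c < 10', child=Project(columns=['a', 'c'], child=Scan(table='t1')))
```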
  20. Relational Caching: Basic Concept
     • Store derived data that sits between what you want and the original dataset
     • Shortens the distance to data (DTD)
     • Reduces resource requirements & latency
     • (Diagram: several query targets served from a persisted, shared intermediate state; the new DTD is far shorter than the original DTD, reducing cost)
  21. You Probably Already Do This!
     • Data alternatives (manually created)
       – Sessionized
       – Cleansed
       – Partitioned by time or region
       – Summarized for a particular purpose
     • Users choose depending on need
       – Analysts trained to use different tables depending on the use case
       – Custom datasets built for reporting
       – Summarization and/or extraction for dashboards
  22. Benefit of Relational Caching over "Copy and Pick"
     • (Diagram) "Copy and Pick": physical optimizations (transform, sort, partition, aggregate) are built directly from the source table; the user picks the best optimization and admins manage maintenance
     • Relational Caching: users query a logical model; the cache picks the best optimization and maintains the representations
  23. Key Components of Relational Caching
     • How to express transformations/states: SQL
     • How to hold and match relational algebra: Calcite
     • How to persist alternative datasets: Parquet
     • A way to process: Arrow + Sabot
     • And a lot of code to put it all together…
  24. Our Approach (architecture diagram)
     • End-user queries and a UI to define cached patterns
     • Query planner with a relational pattern matching system and a relational pattern database
     • Data processing system (Sabot)
     • Source storage interface (Arrow) over HDFS, S3 and Elastic
     • Cache persistence (Parquet, Arrow)
     • Refresh system and change detection database
  25. Definition and Matching
  26. Coming Back to Calcite
     • Calcite is a planner & optimizer
     • Comes with a prebuilt selection of operators, rules, properties (called traits) and ways to express relations
     • Also has a basic materialized view facility (relevant!)
     • A perfect foundation for relational caching
  27. How We Built Caching: Reflections
     • Reflection: a persisted alternative view of the data in Parquet format
       – Raw Reflection: persists all records of the underlying dataset, controlling partitioning and sortedness
       – Aggregate Reflection: persists a partially aggregated dataset based on a selection of dimensions and measures, still controlling partitioning and sortedness
     • Reflections can be built on either source tables or arbitrarily defined Virtual Datasets
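The sketch below is not Dremio's implementation, just a hedged pyarrow illustration of the two shapes a reflection can take: a raw copy with controlled sortedness and partitioning, and a partially aggregated copy over chosen dimensions and measures (column names and paths are invented):

```python
import os

import pyarrow as pa
import pyarrow.parquet as pq

os.makedirs("reflections", exist_ok=True)

source = pa.table({
    "region":  ["us", "eu", "us", "eu"],
    "day":     ["d1", "d1", "d2", "d2"],
    "revenue": [5.0, 3.0, 7.0, 4.0],
})

# "Raw reflection": every record, but re-sorted and partitioned so that
# later queries can prune partitions and avoid extra sorts.
raw = source.sort_by([("day", "ascending")])
pq.write_to_dataset(raw, root_path="reflections/raw", partition_cols=["region"])

# "Aggregate reflection": partially aggregated over chosen dimensions
# (region, day) and measures (sum of revenue).
agg = source.group_by(["region", "day"]).aggregate([("revenue", "sum")])
pq.write_table(agg, "reflections/agg.parquet")
```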
  28. Cache Matching: Aggregation Rollup
     • Given a user query, try to create an alternative version of the query that matches the cached target
     • (Diagram) User query: Project(a, c') over Filter(c' < 10) over Aggregate(a, sum(c) as c') over Scan(t1)
     • Reflection definition: Aggregate(a, b, sum(c)) over Scan(t1), with target materialization Scan(r1)
     • Alternative plan: Filter(c' < 10) over Aggregate(a, sum(c) as c') over Scan(r1)
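Outside of Calcite, the rollup itself can be shown with a hedged pyarrow sketch: the reflection was aggregated by (a, b), the user's query groups only by a, and because sum is decomposable the planner can re-aggregate the much smaller reflection instead of rescanning t1 (the data is invented):

```python
import pyarrow as pa
import pyarrow.compute as pc

# Reflection r1: A(a, b, sum(c)) over S(t1), already persisted.
r1 = pa.table({
    "a": ["x", "x", "y", "y"],
    "b": ["p", "q", "p", "q"],
    "c_sum": [4, 8, 3, 1],
})

# User query: sum(c) grouped by a alone, then filter the sum < 10.
# Summing the partial sums is a valid rollup; pyarrow names the result
# column "c_sum_sum" using its <column>_<aggregation> convention.
rolled = r1.group_by(["a"]).aggregate([("c_sum", "sum")])
answer = rolled.filter(pc.less(rolled.column("c_sum_sum"), 10))
print(answer.to_pydict())  # only a == 'y' survives, with partial sum 4
```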
  29. Cache Matching: Join/Aggregation Transposition
     • (Diagram) User query: Aggregate(a, sum(c) as c') over Join(t1.id = t2.id) over Scan(t1) and Scan(t2)
     • Reflection definition: Aggregate(id, sum(c)) over Scan(t1), with target materialization Scan(r1)
     • Alternative plan: Aggregate(a, sum(c) as c') over Join(r1.id = t2.id) over Scan(r1) and Scan(t2)
  30. Cache Matching: Costing and Partitioning Benefits
     • (Diagram) User query: Filter(a) over Scan(t1)
     • Target materialization 1: Scan(r1), partitioned by a; matching yields Scan(r1) pruned on a
     • Target materialization 2: Scan(r1), partitioned by b; no pruning possible, so costing prefers the first
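To see why the reflection partitioned on the filter column wins the costing comparison, here is a hedged pyarrow.dataset sketch: a reflection written Hive-partitioned by a can be read back with a filter that skips entire partition directories (paths and values are invented):

```python
import pyarrow as pa
import pyarrow.parquet as pq
import pyarrow.dataset as ds

table = pa.table({
    "a": ["x", "x", "y", "z"],
    "c": [1, 2, 3, 4],
})

# Reflection r1, partitioned by column a (one directory per value).
pq.write_to_dataset(table, root_path="r1_part_by_a", partition_cols=["a"])

# A query filtering on a only opens the matching partition directories;
# a reflection partitioned by some other column b could not be pruned
# this way, which is exactly what the planner's costing captures.
dataset = ds.dataset("r1_part_by_a", format="parquet", partitioning="hive")
pruned = dataset.to_table(filter=ds.field("a") == "x")
print(pruned.to_pydict())  # only rows with a == 'x'
```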
  31. Relational Matching: Other Examples
     • Physical property matching
     • Predicate promotion
     • Predicate inference
     • Join decomposition
     • Join promotion
  32. Dealing with Updates
  33. Refresh Management
     • Importance of cache creation ordering
       – Not all update orderings are equal
       – Order updates based on the "refresh graph" and its dependencies
       – Multiple orders are possible; cost them against each other to minimize update cost
     • Freshness management
       – Underlying data may change
       – Users should define a refresh frequency
       – Separately define an absolute TTL
     • (Diagram: a physical dataset with a 1-hour refresh and 3-hour expiration feeding a raw reflection, which in turn feeds an aggregate reflection)
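A minimal sketch of the ordering idea using Python's standard graphlib (3.9+), not anything Dremio-specific: refresh a reflection only after everything it derives from is fresh. The graph mirrors the diagram above; the node names are invented:

```python
from graphlib import TopologicalSorter

# Refresh graph: each node lists what it is derived from.
refresh_graph = {
    "raw_reflection": {"physical_dataset"},
    "agg_reflection": {"raw_reflection"},
}

# A valid refresh order processes dependencies first, so the aggregate
# reflection can be rebuilt from the already-fresh raw reflection rather
# than the original source, minimizing update cost.
order = list(TopologicalSorter(refresh_graph).static_order())
print(order)  # ['physical_dataset', 'raw_reflection', 'agg_reflection']
```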
  34. Multiple Update Modes (Depending on Mutation Pattern)
     • Full: always rebuild reflections from scratch (highly mutating)
     • Incremental (files): incrementally build reflections based on new files and folders (append-only)
     • Incremental (rowstores): incrementally build reflections based on a monotonically increasing field (append-only)
     • Partitioned refresh: maintain reflections based on source partitions, e.g. filesystem directories or Hive partitions (partially mutating)
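A hedged sketch of the incremental (rowstores) mode: remember the high-water mark of a monotonically increasing field and append only the newer rows to the reflection on each refresh (pyarrow assumed; the table, column and path names are invented):

```python
import os

import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.parquet as pq

def incremental_refresh(source: pa.Table, high_water_mark: int, path: str) -> int:
    """Append rows with id > high_water_mark to the reflection and return
    the new mark. Assumes 'id' only ever increases (append-only source)."""
    new_rows = source.filter(pc.greater(source.column("id"), high_water_mark))
    if new_rows.num_rows:
        os.makedirs(path, exist_ok=True)
        pq.write_table(new_rows, os.path.join(path, f"above-{high_water_mark}.parquet"))
        high_water_mark = pc.max(new_rows.column("id")).as_py()
    return high_water_mark

source = pa.table({"id": [1, 2, 3, 4, 5], "value": ["a", "b", "c", "d", "e"]})
mark = incremental_refresh(source, 3, "reflections/incremental")
print(mark)  # 5: only rows with id 4 and 5 were appended this cycle
```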
  35. Closing Words
  36. What We've Seen Using These Techniques
     • Frequent 10x-100x+ performance improvements in multiple workloads
     • Vast reduction in resources required to achieve performance levels
     • In many cases, a reduction in disk space
       – Due to avoidance of excessive unused or rarely used physical copies
  37. Find Out More and Get Involved
     • Drop by my office hours (East Room Lounge, now)
     • Drop by the Dremio table behind you
     • Join us at the @ApacheArrow meetup at @enigma_data in Midtown
       – Wes McKinney (creator of Pandas) and myself, a tech deep dive
     • Join the Dremio community (relational caching)
       – github.com/dremio/dremio-oss (Apache licensed)
       – dremio.com
       – community.dremio.com
     • Find out more about the building blocks
       – dev@[arrow|calcite|parquet].apache.org
       – http://github.com/apache/[arrow|calcite|parquet-mr]
       – http://[arrow|calcite|parquet].apache.org
     • Follow @DremioHQ, @intjesus, @ApacheArrow, @ApacheCalcite, @ApacheParquet
