
HUG_Ireland_Apache_Arrow_Tomer_Shiran

A presentation by Tomer Shiran, CEO of Dremio, delivered to Hadoop User Group (HUG) Ireland on "Hadoop Summit Night," April 12, 2016. The presentation covers Apache Arrow in detail.

Published in: Data & Analytics

  1. DREMIO — Apache Arrow: A New Era of Columnar In-Memory Analytics
     Tomer Shiran, Co-Founder & CEO at Dremio
     tshiran@dremio.com | @tshiran
     Hadoop Meetup Ireland, April 12, 2016
  2. Company Background
     • Stealth data analytics startup, founded in 2015, backed by top Silicon Valley VCs and led by experts in Big Data and open source
     • Jacques Nadeau, Founder & CTO: recognized SQL & NoSQL expert; Apache Drill PMC Chair; Quigo (AOL); Offermatica (ADBE); aQuantive (MSFT)
     • Tomer Shiran, Founder & CEO: Apache Drill founder; MapR (VP Product); Microsoft; IBM Research; Carnegie Mellon, Technion
     • Julien Le Dem, Architect: Apache Parquet founder; Apache Pig PMC member; Twitter (Lead, Analytics Data Pipeline); Yahoo! (Architect)
  3. Arrow in a Slide
     • New top-level Apache Software Foundation project, announced February 17, 2016
     • Focused on columnar in-memory analytics:
       1. 10-100x speedup on many workloads
       2. A common data layer lets companies choose best-of-breed systems
       3. Designed to work with any programming language
       4. Support for both relational and complex data as-is
     • Developers from 13+ major open source projects are involved: Calcite, Cassandra, Deeplearning4j, Drill, Hadoop, HBase, Ibis, Impala, Kudu, Pandas, Parquet, Phoenix, R, Spark, Storm
     • A significant percentage of the world's data will be processed through Arrow!
  4. Agenda
     • Purpose
     • Memory Representation
     • Language Bindings
     • IPC & RPC
     • Example Integrations
  5. PURPOSE
  6. Overview
     • A high-speed in-memory representation
     • Well-documented and cross-language compatible
     • Designed to take advantage of modern CPU characteristics
     • Embeddable in execution engines, storage layers, etc.
  7. Focus on CPU Efficiency (traditional memory buffer vs. Arrow memory buffer)
     • Cache locality
     • Superscalar & vectorized operation
     • Constant-time value access with minimal structure overhead
     • Operate directly on columnar compressed data
  8. High Performance Sharing & Interchange
     Today:
     • Each system has its own internal memory format
     • 70-80% of CPU is wasted on serialization and deserialization
     • Similar functionality is implemented in multiple projects
     With Arrow:
     • All systems use the same memory format
     • No overhead for cross-system communication
     • Projects can share functionality (e.g., a Parquet-to-Arrow reader)
  9. Shared Need -> Open Source Opportunity
     • Columnar is complex; shredded (nested) columnar is even more complex
     • We all need to get to the same place, so take advantage of the open source approach: once we pick a shared solution, we get interchange for "free"
     • "We are also considering switching to a columnar canonical in-memory format for data that needs to be materialized during query processing, in order to take advantage of SIMD instructions" - Impala Team
     • "A large fraction of the CPU time is spent waiting for data to be fetched from main memory… we are designing cache-friendly algorithms and data structures so Spark applications will spend less time waiting to fetch data from memory and more time doing useful work" - Spark Team
 10. IN-MEMORY REPRESENTATION
 11. Example data:
     persons = [
       {
         name: 'wes',
         iq: 180,
         addresses: [
           {number: 2, street: 'a'},
           {number: 3, street: 'bb'}
         ]
       },
       {
         name: 'joe',
         iq: 100,
         addresses: [
           {number: 4, street: 'ccc'},
           {number: 5, street: 'dddd'},
           {number: 2, street: 'f'}
         ]
       }
     ]
 12. Simple Example: persons.iq
     persons.iq (data): 180, 100
 13. Simple Example: persons.addresses.number
     persons.addresses (offsets): 0, 2, 5
     persons.addresses.number (data): 2, 3, 4, 5, 2
 14. Columnar Data: persons.addresses.street
     persons.addresses (offsets): 0, 2, 5
     persons.addresses.number (data): 2, 3, 4, 5, 2
     persons.addresses.street (offsets): 0, 1, 3, 6, 10
     persons.addresses.street (data): a, b, b, c, c, c, d, d, d, d, f
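The offset layout shown on the last two slides can be reproduced in a few lines of plain Python. This is a toy illustration of the idea, not Arrow's actual API; `build_list_offsets` is a hypothetical helper:

```python
def build_list_offsets(lists):
    # Offsets are a prefix sum: list i occupies values[offsets[i]:offsets[i+1]].
    offsets = [0]
    values = []
    for lst in lists:
        values.extend(lst)
        offsets.append(len(values))
    return offsets, values

persons = [
    {"name": "wes", "iq": 180,
     "addresses": [{"number": 2, "street": "a"},
                   {"number": 3, "street": "bb"}]},
    {"name": "joe", "iq": 100,
     "addresses": [{"number": 4, "street": "ccc"},
                   {"number": 5, "street": "dddd"},
                   {"number": 2, "street": "f"}]},
]

# persons.addresses: offsets [0, 2, 5] over a flat list of 5 addresses
addr_offsets, addrs = build_list_offsets(p["addresses"] for p in persons)

# persons.addresses.number: fixed-width values, so plain data with no offsets
numbers = [a["number"] for a in addrs]

# persons.addresses.street: variable-width, so offsets [0, 1, 3, 6, 10, 11]
# over the flat character data 'abbcccddddf'
street_offsets, street_chars = build_list_offsets(a["street"] for a in addrs)
```

Reading list i is then just `values[offsets[i]:offsets[i+1]]`, which is why access stays constant-time even for variable-length and nested data.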
 15. LANGUAGE BINDINGS
 16. Language Bindings
     • Target languages: Java (beta), C++ (underway), Python & Pandas (underway), R, Julia
     • Initial focus: read a structure, write a structure, manage memory
 17. Java: Creating Dynamic Off-Heap Structures
     JSON representation:
       { name: 'wes', iq: 180, addresses: [{number: 2, street: 'a'}, {number: 3, street: 'bb'}] }
     Programmatic construction:
       FieldWriter w = getWriter();
       w.varChar("name").write("Wes");
       w.integer("iq").write(180);
       ListWriter list = w.list("addresses");
       list.startList();
       MapWriter map = list.map();
       map.start();
       map.integer("number").writeInt(2);
       map.varChar("street").write("a");
       map.end();
       map.start();
       map.integer("number").writeInt(3);
       map.varChar("street").write("bb");
       map.end();
       list.endList();
 18. Java: Memory Management (& NVMe)
     • Chunk-based managed allocator, built on top of Netty's jemalloc implementation
     • Create a tree of allocators: limit and transfer semantics across allocators; leak detection and location accounting
     • Wrap native memory from other applications
     • New support for integration with Intel's persistent memory library via Apache Mnemonic
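The tree-of-allocators idea (child allocators with their own limits, whose usage is also charged against every ancestor) can be sketched in Python. This is a simplified model of the semantics described above, not the actual Netty-based Java implementation:

```python
class Allocator:
    # Hypothetical model: an allocation must fit under the limit of this
    # node and of every ancestor, and its size is charged to all of them.
    def __init__(self, limit, parent=None):
        self.limit = limit
        self.used = 0
        self.parent = parent

    def child(self, limit):
        # Carve a child allocator out of this one, forming a tree.
        return Allocator(limit, parent=self)

    def allocate(self, size):
        # Check limits along the whole ancestor chain before charging,
        # so a failed allocation leaves every counter untouched.
        node = self
        while node is not None:
            if node.used + size > node.limit:
                raise MemoryError("allocator limit exceeded")
            node = node.parent
        node = self
        while node is not None:
            node.used += size
            node = node.parent
        return bytearray(size)  # stand-in for an off-heap buffer

    def free(self, size):
        # Release charges along the ancestor chain.
        node = self
        while node is not None:
            node.used -= size
            node = node.parent
```

A child created with `root.child(50)` can never hold more than 50 bytes, and anything it allocates also counts against the root's limit; this mirrors the limit semantics bullet, though the real allocator also tracks individual buffers for leak detection and supports ownership transfer.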
 19. RPC & IPC
 20. Common Message Pattern
     A stream consists of a schema negotiation, then 0..N dictionary batches, then 1..N record batches.
     • Schema negotiation: logical description of the structure; identification of dictionary-encoded nodes
     • Dictionary batch: dictionary ID and its values
     • Record batch: batches of up to 64K records; leaf nodes of up to 2B values
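The division of labor between a dictionary batch and the record batches that follow can be illustrated with a minimal dictionary encoder in Python. This is a sketch of the concept only; the real wire encoding is defined by the Arrow format:

```python
def dictionary_encode(values):
    # The dictionary batch ships the distinct values once; record
    # batches then carry small integer indices instead of the values.
    dictionary = []
    index_of = {}
    indices = []
    for v in values:
        if v not in index_of:
            index_of[v] = len(dictionary)
            dictionary.append(v)
        indices.append(index_of[v])
    return dictionary, indices

cities = ["SF", "NY", "SF", "SF", "LA", "NY"]
dictionary, indices = dictionary_encode(cities)
# dictionary == ["SF", "NY", "LA"], indices == [0, 1, 0, 0, 2, 1]
```

Decoding is just `dictionary[i]` for each index, so repeated values cost one small integer each instead of a full copy.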
 21. Record Batch Construction
     A record batch begins with a data header that describes the offsets into the buffers that follow. For the record { name: 'wes', iq: 180, addresses: [{number: 2, street: 'a'}, {number: 3, street: 'bb'}] }, the buffers are:
     • name: bitmap, offsets, data
     • iq: bitmap, data
     • addresses: bitmap, list offsets
     • addresses.number: bitmap, data
     • addresses.street: bitmap, offsets, data
     Each buffer is contiguous in memory, and the batch is entirely contiguous on the wire.
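Each field above carries a validity bitmap marking which values are null. A minimal Python sketch of packing one, assuming one bit per value with least-significant-bit-first ordering within each byte:

```python
def validity_bitmap(column):
    # One bit per value, least-significant bit first within each byte;
    # a set bit means "value present", a clear bit means null.
    bitmap = bytearray((len(column) + 7) // 8)
    for i, value in enumerate(column):
        if value is not None:
            bitmap[i // 8] |= 1 << (i % 8)
    return bytes(bitmap)

# [180, None, 100] -> one byte 0b00000101: bits 0 and 2 set, bit 1 clear
iq_bitmap = validity_bitmap([180, None, 100])
```

Keeping nulls in a separate bitmap rather than as sentinel values is what lets the data buffers stay densely packed and vectorizable.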
 22. RPC & IPC: Moving Data Between Systems
     RPC:
     • Avoid serialization & deserialization
     • Layer TBD; focused on supporting vectored I/O (scatter/gather reads and writes against a socket)
     IPC:
     • Alpha implementation using memory-mapped files (moving data between Python and Drill)
     • Working on a shared-allocation approach: shared reference counting and well-defined ownership semantics
 23. REAL-WORLD EXAMPLES
 24. Real-World Example: Python with Drill
     A SQL engine passes each of its partitions (in partition 0 … in partition n-1) to user-supplied Python code; the function consumes an input partition and produces an output partition (out partition 0 … out partition n-1), which the SQL engine reassembles.
 25. Real-World Example: Feather File Format for Python and R
     • Problem: a fast, language-agnostic binary data frame file format
     • Written by Wes McKinney (Python) and Hadley Wickham (R)
     • Read speeds close to disk I/O performance
     • File layout: Arrow arrays 0..n (Apache Arrow memory), followed by Feather metadata (Google FlatBuffers)
 26. Real-World Example: Feather File Format for Python and R
     R:
       library(feather)
       path <- "my_data.feather"
       write_feather(df, path)
       df <- read_feather(path)
     Python:
       import feather
       path = 'my_data.feather'
       feather.write_dataframe(df, path)
       df = feather.read_dataframe(path)
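The Feather layout idea (column buffers first, then a metadata block that a reader locates via a fixed-size footer, so a single column can be read without scanning the whole file) can be mimicked with a toy format in Python. JSON stands in for Arrow buffers and FlatBuffers metadata here; this is not the real Feather format:

```python
import json
import struct

def write_columnar_file(path, columns):
    # Toy layout: one buffer per column, then a metadata block mapping
    # column name -> (offset, length), then a 16-byte footer giving the
    # metadata block's own offset and length.
    meta = {}
    with open(path, "wb") as f:
        for name, values in columns.items():
            buf = json.dumps(values).encode()  # stand-in for an Arrow buffer
            meta[name] = [f.tell(), len(buf)]
            f.write(buf)
        meta_offset = f.tell()
        meta_bytes = json.dumps(meta).encode()
        f.write(meta_bytes)
        f.write(struct.pack("<QQ", meta_offset, len(meta_bytes)))

def read_column(path, name):
    # Seek to the footer, read the metadata, then read only the one
    # column's buffer; other columns are never touched.
    with open(path, "rb") as f:
        f.seek(-16, 2)  # footer: two little-endian uint64s
        meta_offset, meta_len = struct.unpack("<QQ", f.read(16))
        f.seek(meta_offset)
        meta = json.loads(f.read(meta_len))
        offset, length = meta[name]
        f.seek(offset)
        return json.loads(f.read(length))
```

Putting the metadata at the end means the writer can stream buffers out without knowing their sizes in advance, while the reader still gets random access to any column.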
 27. What's Next
     • Parquet for Python & C++, using the Arrow representation
     • Available IPC implementation
     • Spark and Drill integration: faster UDFs, storage interfaces
 28. Get Involved
     • Join the community: dev@arrow.apache.org; Slack: https://apachearrowslackin.herokuapp.com/; http://arrow.apache.org; @ApacheArrow
     • Hadoop Summit talks: tomorrow, "The Heterogeneous Data Lake"; Thursday, "Planning with Polyalgebra"
