Apache Drill
interactive, ad-hoc query for large-scale datasets
Michael Hausenblas, Chief Data Engineer EMEA, MapR
Hadoop get-together, Berlin, 2013-05-08
Which workloads do you encounter in your environment?
http://www.flickr.com/photos/kevinomara/2866648330/ licensed under CC BY-NC-ND 2.0
Batch processing
… for recurring tasks such as large-scale data mining, ETL offloading/data-warehousing → for the batch layer in the Lambda architecture
OLTP
… user-facing eCommerce transactions, real-time messaging at scale (FB), time-series processing, etc. → for the serving layer in the Lambda architecture
Stream processing
… in order to handle stream sources such as social media feeds or sensor data (mobile phones, RFID, weather stations, etc.) → for the speed layer in the Lambda architecture
Search/Information Retrieval
… retrieval of items from unstructured documents (plain text, etc.), semi-structured data formats (JSON, etc.), as well as data stores (MongoDB, CouchDB, etc.)
http://www.flickr.com/photos/9479603@N02/4144121838/ licensed under CC BY-NC-ND 2.0
But what about interactive, ad-hoc query at scale?
Impala
Interactive Query (?)
low-latency
Use Case I
• Jane, a marketing analyst
• Determine target segments
• Data from different sources
Use Case II
• Logistics – supplier status
• Queries
– How many shipments from supplier X?
– How many shipments in region Y?

SUPPLIER_ID  NAME                 REGION
ACM          ACME Corp            US
GAL          GotALot Inc          US
BAP          Bits and Pieces Ltd  Europe
ZUP          Zu Pli               Asia

{ "shipment": 100123, "supplier": "ACM", "timestamp": "2013-02-01", "description": "first delivery today" },
{ "shipment": 100124, "supplier": "BAP", "timestamp": "2013-02-02", "description": "hope you enjoy it" }
…
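To make the two queries concrete, here is a hedged sketch in standard SQL, the kind of statement Drill targets; it assumes the JSON shipment records are exposed as a table named shipments and the supplier reference data as suppliers (both names are hypothetical):

  -- How many shipments from supplier X (here: ACM)?
  SELECT COUNT(*) AS shipments
  FROM   shipments
  WHERE  supplier = 'ACM';

  -- How many shipments in region Y (here: US)? Joins the JSON records with the supplier table.
  SELECT COUNT(*) AS shipments
  FROM   shipments s
  JOIN   suppliers p ON s.supplier = p.supplier_id
  WHERE  p.region = 'US';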
Requirements
• Support for different data sources
• Support for different query interfaces
• Low-latency/real-time
• Ad-hoc queries
• Scalable, reliable
And now for something completely different …
Google’s Dremel
http://research.google.com/pubs/pub36632.html
Sergey Melnik, Andrey Gubarev, Jing Jing Long, Geoffrey Romer, Shiva Shivakumar, Matt Tolton, Theo Vassilakis, Proc. of the 36th Int'l Conf on Very Large Data Bases (2010), pp. 330-339
“Dremel is a scalable, interactive ad-hoc query system for analysis of read-only nested data. By combining multi-level execution trees and columnar data layout, it is capable of running aggregation queries over trillion-row tables in seconds. The system scales to thousands of CPUs and petabytes of data, and has thousands of users at Google. …”
Google’s Dremel
multi-level execution trees
columnar data layout
Google’s Dremel
nested data + schema → column-striped representation
map nested data to tables
Google’s Dremel
experiments:
datasets & query performance
Back to Apache Drill …
Apache Drill – key facts
• Inspired by Google’s Dremel
• Standard SQL 2003 support
• Pluggable data sources
• Nested data is a first-class citizen
• Schema is optional
• Community driven, open, 100s involved
High-level Architecture
Principled Query Execution
Source Query → Parser → Logical Plan → Optimizer → Physical Plan → Execution
Source query: SQL 2003, DrQL, MongoQL, DSL; Parser: parser API
Optimizer: topology, CF, etc.; Execution: scanner API
Logical plan (textual form), e.g.:
query: [
  { @id: "log", op: "sequence", do: [
    { op: "scan", source: "logs" },
    { op: "filter", condition: "x > 3" },
…
Wire-level Architecture
• Each node: Drillbit - maximize data locality
• Co-ordination, query planning, execution, etc., are distributed
• Any node can act as endpoint for a particular query (the foreman)
Wire-level Architecture
• Curator/Zookeeper for ephemeral cluster membership info
• Distributed cache (Hazelcast) for metadata, locality information, etc.
Wire-level Architecture
• Originating Drillbit acts as foreman: manages query
execution, scheduling, locality information, etc.
• Streaming data communication avoiding SerDe
Wire-level Architecture
The foreman turns into the root of the multi-level execution tree; the leaves activate their storage engine interface.
Drillbit Modules
• RPC Endpoint
• Parser (SQL, HiveQL, Pig)
• Distributed Cache
• Logical Plan
• Physical Plan
• Optimizer
• Scheduler
• Foreman
• Operators
• Storage Engine Interface (DFS Engine, HBase Engine, Mongo)
Key features
• Full SQL – ANSI SQL 2003
• Nested Data as first class citizen
• Optional Schema
• Extensibility Points …
Extensibility Points
• Source query → parser API
• Custom operators, UDF → logical plan
• Serving tree, CF, topology → physical plan/optimizer
• Data sources & formats → scanner API
… and Hadoop?
• HDFS can be a data source
• Complementary use cases*
• … use Apache Drill
– Find record with specified condition
– Aggregation under dynamic conditions
• … use MapReduce
– Data mining with multiple iterations
– ETL
*) https://cloud.google.com/files/BigQueryTechnicalWP.pdf
Basic Demo
https://cwiki.apache.org/confluence/display/DRILL/Demo+HowTo
data source: donuts.json
{
  "id": "0001",
  "type": "donut",
  "ppu": 0.55,
  "batters": {
    "batter": [
      { "id": "1001", "type": "Regular" },
      { "id": "1002", "type": "Chocolate" },
      …

logical plan: simple_plan.json
query: [
  { op: "sequence", do: [
    { op: "scan", ref: "donuts", source: "local-logs", selection: { data: "activity" } },
    { op: "filter", expr: "donuts.ppu < 2.00" },
    …

result: out.json
{ "sales" : 700.0, "typeCount" : 1, "quantity" : 700, "ppu" : 1.0 }
{ "sales" : 109.71, "typeCount" : 2, "quantity" : 159, "ppu" : 0.69 }
{ "sales" : 184.25, "typeCount" : 2, "quantity" : 335, "ppu" : 0.55 }
BE A PART OF IT!
Status
• Heavy development by multiple organizations
• Available
– Logical plan (ADSP)
– Reference interpreter
– Basic SQL parser
– Basic demo
Status
May 2013
• Full SQL support (+JDBC)
• Physical plan
• In-memory compressed data interfaces
• Distributed execution
• HBase and MySQL storage engine
• WebUI client
Contributing
Contributions appreciated (besides code drops)!
• Test data & test queries
• Use case scenarios (textual/SQL queries)
• Documentation
• Further schedule
– Alpha Q2
– Beta Q3
Kudos to …
• Julian Hyde, Pentaho
• Lisen Mu, XingCloud
• Tim Chen, Microsoft
• Chris Merrick, RJMetrics
• David Alves, UT Austin
• Sree Vaadi, SSS/NGData
• Jacques Nadeau, MapR
• Ted Dunning, MapR
Engage!
• Follow @ApacheDrill on Twitter
• Sign up at mailing lists (user | dev)
http://incubator.apache.org/drill/mailing-lists.html
• Standing G+ hangouts every Tuesday at 6pm CET http://j.mp/apache-drill-hangouts
• Keep an eye on http://drill-user.org/
Speaker notes
  • http://solr-vs-elasticsearch.com/
  • (This is an ASR-35 at a DEC mainframe; other console terminals used were Teletype Model 35s.) Allowing the user to issue ad-hoc queries is essential: often, the user might not necessarily know ahead of time what queries to issue. Also, one may need to react to changing circumstances. The lack of tools to perform interactive ad-hoc analysis at scale is a gap that Apache Drill fills.
  • Hive: compiles to MR; Aster: external tables in MPP; Oracle/MySQL: export MR results to an RDBMS. Drill, Impala, CitusDB: real-time.
  • Suppose a marketing analyst is trying to experiment with ways to target user segments for the next campaign. She needs access to web logs stored in Hadoop, as well as to user profiles stored in MongoDB and transaction data stored in a conventional database.
  • Re ad-hoc: you might not know ahead of time what queries you will want to make, and you may need to react to changing circumstances.
  • Two innovations: handling nested data column-style (a column-striped representation), and multi-level execution trees.
  • Repetition level (r): at which repeated field in the field's path the value has repeated. Definition level (d): how many fields in the path that could be undefined (because they are optional or repeated) are actually present. Only repeated fields increment the repetition level; only non-required fields increment the definition level. Required fields are always defined and do not need a definition level; non-repeated fields do not need a repetition level. An optional field requires one extra bit: zero if it is NULL, one if it is defined. NULL values themselves do not need to be stored, as the definition level captures this information.
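  • A small worked example of the column-striped representation (illustrative only; the schema and records are hypothetical, not from the talk):

    message Doc {
      required int64 DocId;
      optional group Links { repeated int64 Forward; }
    }

    record r1: { "DocId": 10, "Links": { "Forward": [20, 40] } }
    record r2: { "DocId": 20 }

    column Links.Forward (max r = 1, max d = 2):
    value  r  d
    20     0  2   (first value in a new record; Links and Forward are both present)
    40     1  2   (the repetition happens at Forward, the first repeated field in the path)
    NULL   0  0   (r2 has no Links, so no optional/repeated field in the path is defined)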
  • Source query: a human-written (e.g. DSL) or tool-written (e.g. SQL/ANSI-compliant) query. The source query is parsed and transformed to produce the logical plan. Logical plan: a dataflow of what should logically be done. Typically the logical plan lives in memory in the form of Java objects, but it also has a textual form. The logical plan is then transformed and optimized into the physical plan. The optimizer introduces parallel computation, taking topology into account, and handles columnar data to improve processing speed. The physical plan represents the actual structure of computation as it is done by the system: how physical and exchange operators should be applied, assignment to particular nodes and cores, and the actual query execution per node.
  • Drillbits run per node to maximize data locality. Co-ordination, query planning, optimization, scheduling, and execution are distributed. By default, Drillbits hold all roles; modules can optionally be disabled. Any node/Drillbit can act as the endpoint for a particular query.
  • Zookeeper maintains ephemeral cluster membership information only. A small distributed cache utilizing embedded Hazelcast maintains information about individual queue depths, cached query plans, metadata, locality information, etc.
  • The originating Drillbit acts as foreman and manages all execution for its particular query, scheduling based on priority, queue depth and locality information. Drillbit data communication is streaming and avoids any serialization/deserialization.
  • The originating Drillbit (shown in red on the slide) is the root of the multi-level execution tree, per query/job. Leaves use their storage engine interface to scan the respective data source (DB, file, etc.).
  • Relation of Drill to Hadoop (Hadoop = HDFS + MapReduce). Use Drill for: finding particular records with specified conditions, for example request logs with a specified account ID; quick aggregation of statistics with dynamically changing conditions, for example getting a summary of the previous night's request traffic volume for a web application and drawing a graph from it; trial-and-error data analysis, for example identifying the cause of trouble and aggregating values by various conditions, including by hour, day, etc. Use MapReduce for: executing complex data mining on Big Data which requires multiple iterations and paths of data processing with programmed algorithms; executing large join operations across huge datasets; exporting large amounts of data after processing.
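  • To make the "quick aggregation with dynamically changing conditions" case concrete, here is a hedged SQL sketch of the traffic-summary example; the table and column names (request_logs, request_time, bytes_sent) are hypothetical:

    SELECT EXTRACT(HOUR FROM request_time) AS hour_of_day,
           COUNT(*)                        AS requests,
           SUM(bytes_sent)                 AS bytes
    FROM   request_logs
    WHERE  request_time >= TIMESTAMP '2013-05-07 00:00:00'
      AND  request_time <  TIMESTAMP '2013-05-08 00:00:00'
    GROUP  BY EXTRACT(HOUR FROM request_time)
    ORDER  BY hour_of_day;

    Changing the condition (for example, filtering on a single account_id) only means editing the WHERE clause and re-running the query interactively.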