Understanding the value and architecture of Apache Drill
Michael Hausenblas, Chief Data Engineer EMEA, MapR
Hadoop Summit, Amsterdam, 2013-03-20
Hadoop Summit - Hausenblas 20 March

Speaker notes:
• Hive: compiles to MapReduce; Aster: external tables in MPP; Oracle/MySQL: export MapReduce results to RDBMS. Drill, Impala, CitusDB: real-time.
• Suppose a marketing analyst is trying to experiment with ways to target user segments for the next campaign. She needs access to web logs stored in Hadoop, to user profiles stored in MongoDB, and to transaction data stored in a conventional database.
• Re ad-hoc: you might not know ahead of time what queries you will want to make. You may need to react to changing circumstances.
• Two innovations: handling nested data column-style (column-striped representation), and query push-down.
• Drillbits per node, to maximize data locality. Co-ordination, query planning, optimization, scheduling and execution are distributed. By default, Drillbits hold all roles; modules can optionally be disabled. Any node/Drillbit can act as endpoint for a particular query.
• Zookeeper maintains ephemeral cluster membership information only. A small distributed cache, utilizing embedded Hazelcast, maintains information about individual queue depth, cached query plans, metadata, locality information, etc.
• The originating Drillbit acts as foreman and manages all execution for its particular query, scheduling based on priority, queue depth and locality information. Drillbit data communication is streaming and avoids any serialization/deserialization. Red arrow: the originating Drillbit is the root of the multi-level serving tree for that query.
• Source query: a human-written (e.g. DSL) or tool-written (e.g. SQL/ANSI-compliant) query. The source query is parsed and transformed to produce the logical plan. The logical plan is a dataflow of what should logically be done; typically it lives in memory in the form of Java objects, but it also has a textual form. The logical plan is then transformed and optimized into the physical plan. The optimizer introduces parallel computation, taking topology into account, and handles columnar data to improve processing speed. The physical plan represents the actual structure of computation as it is done by the system: how physical and exchange operators should be applied, assignment to particular nodes and cores, and the actual query execution per node.
• Relation of Drill to Hadoop (Hadoop = HDFS + MapReduce). Use Drill for: finding particular records with specified conditions (for example, request logs with a specified account ID); quick aggregation of statistics with dynamically changing conditions (for example, a summary of last night's request traffic volume for a web application, to draw a graph from it); trial-and-error data analysis (for example, identifying the cause of trouble by aggregating values under various conditions, by hour, day, etc.). Use MapReduce for: executing complex data mining on big data that requires multiple iterations and passes of data processing with programmed algorithms; executing large join operations across huge datasets; exporting large amounts of data after processing.
• Parquet and ORC file formats: Drill will probably adopt one as primary. Tez/Stinger: makes Hive more SQL-y, adds a new execution engine, faster with ORC; depending on status and code drop, maybe portions of the execution engine can be shared. Impala: a Hive-replacement query engine, backend entirely in C++, flat data, primarily in-memory datasets when blocking operators are required; inspiration around external integration with the Hive metastore, collaboration on use and extension of Parquet. Shark+Spark: a Scala query engine, record-at-a-time, focused on intermediate result-set caching; ideas around adaptive caching, cleaner Scala interfaces. Tajo: cleaner APIs, still record-at-a-time execution, very object-oriented; API inspiration, front-end test cases, expansion to the reference interpreter via code sharing.
  • Transcript of "Hadoop Summit - Hausenblas 20 March"

    1. Understanding the value and architecture of Apache Drill
       Michael Hausenblas, Chief Data Engineer EMEA, MapR
       Hadoop Summit, Amsterdam, 2013-03-20
    2. Kudos to http://cmx.io/
    3. Workloads
       • Batch processing (MapReduce)
       • Light-weight OLTP (HBase, Cassandra, etc.)
       • Stream processing (Storm, S4)
       • Search (Solr, Elasticsearch)
       • Interactive, ad-hoc query and analysis (?)
    4. Interactive Query at Scale (diagram; labels include Impala, low-latency)
    5. Use Case
       • Jane, a marketing analyst
       • Determine target segments
       • Data from different sources
    6. Today’s Solutions
       • RDBMS-focused
         – ETL data from MongoDB and Hadoop
         – Query data using SQL
       • MapReduce-focused
         – ETL from RDBMS and MongoDB
         – Use Hive, etc.
    7. Requirements
       • Support for different data sources
       • Support for different query interfaces
       • Low-latency/real-time
       • Ad-hoc queries
       • Scalable and fast
       • Reliable
    8. Google’s Dremel
       http://research.google.com/pubs/pub36632.html
    9. Apache Drill Overview
       • Inspired by Google’s Dremel
       • Standard SQL 2003 support
       • Other QLs possible
       • Pluggable data sources
       • Support for nested data
       • Schema is optional
       • Community driven, open, 100s involved
    10. Apache Drill Overview (diagram)
    11. High-level Architecture (diagram)
    12. High-level Architecture
        • Each node: Drillbit, to maximize data locality
        • Co-ordination, query planning, execution, etc. are distributed
        • By default, Drillbits hold all roles
        • Any node can act as endpoint for a query
        (diagram: a Drillbit and a storage process on each node)
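The data-locality point can be sketched with a toy endpoint chooser. This is a hypothetical helper, not Drill's actual scheduler: given which nodes hold replicas of a scan's blocks, it picks the Drillbit co-located with the most of them.

```python
from collections import Counter

def pick_endpoint(block_locations, drillbit_nodes):
    """Choose the Drillbit whose node holds the most blocks of the scan.

    block_locations: one entry per block, each listing the nodes that
    hold a replica of that block.
    drillbit_nodes: the set of nodes running a Drillbit.
    """
    tally = Counter()
    for replicas in block_locations:
        for node in replicas:
            if node in drillbit_nodes:
                tally[node] += 1
    # Fall back to any Drillbit if no replica is co-located.
    if not tally:
        return next(iter(drillbit_nodes))
    return tally.most_common(1)[0][0]

blocks = [["node1", "node2"], ["node2", "node3"], ["node2", "node4"]]
print(pick_endpoint(blocks, {"node1", "node2", "node3", "node4"}))  # node2
```

Since every node runs a Drillbit by default, the fallback branch rarely fires; it covers the case where the data lives on nodes without a Drillbit.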
    13. High-level Architecture
        • Zookeeper for ephemeral cluster membership info
        • Distributed cache (Hazelcast) for metadata, locality information, etc.
        (diagram: Zookeeper coordinating the Drillbits; each node runs a Drillbit, a distributed cache, and a storage process)
    14. High-level Architecture
        • Originating Drillbit acts as foreman; manages query execution, scheduling, locality information, etc.
        • Streaming data communication, avoiding SerDe
        (diagram: as on the previous slide)
    15. Principled Query Execution
        Source Query → Parser → Logical Plan → Optimizer → Physical Plan → Execution
        (source query: SQL 2003, DrQL, MongoQL, DSL; parser API; optimizer uses topology; scanner API)
        Logical plan excerpt:
        query: [ { @id: "log", op: "sequence",
                   do: [ { op: "scan", source: "logs" },
                         { op: "filter", condition: "x > 3" }, …
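The textual logical-plan form shown on this slide is plain JSON and can be built as ordinary data. A minimal sketch, with the field names taken from the slide and everything else illustrative:

```python
import json

# The slide's logical plan: a "sequence" operator wrapping
# a scan of the "logs" source followed by a filter.
logical_plan = {
    "query": [
        {
            "@id": "log",
            "op": "sequence",
            "do": [
                {"op": "scan", "source": "logs"},
                {"op": "filter", "condition": "x > 3"},
            ],
        }
    ]
}

print(json.dumps(logical_plan, indent=2))
```

Because the plan has this textual form as well as an in-memory (Java object) form, it can be generated by any front end, inspected, or handed to the reference interpreter.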
    16. Drillbit Modules (diagram; modules: RPC Endpoint; parser for SQL, HiveQL, Pig, Mongo; Scheduler; Foreman; Optimizer; Operators; Logical Plan; Physical Plan; Storage Engine Interface with DFS Engine and HBase Engine; Distributed Cache)
    17. Key Features
        • Full SQL 2003
        • Nested data
        • Optional schema
        • Extensibility points
    18. Full SQL – ANSI SQL 2003
        • SQL-like is often not enough
        • Integration with existing tools
          – Datameer, Tableau, Excel, SAP Crystal Reports
          – Use standard ODBC/JDBC driver
    19. Nested Data
        • Nested data becoming prevalent
          – JSON/BSON, XML, ProtoBuf, Avro
          – Some data sources support it natively (MongoDB, etc.)
        • Flattening nested data is error-prone
        • Extension to ANSI SQL 2003
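As a hint of what path-based access to nested data looks like, here is a small sketch; the `project` helper and the donut record are illustrative, not Drill's API. A dotted path walks nested objects and fans out over repeated (list-valued) fields, so the record never has to be flattened into rows first.

```python
def project(record, path):
    """Follow a dotted path into a nested record, fanning out over lists."""
    current = [record]
    for part in path.split("."):
        nxt = []
        for item in current:
            value = item[part]
            # Repeated fields contribute one value per element.
            nxt.extend(value if isinstance(value, list) else [value])
        current = nxt
    return current

donut = {
    "id": "0001",
    "batters": {"batter": [{"id": "1001", "type": "Regular"},
                           {"id": "1002", "type": "Chocolate"}]},
}
print(project(donut, "batters.batter.type"))  # ['Regular', 'Chocolate']
```

Flattening the same record by hand would force a choice of column names and a row explosion per repeated field, which is exactly where the errors creep in.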
    20. Optional Schema
        • Many data sources don’t have rigid schemas
          – Schema changes rapidly
          – Different schema per record (e.g. HBase)
        • Supports queries against unknown schema
        • User can define a schema, or it can be discovered
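Schema discovery can be approximated by merging per-record observations of field names and types. A minimal sketch, not Drill's discovery mechanism:

```python
def discover_schema(records):
    """Merge observed field -> set of type names across records."""
    schema = {}
    for rec in records:
        for field, value in rec.items():
            schema.setdefault(field, set()).add(type(value).__name__)
    return schema

records = [
    {"id": "0001", "ppu": 0.55},
    {"id": "0002", "ppu": 0.69, "type": "donut"},  # extra field
    {"id": 3, "ppu": 1},                           # types vary per record
]
print(discover_schema(records))
```

The result makes schema drift visible: `id` is sometimes a string and sometimes an integer, and `type` only exists on some records, which is exactly the situation a rigid up-front schema cannot express.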
    21. Extensibility Points
        • Source query – parser API
        • Custom operators, UDF – logical plan
        • Optimizer
        • Data sources and formats – scanner API
        Source Query → Parser → Logical Plan → Optimizer → Physical Plan → Execution
    22. … and Hadoop?
        • HDFS can be a data source
        • Complementary use cases …
        • … use Apache Drill
          – Find record with specified condition
          – Aggregation under dynamic conditions
        • … use MapReduce
          – Data mining with multiple iterations
          – ETL
        https://cloud.google.com/files/BigQueryTechnicalWP.pdf
    23. Example
        data source: donuts.json
        { "id": "0001", "type": "donut", "ppu": 0.55,
          "batters": { "batter": [
            { "id": "1001", "type": "Regular" },
            { "id": "1002", "type": "Chocolate" }, … } }
        logical plan: simple_plan.json
        query: [ { op: "sequence",
                   do: [ { op: "scan", ref: "donuts", source: "local-logs",
                           selection: { data: "activity" } },
                         { op: "filter", expr: "donuts.ppu < 2.00" }, …
        result: out.json
        { "sales": 700.0, "typeCount": 1, "quantity": 700, "ppu": 1.0 … }
        { "sales": 109.71, "typeCount": 2, "quantity": 159, "ppu": 0.69 }
        { "sales": 184.25, "typeCount": 2, "quantity": 335, "ppu": 0.55 }
        https://cwiki.apache.org/confluence/display/DRILL/Demo+HowTo
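The scan → filter sequence in this plan can be mimicked by a toy interpreter. This is a sketch only: the filter expression is stood in by a Python callable rather than the plan's string expression `"donuts.ppu < 2.00"`, and the records are invented.

```python
# Invented sample records for the "donuts" source.
donuts = [
    {"id": "0001", "type": "donut", "ppu": 0.55},
    {"id": "0002", "type": "donut", "ppu": 0.69},
    {"id": "0003", "type": "premium", "ppu": 2.50},
]

def run(plan, tables):
    """Evaluate a linear scan -> filter plan, one operator at a time."""
    rows = []
    for op in plan:
        if op["op"] == "scan":
            rows = list(tables[op["ref"]])
        elif op["op"] == "filter":
            rows = [r for r in rows if op["expr"](r)]
    return rows

plan = [
    {"op": "scan", "ref": "donuts"},
    # Stands in for the slide's expression "donuts.ppu < 2.00".
    {"op": "filter", "expr": lambda r: r["ppu"] < 2.00},
]
print(run(plan, {"donuts": donuts}))  # the two records with ppu < 2.00
```

Drill's reference interpreter plays a similar role at a much larger scale: it executes the logical plan directly, without optimization, so plans can be tested before the distributed execution path exists.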
    24. Status
        • Heavy development by multiple organizations
        • Available
          – Logical plan (ADSP)
          – Reference interpreter
          – Basic SQL parser
          – Basic demo
    25. Status
        March/April:
        • Larger SQL syntax
        • Physical plan
        • In-memory compressed data interfaces
        • Distributed execution focused on large-cluster, high-performance sort, aggregation and join
        • Storage engine implementations (HBase, etc.)
    26. Contributing
        • Dremel-inspired columnar format: Twitter’s Parquet and Hive’s ORC file
        • Integration with Hive metastore (?)
        • DRILL-13 Storage Engine: Define Java Interface
        • DRILL-15 Build HBase storage engine implementation
    27. Contributing
        • DRILL-48 RPC interface for query submission and physical plan execution
        • DRILL-53 Setup cluster configuration and membership mgmt system
          – ZK for coordination
          – Helix for partition and resource assignment (?)
        • Further schedule
          – Alpha Q2
          – Beta Q3
    28. Kudos to …
        • Julian Hyde, Pentaho
        • Timothy Chen, Microsoft
        • Chris Merrick, RJMetrics
        • David Alves, UT Austin
        • Sree Vaadi, SSS/NGData
        • Jacques Nadeau, MapR
        • Ted Dunning, MapR
    29. Engage!
        • Follow @ApacheDrill on Twitter
        • Sign up at mailing lists (user|dev): http://incubator.apache.org/drill/mailing-lists.html
        • Learn where and how to contribute: https://cwiki.apache.org/confluence/display/DRILL/Contributing
        • Keep an eye on http://drill-user.org/
