Drill NJHUG - 19 Feb 2013
With the recent explosion of everything related to Hadoop, it is no surprise that new projects and implementations in the Hadoop ecosystem keep appearing, including quite a few initiatives that provide SQL interfaces into Hadoop. The Apache Drill project is a distributed system for interactive analysis of large-scale datasets, inspired by Google's Dremel. Drill is not trying to replace existing Big Data batch-processing frameworks such as Hadoop MapReduce, or stream-processing frameworks such as S4 or Storm. Rather, it fills an existing void: real-time interactive processing of large datasets.

Technical Detail

Like Dremel, the Drill implementation is based on processing nested, tree-like data. In Dremel this data is based on Protocol Buffers, a nested, schema-based data model. Drill plans to extend this data model with additional schema-based implementations, for example Apache Avro, and with schema-less data models such as JSON and BSON. Beyond single data structures, Drill also plans to support "baby joins" – joins to small data structures that can be loaded into memory.

Presentation Transcript

  • Apache's Answer to Low Latency Interactive Query for Big Data – February 19, 2013 1
  • Who am I? http://www.mapr.com/company/events/njh-2-19-2013 • Keys Botzum • kbotzum@maprtech.com • Senior Principal Technologist, MapR Technologies 2
  • Agenda • Apache Drill overview • Key features • Status and progress • Discuss potential use cases and cooperation 3
  • Big Data Workloads • ETL • Data mining • Blob store • Lightweight OLTP on large datasets • Index and model generation • Web crawling • Stream processing • Clustering, anomaly detection and classification • Interactive analysis 4
  • Example Problem • Jane works as an analyst at an e-commerce company • How does she figure out good targeting segments for the next marketing campaign? • She has some ideas and lots of data (data sources shown on the slide: transaction information, user profiles, access logs) 5
  • Solving the Problem with Traditional Systems • Use an RDBMS – ETL the data from MongoDB and Hadoop into the RDBMS • MongoDB data must be flattened, schematized, filtered and aggregated • Hadoop data must be filtered and aggregated – Query the data using any SQL-based tool • Use MapReduce – ETL the data from Oracle and MongoDB into Hadoop – Work with the MapReduce team to generate the desired analyses • Use Hive – ETL the data from Oracle and MongoDB into Hadoop • MongoDB data must be flattened and schematized – But HiveQL is limited, queries take too long and BI tool support is limited • Challenges: data movement, loss of nesting structure, latency 6
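As a rough illustration (not from the deck; table and column names are hypothetical), the flattening ETL this slide describes might look like the following in SQL: nested MongoDB user documents get broken into flat relational tables plus join keys, which is exactly the loss of nesting structure called out above.

    -- Hypothetical flattened schema; names are illustrative only.
    CREATE TABLE users_flat (
        user_id    BIGINT PRIMARY KEY,
        name       VARCHAR(100),
        gender     VARCHAR(10),
        followers  INT
    );

    -- The nested "children" array becomes a separate child table joined by key.
    CREATE TABLE user_children_flat (
        user_id     BIGINT REFERENCES users_flat (user_id),
        child_name  VARCHAR(100)
    );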
  • WWGD
        Distributed File System | NoSQL    | Interactive analysis | Batch processing
    Google:  GFS                | BigTable | Dremel               | MapReduce
    Hadoop:  HDFS               | HBase    | ???                  | MapReduce
    Build Apache Drill to provide a true open source solution to interactive analysis of Big Data 8
  • Google Dremel • Interactive analysis of large-scale datasets – Trillion records at interactive speeds – Complementary to MapReduce – Used by thousands of Google employees – Paper published at VLDB 2010 • Model – Nested data model with schema • Most data at Google is stored/transferred in Protocol Buffers • Normalization (to relational) is prohibitive – SQL-like query language with nested data support • Implementation – Column-based storage and processing – In-situ data access (GFS and Bigtable) – Tree architecture as in Web search (and databases) 9
  • Innovations • MapReduce – Highly parallel algorithms running on commodity systems can deliver real value at reasonable cost – Scalable IO and compute trumps efficiency with today's commodity hardware – With many datasets, schemas and indexes are limiting – Flexibility is more important than efficiency – An easy, scalable, fault-tolerant execution framework is key for large clusters • Dremel – Columnar storage provides significant performance benefits at scale – Columnar storage with nesting preserves structure and can be very efficient – Avoiding final record assembly as long as possible improves efficiency – Optimizing for the query use case can avoid the full generality of MR and thus significantly reduce latency, e.g., no need to start JVMs, just push compact queries to running agents. 10
  • Apache Drill Overview • Inspired by Google Dremel/BigQuery … more ambitious • Interactive analysis of Big Data using standard SQL • Fast – Low latency queries – Columnar execution – Complement native interfaces and MapReduce/Hive/Pig • Open – Community driven open source project – Under Apache Software Foundation • Modern – Standard ANSI SQL:2003 (select/into) – Nested/hierarchical data support – Schema is optional – Supports RDBMS, Hadoop and NoSQL – Extensible [Slide diagram: interactive queries (Apache Drill, 100 ms-20 min) serve data analysts and reporting; data mining (MapReduce, Hive, Pig, 20 min-20 hr) serves modeling and large ETL] 11
  • How Does It Work? • Drillbits run on each node, designed to maximize data locality • Processing is done outside the MapReduce paradigm (but possibly within YARN) • Queries can be fed to any Drillbit • Coordination, query planning, optimization, scheduling, and execution are distributed • Example query from the slide: SELECT * FROM oracle.transactions, mongo.users, hdfs.events LIMIT 1 12
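To make the slide's cross-source query a little more concrete, here is a hedged sketch that expands it. The source names (oracle.transactions, mongo.users, hdfs.events) come from the slide; the join keys, columns, and predicate are invented for illustration and are not taken from Drill's actual syntax or schemas.

    -- Illustrative only: a query spanning an RDBMS, MongoDB, and HDFS sources.
    SELECT u.name,
           t.amount,
           e.event_type
    FROM   oracle.transactions t
    JOIN   mongo.users         u ON u.user_id = t.user_id
    JOIN   hdfs.events         e ON e.user_id = u.user_id
    WHERE  t.amount > 100
    LIMIT  10;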
  • Key Features • Full SQL (ANSI SQL:2003) • Nested data • Schema is optional • Flexible and extensible architecture 13
  • Full SQL (ANSI SQL:2003) • Drill supports standard ANSI SQL:2003 – Correlated subqueries, analytic functions, … – SQL-like is not enough • Use any SQL-based tool with Apache Drill – Tableau, MicroStrategy, Excel, SAP Crystal Reports, Toad, SQuirreL, … – Standard ODBC and JDBC drivers [Slide diagram: client tools (Tableau, MicroStrategy, Excel, SAP Crystal Reports) connect through the Drill ODBC driver to a Drillbit, whose SQL query parser and query planner hand work to Drill workers on the other Drillbits] 14
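For readers less familiar with ANSI SQL:2003, the sketch below shows the two feature classes the slide names: a correlated subquery and an analytic (window) function. The users and transactions tables and their columns are hypothetical, used only to illustrate the syntax.

    -- Correlated subquery: users whose largest transaction exceeds 1000.
    SELECT u.user_id, u.name
    FROM   users u
    WHERE  1000 < (SELECT MAX(t.amount)
                   FROM   transactions t
                   WHERE  t.user_id = u.user_id);

    -- Analytic (window) function: rank each user's transactions by amount.
    SELECT user_id,
           amount,
           RANK() OVER (PARTITION BY user_id ORDER BY amount DESC) AS amount_rank
    FROM   transactions;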
  • Nested Data • Nested data is becoming prevalent – JSON, BSON, XML, Protocol Buffers, Avro, etc. – The data source may or may not be aware • MongoDB supports nested data natively • A single HBase value could be a JSON document (compound nested type) – Google Dremel's innovation was efficient columnar storage and querying of nested data • Flattening nested data is error-prone and very difficult • Apache Drill supports nested data – Extensions to ANSI SQL:2003
    JSON example from the slide:
      {
        "name": "Homer",
        "gender": "Male",
        "followers": 100,
        "children": [ {"name": "Bart"}, {"name": "Lisa"} ]
      }
    Avro example from the slide:
      enum Gender {
        MALE, FEMALE
      }
      record User {
        string name;
        Gender gender;
        long followers;
      }
    15
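As a purely hypothetical sketch of the "extensions to ANSI SQL:2003" mentioned above, a query over the slide's nested "Homer" document might reach into the children array roughly as shown below. Drill's nested-data syntax was still being defined at the time of this talk, so treat this as an illustration, not the real grammar.

    SELECT name,
           followers,
           children[0].name AS first_child   -- reach into the nested array (illustrative syntax)
    FROM   users
    WHERE  gender = 'Male';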
  • Schema is Optional • Many data sources do not have rigid schemas – Schemas change rapidly – Each record may have a different schema • Sparse and wide rows in HBase and Cassandra, MongoDB • Apache Drill supports querying against unknown schemas – Query any HBase, Cassandra or MongoDB table • User can define the schema or let the system discover it automatically – System of record may already have schema information • Why manage it in a separate system? – No need to manage schema evolution
    HBase example from the slide:
      Row Key            | CF contents                | CF anchor
      "com.cnn.www"      | contents:html = "<html>…"  | anchor:my.look.ca = "CNN.com", anchor:cnnsi.com = "CNN"
      "com.foxnews.www"  | contents:html = "<html>…"  | anchor:en.wikipedia.org = "Fox News"
      …                  | …                          | …
    16
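A hedged illustration of "querying against unknown schemas": something along these lines could be run against the slide's HBase web table without declaring a schema first. The table name, row-key column, and backtick quoting of column-family:qualifier names are assumptions made for the example, not confirmed Drill behavior.

    -- Illustrative only: pull two column-family values for one HBase row.
    SELECT `anchor`.`my.look.ca` AS anchor_text,
           `contents`.`html`     AS page_html
    FROM   hbase.webtable
    WHERE  row_key = 'com.cnn.www';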
  • Flexible and Extensible Architecture • Apache Drill is designed for extensibility • Well-documented APIs and interfaces • Data sources and file formats – Implement a custom scanner to support a new data source or file format • Query languages – SQL:2003 is the primary language – Implement a custom parser to support a Domain Specific Language • Optimizers – Drill will have a cost-based optimizer – Clear surrounding APIs support easy optimizer exploration • Operators – Custom operators can be implemented • Special operators for Mahout (k-means) being designed – Operator push-down to data source (RDBMS) 17
  • Architecture • Only the execution engine knows the physical attributes of the cluster – # nodes, hardware, file locations, … • Public interfaces enable extensibility – Developers can build parsers for new query languages – Developers can provide an execution plan directly • Each level of the plan has a human-readable representation – Facilitates debugging and unit testing 18
  • Status: In Progress • Heavy active development by multiple organizations • Available – Logical plan syntax and interpreter – Reference interpreter • In progress – SQL interpreter – Storage engine implementations for Accumulo, Cassandra, HBase and various file formats • Significant community momentum – Over 200 people on the Drill mailing list – Over 200 members of the Bay Area Drill User Group – Drill meetups across the US and Europe – OpenDremel team joined Apache Drill • Anticipated schedule: – Prototype: Q1 – Alpha: Q2 – Beta: Q3 19
  • Why Apache Drill Will Be Successful • Resources – Contributors have strong backgrounds from companies like Oracle, IBM Netezza, Informatica, Clustrix and Pentaho • Community – Development done in the open – Active contributors from multiple companies – Rapidly growing • Architecture – Full SQL – New data support – Extensible APIs – Full Columnar Execution – Beyond Hadoop 20
  • MapR's Innovations for Hadoop • NFS direct access • Makes the Hadoop file system look like any file system • Simplifies access to data in a Hadoop cluster • Enables non-Hadoop programs to access the data – you know, the existing important applications you already have! • Transparent compression • Saves space and thus $$$ • Web, command line, and REST-based management tools • Reduces the burden on your admin teams 21
  • MapR's Innovations for Hadoop • Eliminates single points of failure • Self-healing with automated stateful failover • Protects your data • Snapshots for point-in-time data protection and recovery • Mirroring for business continuity includes wide area replication support • More scalable • Central Name Node eliminated • Hundreds of billions of files/cluster – over a billion/node • File creation rates of over 1000/sec/node 22
  • MapR's Innovations for Hadoop • Speeds jobs by up to 4X • 50% - 400% faster than other Hadoop distributions depending on benchmark and hardware • Google and MapR demonstrated Terasort world record • http://www.mapr.com/mapr-google • How did we do it? • Lots of C/C++ to avoid Java overhead • Raw disk IO • Application-level NIC bonding • Numerous other optimizations in key components 23
  • MapR in the Cloud • Available as a service with Google Compute Engine • Available as a service with Amazon Elastic MapReduce (EMR) – http://aws.amazon.com/elasticmapreduce/mapr 24
  • Three Editions • All Hadoop API compatible, majority open source components, full Hadoop stack • M3 – Faster, easier to use, better integration • M5 – Improving reliability and dependability • M7 – HBase APIs on a more performant, scalable, and dependable platform (MapR Data Platform) 25
  • Questions? • What problems can Drill solve for you? • Where does it fit in the organization? • Which data sources and BI tools are important to you? 26
  • References • Google's Dremel – http://research.google.com/pubs/pub36632.html • Google's BigQuery – https://developers.google.com/bigquery/docs/query-reference • Microsoft's Dryad – Distributed execution engine – http://research.microsoft.com/en-us/projects/dryad/ • MIT's C-Store – a columnar database – http://db.csail.mit.edu/projects/cstore/ • Google's Protobufs – https://developers.google.com/protocol-buffers/docs/proto • How Apache projects work – http://www.apache.org/foundation/how-it-works.html 27
  • Get Involved! • Download these slides – http://www.mapr.com/company/events/njh-2-19-2013 • Apache Drill Project Information – http://www.mapr.com/drill – http://incubator.apache.org/drill – Join the mailing list and help: drill-dev-subscribe@incubator.apache.org • Join MapR – jobs@mapr.com 28
  • APPENDIX 29
  • How Does Impala Fit In? • Impala Strengths – Beta currently available – Easy install and setup on top of Cloudera – Faster than Hive on some queries – SQL-like query language • Questions – Open Source 'Lite' – Doesn't support RDBMS or other NoSQLs (beyond Hadoop/HBase) – Early row materialization increases footprint and reduces performance – Limited file format support – Query results must fit in memory! – Rigid schema is required – No support for nested data – Compound APIs restrict optimizer progression – SQL-like (not SQL) • Many important features are "coming soon". Architectural foundation is constrained. No community development. 30
  • Why Not Leverage MapReduce? • Scheduling Model – Coarse resource model reduces hardware utilization – Acquisition of resources typically takes hundreds of milliseconds to seconds • Barriers – Map completion required before shuffle/reduce commencement – All maps must complete before reduce can start – In chained jobs, one job must finish entirely before the next one can start • Persistence and Recoverability – Data is persisted to disk between each barrier – Serialization and deserialization are required between execution phases 31