Xadoop - new approaches to data analytics

Overview of our data analytics work, given to the Microsoft SQL Server team during their visit to the Systems Group, ETH Zurich.
    Xadoop - new approaches to data analytics: Presentation Transcript

    • Xadoop – new approaches to data analytics
      Lukas Blunschi, Maxim Grinev, Maria Grineva, Donald Kossmann, Georg Polzer, Kurt Stockinger (Credit Suisse)
      Systems Group, Dept. of Computer Science, ETH Zurich, Switzerland
    • Credit Suisse Project
      • Task: Analyze Oracle query logs for audit purposes
        • Log size: 6 TB of new data every 6 months
        • Typical query: who queried column A in table B in the second quarter of 2009
        • A few queries like this twice a year
      • Issues:
        • Storing logs in Oracle tables is slow => store them as XML files instead (a sample record is sketched below)
        • Queries are scan-intensive because of complex log processing (SQL parsing)
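
      To make the setting concrete, here is a sketch of one audit record, reconstructed from the field list in the Pig load statement later in the deck; the element values and the exact layout of the Credit Suisse logs are illustrative assumptions:

        <audit_record>
          <db_user>ALICE</db_user>
          <object_schema>APP</object_schema>
          <object_name>LOGON_INFO</object_name>
          <sql_text>SELECT * FROM LOGON_INFO WHERE ...</sql_text>
          <extended_timestamp>2010-03-04T10:00:43.775225</extended_timestamp>
          <!-- plus action, audit_type, session_id, os_user, user_host, ... -->
        </audit_record>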
    • Possible Solutions
      • Build a warehouse
        • Not cost effective for a few queries twice a year
      • Use Hadoop
        • Open source but proven software
        • Logs are already in files
        • Easy to implement the queries and to deploy
    • Hadoop Solution 1: Using Pig
      • Pig – a high-level data-processing language compiled to MapReduce
      • Advantages:
        • It is easy to develop in Pig
        • Extendable via User Defined Functions in Java
        • Widely used by Web companies (Twitter, etc.)
      • Disadvantages:
        • Have to write a format-specific data loader to parse XML (a skeleton is sketched after this list)
        • Restricted support for nested queries
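
      A minimal sketch of what such a loader involves, using Pig's LoadFunc API. This is a hypothetical skeleton, not the actual ch.ethz.xadoop.loader.XMLLoader (which the deck does not show), and the XML parsing itself is elided:

        package ch.ethz.xadoop.loader;

        import java.io.IOException;
        import java.util.Collections;
        import org.apache.hadoop.mapreduce.InputFormat;
        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.mapreduce.RecordReader;
        import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
        import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
        import org.apache.pig.LoadFunc;
        import org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit;
        import org.apache.pig.data.Tuple;
        import org.apache.pig.data.TupleFactory;

        // Hypothetical skeleton of a format-specific Pig loader for XML logs.
        public class XMLLoader extends LoadFunc {
            private RecordReader reader;

            @Override
            public void setLocation(String location, Job job) throws IOException {
                FileInputFormat.setInputPaths(job, location);
            }

            @Override
            public InputFormat getInputFormat() {
                // A real loader needs an InputFormat that splits on XML record
                // boundaries; line-based TextInputFormat is only a placeholder.
                return new TextInputFormat();
            }

            @Override
            public void prepareToRead(RecordReader reader, PigSplit split) {
                this.reader = reader;
            }

            @Override
            public Tuple getNext() throws IOException {
                try {
                    if (!reader.nextKeyValue()) {
                        return null;                      // end of input split
                    }
                    String record = reader.getCurrentValue().toString();
                    // Parsing `record` into one field per column of the load
                    // schema is omitted; we return the raw text as a 1-field tuple.
                    return TupleFactory.getInstance()
                            .newTuple(Collections.singletonList((Object) record));
                } catch (InterruptedException e) {
                    throw new IOException(e);
                }
            }
        }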
    • Hadoop Solution 1: Pig Example
      • Get the users who queried table “LOGON_INFO” after a given date, sorted by number of requests:
      • register ./pigxml.jar
      • define DATECOMP ch.ethz.xadoop.udf.DATECOMP();
      • define XMLLoader ch.ethz.xadoop.loader.XMLLoader();
      • A = load 'audit.xml' using XMLLoader() as (action, audit_type, comment_text, db_user, entry_id, instance_number, object_name, object_schema, os_process, os_user, return_code, scn, session_id, sql_bind, sql_text, terminal, user_host, extended_timestamp);
      • B = filter A by sql_text matches '.*LOGON_INFO.*' and DATECOMP((chararray)extended_timestamp, '2010-03-04T10:00:43.775225') > 0;
      • B1 = group B by db_user;
      • B2 = foreach B1 generate group, COUNT(B.sql_text) as num_of_queries;
      • B3 = order B2 by num_of_queries desc;
      • dump B3;
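
      The DATECOMP helper registered above compares two timestamp strings. A minimal sketch of what such a Pig UDF might look like (hypothetical; the real ch.ethz.xadoop.udf.DATECOMP is not shown in the deck), relying on the fact that these fixed-width ISO-8601 timestamps order lexicographically:

        package ch.ethz.xadoop.udf;

        import java.io.IOException;
        import org.apache.pig.EvalFunc;
        import org.apache.pig.data.Tuple;

        // Hypothetical sketch: returns <0, 0, or >0 as the first timestamp is
        // before, equal to, or after the second, as used in the filter above.
        public class DATECOMP extends EvalFunc<Integer> {
            @Override
            public Integer exec(Tuple input) throws IOException {
                if (input == null || input.size() < 2) {
                    return null;
                }
                String ts  = (String) input.get(0);
                String ref = (String) input.get(1);
                if (ts == null || ref == null) {
                    return null;
                }
                // Fixed-width ISO-8601 timestamps compare correctly as strings.
                return ts.compareTo(ref);
            }
        }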
    • Hadoop Solution 1: Experiments (job runtimes by input size and cluster size)

                       30 GB      60 GB      90 GB
          3 workers    19m 00s    40m 30s    59m 20s
          5 workers    11m 05s    26m 20s    38m 10s
    • Hadoop Solution 2: Using XQuery
      • Xadoop is an integration of XQuery (Zorba) and Hadoop:
        • Map and Reduce are implemented in XQuery
      • Advantages:
        • Don’t need to write a loader for XML input
        • XQuery is a powerful data processing and transformation language with support for UDFs
      • Disadvantages:
        • You have to think in terms of two programming models, MapReduce and XQuery; in practice, though, this combination turns out to be quite natural and useful
    • Hadoop Solution 2: Using XQuery
      • (: map: emit (db_user, 1) for each matching log record :)
      • declare function xadoop:map($record) {
      •   for $r in $record
      •   where fn:contains($r/sql_text, "LOGON_INFO")
      •     and xs:dateTime($r/extended_timestamp) > xs:dateTime("2000-03-04T00:00:00")
      •   return (<key>{$r/db_user}</key>, <value>1</value>)
      • };
      • (: reduce: count the values collected for each key :)
      • declare function xadoop:reduce($key, $num) {
      •   ($key, <value>{fn:count($num/value)}</value>)
      • };
    • Future Work: Vision
      • You cannot merge traditional OLAP and OLTP systems:
        • OLAP – pre-aggregated data with redundancy
        • OLTP – data tends to be normalized
      • There are two trends on the Web
        • Hadoop is often used for analytic processing instead of warehouses
        • Key-value store is used for OLTP
      • MapReduce and key-value stores are a good match (a sketch of the pattern follows this slide)
        • MapReduce takes raw operational data and does aggregation on-the-fly
        • Key-value store is a natural input for MapReduce
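
      A minimal Hadoop sketch of this pattern, under the assumption that the store hands MapReduce (db_user, sql_text) pairs; it mirrors the per-user aggregation of the Pig and XQuery examples above and is illustrative, not code from the project:

        import java.io.IOException;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.Reducer;

        // Aggregate raw operational records on the fly: count queries per user.
        public class QueriesPerUser {
            public static class CountMapper
                    extends Mapper<Text, Text, Text, IntWritable> {
                private static final IntWritable ONE = new IntWritable(1);
                @Override
                protected void map(Text dbUser, Text sqlText, Context ctx)
                        throws IOException, InterruptedException {
                    ctx.write(dbUser, ONE);       // one (user, 1) per logged query
                }
            }
            public static class SumReducer
                    extends Reducer<Text, IntWritable, Text, IntWritable> {
                @Override
                protected void reduce(Text dbUser, Iterable<IntWritable> ones,
                                      Context ctx)
                        throws IOException, InterruptedException {
                    int sum = 0;
                    for (IntWritable one : ones) {
                        sum += one.get();         // total queries for this user
                    }
                    ctx.write(dbUser, new IntWritable(sum));
                }
            }
        }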
    • Future work: Issues
      • Running Hadoop MapReduce over Cassandra key-value store:
        • “SQL/XQuery” over the Cassandra/BigTable data model, compiled to MapReduce
        • How to share resources (CPU, I/O) to support both transactional and analytical workloads over the same store
      • Real-time analytics:
        • From pull (batch) to push (online) processing models
        • Hadoop is slow but can be optimized (e.g., checkpointing into the main memory of another cloud machine instead of to local disk)