Hive User Meeting March 2010 - Hive Team
New features and APIs in Hive

Hive User Meeting March 2010 - Hive Team Presentation Transcript

  • 1. Hive New Features and API. Facebook Hive Team, March 2010
  • 2. JDBC/ODBC and CTAS
  • 3. Hive ODBC Driver
    • Architecture:
      • Client/DriverManager → dynamic libraries (local call)
      • unixODBC (libodbchive.so) + hiveclient (libhiveclient.so) + thriftclient (libthrift.so) → network socket
      • HiveServer (in Java) → local call
      • Hive + Hadoop
    • unixODBC is not part of the Hive open source distribution, so you need to build it yourself.
      • 32-bit/64-bit architecture
      • Thrift must be revision r790732
      • Boost libraries
      • Linking with a 3rd-party driver manager.
    Facebook
  • 4. Facebook Use Case
    • Hive integration with MicroStrategy 8.1.2 (HIVE-187) and 9.0.1. (HIVE-1101)
        • FreeForm SQL (reports generated from user-input queries)
        • Reports generated daily.
    • All servers (MSTR IS server, HiveServer) are running on Linux.
        • ODBC driver needs to be 32 bits.
    Facebook
  • 5. Hive JDBC
    • Embedded mode:
      • jdbc:hive://
    • Client/server mode:
      • jdbc:hive://host:port/dbname
      • host:port is where the Hive server is listening.
      • Architecture is similar to ODBC.
    Facebook
  • 6. Create table as select (CTAS)
      • New feature in branch 0.5.
      • E.g.,
      • CREATE TABLE T STORED AS TEXTFILE AS
      • SELECT a+1 a1, concat(b,c,d) b2
      • FROM S
      • WHERE …
      • Resulting schema:
      • T (a1 double, b2 string)
      • The CREATE clause can take all table properties except EXTERNAL and PARTITIONED BY (support is on the roadmap).
      • Atomicity: T will not be created if the select statement has an error.
    Facebook
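    For concreteness, a minimal sketch of slide 6's CTAS, assuming a hypothetical source table S whose column names and types are chosen to match the resulting schema shown above:

    ```sql
    -- hypothetical source table (names and types assumed for illustration)
    CREATE TABLE S (a DOUBLE, b STRING, c STRING, d STRING);

    -- CTAS: T's schema is derived from the SELECT list
    CREATE TABLE T STORED AS TEXTFILE AS
    SELECT a + 1 AS a1, concat(b, c, d) AS b2
    FROM S
    WHERE a > 0;

    DESCRIBE T;   -- a1 double, b2 string
    ```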
  • 7. Join Strategies
  • 8. Left semi join
    • Implementing IN/EXISTS subquery semantics:
      • SELECT A.*
      • FROM A WHERE A.KEY IN
      • (SELECT B.KEY FROM B WHERE B.VALUE > 100);
      • Is equivalent to:
      • SELECT A.*
      • FROM A LEFT SEMI JOIN B
      • ON (A.KEY = B.KEY and B.VALUE > 100);
    • Optimizations:
      • map-side groupby to reduce data flowing to reducers
      • early exit if match in join.
    Facebook
  • 9. Map Join Implementation
    SELECT /*+ MAPJOIN(a,c) */ a.*, b.*, c.*
    FROM a JOIN b ON a.key = b.key JOIN c ON a.key = c.key;
    [Diagram: mappers 1-3 are spawned over splits of big table b; files a1 and a2 of table a and file c1 of table c are replicated onto every mapper]
    • Spawn mappers based on the big table
    • All files of all small tables are replicated onto each mapper
  • 10. Bucket Map Join
      • set hive.optimize.bucketmapjoin = true;
      • Work together with map join
      • All join tables are bucketized, and each small table's bucket count must evenly divide the big table's bucket count.
      • Bucket columns == Join columns
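    A sketch of DDL and a query that satisfy these conditions; table names, columns, and bucket counts are assumptions for illustration (the small tables' 2 and 1 buckets divide the big table's 4):

    ```sql
    -- hypothetical bucketed tables: bucket columns match the join column
    CREATE TABLE big   (key INT, value STRING) CLUSTERED BY (key) INTO 4 BUCKETS;
    CREATE TABLE dim_a (key INT, name  STRING) CLUSTERED BY (key) INTO 2 BUCKETS;
    CREATE TABLE dim_c (key INT, descr STRING) CLUSTERED BY (key) INTO 1 BUCKETS;

    SET hive.optimize.bucketmapjoin = true;

    SELECT /*+ MAPJOIN(a, c) */ b.*, a.name, c.descr
    FROM big b
    JOIN dim_a a ON b.key = a.key
    JOIN dim_c c ON b.key = c.key;
    ```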
  • 11. Bucket Map Join Implementation
    SELECT /*+ MAPJOIN(a,c) */ a.*, b.*, c.*
    FROM a JOIN b ON a.key = b.key JOIN c ON a.key = c.key;
    Tables a, b, and c are all bucketized by 'key'; a has 2 buckets, b has 2, and c has 1. Normally in production there will be thousands of buckets!
    [Diagram: each mapper reads one bucket of big table b and receives only the matching buckets of a and c, e.g. bucket b2 is joined with a2 and c1]
    • Spawn mappers based on the big table
    • Only matching buckets of all small tables are replicated onto each mapper
  • 12. Sort Merge Bucket Map Join
      • set hive.optimize.bucketmapjoin = true; set hive.optimize.bucketmapjoin.sortedmerge = true; set hive.input.format=org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
      • Work together with bucket map join
      • Bucket columns == Join columns == sort columns
      • If the tables are partitioned, only the big table may span multiple partitions; each small table must be restricted to a single partition by the query.
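    A sketch of tables and settings meeting these conditions; names and bucket counts are assumed for illustration. The tables must be both bucketed and sorted on the join column:

    ```sql
    -- hypothetical tables bucketed AND sorted on the join column
    CREATE TABLE big_s (key INT, value STRING)
    CLUSTERED BY (key) SORTED BY (key) INTO 4 BUCKETS;
    CREATE TABLE small_s (key INT, name STRING)
    CLUSTERED BY (key) SORTED BY (key) INTO 2 BUCKETS;

    SET hive.optimize.bucketmapjoin = true;
    SET hive.optimize.bucketmapjoin.sortedmerge = true;
    SET hive.input.format = org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;

    SELECT /*+ MAPJOIN(s) */ b.*, s.name
    FROM big_s b JOIN small_s s ON b.key = s.key;
    ```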
  • 13. Sort Merge Bucket Map Join
    [Diagram: buckets of tables A, B, and C, each sorted by key (e.g. 1, 3, 4, 5, 20, 23, 25), are merged in a single streaming pass]
    • Small tables are read on demand, not held entirely in memory
    • Can perform outer join
    Facebook
  • 14. Skew Join
    • Join bottlenecked on the reducer that receives the skewed key
    • set hive.optimize.skewjoin = true; set hive.skewjoin.key = skew_key_threshold
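    A minimal usage sketch; the threshold value and table names are assumptions for illustration:

    ```sql
    -- keys with more rows than the threshold are treated as skewed
    SET hive.optimize.skewjoin = true;
    SET hive.skewjoin.key = 100000;   -- assumed threshold

    -- an ordinary join; skewed keys are deferred to a follow-up map join automatically
    SELECT a.*, b.*
    FROM A a JOIN B b ON a.key = b.key;
    ```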
  • 15. Skew Join
    [Diagram: Job 1 joins the non-skewed keys K2 and K3 of tables A and B in reducers 1 and 2, while rows with the skewed key K1 are written to HDFS files a-K1 and b-K1; Job 2 map-joins a-K1 with b-K1; the outputs of both jobs are merged into the final result]
  • 16. Future Work
    • Skew Join with a Replication Algorithm
    • Memory Footprint Optimization
  • 17. Views, HBase Integration
  • 18. CREATE VIEW Syntax
      • CREATE VIEW [IF NOT EXISTS] view_name
      • [ (column_name [COMMENT column_comment], … ) ]
      • [COMMENT view_comment]
      • AS SELECT …
      • [ ORDER BY … LIMIT … ]
      • -- example
      • CREATE VIEW pokebaz(baz COMMENT 'this column used to be bar')
      • COMMENT 'views are good for layering on renaming'
      • AS SELECT bar FROM pokes;
    Facebook
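    Once created, the example view above is queried like an ordinary table; a small usage sketch:

    ```sql
    -- query the view exactly as if it were a table
    SELECT baz FROM pokebaz WHERE baz IS NOT NULL;

    -- retrieve the stored view definition
    DESCRIBE EXTENDED pokebaz;
    ```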
  • 19. View Features
    • Other commands
      • SHOW TABLES: views show up too
      • DESCRIBE: see view column descriptions
      • DESCRIBE EXTENDED: retrieve view definition
    • Enhancements on the way soon
      • Dependency management (e.g. CASCADE/RESTRICT)
      • Partition awareness
    • Enhancements (long term)
      • Updatable views
      • Materialized views
    Facebook
  • 20. HBase Storage Handler
    • CREATE TABLE users(
    • userid int, name string, email string, notes string)
    • STORED BY
    • 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    • WITH SERDEPROPERTIES (
    • "hbase.columns.mapping" =
    • "small:name,small:email,large:notes");
    Facebook
  • 21. HBase Storage Handler Features
    • Commands supported
      • CREATE EXTERNAL TABLE: register existing HTable
      • SELECT: join, group by, union, etc. over multiple HBase tables, or mixing with native Hive tables
      • INSERT: from any Hive query
    • Enhancements Needed (feedback on priority welcome)
      • More flexible column mapping, ALTER TABLE
      • Timestamp read/write/restrict
      • Filter pushdown
      • Partition support
      • Write atomicity
    Facebook
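    A sketch of registering an existing HTable and loading it from a Hive query; the HBase table name, columns, and mapping are assumptions. Following the convention of the slide 20 example, the first Hive column maps to the HBase row key:

    ```sql
    -- hypothetical: register an existing HTable named 'users_hbase'
    CREATE EXTERNAL TABLE hbase_users (userid INT, name STRING, email STRING)
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    WITH SERDEPROPERTIES ("hbase.columns.mapping" = "small:name,small:email")
    TBLPROPERTIES ("hbase.table.name" = "users_hbase");

    -- populate from any Hive query
    INSERT OVERWRITE TABLE hbase_users
    SELECT userid, name, email FROM users;
    ```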
  • 22. UDF, UDAF and UDTF
  • 23. User-Defined Functions (UDF)
    • 1 input to 1 output
    • Typically used in select
      • SELECT concat(first, ' ', last) AS full_name…
    • See Hive language wiki for full list of built-in UDF’s
      • http://wiki.apache.org/hadoop/Hive/LanguageManual
    • Noteworthy features
      • Sometimes you want to cast
        • SELECT CAST(5.0/2.0 AS INT)…
      • Conditional functions
        • SELECT IF(boolean, if_true, if_not_true)…
    Facebook
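    Beyond the built-ins, custom UDFs can be registered per session; a sketch with hypothetical jar, class, table, and column names:

    ```sql
    -- hypothetical jar and class; registers a custom UDF for this session
    ADD JAR /tmp/my_udfs.jar;
    CREATE TEMPORARY FUNCTION my_lower AS 'com.example.hive.udf.MyLower';

    SELECT my_lower(first) FROM people;
    ```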
  • 24. User Defined Aggregate Functions (UDAF)
    • N inputs to 1 output
    • Typically used with GROUP BY
      • SELECT count(1) FROM … GROUP BY age
      • SELECT count(DISTINCT first_name) GROUP BY last_name…
      • sum(), avg(), min(), max()
    • For skew
      • set hive.groupby.skewindata = true;
      • set hive.map.aggr.hash.percentmemory = <some lower value>
    Facebook
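    Putting the pieces together, a sketch of a grouped aggregation with the skew settings above applied; table, columns, and the memory fraction are assumptions:

    ```sql
    -- hypothetical table; group-by with skew handling enabled
    SET hive.groupby.skewindata = true;
    SET hive.map.aggr.hash.percentmemory = 0.3;  -- assumed value

    SELECT last_name, count(DISTINCT first_name), max(age)
    FROM people
    GROUP BY last_name;
    ```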
  • 25. User Defined Table-Generating Functions (UDTF)
    • 1 input to N outputs
    • explode(Array<?> arg)
      • Converts an array into multiple rows, with one element per row
    • Transform-like syntax
      • SELECT udtf(col0, col1, …) AS colAlias FROM srcTable
    • Lateral view syntax
      • … FROM baseTable LATERAL VIEW udtf(col0, col1…) tableAlias AS colAlias
    • Also see: http://bit.ly/hive-udtf
    Facebook
  • 26. UDTF using Transform Syntax
    • SELECT explode(group_ids) AS group_id FROM src
    Facebook
    [Diagram: table src with array column group_ids, and the exploded output with one row per array element]
  • 27. UDTF using Lateral View Syntax
    • SELECT src.*, myTable.* FROM src LATERAL VIEW explode(group_ids) myTable AS group_id
    Facebook
    [Diagram: table src and the lateral-view result pairing each src row with its exploded group_id values]
  • 28. UDTF using Lateral View Syntax
    [Diagram: explode(group_ids) myTable AS group_id produces output rows that are joined back to their input rows in src, yielding group_id values 1, 2, 3 in the result]
    Facebook
  • 29. SerDe – Serialization/Deserialization
  • 30. SerDe Examples
    • CREATE TABLE mylog (
    • user_id BIGINT,
    • page_url STRING,
    • unix_time INT)
    • ROW FORMAT DELIMITED FIELDS TERMINATED BY ' ' ;
    • CREATE table mylog_rc (
    • user_id BIGINT,
    • page_url STRING,
    • unix_time INT)
    • ROW FORMAT SERDE
    • 'org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe'
    • STORED AS RCFILE;
    Facebook
  • 31. SerDe
    • SerDe is short for serialization/deserialization. It controls the format of a row.
    • Serialized format:
      • Delimited format (tab, comma, ctrl-a …)
      • Thrift Protocols
      • ProtocolBuffer*
    • Deserialized (in-memory) format:
      • Java Integer/String/ArrayList/HashMap
      • Hadoop Writable classes
      • User-defined Java Classes (Thrift, ProtocolBuffer*)
    • * ProtocolBuffer support not available yet.
    Facebook
  • 32. Where is SerDe?
    [Diagram: the serialization/deserialization path through a Hive job.
    File on HDFS → FileFormat / Hadoop Serialization → Writable (e.g. Text('imp 1.0 3 54'), BytesWritable(x3Fx64x72x00)) → SerDe → Hierarchical Object → Hive Operators, and the same path in reverse when writing; the pattern repeats for map output files, user scripts, and the mapper/reducer boundary.
    A Hierarchical Object can be a Java object of a user-defined class, a standard object (ArrayList for struct and array, HashMap for map), or a LazyObject that is deserialized lazily; an ObjectInspector describes its layout.]
  • 33. Object Inspector
    [Diagram: a SerDe deserializes Writables such as Text(' a=av:b=bv 23 1:2=4:5 abcd ') or BytesWritable(x3Fx64x72x00) into a Hierarchical Object, e.g.
    List(HashMap("a" → "av", "b" → "bv"), 23, List(List(1,null), List(2,4), List(5,null)), "abcd")
    corresponding to
    class HO { HashMap<String,String> a; Integer b; List<ClassC> c; String d; }
    class ClassC { Integer a; Integer b; }
    ObjectInspectors describe the structure (getType, plus getStructField/getFieldOI for structs and getMapValue/getMapValueOI for maps); serialize() with getOI runs the reverse direction.]
  • 34. When to add a new SerDe
    • User has data in a special serialized format not yet supported by Hive, and does not want to convert it before loading into Hive.
    • User has a more efficient way of serializing the data on disk.
    Facebook
  • 35. How to add a new SerDe for text data
    • Follow the example in contrib/src/java/org/apache/hadoop/hive/contrib/serde2/RegexSerDe.java
    • RegexSerDe uses a user-provided regular expression to deserialize data.
    • CREATE TABLE apache_log(host STRING,
    • identity STRING, user STRING, time STRING, request STRING,
    • status STRING, size STRING, referer STRING, agent STRING)
    • ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
    • WITH SERDEPROPERTIES (
    • "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?",
    • "output.format.string" = "%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s")
    • STORED AS TEXTFILE;
    Facebook
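    Once the table is defined, the regex capture groups are queried as ordinary columns; a small usage sketch:

    ```sql
    -- hypothetical query over the parsed log table
    SELECT host, status, count(1) AS hits
    FROM apache_log
    GROUP BY host, status;
    ```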
  • 36. How to add a new SerDe for binary data
    • Follow the examples in contrib/src/java/org/apache/hadoop/hive/contrib/serde2/thrift (HIVE-706) and serde/src/java/org/apache/hadoop/hive/serde2/binarysortable
    • CREATE TABLE mythrift_table
    • ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.thrift.ThriftSerDe'
    • WITH SERDEPROPERTIES (
    • "serialization.class" = "com.facebook.serde.tprofiles.full",
    • "serialization.format" = "com.facebook.thrift.protocol.TBinaryProtocol");
    • NOTE: Column information is provided by the SerDe class.
    Facebook
  • 37. Q & A Facebook