
  1. Hive New Features and API (Facebook Hive Team, March 2010)
  2. JDBC/ODBC and CTAS
  3. Hive ODBC Driver
     » Architecture:
       Client/DriverManager → (local call, dynamic libraries)
       → unixODBC ( + hiveclient ( + thriftclient ( → (network socket)
       → HiveServer (in Java) → (local call) → Hive + Hadoop
     » unixODBC is not part of Hive open source, so you need to build it yourself.
     » Notes:
       › Facebook uses both 32-bit and 64-bit architectures
       › Thrift has to be r790732
       › Boost libraries are required
       › Linking with a 3rd-party Driver Manager
  4. Facebook Use Case
     » Hive integration with MicroStrategy 8.1.2 (HIVE-187) and 9.0.1 (HIVE-1101)
       › FreeForm SQL (reports generated from user-input queries)
       › Reports generated daily
     » All servers (MSTR IS server, HiveServer) are running on Linux
       › ODBC driver needs to be 32-bit
  5. Hive JDBC
     » Embedded mode:
       › jdbc:hive://
     » Client/server mode:
       › jdbc:hive://host:port/dbname
       › host:port is where the Hive server is listening
       › Architecture is similar to ODBC
  6. Create Table As Select (CTAS)
     » New feature in branch 0.5
     » Example:
       CREATE TABLE T STORED AS TEXTFILE
       AS SELECT a+1 a1, concat(b,c,d) b2 FROM S WHERE …
       Resulting schema: T (a1 double, b2 string)
     » The create-clause can take all table properties except external or partitioned tables (on roadmap)
     » Atomicity: T will not be created if the SELECT statement has an error
  7. Join Strategies
  8. Left Semi Join
     » Implements IN/EXISTS subquery semantics:
       SELECT A.* FROM A
       WHERE A.KEY IN (SELECT B.KEY FROM B WHERE B.VALUE > 100);
       is equivalent to:
       SELECT A.* FROM A LEFT SEMI JOIN B ON (A.KEY = B.KEY AND B.VALUE > 100);
     » Optimizations:
       › map-side group-by to reduce data flowing to reducers
       › early exit on the first match in the join
  9. Map Join Implementation
       SELECT /*+ MAPJOIN(a,c) */ a.*, b.*, c.*
       FROM a JOIN b ON a.key = b.key
       JOIN c ON a.key = c.key;
     (Diagram: small tables a and c are serialized to files; each mapper scans a split of big table b and receives a full copy of those files.)
     1. Spawn mappers based on the big table
     2. All files of all small tables are replicated onto each mapper
  10. Bucket Map Join
      set hive.optimize.bucketmapjoin = true;
      1. Works together with map join
      2. All join tables are bucketized, and the big table's bucket count must be a multiple of each small table's bucket count
      3. Bucket columns == join columns
  11. Bucket Map Join Implementation
       SELECT /*+ MAPJOIN(a,c) */ a.*, b.*, c.*
       FROM a JOIN b ON a.key = b.key
       JOIN c ON a.key = c.key;
      Tables a, b, c are all bucketized by 'key'; a has 2 buckets, b has 2, and c has 1.
      (Diagram: each mapper scans one bucket of big table b and receives only the matching buckets of a and c.)
      1. Spawn mappers based on the big table
      2. Only matching buckets of all small tables are replicated onto each mapper
      Normally in production there will be thousands of buckets!
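The conditions above can be sketched in HiveQL. This is an illustrative setup, not from the deck: the table names, column names, and bucket counts are assumptions; the CLUSTERED BY clause, the bucketmapjoin setting, and the MAPJOIN hint are the actual mechanism.

```sql
-- Illustrative tables: the big table's bucket count (4) is a
-- multiple of the small table's (2), and both bucket on the join key.
CREATE TABLE big_t   (key INT, val STRING)
  CLUSTERED BY (key) INTO 4 BUCKETS;
CREATE TABLE small_t (key INT, val STRING)
  CLUSTERED BY (key) INTO 2 BUCKETS;

SET hive.optimize.bucketmapjoin = true;

-- Each mapper over a bucket of big_t loads only the matching bucket of small_t
SELECT /*+ MAPJOIN(s) */ b.key, b.val, s.val
FROM big_t b JOIN small_t s ON b.key = s.key;
```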
  12. Sort Merge Bucket Map Join
      set hive.optimize.bucketmapjoin = true;
      set hive.optimize.bucketmapjoin.sortedmerge = true;
      set;
      1. Works together with bucket map join
      2. Bucket columns == join columns == sort columns
      3. If partitioned, only the big table may span multiple partitions; the query must restrict each small table to a single partition
  13. Sort Merge Bucket Map Join
      (Diagram: three sorted tables A, B, and C with sample rows such as (4, val_4), (20, val_20), (25, val_25), (1, val_1), (3, val_3), (5, val_5), (23, val_23); the sorted buckets are merge-joined by scanning them in step.)
      » Small tables are read on demand
      » Entire small tables are NOT held in memory
      » Can perform outer join
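A minimal DDL sketch of the layout sort-merge bucket map join needs. The table and column names are illustrative assumptions; the CLUSTERED BY … SORTED BY clause and the three settings from the previous slide are the actual mechanism.

```sql
-- Both sides bucketized AND sorted on the join column, same bucket count
CREATE TABLE big_s   (key INT, val STRING)
  CLUSTERED BY (key) SORTED BY (key) INTO 4 BUCKETS;
CREATE TABLE small_s (key INT, val STRING)
  CLUSTERED BY (key) SORTED BY (key) INTO 4 BUCKETS;

SET hive.optimize.bucketmapjoin = true;
SET hive.optimize.bucketmapjoin.sortedmerge = true;
SET;

-- Sorted buckets are merge-joined; small_s is streamed, not held in memory
SELECT /*+ MAPJOIN(s) */ b.key, b.val, s.val
FROM big_s b JOIN small_s s ON b.key = s.key;
```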
  14. Skew Join
      » Join bottlenecked on the reducer that receives the skewed key
      set hive.optimize.skewjoin = true;
      set hive.skewjoin.key = skew_key_threshold;
  15. Skew Join
      (Diagram: Job 1 runs the normal reduce-side join of tables A and B; rows carrying the skewed key K1 are not joined there but written to HDFS files a-K1 and b-K1. Job 2 then map-joins a-K1 with b-K1, and its output is merged with Job 1's to form the final results.)
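In HiveQL the feature is driven entirely by the two settings shown on slide 14; the query itself is unchanged. The table names and the threshold value below are illustrative assumptions:

```sql
SET hive.optimize.skewjoin = true;
-- keys occurring more than this many times are treated as skewed
SET hive.skewjoin.key = 100000;

-- An ordinary join; rows with skewed keys are spilled to HDFS by the
-- first job and handled by a follow-up map-join job automatically.
SELECT a.*, b.*
FROM a JOIN b ON a.key = b.key;
```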
  16. Future Work
      » Skew join with a replication algorithm
      » Memory footprint optimization
  17. Views, HBase Integration
  18. CREATE VIEW Syntax
      CREATE VIEW [IF NOT EXISTS] view_name
        [ (column_name [COMMENT column_comment], … ) ]
        [COMMENT view_comment]
      AS SELECT … [ ORDER BY … LIMIT … ]

      -- example
      CREATE VIEW pokebaz(baz COMMENT 'this column used to be bar')
      COMMENT 'views are good for layering on renaming'
      AS SELECT bar FROM pokes;
  19. View Features
      » Other commands
        › SHOW TABLES: views show up too
        › DESCRIBE: see view column descriptions
        › DESCRIBE EXTENDED: retrieve view definition
      » Enhancements on the way soon
        › Dependency management (e.g. CASCADE/RESTRICT)
        › Partition awareness
      » Enhancements (long term)
        › Updatable views
        › Materialized views
  20. HBase Storage Handler
      CREATE TABLE users(
        userid int, name string, email string, notes string)
      STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
      WITH SERDEPROPERTIES (
        "hbase.columns.mapping" = "small:name,small:email,large:notes");
  21. HBase Storage Handler Features
      » Commands supported
        › CREATE EXTERNAL TABLE: register an existing HTable
        › SELECT: join, group by, union, etc., over multiple HBase tables, or mixing with native Hive tables
        › INSERT: from any Hive query
      » Enhancements needed (feedback on priority welcome)
        › More flexible column mapping, ALTER TABLE
        › Timestamp read/write/restrict
        › Filter pushdown
        › Partition support
        › Write atomicity
  22. UDF, UDAF and UDTF
  23. User-Defined Functions (UDF)
      » 1 input to 1 output
      » Typically used in SELECT
        › SELECT concat(first, ' ', last) AS full_name …
      » See the Hive language wiki for the full list of built-in UDFs
      » Noteworthy features
        › Sometimes you want to cast
          • SELECT CAST(5.0/2.0 AS INT) …
        › Conditional functions
          • SELECT IF(boolean, if_true, if_not_true) …
  24. User-Defined Aggregate Functions (UDAF)
      » N inputs to 1 output
      » Typically used with GROUP BY
        › SELECT count(1) FROM … GROUP BY age
        › SELECT count(DISTINCT first_name) … GROUP BY last_name
        › sum(), avg(), min(), max()
      » For skew
        › set hive.groupby.skewindata = true;
        › set = <some lower value>;
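The two skew settings can be applied to an ordinary aggregation. Everything here except the two SET lines and the built-in count() is an illustrative assumption (table, columns, threshold value):

```sql
-- Rewrites the GROUP BY into two MR jobs: the first partially aggregates
-- randomly distributed data, the second merges by the group key.
SET hive.groupby.skewindata = true;
-- flush the map-side hash aggregation earlier than the default
SET = 0.3;

SELECT age, count(1) AS cnt
FROM users
GROUP BY age;
```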
  25. User-Defined Table-Generating Functions (UDTF)
      » 1 input to N outputs
      » explode(Array<?> arg)
        › Converts an array into multiple rows, with one element per row
      » Transform-like syntax
        › SELECT udtf(col0, col1, …) AS colAlias FROM srcTable
      » Lateral view syntax
        › … FROM baseTable LATERAL VIEW udtf(col0, col1, …) tableAlias AS colAlias
  26. UDTF Using Transform Syntax
      » SELECT explode(group_ids) AS group_id FROM src
      (Diagram: input table src with an array column group_ids, and the exploded single-column output.)
  27. UDTF Using Lateral View Syntax
      » SELECT src.*, myTable.* FROM src LATERAL VIEW explode(group_ids) myTable AS group_id
      (Diagram: table src with its group_ids array column.)
  28. UDTF Using Lateral View Syntax
      (Diagram: explode(group_ids) myTable AS group_id produces group_id values 1, 2, 3, which are joined back to the originating src rows to form the result.)
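A worked version of the two syntaxes above, on a hypothetical src table whose group_ids column holds the array [1, 2, 3] (the table contents and the user_id column are assumptions; the explode() and LATERAL VIEW syntax is from the slides):

```sql
-- Transform-like syntax: only the generated column can be selected
SELECT explode(group_ids) AS group_id FROM src;
-- one output row per array element: 1, 2, 3

-- Lateral view syntax: generated rows are joined back to the source row,
-- so ordinary src columns may appear alongside the generated column
SELECT src.user_id, t.group_id
FROM src LATERAL VIEW explode(group_ids) t AS group_id;
```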
  29. SerDe – Serialization/Deserialization
  30. SerDe Examples
      » CREATE TABLE mylog (
          user_id BIGINT, page_url STRING, unix_time INT)
        ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
      » CREATE TABLE mylog_rc (
          user_id BIGINT, page_url STRING, unix_time INT)
        ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe'
        STORED AS RCFILE;
  31. SerDe
      » SerDe is short for serialization/deserialization. It controls the format of a row.
      » Serialized format:
        › Delimited format (tab, comma, ctrl-a, …)
        › Thrift Protocols
        › ProtocolBuffer*
      » Deserialized (in-memory) format:
        › Java Integer/String/ArrayList/HashMap
        › Hadoop Writable classes
        › User-defined Java classes (Thrift, ProtocolBuffer*)
      » * ProtocolBuffer support not available yet.
  32. Where is SerDe?
      (Architecture diagram: data flows from a file on HDFS through the FileFormat / Hadoop serialization layer into Writables, e.g. Text('imp 1.0 3 54') or BytesWritable(\x3F\x64\x72\x00); the SerDe deserializes each Writable into a hierarchical object that Hive operators in the mapper and reducer access through an ObjectInspector. The hierarchical object may be a standard object (ArrayList for struct/array, HashMap for map), a Java object of a user class, or a LazyObject deserialized on demand; user scripts read and write the serialized stream directly.)
  33. Object Inspector
      (Diagram: the SerDe deserializes a Writable, e.g. Text('a=av:b=bv 23 1:2=4:5 abcd'), into a hierarchical object such as
      List( HashMap("a" → "av", "b" → "bv"), 23, List(List(1,null), List(2,4), List(5,null)), "abcd" ),
      matching a Java class like
      class HO { HashMap<String, String> a; Integer b; List<ClassC> c; String d; }
      class ClassC { Integer a; Integer b; }
      The SerDe also exposes an ObjectInspector, which carries the TypeInfo (struct, map, list, int, string) and provides accessors such as getStructField, getMapValue, getFieldOI, and getMapValueOI to navigate the object.)
  34. When to Add a New SerDe
      » The user has data in a special serialized format not yet supported by Hive, and does not want to convert the data before loading it into Hive.
      » The user has a more efficient way of serializing the data on disk.
  35. How to Add a New SerDe for Text Data
      » Follow the example in contrib/src/java/org/apache/hadoop/hive/contrib/serde2/
      » RegexSerDe uses a user-provided regular expression to deserialize data.
      » CREATE TABLE apache_log(
          host STRING, identity STRING, user STRING, time STRING, request STRING,
          status STRING, size STRING, referer STRING, agent STRING)
        ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
        WITH SERDEPROPERTIES (
          "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?",
          "output.format.string" = "%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s")
        STORED AS TEXTFILE;
  36. How to Add a New SerDe for Binary Data
      » Follow the examples in:
        › contrib/src/java/org/apache/hadoop/hive/contrib/serde2/thrift (HIVE-706)
        › serde/src/java/org/apache/hadoop/hive/serde2/binarysortable
      » CREATE TABLE mythrift_table
        ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.thrift.ThriftSerDe'
        WITH SERDEPROPERTIES (
          "serialization.class" = "com.facebook.serde.tprofiles.full",
          "serialization.format" = "com.facebook.thrift.protocol.TBinaryProtocol");
      » NOTE: Column information is provided by the SerDe class.
  37. Q&A