HW09: Rethinking the Data Warehouse with Hadoop and Hive


Speaker notes:
  • Cost of training people is high – have to reduce cost by making system easy to use.
  • Why Hive? Petabytes of structured data; a user base familiar with SQL and Python/Perl/PHP. Commercial warehousing software does not scale, is very expensive and inflexible, and is closed source and not programmable from Python/Perl/PHP. Solution: an SQL layer on top of scalable storage and map-reduce (Hadoop). Openness: use any data format, embed any programming language.
  • Nomenclature: Core switch and Top of Rack
  • 1 Gbit connectivity within a rack, 100 Mbit across racks? Are all disks 7200 RPM SATA?

    1. 1. Rethinking Data Warehousing & Analytics Ashish Thusoo, Facebook Data Infrastructure Team
    2. 2. Why Another Data Warehousing System? <ul><li>Data, data and more data </li></ul><ul><ul><li>200GB per day in March 2008 </li></ul></ul><ul><ul><li>2+TB(compressed) raw data per day in April 2009 </li></ul></ul><ul><ul><li>4+TB(compressed) raw data per day today </li></ul></ul>
    3. 4. Trends Leading to More Data <ul><li>Free or low cost of user services </li></ul><ul><li>Realization that more insights are derived from simple algorithms on more data </li></ul>
    4. 5. Deficiencies of Existing Technologies <ul><li>Cost of analysis and storage on proprietary systems does not support the trend towards more data </li></ul><ul><li>Closed and proprietary systems </li></ul><ul><li>Limited scalability does not support the trend towards more data </li></ul>
    5. 6. Hadoop Advantages <ul><li>Pros </li></ul><ul><ul><li>Superior in availability/scalability/manageability despite lower single node performance </li></ul></ul><ul><ul><li>Open system </li></ul></ul><ul><ul><li>Scalable costs </li></ul></ul><ul><li>Cons: Programmability and Metadata </li></ul><ul><ul><li>Map-reduce hard to program (users know sql/bash/python/perl) </li></ul></ul><ul><ul><li>Need to publish data in well known schemas </li></ul></ul><ul><li>Solution: HIVE </li></ul>
    6. 7. What is HIVE? <ul><li>A system for managing and querying structured data built on top of Hadoop </li></ul><ul><li>Components </li></ul><ul><ul><li>Map-Reduce for execution </li></ul></ul><ul><ul><li>HDFS for storage </li></ul></ul><ul><ul><li>Metadata in an RDBMS </li></ul></ul>
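To make the three components concrete, here is a minimal HiveQL sketch (the table and column names are invented for illustration, and running it assumes a working Hive deployment). The `CREATE TABLE` statement records the schema in the metastore RDBMS and maps the table to a directory under the HDFS warehouse path; the query then compiles down to map-reduce jobs.

```sql
-- Hypothetical table: name, columns, and partition key are illustrative.
-- Schema goes into the metastore; data lives in an HDFS directory.
CREATE TABLE page_views (
  user_id BIGINT,
  url     STRING
)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001';

-- This SELECT is compiled into a map-reduce job over the HDFS files:
SELECT dt, COUNT(1) FROM page_views GROUP BY dt;
```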
    7. 8. Hive: Simplifying Hadoop – New Technology Familiar Interfaces <ul><li>hive> select key, count(1) from kv1 where key > 100 group by key; </li></ul><ul><li>vs. </li></ul><ul><li>$ cat > /tmp/reducer.sh </li></ul><ul><li>uniq -c | awk '{print $2&quot; &quot;$1}' </li></ul><ul><li>$ cat > /tmp/map.sh </li></ul><ul><li>awk -F '\001' '{if($1 > 100) print $1}' </li></ul><ul><li>$ bin/hadoop jar contrib/hadoop-0.19.2-dev-streaming.jar -input /user/hive/warehouse/kv1 -mapper map.sh -file /tmp/reducer.sh -file /tmp/map.sh -reducer reducer.sh -output /tmp/largekey -numReduceTasks 1 </li></ul><ul><li>$ bin/hadoop dfs -cat /tmp/largekey/part* </li></ul>
    8. 9. Hive: Open and Extensible <ul><li>Query your own formats and types with your own Serializer/Deserializers </li></ul><ul><li>Extend the SQL functionality through User Defined Functions </li></ul><ul><li>Do any non-SQL transformations through TRANSFORM operator that sends data from Hive to any user program/script </li></ul>
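The TRANSFORM operator mentioned above can be sketched as follows (the script name and table are hypothetical; the script receives tab-separated rows on stdin and emits rows on stdout, so any language works):

```sql
-- Illustrative only: my_extract.py and page_views are invented names.
-- Ship the script to the cluster, then stream rows through it.
ADD FILE my_extract.py;

SELECT TRANSFORM (user_id, url)
       USING 'python my_extract.py'
       AS (user_id, domain)
FROM page_views;
```

The same mechanism underlies UDFs and custom SerDes: Hive fixes the row-at-a-time contract and leaves the logic to user code.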
    9. 10. Hive: Smart Execution Plans for Performance <ul><li>Hash based Aggregations </li></ul><ul><li>Map-side Joins </li></ul><ul><li>Predicate Pushdown </li></ul><ul><li>Partition Pruning </li></ul><ul><li>Many more to come in the future </li></ul>
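Two of these optimizations can be seen directly in query text; the tables below are hypothetical. With `page_views` partitioned by `dt`, a predicate on `dt` lets Hive read only the matching partition directories (partition pruning) rather than scanning the whole table, and the `MAPJOIN` hint asks Hive to load the small table into memory in each mapper, avoiding a reduce phase:

```sql
-- Partition pruning: only the dt='2009-10-02' directory is scanned.
SELECT url, COUNT(1)
FROM page_views
WHERE dt = '2009-10-02'
GROUP BY url;

-- Map-side join: dim_users (assumed small) is held in mapper memory.
SELECT /*+ MAPJOIN(d) */ d.country, COUNT(1)
FROM page_views p JOIN dim_users d ON (p.user_id = d.user_id)
GROUP BY d.country;
```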
    10. 11. Interoperability <ul><li>JDBC and ODBC interfaces available </li></ul><ul><li>Integrations with some traditional SQL tools with some minor modifications </li></ul><ul><li>More improvements in future to support interoperability with existing front end tools </li></ul>
    11. 12. Information <ul><li>Available as a subproject of Hadoop </li></ul><ul><ul><li>http://wiki.apache.org/hadoop/Hive (wiki) </li></ul></ul><ul><ul><li>http://hadoop.apache.org/hive (home page) </li></ul></ul><ul><ul><li>http://svn.apache.org/repos/asf/hadoop/hive (SVN repo) </li></ul></ul><ul><ul><li>##hive (IRC) </li></ul></ul><ul><ul><li>Works with hadoop-0.17, 0.18, 0.19, 0.20 </li></ul></ul><ul><li>Release 0.4.0 is coming in the next few days </li></ul><ul><li>Mailing Lists: </li></ul><ul><ul><li>hive-{user,dev,commits}@hadoop.apache.org </li></ul></ul>
    12. 13. Data Warehousing @ Facebook using Hive & Hadoop
    13. 14. Data Flow Architecture at Facebook [Diagram of components: Web Servers, Scribe MidTier, Filers, Scribe-Hadoop Cluster, Production Hive-Hadoop Cluster, Adhoc Hive-Hadoop Cluster (kept in sync via Hive replication), Oracle RAC, Federated MySQL]
    14. 15. Looks like this .. [Cluster diagram: racks of nodes, each node with local disks; 1 Gigabit links within a rack, 4 Gigabit rack uplinks; Node = DataNode + Map-Reduce]
    15. 16. Hadoop & Hive Cluster @ Facebook <ul><li>Hadoop/Hive Warehouse – the new generation </li></ul><ul><ul><li>4800 cores, Storage capacity of 5.5 PetaBytes </li></ul></ul><ul><ul><li>12 TB per node </li></ul></ul><ul><ul><li>Two level network topology </li></ul></ul><ul><ul><ul><li>1 Gbit/sec from node to rack switch </li></ul></ul></ul><ul><ul><ul><li>4 Gbit/sec to top level rack switch </li></ul></ul></ul>
    16. 17. Hive & Hadoop Usage @ Facebook <ul><li>Statistics per day: </li></ul><ul><ul><li>4 TB of compressed new data added per day </li></ul></ul><ul><ul><li>135 TB of compressed data scanned per day </li></ul></ul><ul><ul><li>7500+ Hive jobs per day </li></ul></ul><ul><ul><li>80K compute hours per day </li></ul></ul><ul><li>Hive simplifies Hadoop: </li></ul><ul><ul><li>New engineers go through a Hive training session </li></ul></ul><ul><ul><li>~200 people/month run jobs on Hadoop/Hive </li></ul></ul><ul><ul><li>Analysts (non-engineers) use Hadoop through Hive </li></ul></ul><ul><ul><li>95% of jobs are Hive jobs </li></ul></ul>
    17. 18. Hive & Hadoop Usage @ Facebook <ul><li>Types of Applications: </li></ul><ul><ul><li>Reporting </li></ul></ul><ul><ul><ul><li>Eg: Daily/Weekly aggregations of impression/click counts </li></ul></ul></ul><ul><ul><ul><li>Measures of user engagement </li></ul></ul></ul><ul><ul><ul><li>Microstrategy dashboards </li></ul></ul></ul><ul><ul><li>Ad hoc Analysis </li></ul></ul><ul><ul><ul><li>Eg: how many group admins broken down by state/country </li></ul></ul></ul><ul><ul><li>Machine Learning (Assembling training data) </li></ul></ul><ul><ul><ul><li>Ad Optimization </li></ul></ul></ul><ul><ul><ul><li>Eg: User Engagement as a function of user attributes </li></ul></ul></ul><ul><ul><li>Many others </li></ul></ul>
    18. 19. Facebook’s contributions… <ul><li>A lot of significant contributions: </li></ul><ul><ul><li>Hive </li></ul></ul><ul><ul><li>HDFS features </li></ul></ul><ul><ul><li>Scheduler work </li></ul></ul><ul><ul><li>Etc… </li></ul></ul><ul><li>Talks by Dhruba Borthakur and Zheng Shao in the development track for more information on these projects </li></ul>