Nextag talk

Speaker notes:
  • Offline and near-real-time data processing, not online
  • Simple map-reduce is easy, but it can get complicated very quickly
  • Assume users know about Hadoop Streaming
  • Nomenclature: core switch and top-of-rack switch
  • Compare to a standard Unix file system
  • Rack-local and node-local access rocks; scalability is bound by the switches
  • Point out that, now that we know how HDFS works, we can run maps close to the data
  • Point out how data-local computing is useful in this example. Exposes some of the features we need in Hadoop: the output of a reducer can be sent directly to another reducer. As an exercise for the reader: the results from the shell do not equal those from Hadoop; it is interesting to find out why.

Transcript

    1. HBase and Hive: a random walk down H street
       Joydeep Sen Sarma

    2. this.toString()
        • Oracle, Netapp, Yahoo, dead startups
        • Facebook:
          ◦ Setup Hadoop@FB (circa 2007): ETL, custom jobs
          ◦ Conceived, prototyped, and co-authored Hive
          ◦ Mentored Fair-Scheduler development
          ◦ Prototype/architecture for FB Messages (HBase/Cassandra)
          ◦ FB Credits back-end contributor
        • Nowadays:
          ◦ Job Scheduler
          ◦ Hive (sometimes)
          ◦ Chit-chat (always)

    3. Outline
        • Hadoop Ecosystem
        • Hive
          ◦ Introduction
          ◦ Examples
          ◦ Internals
        • HDFS/MR
          ◦ Scheduler, HDFS-RAID, HA
        • HBase
          ◦ Introduction
          ◦ Example uses
          ◦ Internals
        • Operational Takeaways

    4. Hadoop Ecosystem
        [Diagram of the components: HDFS, ZooKeeper, HBase, Map-Reduce, Scheduler, Hive Metastore, Hive, Pig]

    5. OMG - NoSQL looks like a DBMS!
        • ZooKeeper for coordination
          ◦ Atomic primitives (CAS), notifications, heartbeats
          ◦ Not a key-value store, not a file system, not a database
          ◦ Like a DLM in a database; locks/leases are easy to build on top
        • HDFS for large objects
          ◦ Shared storage, like NFS or a SAN
        • HBase for small objects
          ◦ Every DB has an index at its heart
          ◦ Shared-storage DB (like Oracle)
        • Map-Reduce/Hive/Pig etc. for analytics
        • Missing: transaction monitor, automatic/transactional view and secondary-index maintenance, triggers, remote replication, etc.

    6. Why Hive?
        • Human cost >> machine cost
        • Hadoop's programming model is awesome, but...
          ◦ Map-reduce is impossible for non-engineers
          ◦ Training hundreds of engineers in Java map-reduce is hard
          ◦ Map-reduce is non-trivial even for engineers (sort vs. partition vs. grouping comparator, anyone?)
        • Much, much easier to write a SQL query
          ◦ Almost zero training cost
          ◦ Hard things become easy
        • Files are an insufficient data-management abstraction
          ◦ Tables, schemas, partitions, indices
          ◦ Metadata allows optimization, discovery, browsing
        • Embrace all data formats:
          ◦ Complex data types, columnar or not, lazy retrieval

    7. Quick Examples
        • Create some tables:

            CREATE TABLE ad_imps (ad_id string, userid bigint, url string)
            PARTITIONED BY (ds string);
            CREATE TABLE dim_ads (ad_id string, campaign_id string) STORED AS textfile;

        • Group-by + join:

            SELECT a.campaign_id, count(1), count(DISTINCT b.userid)
            FROM dim_ads a JOIN ad_imps b ON (b.ad_id = a.ad_id)
            WHERE b.ds = '2008-12-01'
            GROUP BY a.campaign_id;

        • Custom transform + view:

            ADD FILE url_to_cat.py;
            CREATE VIEW tmp_adid_cat AS
            SELECT TRANSFORM (ad_id, url) USING 'url_to_cat.py' AS (ad_id, cat)
            FROM ad_imps WHERE ds = '2008-12-01';
            SELECT a.campaign_id, b.cat, count(1)
            FROM dim_ads a JOIN tmp_adid_cat b ON (b.ad_id = a.ad_id)
            GROUP BY a.campaign_id;

    8. Hive: Components
        [Diagram: Hive CLI and management Web UI issuing DDL, queries, and browsing; HiveQL parser, planner, and execution engine; MetaStore with Thrift API; SerDe layer (Thrift, Jute, JSON, ...); all running over Map-Reduce and HDFS]

    9. Data Model

        Hive Entity        Sample Metastore Entity   Sample HDFS Location
        Table              T                         /wh/T
        Partition          date=d1                   /wh/T/date=d1
        Bucketing column   userid                    /wh/T/date=d1/part-0000 … /wh/T/date=d1/part-1000 (hashed on userid)
        External Table     extT                      /wh2/existing/dir (arbitrary location)

    10. Using Hive: quick planner
        • How to store data:
          ◦ Binary or text? Compressed or not?
          ◦ RCFile saves space over SequenceFile
        • How to partition and load data:
          ◦ Initial data load using dynamic partitioning (see the sketch below)
          ◦ Incremental loading:
            ▪ Appending data to existing partitions?
            ▪ Mutable partitions/data? (hard!)
          ◦ Managing space consumption (use RETENTION)
        • Performance tuning
          ◦ Learning to read EXPLAIN plans :)

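        A minimal sketch of an initial load with dynamic partitioning, assuming a
        hypothetical staging table raw_imps holding the same columns as ad_imps
        plus the ds value; the two SET statements are the standard Hive switches
        that enable the feature:

            SET hive.exec.dynamic.partition=true;
            SET hive.exec.dynamic.partition.mode=nonstrict;

            -- the partitioning column (ds) must come last in the SELECT list
            INSERT OVERWRITE TABLE ad_imps PARTITION (ds)
            SELECT ad_id, userid, url, ds
            FROM raw_imps;  -- Hive creates one ad_imps partition per distinct ds
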
    11. Join Processing
        • Sort-merge joins
          ◦ Vanilla map-reduce, using the join key for sorting
          ◦ Single MR job for multiple joins on the same key:

              FROM (a JOIN b ON a.key = b.key) JOIN c ON a.key = c.key

          ◦ Put the largest table last (reduces memory usage)
        • Map-joins (see the sketch below)
          ◦ Load the smaller table into memory on each mapper
          ◦ No sorting; the join happens on the map side
          ◦ Automatic map-join in Hive 0.7
        • Bucketed (map) join
          ◦ Map-side join if the join key is the same as the bucketing key

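        Before the automatic conversion in Hive 0.7, a map-join is requested with
        the MAPJOIN hint; a sketch using the dim_ads/ad_imps tables from the
        earlier example (the small dimension table is the hinted one, and the
        large table comes last):

            SELECT /*+ MAPJOIN(a) */ a.campaign_id, count(1)
            FROM dim_ads a JOIN ad_imps b ON (b.ad_id = a.ad_id)
            GROUP BY a.campaign_id;
            -- dim_ads is loaded into memory on every mapper; the join itself
            -- needs no sort or shuffle, only the group-by does
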
    12. Group-By Processing
        • Hash-based map-side aggregation
          ◦ 90% improvement for the count(1) aggregate
          ◦ Automatically reverts to regular map-reduce aggregation if cardinality is too high
          ◦ Can be turned off -> regular sort-based group-by
        • Handling skews in groups (see the settings sketch below)
          ◦ 2-stage MR job
          ◦ 1st MR: partition on a random value, or on the distinct column (for distinct queries), and compute partial aggregates
          ◦ 2nd MR: compute the full aggregates

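        Both behaviors are driven by standard Hive settings; a hedged sketch,
        using the kv1 sample table that appears later in the deck:

            SET hive.map.aggr=true;            -- hash-based map-side aggregation
            SET hive.groupby.skewindata=true;  -- compile the group-by into the 2-stage MR plan
            SELECT key, count(1) FROM kv1 GROUP BY key;
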
    13. Common Issues
        • Too many files (see the settings sketch below):
          ◦ Option to merge (small) files at the end of an MR job
          ◦ ARCHIVE partitions with many small files
          ◦ Use CombineHiveInputFormat to reduce the number of mappers over partitions with many small files
        • High latencies for small jobs
          ◦ Scheduling latencies are killer
          ◦ Optimize the number of map-reduce jobs
          ◦ Use automatic local mode if possible

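        The usual knobs for both issues; a sketch, assuming the stock Hive
        configuration names of that era:

            SET hive.merge.mapfiles=true;        -- merge small outputs of map-only jobs
            SET hive.merge.mapredfiles=true;     -- ... and of full map-reduce jobs
            SET hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
            SET hive.exec.mode.local.auto=true;  -- run tiny jobs locally, skipping scheduling latency
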
    14. Other goodies
        • Locks using ZooKeeper (to prevent read/write races)
        • Indexes
        • Partitioned views
        • Storage handlers (to query HBase directly; see the sketch below)
        • Statistics collection
        • Security

        Future
        • Block sampling and faster LIMIT queries
          ◦ Help query authoring and data exploration
        • Hive Server
        • Much faster query execution

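        A minimal sketch of the HBase storage handler mentioned above: a Hive
        table declared over an HBase table so it can be queried with plain
        HiveQL (the table and column names here are illustrative, not from the
        talk):

            CREATE TABLE hbase_users (userid int, name string)
            STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
            WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,info:name")
            TBLPROPERTIES ("hbase.table.name" = "users");

            SELECT count(1) FROM hbase_users;  -- runs as a map-reduce scan over HBase regions
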
    15. Hive vs. ...
        • Pig
          ◦ Hive is not a procedural language
            ▪ But views are similar to Pig variables
            ▪ Core philosophy is to integrate with other languages via streaming
          ◦ Pig does not provide a SQL interface
          ◦ Pig does not have a metastore or a data-management model
          ◦ More similar than different
        • Cascading
          ◦ No metastore or data-management model
          ◦ No declarative language
          ◦ Seems similar to Hive's internal execution operators

    16. Hive Warehouse @ Facebook
        • Two primary warehouses (HDFS + MR)
          ◦ High-SLA pipelines
          ◦ Core data warehouse
        • Core warehouse, Jan 2011:
          ◦ ~2800 nodes
          ◦ 30 petabytes of disk space
          ◦ Growing to 100 PB by end of year
        • Data access:
          ◦ ~40 terabytes added (compressed) per day
          ◦ 25,000 map-reduce jobs per day
          ◦ 300-400 users per month

    17. Hive is just part of the story
        • HDFS improvements:
          ◦ HDFS-RAID (saved ~5 PB)
          ◦ High availability (AvatarNode)
          ◦ NameNode improvements (locking, restart, decommissioning, ...)
        • JobScheduler improvements:
          ◦ More efficient/concurrent JobTracker
          ◦ Monitoring and killing runaway tasks
          ◦ FairScheduler features (preemption, speculation, FIFO+FAIR)
        • Hadoop/Hive administration
          ◦ Rolling upgrades for Hive
          ◦ Tools/configuration for managing multiple clusters
          ◦ Hive replication

    18. HBase (PNGs via Lars George)

    19. Why/when use HBase?
        • Very large indexed data store
          ◦ Lower management cost than building a MySQL cluster
        • Very high write throughput
          ◦ A log-structured index trumps the standard B-tree
        • No need for complex multi-row transactions
          ◦ But durability and strong consistency are required
        • Read performance is not critical
          ◦ Reads from memory are not as fast as memcache
          ◦ Random reads from disk suffer because of the log structure
          ◦ Reading recently written data is (potentially) faster
        • Need killer map-reduce integration

    20. Data Model
        • HBase table
          ◦ Collection of column families
          ◦ HBase table + column family == MySQL table
        • HBase column family
          ◦ Collection of columns
          ◦ Each column has a key (the column qualifier) and a value
          ◦ 'Column' in HBase == row in MySQL
        • Data organization
          ◦ Table sharded on row-key
          ◦ Data indexed on row-key + column family + column qualifier
          ◦ Multiple timestamped versions of each cell (MySQL row)

    21. Example
        • User database in MySQL:

            CREATE TABLE friends (userid int, friendid int, since int,
                                  PRIMARY KEY (userid, friendid));
            CREATE TABLE pictures (userid int, picid bigint, at varchar(256),
                                   PRIMARY KEY (userid, picid));

        • Sharded MySQL layout:
          ◦ Each MySQL db has friends and pictures tables
          ◦ All entries for a given user live in one MySQL db
        • Equivalent HBase schema (see the sketch below):
          ◦ friends and pictures are column families
          ◦ In a single HBase table with userid as the row-key
          ◦ friendid and picid values become column qualifiers
        • The new terminology is bizarre (thx, Google)

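        One way to make that schema concrete is through the same Hive HBase
        storage handler shown earlier, mapping each column family to a Hive MAP
        (a sketch; the handler treats row keys and qualifiers as strings by
        default, so the map keys here are strings rather than ints):

            CREATE TABLE user_data (
              userid   string,
              friends  map<string, int>,      -- friendid -> since
              pictures map<string, string>)   -- picid -> at
            STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
            WITH SERDEPROPERTIES (
              "hbase.columns.mapping" = ":key,friends:,pictures:");
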
    22. System Picture (thx. Lars)

    23. HBase Index Internals
        • LSM trees:
          ◦ Data stored in a series of index files (index-organized table)
          ◦ Bloom filters for skipping index files entirely
          ◦ Short-circuiting lookups
            ▪ Retrieve the last value, assuming the client stores the greatest timestamp last
            ▪ Retrieve values stored in a given time range (don't scan all files)
          ◦ Multiple index files compacted into one periodically
        • In-memory caching
          ◦ Recent writes cached in the MemStore (a skip-list)
          ◦ Index headers and bloom filters pinned in memory
          ◦ Block cache (not a row cache) for index-file data blocks

    24. BigTable (HBase) vs. Dynamo (Cassandra, ...)
        • Provides strong consistency with a low read penalty
          ◦ Cassandra requires R > 1 for strong consistency
        • Much better data-resilience story
          ◦ Thanks to HDFS CRCs, replication, and block placement
        • Equally fast at disk reads
          ◦ Sticky regions in HBase
          ◦ Fast-path local reads in HDFS
        • No partition tolerance, lower availability
          ◦ No read replicas, no built-in conflict resolution

    25. Appendix

    26. Looks like this ...
        [Diagram: racks of nodes, each with local disks (Node = DataNode + Map-Reduce); 1-gigabit links to each node, 4-8 gigabit uplinks between switches]

    27. HDFS
        • Separation of metadata from data
          ◦ Metadata == inodes, attributes, block locations, block replication
        • File = Σ data blocks (typically 128 MB)
          ◦ Architected for large files and streaming reads
        • Highly reliable
          ◦ Each data block typically replicated 3x to different datanodes
          ◦ Clients compute and verify block checksums (end-to-end)
        • Single namenode
          ◦ All metadata stored in memory; passive standby
        • Client talks to both namenode and datanodes
          ◦ Bulk data flows from datanode to client -> linear scalability
          ◦ Custom client library in Java/C/Thrift
          ◦ Not POSIX, not NFS

    28. In pictures ...
        [Diagram: a DFS client asks the NameNode (32 GB RAM; Secondary NameNode alongside) for block locations (getLocations -> locations), then transfers data directly with the DataNodes]

    29. Map/Reduce Data Flow

    30. Programming with Map/Reduce
        • Find the most imported package in the Hive source:

            $ find . -name '*.java' -exec egrep '^import' '{}' \; | awk '{print $2}' \
                | sort | uniq -c | sort -nr +0 -1 | head -1
            208 org.apache.commons.logging.LogFactory;

        • In map-reduce:
          ◦ 1a. Map using: egrep '^import' | awk '{print $2}'
          ◦ 1b. Reduce on the first column (package name)
          ◦ 1c. Reduce function: uniq -c
          ◦ 2a. Map using: awk '{printf "%05d\t%s\n", 100000-$1, $2}'
          ◦ 2b. Reduce on the first column (inverted counts), 1 reducer
          ◦ 2c. Reduce function: identity
        • Scales to terabytes

    31. Rubbing it in ...

        hive> select key, count(1) from kv1 where key > 100 group by key;

        vs.

        $ cat > /tmp/reducer.sh
        uniq -c | awk '{print $2"\t"$1}'
        $ cat > /tmp/map.sh
        awk -F '\001' '{if ($1 > 100) print $1}'
        $ bin/hadoop jar contrib/hadoop-0.19.2-dev-streaming.jar \
            -input /user/hive/warehouse/kv1 \
            -mapper map.sh -reducer reducer.sh \
            -file /tmp/map.sh -file /tmp/reducer.sh \
            -output /tmp/largekey -numReduceTasks 1
        $ bin/hadoop dfs -cat /tmp/largekey/part*

    32. Hive Optimizations: Merge Sequential Map-Reduce Jobs
        • SQL:

            FROM (a JOIN b ON a.key = b.key) JOIN c ON a.key = c.key
            SELECT ...

        [Diagram: A (key, av) joins B (key, bv) in one map-reduce job, producing AB (key, av, bv); a second map-reduce job joins AB with C (key, cv) to produce ABC (key, av, bv, cv). Sample rows with key=1 carry the values 111, 222, and 333 through the pipeline.]