Practical Hadoop using Pig


So you want to get started with Hadoop, but how? This session shows you how to begin Hadoop development using Pig. No prior Hadoop experience is needed.
Thursday, May 8th, 02:00pm-02:50pm



  1. 1. Practical Hadoop with Pig Dave Wellman #openwest @dwellman
  2. 2. How does it all work? HDFS, the Hadoop shell, Map Reduce data structures, Pig commands, and a Pig example.
  3. 3. HDFS
  4. 4. HDFS has 3 main actors
  5. 5. The Name Node The Name Node is “The Conductor”. It directs the performance of the cluster.
  6. 6. The Data Nodes: A Data Node stores blocks of data. Clusters can contain thousands of Data Nodes. *Yahoo has a 40,000-node cluster.
  7. 7. The Client The client is a window to the cluster.
  8. 8. The Name Node
  9. 9. The heart of the System.
  10. 10. The heart of the System. Maintains a virtual File Directory.
  11. 11. The heart of the System. Maintains a virtual File Directory. Tracks all the nodes.
  12. 12. The heart of the System. Maintains a virtual File Directory. Tracks all the nodes. Listens for “heartbeats” and “Block Reports” (more on this later).
  13. 13. The heart of the System. Maintains a virtual File Directory. Tracks all the nodes. Listens for “heartbeats” and “Block Reports” (more on this later). If the NameNode is down, the cluster is offline.
  14. 14. Storing Data
  15. 15. The Data Nodes
  16. 16. Add a Data Node:
  17. 17. Add a Data Node: The Data Node says “Hello” to the Name Node.
  18. 18. Add a Data Node: The Data Node says “Hello” to the Name Node. The Name Node offers the Data Node a handshake with version requirements.
  19. 19. Add a Data Node: The Data Node says “Hello” to the Name Node. The Name Node offers the Data Node a handshake with version requirements. The Data Node either replies “Okay” or shuts down.
  20. 20. Add a Data Node: The Data Node says “Hello” to the Name Node. The Name Node offers the Data Node a handshake with version requirements. The Data Node either replies “Okay” or shuts down. The Name Node hands the Data Node a NodeId that it remembers.
  21. 21. Add a Data Node: The Data Node says “Hello” to the Name Node. The Name Node offers the Data Node a handshake with version requirements. The Data Node either replies “Okay” or shuts down. The Name Node hands the Data Node a NodeId that it remembers. The Data Node is now part of the cluster and checks in with the Name Node every 3 seconds.
  22. 22. Data Node Heartbeat:
  23. 23. Data Node Heartbeat: The “check-in” is a simple HTTP Request/Response.
  24. 24. Data Node Heartbeat: The “check-in” is a simple HTTP Request/Response. This “check-in” is a very important communication protocol that guarantees the health of the cluster.
  25. 25. Data Node Heartbeat: The “check-in” is a simple HTTP Request/Response. This “check-in” is a very important communication protocol that guarantees the health of the cluster. Block Reports – what data the node has and whether it is okay.
  26. 26. Data Node Heartbeat: The “check-in” is a simple HTTP Request/Response. This “check-in” is a very important communication protocol that guarantees the health of the cluster. Block Reports – what data the node has and whether it is okay. The Name Node controls the Data Nodes by issuing orders when they check in and report their status.
  27. 27. Data Node Heartbeat: The “check-in” is a simple HTTP Request/Response. This “check-in” is a very important communication protocol that guarantees the health of the cluster. Block Reports – what data the node has and whether it is okay. The Name Node controls the Data Nodes by issuing orders when they check in and report their status. Replicate data, delete data, verify data.
  28. 28. Data Node Heartbeat: The “check-in” is a simple HTTP Request/Response. This “check-in” is a very important communication protocol that guarantees the health of the cluster. Block Reports – what data the node has and whether it is okay. The Name Node controls the Data Nodes by issuing orders when they check in and report their status. Replicate data, delete data, verify data. The same process applies to all nodes within the cluster.
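
To see this bookkeeping from the outside (an aside rather than one of the slides), the standard dfsadmin report lists every Data Node with its capacity, last-contact time, and whether it is live or dead:
    > hadoop dfsadmin -report
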
  29. 29. Writing Data
  30. 30. The client “tells” the NameNode the virtual directory location for the file.
  31. 31. The client “tells” the NameNode the virtual directory location for the file. The client breaks the file into 64MB “blocks”.
  32. 32. The client “tells” the NameNode the virtual directory location for the file. The client breaks the file into 64MB “blocks”. The client “asks” the NameNode where the blocks go.
  33. 33. The client “tells” the NameNode the virtual directory location for the file. The client breaks the file into 64MB “blocks”. The client “asks” the NameNode where the blocks go. The client “streams” the blocks, in parallel, to the DataNodes.
  34. 34. The client “tells” the NameNode the virtual directory location for the file. The client breaks the file into 64MB “blocks”. The client “asks” the NameNode where the blocks go. The client “streams” the blocks, in parallel, to the DataNodes. The DataNodes tell the NameNode they have the data via the block report.
  35. 35. The client “tells” the NameNode the virtual directory location for the file. The client breaks the file into 64MB “blocks”. The client “asks” the NameNode where the blocks go. The client “streams” the blocks, in parallel, to the DataNodes. The DataNodes tell the NameNode they have the data via the block report. The NameNode tells the DataNodes where to replicate the blocks.
  36. 36. Reading Data
  37. 37. The client tells the NameNode it would like to read a file.
  38. 38. The client tells the NameNode it would like to read a file. The NameNode replies with the list of blocks and the nodes the blocks are on.
  39. 39. The client tells the NameNode it would like to read a file. The NameNode replies with the list of blocks and the nodes the blocks are on. The client requests the first block from a DataNode.
  40. 40. The client tells the NameNode it would like to read a file. The NameNode replies with the list of blocks and the nodes the blocks are on. The client requests the first block from a DataNode. The client compares the checksum of the block against the manifest from the NameNode.
  41. 41. The client tells the NameNode it would like to read a file. The NameNode replies with the list of blocks and the nodes the blocks are on. The client requests the first block from a DataNode. The client compares the checksum of the block against the manifest from the NameNode. The client moves on to the next block in the sequence until the file has been read.
  42. 42. Failure Recovery
  43. 43. A Data Node fails to “check in”.
  44. 44. A Data Node fails to “check in”. After 10 minutes the Name Node gives up on that Data Node.
  45. 45. A Data Node fails to “check in”. After 10 minutes the Name Node gives up on that Data Node. When another node that has blocks originally assigned to the lost node checks in, the Name Node sends a block replication command.
  46. 46. A Data Node fails to “check in”. After 10 minutes the Name Node gives up on that Data Node. When another node that has blocks originally assigned to the lost node checks in, the Name Node sends a block replication command. The Data Node replicates that block of data (just like a write).
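
A handy way to watch this recovery happen (again an aside; the path below is just the sample file from the ls listing later in the deck) is fsck, which reports each file's block locations and flags any under-replicated blocks:
    > hadoop fsck /user/hadoop/file1 -files -blocks -locations
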
  47. 47. Interacting with Hadoop: HDFS Shell Commands
  48. 48. HDFS Shell Commands. > hadoop fs -ls <args> Similar to the Unix or OS X ls command. /user/hadoop/file1 /user/hadoop/file2 ...
  49. 49. HDFS Shell Commands. > hadoop fs -mkdir <path> Creates directories in HDFS at the given path.
  50. 50. HDFS Shell Commands. > hadoop fs -copyFromLocal <localsrc> URI Copies a file from your client to HDFS. Similar to the put command, except that the source is restricted to a local file reference.
  51. 51. HDFS Shell Commands. > hadoop fs -cat <path> Copies source paths to stdout.
  52. 52. HDFS Shell Commands. > hadoop fs -copyToLocal URI <localdst> Copies a file from HDFS to your client. Similar to the get command, except that the destination is restricted to a local file reference.
  53. 53. HDFS Shell Commands. cat chgrp chmod chown copyFromLocal copyToLocal cp du dus expunge get getmerge ls lsr mkdir moveFromLocal mv put rm rmr setrep stat tail test text touchz
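
Tying these shell commands together, a minimal session might look like this (the local file name and HDFS paths are invented for illustration):
    > hadoop fs -mkdir /user/hadoop/logs
    > hadoop fs -copyFromLocal access.log /user/hadoop/logs
    > hadoop fs -ls /user/hadoop/logs
    > hadoop fs -cat /user/hadoop/logs/access.log
    > hadoop fs -copyToLocal /user/hadoop/logs/access.log ./access-copy.log
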
  54. 54. Map Reduce Data Structures: Basics, Tuples & Bags
  55. 55. Basic Data Types: Strings, Integers, Doubles, Longs, Byte, Boolean, etc. Advanced Data Types: Tuples and Bags
  56. 56. Tuples are JSON-like and simple. raw_data: { date_time: bytearray, seconds: bytearray }
  57. 57. Bags hold Tuples and other Bags. element: { date_time: bytearray, seconds: bytearray, group: chararray, ordered_list: { date: chararray, hour: chararray, score: long } }
  58. 58. Expert Advice: Always know your data structures. They are the foundation for all Map Reduce operations. Complex (deep) data structures will kill -9 performance. Keep them simple!
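
To see how these structures show up in practice, here is a minimal sketch (the file name is invented; the fields echo the raw_data example above). A LOAD with an AS clause yields a relation of flat tuples, and describe prints the schema in the same notation used on these slides:
    raw_data = LOAD 'scores.tsv' USING PigStorage('\t') AS (date_time:chararray, seconds:long);
    -- "describe raw_data" prints something like:
    --   raw_data: {date_time: chararray, seconds: long}
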
  59. 59. Processing Data: Interacting with Pig using Grunt
  60. 60. GRUNT Grunt is a command-line interface used to debug Pig jobs. Similar to Ruby's IRB or the Groovy CLI. Grunt is your best weapon against bad pigs. pig -x local grunt> |
  61. 61. GRUNT grunt> describe Element – describe will display the data structure of an Element. grunt> dump Element – dump will display the data represented by an Element.
  62. 62. GRUNT > describe raw_data Produces the output: raw_data: { date_time: bytearray, items: bytearray }
  63. 63. GRUNT > dump raw_data You can dump terabytes of data to your screen, so be careful. (05/10/2011 20:30:00.0,0) (05/10/2011 20:45:00.0,0) (05/10/2011 21:00:00.0,0) (05/10/2011 21:15:00.0,0) ...
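
A standard way to stay safe with dump (a common Pig idiom rather than something from the deck) is to LIMIT the relation to a handful of tuples first:
    grunt> preview = LIMIT raw_data 10;
    grunt> dump preview;
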
  64. 64. Pig Programs: Map Reduce Made Simple
  65. 65. Most Pig commands are assignments. • The element names the collection of records that exist out in the cluster. • It’s not a traditional programming variable. • It describes the data from the operation. • It does not change. Element = Operation;
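
Concretely, each statement just names the output of an operation, so a script is a pipeline of named relations rather than mutated variables. A tiny sketch with invented names:
    raw    = LOAD 'events.tsv' AS (date_time:chararray, seconds:long);
    recent = FILTER raw BY seconds > 0;  -- 'recent' names a new relation; 'raw' is untouched
    -- nothing executes yet; Pig builds a plan and runs it only when you DUMP or STORE a relation
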
  66. 66. The SET command Used to set a Hadoop job variable, like the name of your Pig job. SET job.name 'Day over Day - [$input]';
  67. 67. The REGISTER and DEFINE commands -- Set up UDF jars REGISTER $jar_prefix/sidekick-hadoop-0.0.1.jar; DEFINE BUCKET_FORMAT_DATE com.sidekick.hadoop.udf.UnixTimeFormatter('MM/dd/yyyy HH:mm', 'HH');
  68. 68. The LOAD USING command -- load in the data from HDFS raw_data = LOAD '$input' USING PigStorage('\t') AS (date_time, items);
  69. 69. The FILTER BY command Selects tuples from a relation based on some condition. -- filter to the week we want broadcast_week = FILTER bucket_list BY (date >= '03-Oct-2011') AND (date <= '10-Oct-2011');
  70. 70. The GROUP BY command Groups the data in one or more relations. daily_stats = GROUP broadcast_week BY (date, hour);
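
The result of a GROUP always has the same two-part shape, which the later FLATTEN and COUNT examples rely on. A hedged sketch of what daily_stats looks like (exact field types depend on the UDFs used upstream):
    daily_stats = GROUP broadcast_week BY (date, hour);
    -- daily_stats now has exactly two fields:
    --   group          : the grouping key, here a (date, hour) tuple
    --   broadcast_week : a bag holding every broadcast_week tuple that shares that key
    -- which is why a later slide can write FLATTEN(group) and COUNT(broadcast_week)
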
  71. 71. The FOREACH command Generates data transformations based on columns of data. bucket_list = FOREACH raw_data GENERATE FLATTEN(DATE_FORMAT_DATE(date_time)) AS date, MINUTE_BUCKET(date_time) AS hour, MAX_ITEMS(items) AS items; *DATE_FORMAT_DATE is a user-defined function, an advanced topic we’ll come to in a minute.
  72. 72. The GENERATE command Use the FOREACH GENERATE operation to work with columns of data. bucket_list = FOREACH raw_data GENERATE FLATTEN(DATE_FORMAT_DATE(date_time)) AS date, MINUTE_BUCKET(date_time) AS hour, MAX_ITEMS(items) AS items;
  73. 73. The FLATTEN command FLATTEN substitutes the fields of a tuple in place of the tuple. traffic_stats = FOREACH daily_stats GENERATE FLATTEN(group), COUNT(broadcast_week) AS cnt, SUM(broadcast_week.items) AS total;
  74. 74. The STORE INTO USING command The store function determines how data is stored after a Pig job. -- All done, now store it STORE final_results INTO '$output' USING PigStorage();
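
Assembled end to end, the day-over-day job sketched across these slides reads roughly as follows. This is a sketch, not the speaker's actual script: the deck defines BUCKET_FORMAT_DATE but calls DATE_FORMAT_DATE, MINUTE_BUCKET, and MAX_ITEMS, and it stores a final_results relation that is never shown, so the DEFINE line and the final STORE target below are guesses; $input, $output, and $jar_prefix would be supplied with pig -param.
    SET job.name 'Day over Day - [$input]';
    REGISTER $jar_prefix/sidekick-hadoop-0.0.1.jar;
    -- assumption: DATE_FORMAT_DATE is the UnixTimeFormatter UDF registered above;
    -- the classes behind MINUTE_BUCKET and MAX_ITEMS are not shown in the deck
    DEFINE DATE_FORMAT_DATE com.sidekick.hadoop.udf.UnixTimeFormatter('MM/dd/yyyy HH:mm', 'HH');

    raw_data       = LOAD '$input' USING PigStorage('\t') AS (date_time, items);
    bucket_list    = FOREACH raw_data GENERATE
                         FLATTEN(DATE_FORMAT_DATE(date_time)) AS date,
                         MINUTE_BUCKET(date_time) AS hour,
                         MAX_ITEMS(items) AS items;
    broadcast_week = FILTER bucket_list BY (date >= '03-Oct-2011') AND (date <= '10-Oct-2011');
    daily_stats    = GROUP broadcast_week BY (date, hour);
    traffic_stats  = FOREACH daily_stats GENERATE
                         FLATTEN(group), COUNT(broadcast_week) AS cnt,
                         SUM(broadcast_week.items) AS total;
    STORE traffic_stats INTO '$output' USING PigStorage();
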
  75. 75. Demo Time! “Because it’s all a big lie until someone demos the code.” - Genghis Khan
  76. 76. Thank You. - Genghis Khan
