Apache Hadoop India Summit 2011 talk "Pig - Making Hadoop Easy" by Alan Gates
Speaker notes
  • A very common use case we see at Yahoo is users wanting to read one data set and group it several different ways. Since scan time often dominates for these large data sets, sharing one scan across several group instances can result in a nearly linear speedup of queries (the multi-store script on slide 9 shows this pattern).
  • In this case multiple pipelines are needed in the map and reduce phases. Because Pig's execution model is pull based, split and multiplex operators embed the pipelines within themselves. Records are tagged with their pipeline number in the map stage, grouping is done by Hadoop using a union of the keys, and a multiplex operator on the reducer places incoming records into the correct pipeline.
  • As your website grows, the number of unique users grows beyond what you can keep in memory. A given map only gets input from a single input source, so it can annotate tuples from that source with which source they came from. The join key is then used to partition the data, but the join key plus the input source id is used to sort it. This allows Pig to buffer one side of each join key's records in memory and use them as a probe table as the keys from the other input stream by.
  • Running example: you start a website and want to know how users are using it, so you collect a couple of streams of information from your logs: page views and users. When you start, you have a fair number of page views but not many users. In this algorithm the smaller table is copied to every map in its entirety (it doesn't yet use the distributed cache, though it should); the larger file is partitioned as in normal MapReduce.
  • As your website grows even more, some pages become significantly more popular than others: some pages are visited by almost every user, while others are visited by only a few. First, a sampling pass determines which keys are large enough to need special attention, meaning keys with enough values that we estimate we cannot hold them all in memory (it is about holding the values in memory, not the key). At partitioning time those keys are handled specially; all other keys are treated as in the regular join. The selected keys from input1 are split across multiple reducers, and for input2 they are replicated to each of the reducers that received a split. In this way we guarantee that every instance of key k from input1 comes into contact with every instance of k from input2.
  • Now let's say that for some reason you start keeping both your page view data and your user data sorted by user. One way to exploit this is to make sure pages and users are partitioned the same way, but that leads to a big problem: to be able to join all your data sets you end up using the same hash function to bucket them all, and rarely does one bucketing scheme make sense for all your data; whatever is big enough for one data set will be too small for others, and vice versa. So Pig's implementation doesn't depend on how the data is split. Pig samples one of the inputs and builds an index from that sample recording the key of the first record in every split. The other input is used as the standard input file for Hadoop and is split to the maps as normal. When a map begins processing its file and encounters the first key, it uses the index to determine where to open the second, sampled file, opens it at the appropriate point, seeks forward until it finds the key it is looking for, and then begins joining the two data sources. (The Pig Latin syntax for selecting each of these join strategies is sketched after these notes.)
  • The Python functions can't yet be inlined in the Pig Latin script; in 0.9 we'll add the ability to put them in the same file.
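  The notes above cover Pig's four join strategies. As a minimal sketch (relation and field names are hypothetical), each strategy is chosen with the using clause on join; the transcript slides 11-14 below show the same statements alongside their data-flow diagrams:

      users = load 'users' as (name, age);
      pages = load 'pages' as (user, url);
      hash_jnd  = join pages by user, users by name;                     -- default hash (reduce-side) join
      frep_jnd  = join pages by user, users by name using 'replicated';  -- users, listed last, is copied to every map
      skew_jnd  = join pages by user, users by name using 'skewed';      -- sampling pass spreads hot keys across reducers
      merge_jnd = join pages by user, users by name using 'merge';       -- both inputs already sorted on the join key

  In the replicated case the last relation listed is the one held in memory, so the smaller input should go last; the merge join requires both inputs to be sorted on the join keys.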
  • Transcript

    • 1. Pig, Making Hadoop Easy. Alan F. Gates, Yahoo!
    • 2. Who Am I? Pig committer and PMC member; an architect in the Yahoo! grid team. Photo credit: Steven Guarnaccia, The Three Little Pigs.
    • 3. Motivation By Example. You have web server logs of purchases on your site. You want to find the 10 users who bought the most and the cities they live in. You also want to know what percentage of purchases they account for in those cities. Data flow: load logs; find top 10 users; sum purchases by city; join by city; store top 10 users; calculate percentage; store results.
    • 4. In Pig Latin:
      raw = load 'logs' as (name, city, purchase);
      -- Find top 10 users
      usrgrp = group raw by (name, city);
      byusr = foreach usrgrp generate group as k1, SUM(raw.purchase) as utotal;
      srtusr = order byusr by utotal desc;
      topusrs = limit srtusr 10;
      store topusrs into 'top_users';
      -- Count purchases per city
      citygrp = group raw by city;
      bycity = foreach citygrp generate group as k2, SUM(raw.purchase) as ctotal;
      -- Join top users back to city
      jnd = join topusrs by k1.city, bycity by k2;
      pct = foreach jnd generate k1.name, k1.city, utotal/ctotal;
      store pct into 'top_users_pct_of_city';
    • 5. Translates to Four MapReduce Jobs
    • 6. Performance
    • 7. Where Do Pigs Live? Diagram: data collection feeds the data factory, where Pig runs pipelines, iterative processing, and research; results flow to the data warehouse for analysis with BI tools.
    • 8. Pig Highlights:
      Language designed to enable efficient description of data flow
      Standard relational operators built in
      User defined functions (UDFs) can be written for column transformation (TOUPPER) or aggregation (SUM)
      UDFs can be written to take advantage of the combiner
      Four join implementations built in: hash, fragment-replicate, merge, skewed
      Multi-query: Pig will combine certain types of operations into a single pipeline to reduce the number of times data is scanned
      Order by provides total ordering across reducers in a balanced way
      Writing load and store functions is easy once an InputFormat and OutputFormat exist
      Piggybank, a collection of user-contributed UDFs
    • 9. Multi-store script:
      A = load 'users' as (name, age, gender, city, state);
      B = filter A by name is not null;
      C1 = group B by age, gender;
      D1 = foreach C1 generate group, COUNT(B);
      store D1 into 'bydemo';
      C2 = group B by state;
      D2 = foreach C2 generate group, COUNT(B);
      store D2 into 'bystate';
      Diagram: load users and filter nulls once, then split into two branches: group by age, gender, apply UDFs, store into 'bydemo'; and group by state, apply UDFs, store into 'bystate'.
    • 10. Multi-Store Map-Reduce Plan. Diagram: the map phase runs filter, split, and a local rearrange for each branch; the reduce phase runs a multiplex operator feeding a package and foreach pipeline for each branch.
    • 11. Hash Join:
      Users = load 'users' as (name, age);
      Pages = load 'pages' as (user, url);
      Jnd = join Users by name, Pages by user;
      Diagram: each map reads a block of Pages or Users and tags each record with its input number; all records for a given key (e.g. fred, jane) arrive at the same reducer, where they are joined.
    • 12. Fragment Replicate Join:
      Users = load 'users' as (name, age);
      Pages = load 'pages' as (user, url);
      Jnd = join Pages by user, Users by name using 'replicated';
      Diagram: each map reads one block of Pages plus a complete copy of Users; the join happens map-side, with no reduce phase.
    • 13. Skew Join:
      Users = load 'users' as (name, age);
      Pages = load 'pages' as (user, url);
      Jnd = join Pages by user, Users by name using 'skewed';
      Diagram: a sample pass (SP) identifies skewed keys; Pages records for a hot key such as fred are split across reducers, while the matching Users records for fred are replicated to each of those reducers.
    • 14. Merge Join:
      Users = load 'users' as (name, age);
      Pages = load 'pages' as (user, url);
      Jnd = join Pages by user, Users by name using 'merge';
      Diagram: both Users and Pages are sorted by the join key; each map takes a split of Pages, uses an index over Users to find where its first key appears, opens Users at that point, and merges the two sorted streams.
    • 15. Who Uses Pig for What? 70% of production grid jobs at Yahoo (tens of thousands per day). Also used by Twitter, LinkedIn, eBay, AOL, and others. Used to process web logs, build user behavior models, process images, build maps of the web, and do research on raw data sets.
    • 16. Components. Accessing Pig: submit a script directly; Grunt, the Pig shell; or the PigServer Java class, a JDBC-like interface. Pig resides on the user's machine; the job executes on the Hadoop cluster. No need to install anything extra on your Hadoop cluster.
    • 17. How It Works. Pig Latin:
      A = LOAD 'myfile' AS (x, y, z);
      B = FILTER A BY x > 0;
      C = GROUP B BY x;
      D = FOREACH C GENERATE group, COUNT(B);
      STORE D INTO 'output';
      pig.jar parses, checks, optimizes, plans execution, submits the jar to Hadoop, and monitors job progress. Execution plan: map does filter and count; combine/reduce does sum.
    • 18. New in 0.8:
      UDFs can be written in Jython
      Improved and expanded statistics
      Performance improvements: automatic merging of small files, compression of intermediate results
      PigUnit for unit testing your Pig Latin scripts
      Access to static Java functions as UDFs
      Improved HBase integration
      Custom partitioners: B = group A by $0 partition by YourPartitioner parallel 2;
      Greatly expanded string and math built-in UDFs
    • 19. What's Next? Preview of Pig 0.9: integrate Pig with scripting languages for control flow, add macros to Pig Latin, revive ILLUSTRATE, fix runtime type errors, rewrite the parser to give more useful error messages. Also: Programming Pig from O'Reilly Press.
    • 20. Learn More. Online documentation: http://pig.apache.org/. Hadoop: The Definitive Guide, 2nd edition, has an up-to-date chapter on Pig; search at your favorite bookstore. Join the mailing lists: user@pig.apache.org for user questions, dev@pig.apache.org for developer issues. Follow me on Twitter, @alanfgates.
    • 21. UDFs in Scripting Languages. Evaluation functions can now be written in scripting languages that compile down to the JVM. A reference implementation is provided in Jython; JRuby and others could be added with minimal code, and a JavaScript implementation is in progress. Jython sold separately.
    • 22. Example Python UDF.
      test.py:
        @outputSchema("sqr:long")
        def square(num):
            return num * num
      test.pig:
        register 'test.py' using jython as myfuncs;
        A = load 'input' as (i:int);
        B = foreach A generate myfuncs.square(i);
        dump B;
    • 23. Better statistics:
      Statistics printed out at the end of the job run
      Pig information stored in Hadoop's job history files so you can mine the information and analyze your Pig usage
      Loader for reading job history files included in Piggybank
      New PigRunner interface that allows users to invoke Pig and get back a statistics object containing stats information
      Can also pass a listener to track Pig jobs as they run
      Done for Oozie so it can show users Pig statistics
    • 24. Sample stats info:
      Job Stats (time in seconds):
      JobId  Maps  Reduces  MxMT  MnMT  AMT  MxRT  MnRT  ART  Alias
      job_0  2     1        15    3     9    27    27    27   a,b,c,d,e
      job_1  1     1        3     3     3    12    12    12   g,h
      job_2  1     1        3     3     3    12    12    12   i
      job_3  1     1        3     3     3    12    12    12   i
      Input(s):
      Successfully read 10000 records from: "studenttab10k"
      Successfully read 10000 records from: "votertab10k"
      Output(s):
      Successfully stored 6 records (150 bytes) in: "outfile"
      Counters:
      Total records written: 6
      Total bytes written: 150
    • 25. Invoke Static Java Functions as UDFs. Often the UDF you need already exists as a Java function, e.g. Java's URLDecoder.decode() for decoding URLs:
      define UrlDecode InvokeForString('java.net.URLDecoder.decode', 'String String');
      A = load 'encoded.txt' as (e:chararray);
      B = foreach A generate UrlDecode(e, 'UTF-8');
      Currently this only works with simple types and static functions.
    • 26. Improved HBase Integration. Can now read records as bytes instead of automatically converting to strings. Filters can be pushed down. Can store data in HBase as well as load from it. Works with HBase 0.20 but not 0.89 or 0.90; the patch in PIG-1680 addresses this but has not been committed yet.
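    The deck does not show the HBase syntax itself. As a minimal sketch (the table name, column family, and field names are hypothetical, and the class path and 'hbase://' table reference reflect my understanding of Pig 0.8's HBase loader), loading from HBase looks roughly like this:

      -- hypothetical HBase table 'users' with an 'info' column family
      raw = load 'hbase://users'
            using org.apache.pig.backend.hadoop.hbase.HBaseStorage('info:age info:city')
            as (age:int, city:chararray);
      older = filter raw by age > 65;
      store older into 'older_users';  -- plain HDFS output; storing back into HBase is also supported in 0.8

    The HBaseStorage constructor takes a space-separated list of column-family:column names, and each listed column becomes one field of the loaded tuple.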
