Pig vs. MapReduce

This was a talk given to the Pig meetup group in NYC on August 22nd. We talked about reasons why you would use Pig over Hadoop and vice versa, plus just some random thoughts and gripes.

Audio/video recording here: http://vimeo.com/73211764

Donald's talk will cover how to use native MapReduce in conjunction with Pig, including a detailed discussion of when users might be best served to use one or the other.

Transcript

  • 1. Pig vs. MapReduce. Donald Miner, NYC Pig User Group, August 21, 2013
  • 2. About Don: @donaldpminer, dminer@clearedgeit.com
  • 3. I’ll be talking about: What is Java MapReduce good for? Why is Pig better in some ways? When should I use which?
  • 4. When do I use Pig? Can I use Pig to do this? (YES / NO) Let’s get to the point.
  • 5. When do I use Pig? Can I use Pig to do this? YES: USE PIG!
  • 6. When do I use Pig? Can I use Pig to do this? NO: TRY TO USE PIG ANYWAYS!
  • 7. When do I use Pig? Can I use Pig to do this? NO: TRY TO USE PIG ANYWAYS! Did that work? (YES / NO)
  • 8. When do I use Pig? Can I use Pig to do this? NO: TRY TO USE PIG ANYWAYS! Did that work? NO: OK… use Java MapReduce.
  • 9. Why? • If you can do it with Pig, save yourself the pain • Almost always, developer time is worth more than machine time • Trying something out in Pig is not risky (time-wise) – you might learn something about your problem – Ok, so it turned out to look a bit like a hack, but who cares? – Ok, so it ended up being slow, but who cares?
  • 10. Use the right tool for the job: Pig, Java MapReduce, HTML… get the job done faster and better on your Big Data Problem TM
  • 11. Which is faster, Pig or Java MapReduce? Hypothetically, any Pig job could be rewritten using MapReduce… so Java MR can only be faster. The TRUE battle is the Pig optimizer vs. the developer: are you better than the Pig optimizer at figuring out how to string multiple jobs together (and other things)?
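    One place the optimizer earns its keep is multi-query execution: when several outputs share a LOAD, Pig can often fold them into fewer MapReduce jobs that share the scan. A sketch with made-up paths and fields:

        logs    = LOAD 'weblogs' AS (user:chararray, url:chararray, status:int);
        by_user = GROUP logs BY user;
        hits    = FOREACH by_user GENERATE group AS user, COUNT(logs) AS hits;
        errors  = FILTER logs BY status >= 500;
        by_url  = GROUP errors BY url;
        err_cnt = FOREACH by_url GENERATE group AS url, COUNT(errors) AS errors;
        -- with two STOREs in one script, Pig's multi-query execution can share
        -- the single scan of 'weblogs' instead of reading it twice
        STORE hits    INTO 'hits_per_user';
        STORE err_cnt INTO 'errors_per_url';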
  • 12. Things that are hard to express in Pig • When something is hard to express succinctly in Pig, you are going to end up with a slow job, i.e., building something up out of several primitives • Some examples: – Tricky groupings or joins – Combining lots of data sets – Tricky usage of the distributed cache (replicated join) – Tricky cross products – Doing crazy stuff in a nested FOREACH • In these cases, Pig is going to spawn off a bunch of MapReduce jobs that could have been done with fewer. This is a change in “speed” that doesn’t just have to do with the cost of abstraction.
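    For reference, the plain versions of two of these are short in Pig; the “hard” cases are the ones that don’t fit these shapes. A sketch with made-up relations and fields:

        big    = LOAD 'clicks' AS (user:chararray, url:chararray);
        small  = LOAD 'users'  AS (user:chararray, country:chararray);
        -- replicated (map-side) join: the small relation rides the distributed cache
        joined = JOIN big BY user, small BY user USING 'replicated';

        events  = LOAD 'events' AS (user:chararray, ts:long);
        grouped = GROUP events BY user;
        -- nested FOREACH: per-group work on the bag
        recent  = FOREACH grouped {
                      sorted = ORDER events BY ts DESC;
                      top3   = LIMIT sorted 3;
                      GENERATE group AS user, FLATTEN(top3);
                  };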
  • 13. The Fancy MAPREDUCE keyword! To the rescue… Pig has a relational operator called MAPREDUCE that allows you to plug in a Java MapReduce job! Use this to replace only the tricky things… don’t throw out all the stuff Pig is good at. Have the best of both worlds!
        B = MAPREDUCE 'wordcount.jar'
            STORE A INTO 'inputDir'
            LOAD 'outputDir' AS (word:chararray, count:int)
            `org.myorg.WordCount inputDir outputDir`;
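    For context, a minimal sketch of how that statement might sit inside a larger script; the surrounding aliases, paths, and the ORDER step are hypothetical, not from the slides:

        -- ordinary Pig up front
        A = LOAD 'raw_docs' AS (line:chararray);
        -- hand the tricky step to the existing Java MapReduce job from the slide
        B = MAPREDUCE 'wordcount.jar'
            STORE A INTO 'inputDir'
            LOAD 'outputDir' AS (word:chararray, count:int)
            `org.myorg.WordCount inputDir outputDir`;
        -- and back to ordinary Pig afterwards
        top_words = ORDER B BY count DESC;
        STORE top_words INTO 'top_words';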
  • 14. Somewhat related: Is developer time worthless? Does speed really matter? On one side of the scale: time spent writing the Pig job + (runtime of the Pig job × times the job is run) + time spent maintaining the Pig job. On the other: time spent writing the MR job + (runtime of the MR job × times the job is run) + time spent maintaining the MR job. When does the scale tip in one direction or the other? Will the job run many times? Or once? Are your Java programmers sloppy? Is the Java MR significantly faster in this case? Is 14 minutes really that different from 20 minutes?
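    A rough illustration with made-up numbers: suppose the Java MR version takes two extra days (about 16 hours, or 960 minutes) to write but runs in 14 minutes instead of 20. The 6-minute difference pays back the extra development time only after about 160 runs, roughly five months of nightly runs, or about three years if the job runs weekly, and that is before counting maintenance.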
  • 15. Why is development so much faster in Pig? • Fewer Java-level bugs to work out … but bugs might be harder to figure out • Fewer lines of code simply means less typing • Compilation and deployment can significantly slow down incremental improvements • Easier to read: the purpose of the analytic is more straightforward (the context is self-evident)
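    To make the lines-of-code point concrete, the canonical word count is a handful of lines of Pig (paths here are placeholders), while the equivalent Java MapReduce program typically runs to dozens of lines:

        lines  = LOAD 'input.txt' AS (line:chararray);
        words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
        grpd   = GROUP words BY word;
        counts = FOREACH grpd GENERATE group AS word, COUNT(words) AS count;
        STORE counts INTO 'wordcounts';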
  • 16. Avoiding Java! • Not everyone is a Java expert … especially all those SQL guys you are repurposing • The higher level of abstraction makes Pig easier to learn and read – I’ve had both software engineers and SQL developers become productive in Pig in <4 days Oh, you want to learn Hadoop? Read this first!
  • 17. But can I really? Not really. Pig is good at moving data sets between states … but not so good at manipulating the data itself. Examples: advanced string operations, math, complex aggregates, dates, NLP, model building. You need user-defined functions (UDFs). I’ve seen too many people try to avoid UDFs. UDFs are powerful: manipulate bags after a GROUP BY, plug into external libraries like NLTK or OpenNLP, write loaders for complex custom data types, exploit the order of data.
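    A sketch of the “manipulate bags after a GROUP BY” case, using a hypothetical Python UDF registered through Jython (the UDF, fields, and paths are made up):

        -- udfs.py would define an @outputSchema-decorated function, e.g. session_length(bag)
        REGISTER 'udfs.py' USING jython AS myudfs;

        events   = LOAD 'events' AS (user:chararray, ts:long, action:chararray);
        by_user  = GROUP events BY user;
        features = FOREACH by_user GENERATE
                       group AS user,
                       myudfs.session_length(events) AS session_length;
        STORE features INTO 'user_features';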
  • 18. Ok, so I still want to avoid Java. Do you work by yourself??? Give someone else the task of writing you a UDF! (They are bite-size little projects.) Current UDF support in 0.11.1: Java, Python, JavaScript, Ruby, Groovy. These can help you avoid Java if you simply don’t like it (like me).
  • 19. Why did you write a book on MR Design Patterns if you think you should do stuff in Pig?? Good question! • I’ve seen plenty of devs do DUMB stuff in Pig just because there is a keyword for it, e.g., silly joins, ordering, using the PARALLEL keyword wrong • Knowing how MapReduce works will result in you writing better Pig • In particular: how do Pig optimizations and relational keywords translate into MapReduce design patterns?
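    For instance, PARALLEL only sets the number of reduce tasks for the operator it is attached to; a sketch with illustrative values and made-up fields:

        A = LOAD 'clicks' AS (user:chararray, url:chararray);
        -- 20 reducers for this GROUP: too few starves the cluster, too many makes tiny output files
        B = GROUP A BY user PARALLEL 20;
        C = FOREACH B GENERATE group AS user, COUNT(A) AS clicks;
        STORE C INTO 'clicks_per_user';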
  • 20. SCENARIO #1: JUST CHANGE THAT ONE LITTLE LINE A STORY ABOUT MAINTAINABILITY
  • 21. SCENARIO #1: JUST CHANGE THAT ONE LITTLE LINE IT guy here. Your MapReduce job is blowing up the cluster, how do I fix this thing?
  • 22. SCENARIO #1: JUST CHANGE THAT ONE LITTLE LINE Ah, that’s pretty easy to fix. Just comment out that first line in the mapper function.
  • 23. SCENARIO #1: JUST CHANGE THAT ONE LITTLE LINE Ok, how do I do that?
  • 24. SCENARIO #1: JUST CHANGE THAT ONE LITTLE LINE Oh, that’s easy
  • 25. SCENARIO #1: JUST CHANGE THAT ONE LITTLE LINE Oh, that’s easy. First, check the code out of git.
  • 26. SCENARIO #1: JUST CHANGE THAT ONE LITTLE LINE Then, download, install and configure Eclipse. Don’t forget to set your CLASSPATH!
  • 27. SCENARIO #1: JUST CHANGE THAT ONE LITTLE LINE Ok, now comment out line #851 in /home/itguy/java/src/com/hadooprus/hadoop/hadoop/mapreducejobs/jobs/codes/analytic/mymapreducejob/mapper.java . . .
  • 28. SCENARIO #1: JUST CHANGE THAT ONE LITTLE LINE Now, build the .jar.
  • 29. SCENARIO #1: JUST CHANGE THAT ONE LITTLE LINE Now, compile the .jar, and ship the .jar to the cluster, replacing the old one.
  • 30. SCENARIO #1: JUST CHANGE THAT ONE LITTLE LINE Ok, now run the hadoop jar command. Don’t forget the CLASSPATH!
  • 31. SCENARIO #1: JUST CHANGE THAT ONE LITTLE LINE Did that work?
  • 32. SCENARIO #1: JUST CHANGE THAT ONE LITTLE LINE No
  • 33. SCENARIO #1: JUST CHANGE THAT ONE LITTLE LINE . . . Ah, let’s try something else and do that again!
  • 34. SCENARIO #2: JUST CHANGE THAT ONE LITTLE LINE (this time with Pig)
  • 35. SCENARIO #2: JUST CHANGE THAT ONE LITTLE LINE (this time with Pig) IT guy here. Your MapReduce job is blowing up the cluster, how do I fix this thing?
  • 36. SCENARIO #2: JUST CHANGE THAT ONE LITTLE LINE (this time with Pig) Ah, that’s pretty easy to fix. Just comment out that line that says “FILTER blah blah” and save the file.
  • 37. SCENARIO #2: JUST CHANGE THAT ONE LITTLE LINE (this time with Pig) Ok, thanks!
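    A sketch of what that one-line fix looks like in a Pig script (the script and fields are made up): comment out the offending line in a text editor, save, and rerun; there is nothing to recompile or redeploy.

        events  = LOAD 'events' AS (user:chararray, bytes:long);
        -- the line that was blowing up the cluster, now commented out:
        -- events = FILTER events BY bytes > 1000000L;
        grouped = GROUP events BY user;
        totals  = FOREACH grouped GENERATE group AS user, SUM(events.bytes) AS total_bytes;
        STORE totals INTO 'totals_by_user';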
  • 38. Pig: Deployment & Maintainability • Don’t have to worry about version mismatch (for the most part) • You can have multiple Pig client libraries installed at once • Takes compilation out of the build and deployment process • Can make changes to scripts in place if you have to • Iteratively tweak scripts during development and debugging • Fewer chances for the developer to write Java-level bugs
  • 39. Some Caveats • Hadoop Streaming provides some of these same benefits • Big problems in both are still going to take time • If you are using Java UDFs, you still need to compile them (which is why I use Python)
  • 40. Unstructured Data • Delimited data is pretty easy • Things Pig has issues dealing with out of the box: – Media: images, videos, audio – Time series: utilizing the order of data, lists – Ambiguously delimited text – Log data: rows with different context/meaning/format • You can write custom loaders and tons of UDFs… but what’s the point?
  • 41. What about semi-structured data? • Some forms are more natural than others – Well-defined JSON/XML schemas are usually OK • Pig has trouble dealing with: – Complex operations on unbounded lists of objects (e.g., bags) – Very flexible schemas (think BigTable/HBase) – Poorly designed JSON/XML • Sometimes it’s just more pain than it’s worth to try to do it in Pig
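    For the easy case, a well-defined JSON schema, loading is a one-liner; a sketch assuming Pig’s built-in JsonLoader and a made-up schema:

        -- each input line is a JSON object like {"name": "...", "age": ..., "tags": [...]}
        people = LOAD 'people.json'
                 USING JsonLoader('name:chararray, age:int, tags:{(tag:chararray)}');
        adults = FILTER people BY age >= 18;
        STORE adults INTO 'adults';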
  • 42. Pig vs. Hive vs. MapReduce • Same arguments apply for Hive vs. Java MR • Using Pig or Hive doesn’t make that big of a difference … but pick one because UDFs/Storage functions aren’t easily interchangeable • I think you’ll like Pig better than Hive (just like everyone likes emacs more than vi)
  • 43. WRAP UP: AN ANALOGY (#1) Pig is a scripting language; Hadoop’s MapReduce is a compiled language. Pig : MapReduce :: Python : C
  • 44. WRAP UP: AN ANALOGY (#2) Pig is a higher level of abstraction; Hadoop’s MapReduce is a lower level of abstraction. Pig : MapReduce :: SQL : C
  • 45. A lot of the same arguments apply! • Compilation – Don’t have to compile Pig • Efficiency of code – Pig will be a bit less efficient (but…) • Lines of code and verbosity – Pig will have fewer lines of code • Optimization – Pig has more opportunities to do automatic optimization of queries • Code portability – The same Pig script will work across versions (for the most part) • Code readability – It should be easier to understand a Pig script • Underlying bugs – Underlying bugs in Pig can cause frustrating problems (thanks be to God for open source) • Amount of control and space of possibilities – There are fewer things you CAN do in Pig