Innovations in Apache Hadoop MapReduce, Pig and Hive for Improving Query Performance
 


Apache Hadoop and its ecosystem projects Hive and Pig support interactions with data sets of enormous sizes. Hadoop always excelled at large-scale data processing; however, running smaller queries has been problematic due to the batch-oriented nature of the system. This talk will cover the enhancements we have made to YARN, MapReduce, Pig and Hive. We will also walk through the future enhancements we have planned.

  • Since the time we started this, we’ve seen multiple people benchmark Hive, comparing its text-format processors against alternatives
  • Not MapReduce, not HDFS, just plain Hive
  • Layers of inspectors that identify column type, de-serialize data and determine appropriate expression routines in the inner loop
  • I wrote all of the code and Jitendra was just consulting :P

Innovations in Apache Hadoop MapReduce, Pig and Hive for Improving Query Performance: Presentation Transcript

  • Innovations in Apache Hadoop MapReduce, Pig and Hive for improving query performance. gopalv@apache.org, vinodkv@apache.org. © Hortonworks Inc. 2013
  • Operation Stinger
  • Performance at any cost
  • Performance at any cost
    – Scalability: already works great, just don’t break it for performance gains
    – Isolation + security: queries from different users run as those different users
    – Fault tolerance: keep all of MapReduce’s safety nets to work around bad nodes in clusters
    – UDFs: make sure they are “user” defined and not “admin” defined
  • First things first
    – How far can we push Hive as it exists today?
  • Benchmark spec
    – The TPC-DS benchmark data + query set
    – Query 27 (big joins small): for all items sold in stores located in specified states during a given year, find the average quantity, average list price, average list sales price, and average coupon amount for a given gender, marital status, education and customer demographic.
    – Query 82 (big joins big): list all items and current prices sold through the store channel from certain manufacturers in a given price range that consistently had a quantity between 100 and 500 on hand in a 60-day period.
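The shape of Query 27 (a large fact table hash-joined against a small dimension table, then grouped averages) can be sketched in a few lines. The tables, keys and column names below are hypothetical stand-ins for the TPC-DS schema, not the real benchmark data:

```python
# Miniature of TPC-DS Query 27's shape: join a big fact table
# (store_sales) to a small dimension (customer_demographics), then
# compute grouped averages. All names and values are illustrative.
from collections import defaultdict

store_sales = [  # (demo_key, quantity, list_price)
    (1, 10, 5.0), (1, 30, 7.0), (2, 20, 4.0),
]
customer_demographics = {1: ("F", "M"), 2: ("M", "S")}  # key -> (gender, marital)

groups = defaultdict(list)
for demo_key, qty, price in store_sales:   # "big joins small": hash-join probe
    groups[customer_demographics[demo_key]].append((qty, price))

averages = {
    dims: (sum(q for q, _ in rows) / len(rows),   # average quantity
           sum(p for _, p in rows) / len(rows))   # average list price
    for dims, rows in groups.items()
}
```

In Hive this becomes a map-side join plus a group-by aggregation, which is why the join strategy and job plan matter so much for the runtime.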
  • TL;DR - II
    – TPC-DS Query 82, Scale=200, 10 EC2 nodes (40 disks); runtimes in seconds:
    – Text: 3257.692
    – RCFile: 2862.669
    – Partitioned RCFile: 255.641
    – Partitioned RCFile + Optimizations: 71.114
  • Forget the actual benchmark
    – First of all, YMMV: software, hardware, setup, tuning
    – Text formats seem to be the staple of all comparisons. Really?
    – Everybody’s using it, but only for benchmarks!
  • What did the trick?
    – MapReduce?
    – HDFS?
    – Or is it just Hive?
  • Optional Advice
  • RCFile
    – Binary RCFiles
    – Hive pushes down column projections
    – Less I/O, less CPU
    – Smaller files
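Why projection pushdown helps can be shown with a toy columnar layout. This is a conceptual sketch, not RCFile's actual on-disk structure; the table and column names are invented:

```python
# Sketch of column projection: in a columnar layout (as inside an RCFile
# row group) each column is stored contiguously, so a query that touches
# one column only reads that column's data. Layout here is illustrative.
rows = [(1, "alice", 34.5), (2, "bob", 12.0), (3, "carol", 99.9)]

# Row-oriented storage: every query deserializes whole rows.
row_store = rows

# Column-oriented storage: one array per column.
col_store = {
    "id":    [r[0] for r in rows],
    "name":  [r[1] for r in rows],
    "price": [r[2] for r in rows],
}

# SELECT price FROM t -- reads a single column vector, skipping the rest.
projected = col_store["price"]
```

The row store must touch every field of every row for the same query, which is the extra I/O and CPU the slide refers to.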
  • Data organization
    – No data system at scale is loaded once & left alone
    – Partitions are essential
    – Data flows into new partitions every day
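The payoff of daily partitions is partition pruning: a filter on the partition key eliminates whole directories before any file is opened. A minimal sketch, with hypothetical paths and a date-style key:

```python
# Sketch of partition pruning: with one directory per day, a predicate
# on the partition key selects whole directories without scanning file
# contents. Partition names and file names are hypothetical.
partitions = {
    "dt=2013-01-01": ["a.rc", "b.rc"],
    "dt=2013-01-02": ["c.rc"],
    "dt=2013-01-03": ["d.rc", "e.rc"],
}

def prune(partitions, predicate):
    """Return only the files in partitions whose key passes the predicate."""
    return [f for key, files in partitions.items() if predicate(key)
            for f in files]

# WHERE dt >= '2013-01-02' -- two of three partitions survive,
# and no data inside the pruned partition is ever read.
selected = prune(partitions, lambda key: key >= "dt=2013-01-02")
```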
  • A closer look
    – Now revisiting the benchmark and its results
  • Query 27 - Before (stage runtimes, seconds): Stage-3: 16, Stage-2: 17, Stage-1: 49, Stage-6: 355, Stage-5: 512, Stage-4: 553
  • Before
  • Query 27 - After (stage runtimes, seconds): Stage-9: 33, Stage-10: 5
  • After
  • Query 82 - Before (stage runtimes, seconds): Stage-3: 17, Stage-2: 17, Stage-1: 2199, Stage-4: 1025
  • Query 82 - After (stage runtimes, seconds): Stage-1: 71
  • What changed?
    – Job count / correct plan
    – Correct data formats
    – Correct data organization
    – Correct configuration
  • Is that all? NO!
    – In Hive: metastore, RCFile issues, CPU-intensive code
    – In YARN+MR: parallelism, spin-up times, data locality
    – In HDFS: bad disks / deteriorating nodes
  • Hive Metastore
    – 1+N select problem: SELECT partitions FROM tables; then, for each needed partition, SELECT * FROM Partition ..
    – For query 27, this generates > 5000 queries, with 4-5 seconds lost on each call!
    – Lazy loading or include/join are general solutions
    – Datanucleus/ORM issues: 100K NPEs in try.. catch.. ignore..
    – Metastore DB schema revisit: denormalize some/all of it?
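The 1+N pattern can be made concrete by counting round trips. The dict below stands in for the metastore's backing RDBMS; the function names are invented for the sketch:

```python
# Sketch of the 1+N select problem: one query lists the partitions,
# then one query per partition fetches its details. Batching the second
# step into a single IN-style lookup removes the N extra round trips.
partition_rows = {name: {"location": "/warehouse/t/" + name}
                  for name in ("p1", "p2", "p3")}
round_trips = 0

def list_partitions():
    """SELECT partitions FROM tables -- one round trip."""
    global round_trips
    round_trips += 1
    return list(partition_rows)

def fetch(names):
    """SELECT * FROM Partition WHERE name IN (...) -- one round trip."""
    global round_trips
    round_trips += 1
    return {n: partition_rows[n] for n in names}

# 1+N: one fetch per partition -- cost grows with the partition count.
round_trips = 0
for name in list_partitions():
    fetch([name])
naive_trips = round_trips

# Batched: a single IN-style fetch -- constant two round trips.
round_trips = 0
fetch(list_partitions())
batched_trips = round_trips
```

With thousands of partitions (as in query 27) the difference between linear and constant round trips dominates the planning time.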
  • RCFile issues
    – RCFiles do not split well: row groups and row group boundaries
    – Small row groups vs big row groups: sync() vs min split, storage packing
    – Run-length information is lost: unnecessary deserialization costs
  • ORC file format
    – A single file as output of each task: dramatically simplifies integration with Hive, lowers pressure on the NameNode
    – Support for the Hive type model: complex types (struct, list, map, union), new types (datetime, decimal), encoding specific to the column type
    – Split files without scanning for markers
    – Bound the amount of memory required for reading or writing
  • CPU intensive code
  • CPU intensive code
    – Hive query engine processes one row at a time: very inefficient in terms of CPU usage
    – Lazy deserialization: layers
    – Object inspector calls
    – Lots of virtual method calls
  • Tighten your loops
  • Vectorization to the rescue
    – Process a row batch at a time instead of a single row
    – Row batch to consist of column vectors; the column vector will consist of array(s) of primitive types as far as possible
    – Each operator will process the whole column vector at a time
    – File formats to give out vectorized batches for processing
    – Underlying research promises better instruction pipelines and cache usage: mechanical sympathy
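The row-at-a-time vs vectorized contrast can be sketched in a few lines. This is a conceptual illustration, not Hive's actual operator code; the per-row function stands in for the object-inspector and virtual-call overhead:

```python
# Sketch of row-at-a-time vs vectorized evaluation of "col_a + col_b".
# The row path dispatches a call per row; the vectorized path runs one
# tight loop over primitive arrays, which is what enables better
# instruction pipelining and cache usage.
rows = [(1, 10), (2, 20), (3, 30)]

# Row-at-a-time: one dispatch per row (stand-in for inspector layers).
def add_row(row):
    return row[0] + row[1]

row_result = [add_row(r) for r in rows]

# Vectorized: columns are primitive arrays, one pass over the batch.
col_a = [r[0] for r in rows]
col_b = [r[1] for r in rows]

def add_vector(a, b, out):
    for i in range(len(a)):      # single tight loop, no per-row dispatch
        out[i] = a[i] + b[i]

vec_result = [0] * len(col_a)
add_vector(col_a, col_b, vec_result)
```

Both paths produce the same answers; the win is purely in how the CPU spends its cycles on large batches.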
  • Vectorization: prelim results
    – Functionality: some arithmetic operators and filters using primitive type columns; a basic integration benchmark proves that the whole setup works
    – Performance: more than 30x improvement in CPU time on a micro benchmark
    – Disclaimer: micro benchmark! Does not include IO or deserialization costs, or complex and string datatypes
  • Data locality
    – CombineInputFormat
    – AM interaction with locality
    – Short-circuit reads!
    – Delay scheduling: good for throughput, bad for latency
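The delay-scheduling trade-off is that the scheduler skips a bounded number of non-local container offers hoping a data-local slot frees up; a large bound improves locality (throughput), a small one launches sooner (latency). A simplified sketch, not the actual YARN scheduler logic:

```python
# Sketch of delay scheduling: skip up to max_skips non-local offers
# waiting for a data-local node, then give up and launch off-node.
def schedule(offers, local_nodes, max_skips):
    """Pick the first data-local offer, falling back after max_skips."""
    skips = 0
    for node in offers:
        if node in local_nodes:
            return node              # data-local launch
        skips += 1
        if skips > max_skips:
            return node              # give up: launch on a remote node
    return None

# Data for the task lives on n1; offers arrive from other nodes first.
offers = ["n4", "n5", "n1", "n6"]
picked_patient = schedule(offers, local_nodes={"n1"}, max_skips=3)
picked_eager = schedule(offers, local_nodes={"n1"}, max_skips=1)
```

The patient scheduler waits out two non-local offers and gets the local node; the eager one launches remotely, which is exactly the "good for throughput, bad for latency" tension.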
  • Parallelism
    – Can tune it (to some extent) by controlling splits / reducer count
    – Hive doesn’t know dynamic cluster status: benchmarks max out clusters, real jobs may or may not
    – Hive does not let you control parallelism, particularly in case of multiple jobs in a query
  • Spin-up times
    – AM startup costs
    – Task startup costs
    – Multiple waves of map tasks
  • Apache Tez
    – Generic DAG workflow
    – Container re-use
    – AM pool service
  • AM Pool Service
    – Pre-launches a pool of AMs; jobs submitted to these pre-launched AMs save 3-5 seconds
    – Pre-launched AMs can pre-allocate containers
    – Tasks can be started as soon as the job is submitted, saving 2-3 seconds
  • Container reuse
    – The Tez MapReduce AM supports container reuse
    – Launched JVMs are re-used between tasks: about 4-5 seconds saved in case of multiple waves
    – Allows future enhancements, such as re-using task data structures across splits
  • Speculation / bad disks
    – No cluster remains at 100% forever
    – Bad disks cause latency issues; speculation is one defense, but it is not enough; fault tolerance is a safety net
    – Possible solutions: more feedback from HDFS about stale nodes and bad/slow disks; volume scheduling
  • General guidelines
    – Benchmarking: be wary of benchmarks! Including ours!
    – Algebra with X
  • General guidelines contd.
    – Benchmarks: to repeat, YMMV. Benchmark *your* use-case.
    – Decide your problem size: if (smallData) { MySQL / Postgres / your smart phone } else { make it work, make it scale, make it faster }
    – If it is (or seems to be) slow, file a bug and spend a little time on it!
    – Replacing systems without understanding them is an easy way to have an illusion of progress
  • Related talks
    – “Optimizing Hive Queries” by Owen O’Malley
    – “What’s New and What’s Next in Apache Hive” by Gunther Hagleitner
  • Credits
    – Arun C Murthy, Bikas Saha, Gopal Vijayaraghavan, Hitesh Shah, Siddharth Seth, Vinod Kumar Vavilapalli
    – Alan Gates, Ashutosh Chauhan, Vikram Dixit, Gunther Hagleitner, Owen O’Malley, Jitendra Nath Pandey
    – Yahoo!, Facebook, Twitter, SAP and Microsoft are all contributing
  • Q&A
    – Thanks!