Hadoop Overview kdd2011
 

In KDD2011, Vijay Narayanan (Yahoo!) and Milind Bhandarkar (Greenplum Labs, EMC) conducted a tutorial on "Modeling with Hadoop". This is the first half of the tutorial.

    Presentation Transcript

    • Modeling with Hadoop. Vijay K. Narayanan, Principal Scientist, Yahoo! Labs, Yahoo!; Milind Bhandarkar, Chief Architect, Greenplum Labs, EMC²
    • Session 1: Overview of Hadoop •  Motivation •  Hadoop •  Map-Reduce •  Distributed File System •  Next Generation MapReduce •  Q & A 2
    • Session 2: Modeling with Hadoop •  Types of learning in MapReduce •  Algorithms in MapReduce framework •  Data parallel algorithms •  Sequential algorithms •  Challenges and Enhancements 3
    • Session 3: Hands On Exercise •  Spin-up Single Node Hadoop cluster in a Virtual Machine •  Write a regression trainer •  Train model on a dataset 4
    • Overview of Apache Hadoop
    • Hadoop At Yahoo! (Some Statistics) •  40,000+ machines in 20+ clusters •  Largest cluster is 4,000 machines •  170 Petabytes of storage •  1,000+ users •  1,000,000+ jobs/month 6
    • BEHIND EVERY CLICK
    • Who Uses Hadoop ?
    • Why Hadoop ? 10
    • Big Datasets (Data-Rich Computing theme proposal, J. Campbell et al., 2007)
    • Cost Per Gigabyte (http://www.mkomo.com/cost-per-gigabyte)
    • Storage Trends (Graph by Adam Leventhal, ACM Queue, Dec 2009)
    • Motivating Examples 14
    • Yahoo! Search Assist
    • Search Assist •  Insight: Related concepts appear close together in text corpus •  Input: Web pages •  1 Billion Pages, 10K bytes each •  10 TB of input data •  Output: List(word, List(related words)) 16
    • Search Assist 17
      // Input: List(URL, Text)
      foreach URL in Input :
        Words = Tokenize(Text(URL));
        foreach word in Words :
          Insert (word, Next(word, Words)) in Pairs;
          Insert (word, Previous(word, Words)) in Pairs;
      // Result: Pairs = List(word, RelatedWord)
      Group Pairs by word;
      // Result: GroupedPairs = List(word, List(RelatedWords))
      foreach word in GroupedPairs :
        Count RelatedWords in GroupedPairs;
      // Result: CountedPairs = List(word, List(RelatedWords, count))
      foreach word in CountedPairs :
        Sort Pairs(word, *) descending by count;
        choose Top 5 Pairs;
      // Result: List(word, Top5(RelatedWords))
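    The pseudocode above translates almost line for line into runnable Python. A minimal single-machine sketch (the whitespace tokenizer and the pages dict are illustrative assumptions, not the tutorial's code):

      from collections import Counter, defaultdict

      def related_words(pages, top_n=5):
          # pages: dict mapping URL -> page text
          pairs = defaultdict(Counter)
          for text in pages.values():
              words = text.split()                     # Tokenize (naive)
              for i, word in enumerate(words):
                  if i + 1 < len(words):
                      pairs[word][words[i + 1]] += 1   # Next(word)
                  if i > 0:
                      pairs[word][words[i - 1]] += 1   # Previous(word)
          # For each word, keep the Top N related words by count
          return {w: [r for r, _ in c.most_common(top_n)]
                  for w, c in pairs.items()}

      print(related_words({"u1": "apache hadoop mapreduce hadoop streaming"}))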
    • People You May Know
    • People You May Know •  Insight: You might also know Joe Smith if a lot of folks you know, know Joe Smith •  if you don't know Joe Smith already •  Numbers: •  100 MM users •  Average connections per user is 100 19
    • People You May Know 20
      // Input: List(UserName, List(Connections))
      foreach u in UserList :              // 100 MM
        foreach x in Connections(u) :      // 100
          foreach y in Connections(x) :    // 100
            if (y not in Connections(u)) :
              Count(u, y)++;               // 1 Trillion Iterations
      Sort (u, y) in descending order of Count(u, y);
      Choose Top 3 y;
      Store (u, {y0, y1, y2}) for serving;
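    The same loop in runnable form, a sketch assuming connections is a dict mapping each user to the set of users they know (the y != u guard is an addition so a user is never recommended to themselves):

      from collections import Counter

      def people_you_may_know(connections, top_n=3):
          # connections: dict mapping user -> set of directly connected users
          suggestions = {}
          for u, friends in connections.items():
              count = Counter()
              for x in friends:                        # each of u's connections
                  for y in connections.get(x, ()):     # each friend-of-friend
                      if y != u and y not in friends:
                          count[y] += 1
              suggestions[u] = [y for y, _ in count.most_common(top_n)]
          return suggestions

      print(people_you_may_know({"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}))
      # {'a': ['c'], 'b': [], 'c': ['a']}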
    • Performance •  101 Random accesses for each user •  Assume 1 ms per random access •  100 ms per user •  100 MM users •  100 days on a single machine 21
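    A quick sanity check of that estimate (the slide rounds down; the exact figure is closer to 117 days):

      users = 100_000_000          # 100 MM users
      accesses_per_user = 101      # the user's own list + 100 connection lists
      seconds_per_access = 0.001   # 1 ms per random access

      days = users * accesses_per_user * seconds_per_access / 86_400
      print(days)                  # ~116.9 days on a single machine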
    • MapReduce Paradigm 22
    • Map & Reduce •  Primitives in Lisp (& other functional languages) 1970s •  Google Paper 2004 •  http://labs.google.com/papers/mapreduce.html 23
    • Map Output_List = Map (Input_List) Square (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) = (1, 4, 9, 16, 25, 36, 49, 64, 81, 100) 24
    • Reduce Output_Element = Reduce (Input_List) Sum (1, 4, 9, 16, 25, 36, 49, 64, 81, 100) = 385 25
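    Both primitives map directly onto built-ins in most languages; the Square and Sum examples above, in Python:

      from functools import reduce

      numbers = list(range(1, 11))

      # Map: apply a function independently to every element
      squares = list(map(lambda x: x * x, numbers))
      print(squares)   # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]

      # Reduce: fold a whole list down to a single element
      print(reduce(lambda a, b: a + b, squares))   # 385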
    • Parallelism •  Map is inherently parallel •  Each list element processed independently •  Reduce is inherently sequential •  Unless processing multiple lists •  Grouping to produce multiple lists 26
    • Search Assist Map 27
      // Input: http://hadoop.apache.org
      Pairs = Tokenize_And_Pair ( Text ( Input ) )
      Output = { (apache, hadoop) (hadoop, mapreduce) (hadoop, streaming)
                 (hadoop, pig) (apache, pig) (hadoop, DFS)
                 (streaming, commandline) (hadoop, java) (DFS, namenode)
                 (datanode, block) (replication, default) ... }
    • Search Assist Reduce 28
      // Input: GroupedList (word, GroupedList(words))
      CountedPairs = CountOccurrences (word, RelatedWords)
      Output = { (hadoop, apache, 7) (hadoop, DFS, 3)
                 (hadoop, streaming, 4) (hadoop, mapreduce, 9) ... }
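    A hedged sketch of how these two steps could look as Hadoop Streaming scripts in Python (the file names mapper.py/reducer.py and the whitespace tokenizer are illustrative assumptions, not the deck's actual code):

      # mapper.py -- emit (word, neighbor) pairs, tab-separated
      import sys
      for line in sys.stdin:
          tokens = line.split()
          for i, word in enumerate(tokens):
              if i + 1 < len(tokens):
                  print(word + "\t" + tokens[i + 1])   # Next(word)
              if i > 0:
                  print(word + "\t" + tokens[i - 1])   # Previous(word)

      # reducer.py -- keys arrive sorted, so one word's pairs are adjacent
      import sys
      from itertools import groupby
      pairs = (line.rstrip("\n").split("\t") for line in sys.stdin)
      for word, group in groupby(pairs, key=lambda kv: kv[0]):
          counts = {}
          for _, related in group:
              counts[related] = counts.get(related, 0) + 1
          top5 = sorted(counts.items(), key=lambda kv: -kv[1])[:5]
          print(word + "\t" + str(top5))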
    • Issues with Large Data •  Map Parallelism: Chunking input data •  Reduce Parallelism: Grouping related data •  Dealing with failures & load imbalance 29
    • Apache Hadoop •  January 2006: Subproject of Lucene •  January 2008: Top-level Apache project •  Stable Version: 0.20.203 •  Latest Version: 0.22 (Coming soon) 31
    • Apache Hadoop •  Reliable, Performant Distributed file system •  MapReduce Programming framework •  Ecosystem: HBase, Hive, Pig, Howl, Oozie, Zookeeper, Chukwa, Mahout, Cascading, Scribe, Cassandra, Hypertable, Voldemort, Azkaban, Sqoop, Flume, Avro ... 32
    • Problem: Bandwidth to Data •  Scan 100TB Datasets on 1000 node cluster •  Remote storage @ 10MB/s = 165 mins •  Local storage @ 50-200MB/s = 33-8 mins •  Moving computation is more efficient than moving data •  Need visibility into data placement 33
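    The arithmetic behind those numbers (a sketch; 1 TB taken as 10^6 MB):

      per_node_mb = 100e6 / 1000        # 100 TB across 1,000 nodes = 100 GB each
      for rate_mb_s in (10, 50, 200):   # remote, slow local, fast local
          print(rate_mb_s, per_node_mb / rate_mb_s / 60, "minutes")
      # 10 MB/s -> ~167 min, 50 MB/s -> ~33 min, 200 MB/s -> ~8 min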
    • Problem: Scaling Reliably •  Failure is not an option, it's a rule! •  1000 nodes, MTBF < 1 day •  4000 disks, 8000 cores, 25 switches, 1000 NICs, 2000 DIMMS (16TB RAM) •  Need fault tolerant store with reasonable availability guarantees •  Handle hardware faults transparently 34
    • Hadoop Goals •  Scalable: Petabytes (10^15 bytes) of data on thousands of nodes •  Economical: Commodity components only •  Reliable •  Engineering reliability into every application is expensive 35
    • Hadoop MapReduce 36
    • Think MapReduce •  Record = (Key, Value) •  Key : Comparable, Serializable •  Value: Serializable •  Input, Map, Shuffle, Reduce, Output 37
    • Seems Familiar ? cat /var/log/auth.log* | grep 'session opened' | cut -d' ' -f10 | sort | uniq -c > ~/userlist 38
    • Map •  Input: (Key1, Value1) •  Output: List(Key2, Value2) •  Projections, Filtering, Transformation 39
    • Shuffle •  Input: List(Key2, Value2) •  Output •  Sort(Partition(List(Key2, List(Value2)))) •  Provided by Hadoop 40
    • Reduce •  Input: List(Key2, List(Value2)) •  Output: List(Key3, Value3) •  Aggregation 41
    • Hadoop Streaming •  Hadoop is written in Java •  Java MapReduce code is native •  What about Non-Java Programmers ? •  Perl, Python, Shell, R •  grep, sed, awk, uniq as Mappers/Reducers •  Text Input and Output 42
    • Hadoop Streaming •  Thin Java wrapper for Map & Reduce Tasks •  Forks actual Mapper & Reducer •  IPC via stdin, stdout, stderr •  Key.toString() \t Value.toString() \n •  Slower than Java programs •  Allows for quick prototyping / debugging 43
    • Hadoop Streaming 44
      $ bin/hadoop jar hadoop-streaming.jar -input in-files -output out-dir \
          -mapper mapper.sh -reducer reducer.sh
      # mapper.sh
      sed -e 's/ /\n/g' | grep .
      # reducer.sh
      uniq -c | awk '{print $2 "\t" $1}'
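    The same pair of scripts could be written in Python rather than shell (a sketch, not from the deck; mapper.py and reducer.py are hypothetical names):

      # mapper.py: one word per line (same effect as the sed above)
      import sys
      for line in sys.stdin:
          for word in line.split():
              print(word)

      # reducer.py: identical words arrive adjacent after the shuffle's sort
      import sys
      from itertools import groupby
      for word, run in groupby(line.rstrip("\n") for line in sys.stdin):
          print(word + "\t" + str(sum(1 for _ in run)))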
    • Hadoop Distributed File System (HDFS) 45
    • HDFS •  Data is organized into files and directories •  Files are divided into uniformly sized blocks (default 128MB) and distributed across cluster nodes •  HDFS exposes block placement so that computation can be migrated to data 46
    • HDFS •  Blocks are replicated (default 3) to handle hardware failure •  Replication for performance and fault tolerance (Rack-Aware placement) •  HDFS keeps checksums of data for corruption detection and recovery 47
    • HDFS •  Master-Worker Architecture •  Single NameNode •  Many (Thousands) DataNodes 48
    • HDFS Master (NameNode) •  Manages filesystem namespace •  File metadata (i.e., inode) •  Mapping inode to list of blocks + locations •  Authorization & Authentication •  Checkpoint & journal namespace changes 49
    • Namenode •  Mapping of datanode to list of blocks •  Monitor datanode health •  Replicate missing blocks •  Keeps ALL namespace in memory •  60M objects (File/Block) in 16GB 50
    • Datanodes •  Handle block storage on multiple volumes & block integrity •  Clients access the blocks directly from data nodes •  Periodically send heartbeats and block reports to Namenode •  Blocks are stored as underlying OS's files 51
    • HDFS Architecture
    • Next Generation MapReduce 53
    • MapReduce Today (Courtesy: Arun Murthy, Hortonworks)
    • Why ? •  Scalability Limitations today •  Maximum cluster size: 4000 nodes •  Maximum Concurrent tasks: 40,000 •  Job Tracker SPOF •  Fixed map and reduce containers (slots) •  Punishes pleasantly parallel apps 55
    • Why ? (contd) •  MapReduce is not suitable for every application •  Fine-Grained Iterative applications •  HaLoop: Hadoop in a Loop •  Message passing applications •  Graph Processing 56
    • Requirements •  Need scalable cluster resources manager •  Separate scheduling from resource management •  Multi-Lingual Communication Protocols 57
    • Bottom Line •  @techmilind: #mrng (MapReduce, Next Gen) is, in reality, #rmng (Resource Manager, Next Gen) •  Expect different programming paradigms to be implemented •  Including MPI (soon) 58
    • Architecture (Courtesy: Arun Murthy, Hortonworks)
    • The New World •  Resource Manager •  Allocates resources (containers) to applications •  Node Manager •  Manages containers on nodes •  Application Master •  Specific to the paradigm, e.g. MapReduce application master, MPI application master, etc. 60
    • Container •  In current terminology: a Task Slot •  Slice of the node's hardware resources •  # of cores, virtual memory, disk size, disk and network bandwidth, etc. •  Currently, only memory usage is sliced 61
    • Questions ? 62