Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)
Transcript

  • 1. Data-Intensive Computing for Text Analysis CS395T / INF385T / LIN386M University of Texas at Austin, Fall 2011. Lecture 7, October 6, 2011. Jason Baldridge, Department of Linguistics, University of Texas at Austin (jasonbaldridge at gmail dot com); Matt Lease, School of Information, University of Texas at Austin (ml at ischool dot utexas dot edu)
  • 2. Acknowledgments Course design and slides based on Jimmy Lin’s cloud computing courses at the University of Maryland, College Park. Some figures courtesy of the following excellent Hadoop books (order yours today!) • Chuck Lam’s Hadoop In Action (2010) • Tom White’s Hadoop: The Definitive Guide, 2nd Edition (2010)
  • 3. Today’s Agenda • Hadoop Counters • Graph Processing in MapReduce – Representing/Encoding Graphs • Adjacency matrices vs. lists – Example: Single Source Shortest Path – Example: PageRank • Themes – No shared memory → redundant computation • More computational capability overcomes less efficiency – Iterate MapReduce computations until convergence – Use non-MapReduce driver for over-arching control • Not just for pre- and post-processing • Opportunity for global synchronization between iterations • In-class exercise
  • 4. Hadoop Counters
  • 5. (figure-only slide; references: Lam p. 98, White pp. 226-227)
  • 6. White p. 172 Hadoop Counters & Global State • Hadoop’s Counters provide its only means for sharing/modifying global distributed state – Built-in safeguards for distributed modification • e.g. two tasks try to increment a counter simultaneously – Lightweight: only a long (8 bytes) per counter – Limited control • create, read, and increment • no destroy, arbitrary set, or decrement • Advertised use: progress tracking and logging • To what extent might we “abuse” counters for tracking/updating interesting shared state?
  • 7. How high (and precisely) can you count?• How precise? • Integer representation • To approximate fractional values, scale and truncate (Lin & Dyer p. 99)• How high? – “8-byte integers” (Lin & Dyer p. 99 ): really only one byte? – Old API: org.apache.hadoop.mapred.Counters • long getCounter(…), incrCounter(…, long amount) – New API: org.apache.hadoop.mapreduce.Counter • long getValue(), increment(long incr)• How many? – Old API: static int MAX_COUNTER_LIMIT (next slide…) – New API: ???? (int countCounters() ) 7
  • 8. (figure-only slide; content not captured in transcript)
  • 9. (figure-only slide; content not captured in transcript)
  • 10. White p. 173, 227-231 • incrCounter(…) • getCounters(…) • getCounter(…) • findCounter(…) • http://developer.yahoo.com/hadoop/tutorial/module5.html#metrics
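    To make the counter API on this slide concrete, here is a minimal sketch (not from the lecture) of the new-API usage pattern: a mapper increments an enum-backed counter for malformed records, and the driver reads the aggregated value after the job completes. The enum name and record format are illustrative assumptions.

        import java.io.IOException;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;

        public class CountingMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

          // Counters are grouped by the enum class name; each counter value is a single long.
          public enum RecordCounters { WELL_FORMED, MALFORMED }

          @Override
          protected void map(LongWritable key, Text value, Context context)
              throws IOException, InterruptedException {
            String[] fields = value.toString().split("\t");
            if (fields.length < 2) {                      // hypothetical notion of "malformed"
              context.getCounter(RecordCounters.MALFORMED).increment(1);
              return;
            }
            context.getCounter(RecordCounters.WELL_FORMED).increment(1);
            context.write(new Text(fields[0]), new LongWritable(1));
          }
        }

        // In the driver, after job.waitForCompletion(true), the aggregated value is definitive:
        //   long bad = job.getCounters()
        //                 .findCounter(CountingMapper.RecordCounters.MALFORMED).getValue();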
  • 11. White p. 172 Counters and Global State Counter values are definitive only once a job has successfully completed - White p. 227 What about while a job is running? • If a task reports progress, it sets a flag to indicate that a status change should be sent to its TaskTracker – The flag is checked in a separate thread every 3s, and if set, the TaskTracker is notified – What about counter updates? • The TaskTracker sends heartbeats to the JobTracker (at least every 5s) which include the status of all tasks being run by the TaskTracker... – Counters (which can be relatively larger) are sent less frequently • JobClient receives the latest status by polling the JobTracker every 1s • Clients can call JobClient’s getJob() to obtain a RunningJob instance with the latest status information (at time of the call?)
  • 12. Representing Graphs
  • 13. What’s a graph? Graphs are ubiquitous  The Web (pages and hyperlink structure)  Computer networks (computers and connections)  Highways and railroads (cities and roads/tracks)  Social networks G = (V,E), where  V: the set of vertices (nodes)  E: the set of edges (links)  Either/Both may contain additional information • e.g. edge weights (e.g. cost, time, distance) • e.g. node values (e.g. PageRank) Graph types  Directed vs. undirected  Cyclic vs. acyclic
  • 14. Some Graph Problems Finding shortest paths  Routing Internet traffic and UPS trucks Finding minimum spanning trees  Telco laying down fiber Finding Max Flow  Airline scheduling Identify “special” nodes and communities  Breaking up terrorist cells, spread of avian flu Bipartite matching  Monster.com, Match.com And of course... PageRank
  • 15. Graphs and MapReduce MapReduce graph processing typically involves  Performing computations at each node • e.g. using node features, edge features, and local link structure  Propagating computations • “traversing” the graph Key questions  How do you represent graph data in MapReduce?  How do you traverse a graph in MapReduce?
  • 16. Graph Representation How do we encode graph structure suitably for  computation  propagation Two common approaches  Adjacency matrix  Adjacency list (illustrated with a small four-node example graph)
  • 17. Adjacency Matrices Represent a graph as an |V| x |V| square matrix M  Mjk = w → directed edge of weight w from node j to node k • w = 0 → no edge exists • Mii: main diagonal gives self-loop weights from node i to itself  If undirected, use only top-right of matrix (symmetry). Example matrix for the four-node graph from the previous slide:
           1 2 3 4
        1  0 1 0 1
        2  1 0 1 1
        3  1 0 0 0
        4  1 0 1 0
  • 18. Adjacency Matrices: Critique Advantages:  Amenable to mathematical manipulation  Easy iteration for computation over out-links and in-links • row Mj* gives all out-links from node j • column M*k gives all in-links to node k Disadvantages  Sparsity: wasted computations, wasted space
  • 19. Adjacency Lists Take adjacency matrices… and throw away all the zeros. Hmm… look familiar…?
           1 2 3 4
        1  0 1 0 1        1: 2, 4
        2  1 0 1 1        2: 1, 3, 4
        3  1 0 0 0        3: 1
        4  1 0 1 0        4: 1, 3
  • 20. Inverted Index: Boolean Retrieval Doc 1: one fish, two fish  Doc 2: red fish, blue fish  Doc 3: cat in the hat  Doc 4: green eggs and ham  Postings: blue → 2; cat → 3; egg → 4; fish → 1, 2; green → 4; ham → 4; hat → 3; one → 1; red → 2; two → 1
  • 21. Adjacency Lists: Critique  Vs. Adjacency matrix  Sparsity: More compact, fewer wasted computations  Easy to compute over out-links  What about computation over in-links?
           1 2 3 4
        1  0 1 0 1        1: 2, 4
        2  1 0 1 1        2: 1, 3, 4
        3  1 0 0 0        3: 1
        4  1 0 1 0        4: 1, 3
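    For concreteness, a small sketch (the text encoding is an assumption, not from the lecture) that converts an edge list for the four-node example graph above into the one-line-per-node adjacency-list format typically fed to MapReduce graph jobs:

        import java.util.*;

        public class AdjacencyListEncoding {
          public static void main(String[] args) {
            // Directed edges of the example graph (source, destination).
            int[][] edges = { {1,2}, {1,4}, {2,1}, {2,3}, {2,4}, {3,1}, {4,1}, {4,3} };

            // Adjacency list: node id -> list of out-links (only the non-zero matrix cells).
            Map<Integer, List<Integer>> adj = new TreeMap<>();
            for (int[] e : edges) {
              adj.computeIfAbsent(e[0], k -> new ArrayList<>()).add(e[1]);
            }

            // One record per node, e.g. "2<TAB>1,3,4" -- a natural input format for a mapper
            // whose key is the node id and whose value is its out-link list.
            for (Map.Entry<Integer, List<Integer>> entry : adj.entrySet()) {
              StringJoiner out = new StringJoiner(",");
              for (int m : entry.getValue()) out.add(Integer.toString(m));
              System.out.println(entry.getKey() + "\t" + out);
            }
          }
        }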
  • 22. Single Source Shortest Path
  • 23. Problem Find shortest path from a source node to one or more target nodes  Shortest may mean lowest weight or cost, etc. Classic approach  Dijkstra’s Algorithm • Maintain a global priority queue over all (node, distance) pairs • Sort queue by min distance to reach each node from the source node • Initialization: distance to source node = 0, all others =  • Visit nodes in order of (monotonically) increasing path length • Whenever node visited, no shorter path exists • For each node is visited • update its neighbours in the queue • Remove the node from the queue
  • 24. Edsger W. Dijkstra  May 11, 1930 – August 6, 2002  Received the 1972 Turing Award  Schlumberger Centennial Chair of Computer Science at UT Austin (1984-2000)  http://en.wikipedia.org/wiki/Dijkstra’s_algorithm  Wikipedia has nice animation of it in action
  • 25. Dijkstra’s Algorithm Maintain global priority queue over all (node, distance) pairs  Sort queue by min distance to reach each node from the source node Initialization  distance to source node = 0  distance to all other nodes = ∞ While queue not empty  visit next node (i.e. the node with shortest path length in the queue) • Output distance to it if desired • Update distance to each of its neighbours in the queue • Remove it from the queue
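    As a point of comparison with the MapReduce formulation later in the lecture, here is a minimal single-machine sketch (not the lecture's code) of the algorithm described on this slide, using a binary-heap priority queue; stale queue entries are skipped rather than decreased in place:

        import java.util.*;

        public class Dijkstra {
          /** graph.get(u) maps each neighbor v of u to the weight of edge (u, v). */
          public static Map<Integer, Integer> shortestPaths(
              Map<Integer, Map<Integer, Integer>> graph, int source) {
            Map<Integer, Integer> dist = new HashMap<>();
            for (int u : graph.keySet()) dist.put(u, Integer.MAX_VALUE);  // all others = "infinity"
            dist.put(source, 0);                                          // distance to source = 0

            // Priority queue of {node, distance}, ordered by current best distance.
            PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt((int[] a) -> a[1]));
            pq.add(new int[] {source, 0});

            while (!pq.isEmpty()) {
              int[] top = pq.poll();                 // visit node with shortest path length
              int u = top[0], d = top[1];
              if (d > dist.get(u)) continue;         // stale entry: a shorter path was already found
              for (Map.Entry<Integer, Integer> e
                   : graph.getOrDefault(u, Collections.<Integer, Integer>emptyMap()).entrySet()) {
                int v = e.getKey(), alt = d + e.getValue();
                if (alt < dist.getOrDefault(v, Integer.MAX_VALUE)) {
                  dist.put(v, alt);                  // update neighbour's distance
                  pq.add(new int[] {v, alt});
                }
              }
            }
            return dist;
          }
        }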
  • 26. Dijkstra’s Algorithm Example (figure: weighted example graph; source distance 0, all other nodes ∞) Example from CLR
  • 27. Dijkstra’s Algorithm Example (figure: distances after visiting the first node) Example from CLR
  • 28. Dijkstra’s Algorithm Example (figure: distances after the next visit) Example from CLR
  • 29. Dijkstra’s Algorithm Example (figure: distances after the next visit) Example from CLR
  • 30. Dijkstra’s Algorithm Example (figure: distances after the next visit) Example from CLR
  • 31. Dijkstra’s Algorithm Example (figure: final shortest-path distances) Example from CLR
  • 32. Problem Find shortest path from a source node to one or more target nodes  Shortest may mean lowest weight or cost, etc. Classic approach  Dijkstra’s Algorithm
  • 33. Problem Find shortest path from a source node to one or more target nodes  Shortest may mean lowest weight or cost, etc. Classic approach  Dijkstra’s Algorithm MapReduce approach  Parallel Breadth-First Search (BFS)
  • 34. Finding the Shortest Path Assume unweighted graph (for now…) General Inductive Approach  Initialization • DISTANCETO(source s) = 0 • For any node n connected to s, DISTANCETO(n) = 1 • Else DISTANCETO(any other node p) = ∞  For each iteration • For every node n • For every neighbor m ∈ M(n), DISTANCETO(m) = 1 + min( DISTANCETO(n) ) (figure: node m reachable from several nodes n at distances d1, d2, d3 from source s)
  • 35. Visualizing Parallel BFS (figure: example graph with nodes n0–n9; the search frontier expands outward from the source one hop per iteration)
  • 36. From Intuition to Algorithm  Representation  Key: node n  Value: d (distance from start) • Also: adjacency list (list of nodes reachable from n)  Initialization: d = ∞ for all nodes except start node  Mapper  for each m ∈ adjacency list: emit (m, d + 1)  Sort/Shuffle  Groups distances by reachable nodes  Reducer  Selects minimum distance path for each reachable node  Additional bookkeeping needed to keep track of actual path
  • 37. BFS Pseudo-Code What type should we use for the values?
  • 38. Multiple Iterations Needed  Each iteration advances the “frontier” by one hop  Subsequent iterations find more reachable nodes  Multiple iterations are needed to explore entire graph  Preserving graph structure  Problem: Where did the adjacency list go?  Solution: mapper emits (n, adjacency list) as well (see the sketch below)
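    A minimal Hadoop sketch of one parallel-BFS iteration (unit edge weights), assuming an illustrative text record format "nodeId<TAB>distance<TAB>out1,out2,..." with Integer.MAX_VALUE standing in for ∞; the record format and class names are assumptions, not the lecture's code. Note how the mapper re-emits each node's adjacency list so the graph structure survives into the next iteration.

        import java.io.IOException;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.Reducer;

        public class ParallelBfsIteration {
          static final int INF = Integer.MAX_VALUE;   // stands in for "infinity"

          public static class BfsMapper extends Mapper<LongWritable, Text, Text, Text> {
            @Override
            protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
              String[] parts = line.toString().split("\t", -1);
              String nodeId = parts[0];
              int d = Integer.parseInt(parts[1]);
              String adj = parts.length > 2 ? parts[2] : "";
              // Pass the graph structure along so it is not lost between iterations.
              ctx.write(new Text(nodeId), new Text("NODE\t" + d + "\t" + adj));
              if (d == INF || adj.isEmpty()) return;      // not yet reached, or no out-links
              for (String m : adj.split(",")) {
                // Tentative distance to each out-neighbor: one more hop than this node.
                ctx.write(new Text(m), new Text("DIST\t" + (d + 1)));
              }
            }
          }

          public static class BfsReducer extends Reducer<Text, Text, Text, Text> {
            @Override
            protected void reduce(Text nodeId, Iterable<Text> values, Context ctx)
                throws IOException, InterruptedException {
              int best = INF;
              String adj = "";
              for (Text v : values) {
                String[] parts = v.toString().split("\t", -1);
                if (parts[0].equals("NODE")) {            // recovered node record
                  best = Math.min(best, Integer.parseInt(parts[1]));
                  adj = parts.length > 2 ? parts[2] : "";
                } else {                                   // candidate distance via some in-link
                  best = Math.min(best, Integer.parseInt(parts[1]));
                }
              }
              // Select the minimum distance and re-emit the node in the input format.
              ctx.write(nodeId, new Text(best + "\t" + adj));
            }
          }
        }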
  • 39. Stopping Criterion  How many iterations are needed?  Convince yourself: when a node is first “discovered”, we’ve found the shortest path  Now answer the question...  Six degrees of separation?  Practicalities of implementation in MapReduce
  • 40. Comparison to Dijkstra Dijkstra’s algorithm is more efficient  At any step it only pursues edges from the minimum-cost path inside the frontier MapReduce explores all paths in parallel  Lots of “waste”  Useful work is only done at the “frontier” Why can’t we do better using MapReduce?
  • 41. Weighted Edges Now consider non-unit, positive edge weights  Why can’t edge weights be negative? Adjacency list now includes a weight w for each edge  In mapper, emit (m, d + w) instead of (m, d + 1) for each node m Is that all?
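    In the BFS sketch given after slide 38, the only change for positive edge weights would be in the mapper's emit loop, assuming out-links are now stored as "neighbor:weight" pairs (again an assumed encoding):

        // Weighted variant of the BfsMapper loop (adjacency entries like "m:w"):
        for (String entry : adj.split(",")) {
          String[] mw = entry.split(":");
          String m = mw[0];
          int w = Integer.parseInt(mw[1]);
          ctx.write(new Text(m), new Text("DIST\t" + (d + w)));   // d + w instead of d + 1
        }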
  • 42. Stopping Criterion  How many iterations are needed in parallel BFS (positive edge weight case)?  Convince yourself: when a node is first “discovered”, we’ve found the shortest path
  • 43. Additional Complexities (figure: search frontier on a small weighted graph with source s and nodes n1–n9, mostly unit-weight edges plus one edge of weight 10; a cheaper multi-hop path through p, q, r can beat an expensive single hop, so a node’s distance may still improve after it is first discovered)
  • 44. Stopping Criterion  How many iterations are needed in parallel BFS (positive edge weight case)?  Practicalities of implementation in MapReduce  Unrelated to stopping… where have we seen min/max before?
  • 45. In General: Graphs and MapReduce Graph algorithms typically involve  Performing computations at each node: based on node features, edge features, and local link structure  Propagating computations: “traversing” the graph Generic recipe  Represent graphs as adjacency lists  Perform local computations in mapper  Pass along partial results via outlinks, keyed by destination node  Perform aggregation in reducer on inlinks to a node  Iterate until convergence: controlled by external “driver”  Don’t forget to pass the graph structure between iterations
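    Tying the last two recipe items together, here is an illustrative sketch (not the lecture's code) of the external driver: it runs one MapReduce iteration per pass and stops when a job counter, which a reducer would increment whenever any node's value changes, stays at zero. The counter enum, directory layout, and reuse of the hypothetical ParallelBfsIteration classes above are all assumptions.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
        import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

        public class IterativeDriver {
          // Hypothetical counter a reducer increments whenever a node's value changes.
          public enum Convergence { UPDATED_NODES }

          public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            String input = args[0];
            for (int i = 0; ; i++) {
              String output = args[1] + "/iter" + i;               // one directory per iteration
              Job job = Job.getInstance(conf, "graph-iteration-" + i);
              job.setJarByClass(IterativeDriver.class);
              job.setMapperClass(ParallelBfsIteration.BfsMapper.class);
              job.setReducerClass(ParallelBfsIteration.BfsReducer.class);
              job.setOutputKeyClass(Text.class);
              job.setOutputValueClass(Text.class);
              FileInputFormat.addInputPath(job, new Path(input));
              FileOutputFormat.setOutputPath(job, new Path(output));
              if (!job.waitForCompletion(true)) System.exit(1);

              // Counters are definitive once the job has completed (a global synchronization point).
              long updated = job.getCounters().findCounter(Convergence.UPDATED_NODES).getValue();
              if (updated == 0) break;                              // converged: stop iterating
              input = output;                                       // this iteration's output feeds the next
            }
          }
        }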
  • 46. PageRank
  • 47. Random Walks Over the Web Random surfer model  User starts at a random Web page  User randomly clicks on links, surfing from page to page PageRank  Characterizes the amount of time spent on any given page  Mathematically, a probability distribution over pages PageRank captures notions of page importance  Correspondence to human intuition?  One of thousands of features used in web search  Note: query-independent
  • 48. PageRank: Defined Given page x with inlinks t1…tn, where  C(t) is the out-degree of t  α is probability of random jump  N is the total number of nodes in the graph:  PR(x) = α · (1/N) + (1 − α) · Σ_{i=1..n} PR(ti)/C(ti)  (figure: page X with inlinks t1, t2, …, tn)
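    A direct transcription of the formula as a plain Java method (an illustration, not from the lecture), with alpha the random-jump probability, n the number of nodes, and one (PageRank, out-degree) pair per inlink:

        public class PageRankFormula {
          /** PR(x) = alpha * (1/N) + (1 - alpha) * sum_i PR(t_i) / C(t_i) */
          static double pagerank(double alpha, long n, double[] inlinkPr, int[] inlinkOutDegree) {
            double sum = 0.0;
            for (int i = 0; i < inlinkPr.length; i++) {
              sum += inlinkPr[i] / inlinkOutDegree[i];   // each inlink t_i contributes PR(t_i) / C(t_i)
            }
            return alpha * (1.0 / n) + (1.0 - alpha) * sum;
          }
        }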
  • 49. Computing PageRank Properties of PageRank  Can be computed iteratively  Effects at each iteration are local Sketch of algorithm:  Start with seed PRi values  Each page distributes PRi “credit” to all pages it links to  Each target page adds up “credit” from multiple in-bound links to compute PRi+1  Iterate until values converge
  • 50. Simplified PageRank First, tackle the simple case:  No random jump factor  No dangling links Then, factor in these complexities…  Why do we need the random jump?  Where do dangling links come from?
  • 51. Sample PageRank Iteration (1) (figure: five-node example, no random jump; initial values PR = 0.2 for every node; after iteration 1: n1 = 0.066, n2 = 0.166, n3 = 0.166, n4 = 0.3, n5 = 0.3)
  • 52. Sample PageRank Iteration (2) (figure: after iteration 2: n1 = 0.1, n2 = 0.133, n3 = 0.183, n4 = 0.2, n5 = 0.383)
  • 53. PageRank in MapReduce (figure: adjacency lists n1: [n2, n4], n2: [n3, n5], n3: [n4], n4: [n5], n5: [n1, n2, n3]; the Map phase emits PageRank mass to each out-link, the shuffle groups contributions by destination node, and the Reduce phase sums them and re-emits the graph structure)
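    Checking one step of the sample iteration against the adjacency lists above (no random jump, so each node simply splits its PageRank evenly over its out-links): starting from PR = 0.2 everywhere, n1 receives mass only from n5, which splits 0.2 over three out-links, so PR(n1) = 0.2/3 ≈ 0.066; n4 receives 0.2/2 = 0.1 from n1 and 0.2/1 = 0.2 from n3, so PR(n4) = 0.3, matching the values shown for iteration 1.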
  • 54. PageRank Pseudo-Code
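    The pseudo-code on this slide is a figure that did not survive the transcript, so here is a hedged Java sketch of the simplified algorithm (no random jump, no dangling nodes), assuming records "nodeId<TAB>pageRank<TAB>out1,out2,..."; the class names and format are illustrative, not the lecture's code.

        import java.io.IOException;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.Reducer;

        public class SimplePageRankIteration {

          public static class PrMapper extends Mapper<LongWritable, Text, Text, Text> {
            @Override
            protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
              String[] parts = line.toString().split("\t", -1);
              String nodeId = parts[0];
              double pr = Double.parseDouble(parts[1]);
              String[] outlinks = parts[2].isEmpty() ? new String[0] : parts[2].split(",");
              // Pass the graph structure through to the reducer.
              ctx.write(new Text(nodeId), new Text("NODE\t" + parts[2]));
              // Distribute this node's PageRank mass evenly over its out-links.
              for (String m : outlinks) {
                ctx.write(new Text(m), new Text("MASS\t" + (pr / outlinks.length)));
              }
            }
          }

          public static class PrReducer extends Reducer<Text, Text, Text, Text> {
            @Override
            protected void reduce(Text nodeId, Iterable<Text> values, Context ctx)
                throws IOException, InterruptedException {
              double pr = 0.0;
              String adj = "";
              for (Text v : values) {
                String[] parts = v.toString().split("\t", -1);
                if (parts[0].equals("NODE")) {
                  adj = parts[1];                       // recovered adjacency list
                } else {
                  pr += Double.parseDouble(parts[1]);   // sum incoming partial mass
                }
              }
              ctx.write(nodeId, new Text(pr + "\t" + adj));
            }
          }
        }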
  • 55. Complete PageRank Two additional complexities  What is the proper treatment of dangling nodes?  How do we factor in the random jump factor? Solution:  Second pass to redistribute “missing PageRank mass” and account for random jumps:  p′ = α · (1/|G|) + (1 − α) · (m/|G| + p)   p is the PageRank value from before, p′ is the updated PageRank value  |G| is the number of nodes in the graph  m is the missing PageRank mass How to perform bookkeeping for dangling nodes? How to implement this 2nd pass in Hadoop?
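    One possible shape for the second pass (an assumption about implementation, not the lecture's answer): the driver obtains the missing mass m from the first pass, for example via a scaled counter as discussed earlier, then runs a map-only job that applies the formula above to every node. Configuration keys and record format are illustrative.

        import java.io.IOException;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;

        /** Map-only second pass: p' = alpha * (1/|G|) + (1 - alpha) * (m/|G| + p). */
        public class RedistributeMassMapper extends Mapper<LongWritable, Text, Text, Text> {
          private double alpha, missingMass;
          private long numNodes;

          @Override
          protected void setup(Context ctx) {
            // alpha, |G| and the missing mass m are computed by the driver and passed in the job conf.
            Configuration conf = ctx.getConfiguration();
            alpha = Double.parseDouble(conf.get("pagerank.alpha", "0.15"));
            numNodes = conf.getLong("pagerank.numNodes", 1);
            missingMass = Double.parseDouble(conf.get("pagerank.missingMass", "0.0"));
          }

          @Override
          protected void map(LongWritable offset, Text line, Context ctx)
              throws IOException, InterruptedException {
            String[] parts = line.toString().split("\t", -1);
            double p = Double.parseDouble(parts[1]);
            double updated = alpha * (1.0 / numNodes)
                           + (1.0 - alpha) * (missingMass / numNodes + p);
            ctx.write(new Text(parts[0]), new Text(updated + "\t" + parts[2]));
          }
        }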
  • 56. PageRank Convergence Alternative convergence criteria  Iterate until PageRank values don’t change  Iterate until PageRank rankings don’t change  Fixed number of iterations Convergence for web graphs?
  • 57. Local Aggregation Use combiners  BFS uses min, PageRank uses sum • both associative and commutative  In-mapper combining design pattern also applicable  Opportunity for aggregation when mapper sees multiple nodes with out-links to same destination node How do we maximize opportunities for local aggregation?  Partition the dataset into clusters with many internal and few external links  Chicken-and-egg problem: don’t we need MapReduce to do this? • Use cheap heuristics • e.g. social network: zip code or school • e.g. for web: language or domain name • etc. (figure: nodes m1–m3 with out-links converging on node n, carrying values d1–d3)
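    Continuing the hypothetical PageRank sketch given after slide 54, a combiner has to respect the mixed value types: it may pre-sum the "MASS" contributions (sum is associative and commutative) but must pass the "NODE" record through untouched. A minimal sketch under those assumptions:

        import java.io.IOException;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Reducer;

        /** Combiner for the PageRank sketch: pre-aggregates partial mass, preserves node records. */
        public class PrCombiner extends Reducer<Text, Text, Text, Text> {
          @Override
          protected void reduce(Text nodeId, Iterable<Text> values, Context ctx)
              throws IOException, InterruptedException {
            double partialMass = 0.0;
            boolean sawMass = false;
            for (Text v : values) {
              String[] parts = v.toString().split("\t", -1);
              if (parts[0].equals("NODE")) {
                ctx.write(nodeId, v);                          // structure record passes through unchanged
              } else {
                partialMass += Double.parseDouble(parts[1]);   // safe to pre-sum: the reducer sums anyway
                sawMass = true;
              }
            }
            if (sawMass) {
              ctx.write(nodeId, new Text("MASS\t" + partialMass));
            }
          }
        }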
  • 58. Limitations of MapReduce Amount of intermediate data (to shuffle) is proportional to number of edges in graph We have considered sparse graphs (i.e. with few edges), minimizing such intermediate data For dense graphs with O(n^2) edges, runtime would be dominated by copying intermediate data Consequently, MapReduce algorithms are often impractical on large, dense graphs But isn’t data-intensive computing exactly what MapReduce is supposed to help us with?? See (Lin and Dyer, p. 101)
  • 59. In-class Exercise: All Pairs PBFS
  • 60. Parallel BFS pseudo-code, two columns. Left column (single-source):
        1: class Mapper
        2:   method Map( Node N )
        3:     d = N.Distance
        4:     Emit( N.id, N )
        5:     for all (nid m in N.AdjacencyList) do
        6:       Emit( m, d + 1 )
        1: class Reducer
        2:   method Reduce( nid m, [d1, d2, ...] )
        3:     dmin = ∞
        4:     Node M = null
        5:     for all d in counts [d1, d2, ...] do
        6:       if IsNode(d) then
        7:         M = d
        8:       else if d < dmin then
        9:         dmin = d
        10:    M.Distance = dmin
        11:    Emit( M )
        Right column (all-pairs: keys carry a source id sid):
        1: class Mapper
        2:   method Map( sid s, Node N )
        3:     d = N[s].Distance
        4:     Emit( Pair(sid, N.id), N )
        5:     for all (nid m in N.AdjacencyList) do
        6:       Emit( Pair(sid, m), d + 1 )
        1: class Reducer
        2:   method Reduce( Pair(sid s, nid m), [d1, d2, ...] )
        3:     dmin = ∞
        4:     M = null
        5:     for all d in counts [d1, d2, ...] do
        6:       if IsNode(d) then
        7:         M = d
        8:       else if d < dmin then
        9:         dmin = d
        10:    M[s].Distance = dmin
        11:    Emit( M )
  • 61. All-pairs PBFS, two columns. Left column (as on the previous slide):
        1: class Mapper
        2:   method Map( sid s, Node N )
        3:     d = N[s].Distance
        4:     Emit( Pair(sid, N.id), N )
        5:     for all (nid m in N.AdjacencyList) do
        6:       Emit( Pair(sid, m), d + 1 )
        1: class Reducer
        2:   method Reduce( Pair(sid s, nid m), [d1, d2, ...] )
        3:     dmin = ∞
        4:     M = null
        5:     for all d in counts [d1, d2, ...] do
        6:       if IsNode(d) then
        7:         M = d
        8:       else if d < dmin then
        9:         dmin = d
        10:    M[s].Distance = dmin
        11:    Emit( M )
        Right column (emit the node structure only for sid = 0; the partitioner and key comparator ensure the reducer sees it first):
        1: class Mapper
        2:   method Map( sid s, Node N )
        3:     d = N[s].Distance
        4:     if sid = 0 then
        5:       Emit( Pair(sid, N.id), N )
        6:     for all (nid m in N.AdjacencyList) do
        7:       Emit( Pair(sid, m), d + 1 )
        Partition: all pairs with same 2nd nid to same reducer
        KeyComp: order by sid, then nid, sort sid = 0 first
        1: class Reducer
        2:   M = null
        3:   method Reduce( Pair(sid s, nid m), [d1, d2, ...] )
        4:     dmin = ∞
        5:     for all d in counts [d1, d2, ...] do
        6:       if IsNode(d) then
        7:         M = d
        8:       else if d < dmin then
        9:         dmin = d
        10:    M[s].Distance = dmin
        11:    Emit( M )