
Lecture 3: Data-Intensive Computing for Text Analysis (Fall 2011)


Course at University of Texas at Austin, co-taught with Jason Baldridge


1. Data-Intensive Computing for Text Analysis
CS395T / INF385T / LIN386M, University of Texas at Austin, Fall 2011
Lecture 3, September 8, 2011
Jason Baldridge, Department of Linguistics, University of Texas at Austin (jasonbaldridge at gmail dot com)
Matt Lease, School of Information, University of Texas at Austin (ml at ischool dot utexas dot edu)
2. Acknowledgments
Course design and slides derived from Jimmy Lin's cloud computing courses at the University of Maryland, College Park. Some figures courtesy of the following excellent Hadoop books (order yours today!):
• Chuck Lam's Hadoop In Action (2010)
• Tom White's Hadoop: The Definitive Guide, 2nd Edition (2010)
3. Today's Agenda
• Review
• Toward MapReduce "design patterns"
 – Building block: preserving state across calls
 – In-Map & In-Mapper combining (vs. combiners)
 – Secondary sorting (via value-to-key conversion)
 – Pairs and Stripes
 – Order Inversion
• Group Work (examples)
 – Interlude: scaling counts, TF-IDF
4. Review
5. MapReduce: Recap
Required:
• map(K1, V1) → list(K2, V2)
• reduce(K2, list(V2)) → list(K3, V3)
All values with the same key are reduced together.
Optional:
• partition(K2, N) → Rj maps K2 to some reducer Rj in [1..N]
 – Often a simple hash of the key, e.g., hash(k′) mod N
 – Divides up key space for parallel reduce operations
• combine(K2, list(V2)) → list(K2, V2)
 – Mini-reducers that run in memory after the map phase
 – Used as an optimization to reduce network traffic
The execution framework handles everything else…
6. "Everything Else"
• The execution framework handles everything else…
 – Scheduling: assigns workers to map and reduce tasks
 – "Data distribution": moves processes to data
 – Synchronization: gathers, sorts, and shuffles intermediate data
 – Errors and faults: detects worker failures and restarts
• Limited control over data and execution flow
 – All algorithms must be expressed in m, r, c, p
• You don't know:
 – Where mappers and reducers run
 – When a mapper or reducer begins or finishes
 – Which input a particular mapper is processing
 – Which intermediate key a particular reducer is processing
7. [Figure: end-to-end MapReduce dataflow — mappers emit (key, value) pairs, combiners aggregate locally, partitioners assign keys to reducers, shuffle-and-sort aggregates values by key, reducers produce final output]
8. Shuffle and Sort [Figure: each mapper writes into an in-memory circular buffer, which spills to disk; spills are merged (with the combiner applied) into intermediate files on disk, then sent to the reducers, which merge inputs from all mappers]
9. Shuffle and 2 Sorts (courtesy of Tom White's Hadoop: The Definitive Guide, 2nd Edition (2010), p. 178)
• As map emits values, local sorting runs in tandem (1st sort)
• Combine is optionally called 0..N times for local aggregation on sorted (K2, list(V2)) tuples (more sorting of output)
• Partition determines which (logical) reducer Rj each key will go to
• Node's TaskTracker tells JobTracker it has keys for Rj
• JobTracker determines node to run Rj based on data locality
• When local map/combine/sort finishes, sends data to Rj's node
• Rj's node iteratively merges inputs from map nodes as they arrive (2nd sort)
• For each (K, list(V)) tuple in merged output, call reduce(…)
10. Scalable Hadoop Algorithms: Themes
• Avoid object creation
 – Inherently costly operation
 – Garbage collection
• Avoid buffering
 – Limited heap size
 – Works for small datasets, but won't scale!
 – Yet… we'll talk about patterns involving buffering…
11. Importance of Local Aggregation
• Ideal scaling characteristics:
 – Twice the data, twice the running time
 – Twice the resources, half the running time
• Why can't we achieve this?
 – Synchronization requires communication
 – Communication kills performance
• Thus… avoid communication!
 – Reduce intermediate data via local aggregation
 – Combiners can help
12. Tools for Synchronization
• Cleverly-constructed data structures
 – Bring partial results together
• Sort order of intermediate keys
 – Control order in which reducers process keys
• Partitioner
 – Control which reducer processes which keys
• Preserving state in mappers and reducers
 – Capture dependencies across multiple keys and values
13. Secondary Sorting
• MapReduce sorts input to reducers by key
 – Values may be arbitrarily ordered
• What if we want to sort the values too?
 – E.g., k → (k1,v1), (k1,v3), (k1,v4), (k1,v8)…
• Solutions?
 – Swap key and value to sort by value?
 – What if we use (k,v) as a joint key (and change nothing else)?
14. Secondary Sorting: Solutions
• Solution 1: Buffer values in memory, then sort
 – Tradeoffs?
• Solution 2: "Value-to-key conversion" design pattern
 – Form composite intermediate key: (k, v1)
 – Let the execution framework do the sorting
 – Preserve state across multiple key-value pairs
 – …how do we make this happen?
15. Secondary Sorting (Lin 57, White 241)
• Create composite key: (k, v)
• Define a Key Comparator to sort via both
 – Possibly not needed in some cases (e.g. strings & concatenation)
• Define a partition function based only on the (original) key
 – All pairs with the same key should go to the same reducer
• Multiple keys may still go to the same reduce node; how do you know when the key changes across invocations of reduce()?
 – i.e. assume you want to do something with all values associated with a given key (e.g. print all on the same line, with no other keys)
• Preserve state in the reducer across invocations
 – reduce() will be called separately for each pair, but we need to track the current key so we can detect when it changes
• Hadoop also provides a Group Comparator
16. Preserving State in Hadoop [Figure: one Mapper/Reducer object per task holds state across calls — configure() is the API initialization hook, map()/reduce() is called once per input key-value pair (or intermediate key), close() is the API cleanup hook]
17. Combiner Design
• Combiners and reducers share the same method signature
 – Sometimes, reducers can serve as combiners
 – Often, not…
• Remember: combiners are optional optimizations
 – Should not affect algorithm correctness
 – May be run 0, 1, or multiple times
18. "Hello World": Word Count
map(K1=String, V1=String) → list(K2=String, V2=Integer)
reduce(K2=String, list(V2=Integer)) → list(K3=String, V3=Integer)
 Map(String docid, String text):
  for each word w in text:
   Emit(w, 1);
 Reduce(String term, Iterator<Int> values):
  int sum = 0;
  for each v in values:
   sum += v;
  Emit(term, sum);
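The word-count pseudocode above can be sketched as a runnable simulation. This is plain Python standing in for Hadoop: `run_job` plays the role of the shuffle-and-sort phase by grouping values under a dictionary, which is only an illustration of the dataflow, not the real framework.

```python
from collections import defaultdict

def map_fn(docid, text):
    # map(K1=docid, V1=text) -> list of (word, 1) pairs
    for word in text.split():
        yield (word, 1)

def reduce_fn(term, values):
    # reduce(K2=term, list(V2)) -> (term, sum of counts)
    return (term, sum(values))

def run_job(docs):
    # simulate shuffle-and-sort: aggregate all values by key
    groups = defaultdict(list)
    for docid, text in docs.items():
        for k, v in map_fn(docid, text):
            groups[k].append(v)
    return dict(reduce_fn(k, vs) for k, vs in groups.items())
```

All values emitted under the same key end up in one `reduce_fn` call, mirroring the guarantee the framework provides.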
19. "Hello World": Word Count
map(K1=String, V1=String) → list(K2=String, V2=Integer)
reduce(K2=String, list(V2=Integer)) → list(K3=String, V3=Integer)
 Map(String docid, String text):
  for each word w in text:
   Emit(w, 1);
 Reduce(String term, Iterator<Int> values):
  int sum = 0;
  for each v in values:
   sum += v;
  Emit(term, sum);
Combiner?
20. MapReduce Algorithm Design
21. Design Pattern for Local Aggregation
• "In-mapper combining"
 – Fold the functionality of the combiner into the mapper, including preserving state across multiple map calls
• Advantages
 – Speed
 – Why is this faster than actual combiners?
  • Construction/deconstruction, serialization/deserialization
  • Guarantee and control of use
• Disadvantages
 – Buffering! Explicit memory management required
  • Can use a disk-backed buffer, based on # items or bytes in memory
  • What if multiple mappers are running on the same node? Do we know?
 – Potential for order-dependent bugs
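The pattern on this slide can be sketched in Python, mimicking the Hadoop object lifecycle from the "Preserving State" slide (the class and method names here echo the old Hadoop API but are just an illustration): state lives in the mapper object, `map` accumulates rather than emits, and the aggregated pairs are emitted once in `close`.

```python
from collections import defaultdict

class WordCountMapper:
    """In-mapper combining: fold the combiner into the mapper by
    accumulating partial counts in per-task state."""

    def __init__(self):
        # ~ configure(): per-task initialization of preserved state
        self.counts = defaultdict(int)

    def map(self, docid, text):
        # called once per input key-value pair; emits nothing yet
        for word in text.split():
            self.counts[word] += 1

    def close(self):
        # ~ close(): emit one aggregated (word, count) pair per word
        return list(self.counts.items())
```

Compared with a combiner, no intermediate (word, 1) objects are ever created or serialized, but the buffered dictionary is exactly the explicit memory-management burden the slide warns about.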
22. "Hello World": Word Count
map(K1=String, V1=String) → list(K2=String, V2=Integer)
reduce(K2=String, list(V2=Integer)) → list(K3=String, V3=Integer)
 Map(String docid, String text):
  for each word w in text:
   Emit(w, 1);
 Reduce(String term, Iterator<Int> values):
  int sum = 0;
  for each v in values:
   sum += v;
  Emit(term, sum);
Combine = reduce
23. Word Count: in-map combining — Are combiners still needed?
24. Word Count: in-mapper combining — Are combiners still needed?
25. Example 2: Compute the Mean (v1) — Why can't we use the reducer as a combiner?
26. Example 2: Compute the Mean (v2) — Why doesn't this work?
27. Example 2: Compute the Mean (v3)
28. Computing the Mean: in-mapper combining — Are combiners still needed?
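The pseudocode for these slides was in images that did not survive the transcript, so this sketch reconstructs the standard resolution: the mean is not associative (a mean of means is wrong when groups have different sizes), so the reducer cannot double as a combiner; emitting partial (sum, count) pairs instead makes local aggregation safe, since sums and counts are associative.

```python
def mean_map(key, value):
    # emit a partial aggregate (sum, count) instead of a raw value
    return (key, (value, 1))

def mean_combine(key, pairs):
    # combiner: sum the partial (sum, count) pairs -- associative, so
    # running it 0, 1, or many times cannot change the final answer
    s = sum(p[0] for p in pairs)
    c = sum(p[1] for p in pairs)
    return (key, (s, c))

def mean_reduce(key, pairs):
    # reducer: divide total sum by total count exactly once, at the end
    s = sum(p[0] for p in pairs)
    c = sum(p[1] for p in pairs)
    return (key, s / c)
```

Note the failure the combiner avoids: averaging {1, 2} to 1.5 and then averaging {1.5, 9} gives 5.25, while the true mean of {1, 2, 9} is 4.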
29. Example 3: Term Co-occurrence
• Term co-occurrence matrix for a text collection
 – M = N × N matrix (N = vocabulary size)
 – Mij: number of times i and j co-occur in some context (for concreteness, let's say context = sentence)
• Why?
 – Distributional profiles as a way of measuring semantic distance
 – Semantic distance useful for many language processing tasks
30. MapReduce: Large Counting Problems
• Term co-occurrence matrix for a text collection = specific instance of a large counting problem
 – A large event space (number of terms)
 – A large number of observations (the collection itself)
 – Goal: keep track of interesting statistics about the events
• Basic approach
 – Mappers generate partial counts
 – Reducers aggregate partial counts
• How do we aggregate partial counts efficiently?
31. Approach 1: "Pairs"
• Each mapper takes a sentence:
 – Generate all co-occurring term pairs
 – For all pairs, emit (a, b) → count
• Reducers sum up counts associated with these pairs
• Use combiners!
32. Pairs: Pseudo-Code
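The pseudocode on this slide was an image lost in the transcript; a sketch of the "pairs" approach in Python follows. Each mapper emits ((a, b), 1) for every ordered co-occurring pair in a sentence, and the reducer simply sums, so the reducer can also serve as the combiner.

```python
from itertools import permutations

def pairs_map(sentence):
    # emit ((a, b), 1) for every ordered co-occurring pair of
    # distinct word types in the sentence (context = sentence)
    words = sentence.split()
    for a, b in permutations(words, 2):
        if a != b:
            yield ((a, b), 1)

def pairs_reduce(pair, counts):
    # sum partial counts for one (a, b) pair; same code works
    # as the combiner, since summing is associative
    return (pair, sum(counts))
```

The cost this slide deck goes on to analyze is visible here: a sentence of n words emits on the order of n² tiny pairs, all of which must be serialized, sorted, and shuffled.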
33. "Pairs" Analysis
• Advantages
 – Easy to implement, easy to understand
• Disadvantages
 – Lots of pairs to sort and shuffle around (upper bound?)
 – Not many opportunities for combiners to work
34. Another Try: "Stripes"
• Idea: group together pairs into an associative array
 (a, b) → 1
 (a, c) → 2
 (a, d) → 5  ⇒  a → { b: 1, c: 2, d: 5, e: 3, f: 2 }
 (a, e) → 3
 (a, f) → 2
• Each mapper takes a sentence:
 – Generate all co-occurring term pairs
 – For each term, emit a → { b: countb, c: countc, d: countd … }
• Reducers perform element-wise sum of associative arrays
 a → { b: 1, d: 5, e: 3 } + a → { b: 1, c: 2, d: 2, f: 2 } = a → { b: 2, c: 2, d: 7, e: 3, f: 2 }
35. Stripes: Pseudo-Code
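As with the pairs slide, the pseudocode image did not survive; a Python sketch of the "stripes" approach follows. One associative array (a stripe) is emitted per term per sentence, and the reducer's element-wise sum is exactly the operation illustrated on the previous slide.

```python
from collections import defaultdict

def stripes_map(sentence):
    # for each term a, emit a -> {b: count_b} over all terms
    # co-occurring with a in this sentence
    words = sentence.split()
    for i, a in enumerate(words):
        stripe = defaultdict(int)
        for j, b in enumerate(words):
            if i != j:
                stripe[b] += 1
        yield (a, dict(stripe))

def stripes_reduce(term, stripes):
    # element-wise sum of associative arrays; also usable as a combiner
    total = defaultdict(int)
    for stripe in stripes:
        for b, c in stripe.items():
            total[b] += c
    return (term, dict(total))
```

Far fewer key-value pairs are shuffled than with pairs, but each stripe is a heavier object, and a stripe for a frequent term must fit in memory — the buffering limitation the next slide calls out.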
36. "Stripes" Analysis
• Advantages
 – Far less sorting and shuffling of key-value pairs
 – Can make better use of combiners
• Disadvantages
 – More difficult to implement
 – Underlying object more heavyweight
 – Fundamental limitation in terms of size of event space
  • Buffering!
37. [Figure: pairs vs. stripes running time] Cluster size: 38 cores. Data source: Associated Press Worldstream (APW) of the English Gigaword Corpus (v3), which contains 2.27 million documents (1.8 GB compressed, 5.7 GB uncompressed)
38. Relative Frequencies
• How do we estimate relative frequencies from counts?
 f(B|A) = count(A, B) / count(A) = count(A, B) / Σ_B′ count(A, B′)
• Why do we want to do this?
• How do we do this with MapReduce?
39. f(B|A): "Stripes"
 a → { b1: 3, b2: 12, b3: 7, b4: 1, … }
• Easy!
 – One pass to compute (a, *)
 – Another pass to directly compute f(B|A)
40. f(B|A): "Pairs"
 (a, *) → 32 (reducer holds this value in memory)
 (a, b1) → 3  ⇒ (a, b1) → 3 / 32
 (a, b2) → 12 ⇒ (a, b2) → 12 / 32
 (a, b3) → 7  ⇒ (a, b3) → 7 / 32
 (a, b4) → 1  ⇒ (a, b4) → 1 / 32
 …
• For this to work:
 – Must emit extra (a, *) for every bn in the mapper
 – Must make sure all a's get sent to the same reducer (use partitioner)
 – Must make sure (a, *) comes first (define sort order)
 – Must hold state in the reducer across different key-value pairs
41. "Order Inversion"
• Common design pattern
 – Computing relative frequencies requires marginal counts
 – But the marginal cannot be computed until you see all counts
 – Buffering is a bad idea!
 – Trick: getting the marginal counts to arrive at the reducer before the joint counts
• Optimizations
 – Apply the in-memory combining pattern to accumulate marginal counts
 – Should we apply combiners?
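The reducer side of order inversion can be sketched as a function over the already-sorted key stream. The sketch assumes the sort order guarantees the deck describes: the special marginal key (a, '*') arrives before every (a, b), and the partitioner has routed all keys for a to this one reducer.

```python
def reduce_stream(sorted_pairs):
    """sorted_pairs: iterable of ((a, b), count), sorted so that the
    marginal key (a, '*') precedes every joint key (a, b) for that a."""
    marginal = None          # state preserved across reduce() calls
    out = {}
    for (a, b), count in sorted_pairs:
        if b == '*':
            marginal = count         # marginal arrives first: remember it
        else:
            out[(a, b)] = count / marginal   # joint counts: divide directly
    return out
```

Only a single number is held in memory per term a, instead of buffering all joint counts — which is the whole point of the pattern.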
42. Synchronization: Pairs vs. Stripes
• Approach 1: turn synchronization into an ordering problem
 – Sort keys into correct order of computation
 – Partition key space so that each reducer gets the appropriate set of partial results
 – Hold state in the reducer across multiple key-value pairs to perform computation
 – Illustrated by the "pairs" approach
• Approach 2: construct data structures that bring partial results together
 – Each reducer receives all the data it needs to complete the computation
 – Illustrated by the "stripes" approach
43. Recap: Tools for Synchronization
• Cleverly-constructed data structures
 – Bring data together
• Sort order of intermediate keys
 – Control order in which reducers process keys
• Partitioner
 – Control which reducer processes which keys
• Preserving state in mappers and reducers
 – Capture dependencies across multiple keys and values
44. Issues and Tradeoffs
• Number of key-value pairs
 – Object creation overhead
 – Time for sorting and shuffling pairs across the network
• Size of each key-value pair
 – De/serialization overhead
• Local aggregation
 – Opportunities to perform local aggregation vary
 – Combiners make a big difference
 – Combiners vs. in-mapper combining
 – RAM vs. disk vs. network
45. Group Work (Examples)
46. Task 5
How many distinct words in the document collection start with each letter?
 – Note: "types" vs. "tokens"
47. Task 5
How many distinct words in the document collection start with each letter?
 – Note: "types" vs. "tokens"
 Mapper<String,String → String,String>
 Map(String docID, String document):
  for each word in document:
   emit(first character, word)
Ways to make this more efficient?
48. Task 5
How many distinct words in the document collection start with each letter?
 – Note: "types" vs. "tokens"
 Mapper<String,String → String,String>
 Map(String docID, String document):
  for each word in document:
   emit(first character, word)
 Reducer<String,String → String,Integer>
 Reduce(String letter, Iterator<String> words):
  set of words = empty set;
  for each word:
   add word to set
  emit(letter, size of word set)
Ways to make this more efficient?
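The Task 5 pseudocode above can be sketched as a runnable simulation (plain Python, with a dictionary standing in for the shuffle). The reducer deduplicates with a set, which is exactly the "types vs. tokens" point: repeated tokens of the same word count once.

```python
from collections import defaultdict

def letter_map(docid, document):
    # emit (first character, word) for every token in the document
    for word in document.split():
        yield (word[0], word)

def letter_reduce(letter, words):
    # count distinct words ("types", not "tokens") per initial letter
    return (letter, len(set(words)))

def run(docs):
    # simulate shuffle-and-sort, then reduce per letter
    groups = defaultdict(list)
    for docid, text in docs.items():
        for k, v in letter_map(docid, text):
            groups[k].append(v)
    return dict(letter_reduce(k, ws) for k, ws in groups.items())
```

Shipping every token over the network is wasteful — the combiner in Task 5b (emit each distinct word per letter once) or in-mapper combining with a per-task set of words cuts the shuffled volume.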
49. Task 5b
How many distinct words in the document collection start with each letter?
 – How to use in-mapper combining and a separate combiner
 – Tradeoffs?
 Mapper<String,String → String,String>
 Map(String docID, String document):
  for each word in document:
   emit(first character, word)
50. Task 5b
How many distinct words in the document collection start with each letter?
 – How to use in-mapper combining and a separate combiner
 – Tradeoffs?
 Mapper<String,String → String,String>
 Map(String docID, String document):
  for each word in document:
   emit(first character, word)
 Combiner<String,String → String,String>
 Combine(String letter, Iterator<String> words):
  set of words = empty set;
  for each word:
   add word to set
  for each word in set:
   emit(letter, word)
51. Task 6: find median document length
52. Task 6: find median document length
 Mapper<K1,V1 → Integer,Integer>
 Map(K1 xx, V1 xx):
  10,000 / N times:
   emit(length(generateRandomDocument()), 1)
53. Task 6: find median document length
 Mapper<K1,V1 → Integer,Integer>
 Map(K1 xx, V1 xx):
  10,000 / N times:
   emit(length(generateRandomDocument()), 1)
 Reducer<Integer,Integer → Integer,V3>
 Reduce(Integer length, Iterator<Integer> values):
  static list lengths = empty list;
  for each value:
   append length to list
 Close():
  output median
 conf.setNumReduceTasks(1)
Problems with this solution?
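The single-reducer step above can be sketched in Python; this only illustrates what that one reducer does after all lengths arrive, under the slide's assumption of `setNumReduceTasks(1)`. It also makes the slide's closing question concrete: everything is buffered in one process.

```python
def median_reduce(lengths):
    """Single reducer: buffer all document lengths, then take the median.
    Works only while every length fits in this one reducer's memory --
    no parallelism in the reduce phase, and unbounded buffering."""
    xs = sorted(lengths)
    n = len(xs)
    mid = n // 2
    if n % 2:
        return xs[mid]                      # odd count: middle element
    return (xs[mid - 1] + xs[mid]) / 2      # even count: mean of middle two
```

Because the median is a holistic statistic, it cannot be computed from independent partial medians the way a sum or mean can, which is why this naive solution funnels everything through one reducer.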
54. Interlude: Scaling counts
• Many applications require counts of words in some context.
 – E.g. information retrieval, vector-based semantics
• Counts from frequent words like "the" can overwhelm the signal from content words such as "stocks" and "football"
• Two strategies for combating high-frequency words:
 – Use a stop list that excludes them
 – Scale the counts such that high-frequency words are downweighted
55. Interlude: Scaling counts, TF-IDF
• TF-IDF, or term frequency–inverse document frequency, is a standard way of scaling.
• Inverse document frequency for a term t is the ratio of the number of documents in the collection to the number of documents containing t:
 idf(t) = N / df(t)
• TF-IDF is just the term frequency times the idf:
 tf-idf(t, d) = tf(t, d) × idf(t)
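The slide's formula images did not survive the transcript. The prose defines idf as a plain ratio; in common practice a logarithm is applied to that ratio to dampen it, so this sketch assumes the usual log form:

```python
import math

def idf(num_docs, df_t):
    # idf(t) = log(N / df(t)); the log damping is the standard choice,
    # assumed here since the slide's formula image is missing
    return math.log(num_docs / df_t)

def tf_idf(tf, num_docs, df_t):
    # term frequency times inverse document frequency
    return tf * idf(num_docs, df_t)
```

A term appearing in every document gets idf = log(1) = 0, so even a huge term frequency for "the" contributes nothing, which is exactly the downweighting the previous slide motivates.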
57. Interlude: Scaling counts using DF
• Recall the word co-occurrence counts task from the earlier slides.
 – mij represents the number of times word j has occurred in the neighborhood of word i.
 – The row mi gives a vector profile of word i that we can use for tasks like determining word similarity (e.g. using cosine distance)
• Words like "the" will tend to have high counts that we want to scale down so they don't dominate this computation.
• The counts in mij can be scaled down using dfj. Let's create a transformed matrix S where:
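The formula for S was an image that did not survive the transcript; a plausible reading of "scaled down using dfj" is plain division, sij = mij / dfj, which is the assumption this sketch makes (the matrix is stored sparsely as a dict keyed by (i, j)):

```python
def scale_counts(m, df):
    """Hypothetical scaling s_ij = m_ij / df_j. The exact formula on the
    slide is missing; plain division by the document frequency of the
    co-occurring word j is one reasonable reconstruction."""
    return {(i, j): count / df[j] for (i, j), count in m.items()}
```

Under this scaling, a high co-occurrence count with a ubiquitous word like "the" is divided by a large df and shrinks, while co-occurrence with a rare content word is preserved.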
58. Task 7
• Compute S, the co-occurrence counts scaled by document frequency.
 – First: do the simplest mapper
 – Then: simplify things for the reducer