EACL 2006 – Pedersen

  1. Language Independent Methods of Clustering Similar Contexts (with applications)
     Ted Pedersen, University of Minnesota, Duluth
     http://www.d.umn.edu/~tpederse  [email_address]
  2. Language Independent Methods
     • Do not utilize syntactic information
       - No parsers, part-of-speech taggers, etc. required
     • Do not utilize dictionaries or other manually created lexical resources
     • Based on lexical features selected from corpora
       - Assumption: word segmentation can be done by looking for white spaces between strings
     • No manually annotated data of any kind; the methods are completely unsupervised in the strictest sense
  3. Clustering Similar Contexts
     • A context is a short unit of text
       - Often a phrase to a paragraph in length, although it can be longer
     • Input: N contexts
     • Output: K clusters
       - Each member of a cluster is a context that is more similar to the other members of that cluster than to the contexts found in other clusters
  4. Applications
     • Headed contexts (contain a target word)
       - Name Discrimination
       - Word Sense Discrimination
     • Headless contexts
       - Email Organization
       - Document Clustering
       - Paraphrase Identification
     • Clustering Sets of Related Words
  5. Tutorial Outline
     • Identifying lexical features
       - Measures of association & tests of significance
     • Context representations
       - First & second order
     • Dimensionality reduction
       - Singular Value Decomposition
     • Clustering
       - Partitional techniques
       - Cluster stopping
       - Cluster labeling
     • Hands-on exercises
  6. General Info
     • Please fill out the short survey
     • Break from 4:00-4:30pm
     • Finish at 6pm
       - Reception tonight at 7pm at Castle (?)
     • Slides and video from the tutorial will be posted (I will send you email when that is ready)
     • Questions are welcome
       - Now, or via email to me or the SenseClusters list
     • Comments, observations, and criticisms are all welcome
     • The Knoppix CD will give you Linux and SenseClusters when your computer is booted from the CD
  7. SenseClusters
     • A package for clustering contexts
       - http://senseclusters.sourceforge.net
       - SenseClusters Live! (Knoppix CD)
     • Integrates with various other tools
       - Ngram Statistics Package
       - CLUTO
       - SVDPACKC
  8. Many thanks…
     • Amruta Purandare (M.S., 2004)
       - Founding developer of SenseClusters (2002-2004)
       - Now a PhD student in Intelligent Systems at the University of Pittsburgh: http://www.cs.pitt.edu/~amruta/
     • Anagha Kulkarni (M.S., 2006, expected)
       - Enhancing SenseClusters since Fall 2004!
       - http://www.d.umn.edu/~kulka020/
     • National Science Foundation (USA) for supporting Amruta, Anagha, and me via CAREER award #0092784
  9. Background and Motivations
  10. Headed and Headless Contexts
     • A headed context includes a target word
       - Our goal is to cluster the target words based on their surrounding contexts
       - The target word is the center of the context and of our attention
     • A headless context has no target word
       - Our goal is to cluster the contexts based on their similarity to each other
       - The focus is on the context as a whole
  11. Headed Contexts (input)
     • I can hear the ocean in that shell.
     • My operating system shell is bash.
     • The shells on the shore are lovely.
     • The shell command line is flexible.
     • The oyster shell is very hard and black.
  12. Headed Contexts (output)
     • Cluster 1:
       - My operating system shell is bash.
       - The shell command line is flexible.
     • Cluster 2:
       - The shells on the shore are lovely.
       - The oyster shell is very hard and black.
       - I can hear the ocean in that shell.
  13. Headless Contexts (input)
     • The new version of Linux is more stable and has better support for cameras.
     • My Chevy Malibu has had some front end troubles.
     • Osborne made one of the first personal computers.
     • The brakes went out, and the car flew into the house.
     • With the price of gasoline, I think I'll be taking the bus more often!
  14. Headless Contexts (output)
     • Cluster 1:
       - The new version of Linux is more stable and has better support for cameras.
       - Osborne made one of the first personal computers.
     • Cluster 2:
       - My Chevy Malibu has had some front end troubles.
       - The brakes went out, and the car flew into the house.
       - With the price of gasoline, I think I'll be taking the bus more often!
  15. Web Search as Application
     • Web search results are headed contexts
       - The search term is the target word (found in snippets)
     • Web search results are often disorganized – two people sharing the same name, two organizations sharing the same abbreviation, etc. often have their pages "mixed up"
     • If you click on search results or follow links in pages found, you will encounter headless contexts too…
  17. Name Discrimination
  18. George Millers!
  23. Email Foldering as Application
     • Email (public or private) is made up of headless contexts
       - Short, usually focused…
     • Cluster similar email messages together
       - Automatic email foldering
       - Take all messages from the sent-mail file or inbox and organize them into categories
  26. Clustering News as Application
     • News articles are headless contexts
       - Entire article or first paragraph
       - Short, usually focused
     • Cluster similar articles together
  30. What is it to be "similar"?
     • You shall know a word by the company it keeps
       - Firth, 1957 (Studies in Linguistic Analysis)
     • Meanings of words are (largely) determined by their distributional patterns (Distributional Hypothesis)
       - Harris, 1968 (Mathematical Structures of Language)
     • Words that occur in similar contexts will have similar meanings (Strong Contextual Hypothesis)
       - Miller and Charles, 1991 (Language and Cognitive Processes)
     • Various extensions…
       - Similar contexts will have similar meanings, etc.
       - Names that occur in similar contexts will refer to the same underlying person, etc.
  31. General Methodology
     • Represent the contexts to be clustered using first or second order feature vectors
       - Lexical features
     • Reduce dimensionality to make the vectors more tractable and/or understandable
       - Singular value decomposition
     • Cluster the context vectors
       - Find the number of clusters
       - Label the clusters
     • Evaluate and/or use the contexts!
  32. Identifying Lexical Features: Measures of Association and Tests of Significance
  33. What are features?
     • Features represent the (hopefully) salient characteristics of the contexts to be clustered
     • Eventually we will represent each context as a vector, where the dimensions of the vector are associated with features
     • Vectors/contexts that include many of the same features will be similar to each other
  34. Where do features come from?
     • In unsupervised clustering, it is common for the feature selection data to be the same data that is to be clustered
       - This is not cheating, since the data to be clustered does not have any labeled classes that can be used to assist feature selection
       - It may also be necessary, since we may need to cluster all available data, and not hold out some for a separate feature identification step
         - Email or news articles
  35. Feature Selection
     • "Test" data – the contexts to be clustered
       - Assume that the feature selection data is the same as the test data, unless otherwise indicated
     • "Training" data – a separate corpus of held out feature selection data (that will not be clustered)
       - You may need to use this if you have a small number of contexts to cluster (e.g., web search results)
       - This sense of "training" is due to Schütze (1998)
  36. Lexical Features
     • Unigram – a single word that occurs more than a given number of times
     • Bigram – an ordered pair of words that occur together more often than expected by chance
       - Consecutive, or may have intervening words
     • Co-occurrence – an unordered bigram
     • Target co-occurrence – a co-occurrence where one of the words is the target word
  37. Bigrams
     • fine wine (window size of 2)
     • baseball bat
     • house of representatives (window size of 3)
     • president of the republic (window size of 4)
     • apple orchard
     • Selected using a small window size (2-4 words), trying to capture a regular (localized) pattern between two words (collocation?)
  38. Co-occurrences
     • tropics water
     • boat fish
     • law president
     • train travel
     • Usually selected using a larger window (7-10 words) of context, hoping to capture pairs of related words rather than collocations
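To make the window sizes above concrete, here is a minimal sketch (in Python, not NSP itself) of counting bigrams and unordered co-occurrences within a window; the toy sentence and the particular window sizes are assumptions chosen only for illustration.

```python
from collections import Counter

def bigrams(tokens, window=2):
    """Ordered pairs (w1, w2) where w2 follows w1 within `window` words.
    window=2 captures only consecutive pairs; window=4 allows two intervening words."""
    counts = Counter()
    for i, w1 in enumerate(tokens):
        for w2 in tokens[i + 1 : i + window]:
            counts[(w1, w2)] += 1
    return counts

def cooccurrences(tokens, window=8):
    """Unordered pairs of words that appear together within `window` words."""
    counts = Counter()
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window, len(tokens))):
            counts[frozenset((tokens[i], tokens[j]))] += 1
    return counts

text = "the president of the republic spoke about the price of fine wine".split()
print(bigrams(text, window=4).most_common(5))
print(cooccurrences(text, window=8).most_common(5))
```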
  39. Bigrams and Co-occurrences
     • Pairs of words tend to be much less ambiguous than unigrams
       - "bank" versus "river bank" and "bank card"
       - "dot" versus "dot com" and "dot product"
     • Trigrams and beyond occur much less frequently (Ngrams are very Zipfian)
     • Unigrams are noisy, but bountiful
  40. "Occur together more often than expected by chance…"
     • Observed frequencies for two words occurring together and alone are stored in a 2x2 matrix
       - Throw out bigrams that include one or two stop words
     • Expected values are calculated, based on the model of independence and the observed values
       - How often would you expect these words to occur together, if they only occurred together by chance?
       - If two words occur "significantly" more often than the expected value, then the words do not occur together by chance.
  41. 2x2 Contingency Table
                     Intelligence   !Intelligence   Totals
      Artificial         100                           400
      !Artificial
      Totals             300                       100,000
  42. 2x2 Contingency Table
                     Intelligence   !Intelligence   Totals
      Artificial         100             300           400
      !Artificial        200          99,400        99,600
      Totals             300          99,700       100,000
  43. 2x2 Contingency Table (observed values, with expected values in parentheses)
                     Intelligence      !Intelligence          Totals
      Artificial     100.0 (1.2)       300.0 (398.8)             400
      !Artificial    200.0 (298.8)     99,400.0 (99,301.2)    99,600
      Totals         300               99,700                100,000
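The expected values in the table above follow directly from the marginal totals. The sketch below (an illustration, not NSP's code) recomputes them and the G^2 and X^2 scores for the Artificial/Intelligence counts.

```python
import math

# Observed 2x2 counts for (Artificial, Intelligence) from the slide above:
# rows = Artificial / !Artificial, columns = Intelligence / !Intelligence
observed = [[100, 300],
            [200, 99400]]

N = sum(sum(row) for row in observed)               # 100,000
row_totals = [sum(row) for row in observed]         # 400, 99,600
col_totals = [sum(col) for col in zip(*observed)]   # 300, 99,700

# Expected counts under the model of independence: (row total * column total) / N
expected = [[r * c / N for c in col_totals] for r in row_totals]
# -> [[1.2, 398.8], [298.8, 99301.2]], matching the slide

g2 = 2 * sum(o * math.log(o / e)
             for o_row, e_row in zip(observed, expected)
             for o, e in zip(o_row, e_row) if o > 0)
x2 = sum((o - e) ** 2 / e
         for o_row, e_row in zip(observed, expected)
         for o, e in zip(o_row, e_row))

print(f"G^2 = {g2:.1f}, X^2 = {x2:.1f}")
# Both scores are far above 3.841 (the chi-squared critical value at 95% confidence,
# 1 degree of freedom), so "Artificial Intelligence" does not co-occur by chance.
```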
  44. Measures of Association
  45. Measures of Association
  46. Interpreting the Scores…
     • G^2 and X^2 are asymptotically approximated by the chi-squared distribution…
     • This means… if you fix the marginal totals of a table, randomly generate internal cell values for the table, calculate the G^2 or X^2 scores for each resulting table, and plot the distribution of the scores, you *should* get the chi-squared distribution
  48. Interpreting the Scores…
     • Values above a certain level of significance can be considered grounds for rejecting the null hypothesis
       - H0: the words in the bigram are independent
       - 3.841 is associated with 95% confidence that the null hypothesis should be rejected
  49. Measures of Association
     • There are numerous measures of association that can be used to identify bigram and co-occurrence features
     • Many of these are supported in the Ngram Statistics Package (NSP)
       - http://www.d.umn.edu/~tpederse/nsp.html
  50. Measures Supported in NSP
     • Log-likelihood Ratio (ll)
       - True Mutual Information (tmi)
     • Pearson's Chi-squared Test (x2)
     • Pointwise Mutual Information (pmi)
     • Phi Coefficient (phi)
     • T-test (tscore)
     • Fisher's Exact Test (leftFisher, rightFisher)
     • Dice Coefficient (dice)
     • Odds Ratio (odds)
  51. NSP
     • We will explore NSP during the practical session
       - Integrated into SenseClusters; may also be used in stand-alone mode
     • Can be installed easily on a Linux/Unix system from the CD or downloaded from
       - http://www.d.umn.edu/~tpederse/nsp.html
     • I'm told it can also be installed on Windows (via Cygwin or ActivePerl), but I have no personal experience of this…
  52. Summary
     • Identify lexical features based on frequency counts or measures of association – either in the data to be clustered or in a separate set of feature selection data
       - Language independent
     • Unigrams are usually selected only by frequency
       - Remember, there is no labeled data from which to learn, so they are somewhat less effective as features than in the supervised case
     • Bigrams and co-occurrences can also be selected by frequency, or better yet by measures of association
       - Bigrams and co-occurrences need not be consecutive
       - Stop words should be eliminated
       - Frequency thresholds are helpful (e.g., a unigram/bigram that occurs once may be too rare to be useful)
  53. Related Work
     • Moore, 2004 (EMNLP): follow-up to Dunning and Pedersen on log-likelihood and exact tests
       - http://acl.ldc.upenn.edu/acl2004/emnlp/pdf/Moore.pdf
     • Pedersen, 1996 (SCSUG): explanation of exact tests, and comparison to log-likelihood
       - http://arxiv.org/abs/cmp-lg/9608010
       - (also see Pedersen, Kayaalp, and Bruce, AAAI-1996)
     • Dunning, 1993 (Computational Linguistics): introduces the log-likelihood ratio for collocation identification
       - http://acl.ldc.upenn.edu/J/J93/J93-1003.pdf
  54. Context Representations: First and Second Order Methods
  55. Once features are selected…
     • We will have a set of unigrams, bigrams, co-occurrences or target co-occurrences that we believe are somehow interesting and useful
       - We also have any frequency and measure of association scores that were used in their selection
     • Convert the contexts to be clustered into a vector representation based on these features
  56. First Order Representation
     • Each context is represented by a vector with M dimensions, each of which indicates whether or not a particular feature occurred in that context
       - Values may be binary, a frequency count, or an association score
     • Context by Feature representation
  57. Contexts
     • Cxt1: There was an island curse of black magic cast by that voodoo child.
     • Cxt2: Harold, a known voodoo child, was gifted in the arts of black magic.
     • Cxt3: Despite their military might, it was a serious error to attack.
     • Cxt4: Military might is no defense against a voodoo child or an island curse.
  58. Unigram Feature Set
     • island 1000
     • black   700
     • curse   500
     • magic   400
     • child   200
     • (assume these are frequency counts obtained from some corpus…)
  59. First Order Vectors of Unigrams
            island   black   curse   magic   child
      Cxt1     1       1       1       1       1
      Cxt2     0       1       0       1       1
      Cxt3     0       0       0       0       0
      Cxt4     1       0       1       0       1
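A small sketch of how such binary first order unigram vectors might be built from the contexts and feature set above; simple word tokenization is assumed, as on slide 2.

```python
import re

features = ["island", "black", "curse", "magic", "child"]

contexts = [
    "There was an island curse of black magic cast by that voodoo child.",
    "Harold, a known voodoo child, was gifted in the arts of black magic.",
    "Despite their military might, it was a serious error to attack.",
    "Military might is no defense against a voodoo child or an island curse.",
]

def first_order_vector(context, features):
    # Binary indicator: 1 if the feature occurs in the context, else 0.
    tokens = set(re.findall(r"\w+", context.lower()))
    return [1 if f in tokens else 0 for f in features]

for i, cxt in enumerate(contexts, 1):
    print(f"Cxt{i}", first_order_vector(cxt, features))
# Reproduces the context-by-feature matrix above, e.g. Cxt1 -> [1, 1, 1, 1, 1]
```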
  60. Bigram Feature Set
     • island curse    189.2
     • black magic     123.5
     • voodoo child    120.0
     • military might  100.3
     • serious error    89.2
     • island child     73.2
     • voodoo might     69.4
     • military error   54.9
     • black child      43.2
     • serious curse    21.2
     • (assume these are log-likelihood scores based on frequency counts from some corpus)
  61. First Order Vectors of Bigrams
            black magic   island curse   military might   serious error   voodoo child
      Cxt1       1              1               0                0               1
      Cxt2       1              0               0                0               1
      Cxt3       0              0               1                1               0
      Cxt4       0              1               1                0               1
  62. First Order Vectors
     • Can have binary values or weights associated with frequency, etc.
     • Forms a context by feature matrix
     • May optionally be smoothed/reduced with Singular Value Decomposition
       - More on that later…
     • The contexts are ready for clustering…
       - More on that later…
  63. Second Order Features
     • First order features encode the occurrence of a feature in a context
       - Feature occurrence represented by a binary value
     • Second order features encode something 'extra' about a feature that occurs in a context
       - Feature occurrence represented by word co-occurrences
       - Feature occurrence represented by context occurrences
  64. Second Order Representation
     • First, build a word by word matrix from the features
       - Based on bigrams or co-occurrences
       - First word is the row, second word is the column, the cell is the score
       - (optionally) reduce dimensionality with SVD
       - Each row forms a vector of first order co-occurrences
     • Second, replace each word in a context with its row/vector as found in the word by word matrix
     • Average all the word vectors in the context to create the second order representation
       - Due to Schütze (1998); related to LSI/LSA
  65. Word by Word Matrix
                  magic   curse   might   error   child
      black       123.5     0       0       0      43.2
      island        0     189.2     0       0      73.2
      military      0       0     100.3    54.9     0
      serious       0      21.2     0      89.2     0
      voodoo        0       0      69.4     0     120.0
  66. Word by Word Matrix
     • … can also be used to identify sets of related words
     • In the case of bigrams, rows represent the first word in a bigram and columns represent the second word
       - Matrix is asymmetric
     • In the case of co-occurrences, rows and columns are equivalent
       - Matrix is symmetric
     • The vector (row) for each word represents a set of first order features for that word
     • Each word in a context to be clustered for which a vector exists (in the word by word matrix) is replaced by that vector in that context
  67. There was an island curse of black magic cast by that voodoo child.
                  magic   curse   might   error   child
      black       123.5     0       0       0      43.2
      island        0     189.2     0       0      73.2
      voodoo        0       0      69.4     0     120.0
  68. Second Order Co-Occurrences
     • The word vectors for "black" and "island" show similarity, as both occur with "child"
     • "black" and "island" are second order co-occurrences of each other, since both occur with "child" but not with each other (i.e., "black island" is not observed)
  69. Second Order Representation
     • There was an [curse, child] curse of [magic, child] magic cast by that [might, child] child
     • [curse, child] + [magic, child] + [might, child]
  70. There was an island curse of black magic cast by that voodoo child.
             magic   curse   might   error   child
      Cxt1    41.2    63.1    24.4     0      78.8
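The averaged vector above can be reproduced with a short sketch (a simplification of the SenseClusters pipeline): look up the row for each context word that appears in the word by word matrix and average those rows.

```python
import re
import numpy as np

columns = ["magic", "curse", "might", "error", "child"]
word_vectors = {                      # rows of the word by word matrix on slide 65
    "black":    [123.5,   0.0,   0.0,  0.0,  43.2],
    "island":   [  0.0, 189.2,   0.0,  0.0,  73.2],
    "military": [  0.0,   0.0, 100.3, 54.9,   0.0],
    "serious":  [  0.0,  21.2,   0.0, 89.2,   0.0],
    "voodoo":   [  0.0,   0.0,  69.4,  0.0, 120.0],
}

context = "There was an island curse of black magic cast by that voodoo child."
tokens = re.findall(r"\w+", context.lower())

# Replace each word that has a row with its vector, then average the vectors.
rows = [np.array(word_vectors[t]) for t in tokens if t in word_vectors]
second_order = np.mean(rows, axis=0)
print(dict(zip(columns, second_order.round(1))))
# e.g. child = (43.2 + 73.2 + 120.0) / 3 = 78.8, as on the slide above
```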
  71. Second Order Representation
     • Results in a Context by Feature (Word) representation
     • Cell values do not indicate whether the feature occurred in the context. Rather, they show the strength of association of that feature with other words that occur with a word in the context.
  72. Summary
     • First order representations are intuitive, but…
       - Can suffer from sparsity
       - Contexts are represented based only on the features that occur in those contexts
     • Second order representations are harder to visualize, but…
       - Allow a word to be represented by the words it co-occurs with (i.e., the company it keeps)
       - Allow a context to be represented by the words that occur with the words in the context
       - Help combat sparsity…
  73. Related Work
     • Pedersen and Bruce 1997 (EMNLP) presented a first order method of discrimination
       - http://acl.ldc.upenn.edu/W/W97/W97-0322.pdf
     • Schütze 1998 (Computational Linguistics) introduced the second order method
       - http://acl.ldc.upenn.edu/J/J98/J98-1004.pdf
     • Purandare and Pedersen 2004 (CoNLL) compared first and second order methods
       - http://acl.ldc.upenn.edu/hlt-naacl2004/conll04/pdf/purandare.pdf
       - First order is better if you have lots of data
       - Second order is better with smaller amounts of data
  74. Dimensionality Reduction: Singular Value Decomposition
  75. Motivation
     • First order matrices are very sparse
       - Context by feature
       - Word by word
     • NLP data is noisy
       - No stemming performed
       - Synonyms
  76. Many Methods
     • Singular Value Decomposition (SVD)
       - SVDPACKC: http://www.netlib.org/svdpack/
     • Multi-Dimensional Scaling (MDS)
     • Principal Components Analysis (PCA)
     • Independent Components Analysis (ICA)
     • Linear Discriminant Analysis (LDA)
     • etc…
  77. Effect of SVD
     • SVD reduces a matrix to a given number of dimensions. This may convert a word level space into a semantic or conceptual space.
       - If "dog", "collie", and "wolf" are dimensions/columns in a word co-occurrence matrix, after SVD they may be collapsed into a single dimension that represents "canines"
  78. Effect of SVD
     • The dimensions of the matrix after SVD are principal components that represent the meaning of concepts
       - Similar columns are grouped together
     • SVD is a way of smoothing a very sparse matrix, so that there are very few zero valued cells after SVD
  79. How can SVD be used?
     • SVD on first order contexts will reduce a context by feature representation down to a smaller number of features
       - Latent Semantic Analysis typically performs SVD on a feature by context representation, where the contexts are reduced
     • SVD is used in creating second order context representations
       - Reduce the word by word matrix
  80. Word by Word Matrix
               apple  blood  cells  ibm  data  tissue  graphics  plasma
      pc         2      0      0     1    3      0        0        0
      body       0      3      0     0    0      2        0        1
      disk       1      0      0     2    0      0        1        0
      petri      0      2      1     0    0      2        0        1
      lab        0      0      3     0    2      2        0        3
      sales      0      0      0     2    3      0        1        0
      linux      2      0      0     1    3      0        1        0
      debt       0      0      0     2    3      0        2        0
      organ      0      2      0     0    1      0        0        0
      memory     0      0      2     1    2      2        1        0
      box        1      0      3     0    0      0        2        4
  81. Singular Value Decomposition: A = UDV'
  82. U (matrix of left singular vectors)
  83. D (singular values): 9.19  6.36  3.99  3.25  2.52  2.30  1.26  0.66  0.00  0.00  0.00
  84. V (matrix of right singular vectors)
  85. Word by Word Matrix After SVD
               apple  blood  cells  ibm   data  tissue  graphics  plasma
      pc        .73    .00    .11   1.3   2.0    .01      .86      .09
      body      .00    1.2    1.3   .00   .33    1.6      .00      1.5
      disk      .76    .00    .01   1.3   2.1    .00      .91      .00
      germ      .00    1.1    1.2   .00   .49    1.5      .00      1.4
      lab       .21    1.7    2.0   .35   1.7    2.5      .18      2.3
      sales     .73    .15    .39   1.3   2.2    .35      .85      .41
      linux     .96    .00    .16   1.7   2.7    .03      1.1      .13
      debt      1.2    .00    .00   2.1   3.2    .00      1.5      .00
      organ     .00    .84    .00   .77   1.2    .17      .00      .00
      memory    .77    .85    .72   .86   1.7    .98      1.0      1.1
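A sketch of this kind of smoothing using numpy rather than SVDPACKC: decompose the word by word matrix from slide 80, keep only the top k singular values, and multiply the factors back together. The choice k = 2 is an assumption made for illustration, so the exact values will differ from the slide above.

```python
import numpy as np

# Word by word co-occurrence matrix A (rows = words, columns = context words).
words = ["pc", "body", "disk", "petri", "lab", "sales",
         "linux", "debt", "organ", "memory", "box"]
cols = ["apple", "blood", "cells", "ibm", "data", "tissue", "graphics", "plasma"]
A = np.array([
    [2, 0, 0, 1, 3, 0, 0, 0],   # pc
    [0, 3, 0, 0, 0, 2, 0, 1],   # body
    [1, 0, 0, 2, 0, 0, 1, 0],   # disk
    [0, 2, 1, 0, 0, 2, 0, 1],   # petri
    [0, 0, 3, 0, 2, 2, 0, 3],   # lab
    [0, 0, 0, 2, 3, 0, 1, 0],   # sales
    [2, 0, 0, 1, 3, 0, 1, 0],   # linux
    [0, 0, 0, 2, 3, 0, 2, 0],   # debt
    [0, 2, 0, 0, 1, 0, 0, 0],   # organ
    [0, 0, 2, 1, 2, 2, 1, 0],   # memory
    [1, 0, 3, 0, 0, 0, 2, 4],   # box
], dtype=float)

U, D, Vt = np.linalg.svd(A, full_matrices=False)   # A = U * diag(D) * Vt

k = 2                                  # number of dimensions to keep (illustrative)
A_k = U[:, :k] @ np.diag(D[:k]) @ Vt[:k, :]

# After reconstruction the rows for "disk" and "linux" become dense and more alike,
# even though the original sparse rows shared few non-zero columns.
np.set_printoptions(precision=2, suppress=True)
print(A_k[words.index("disk")])
print(A_k[words.index("linux")])
```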
  86. Second Order Representation
     • I got a new disk today!
     • What do you think of linux?
     • These two contexts share no words in common, yet they are similar! "disk" and "linux" both occur with "Apple", "IBM", "data", "graphics", and "memory"
     • The two contexts are similar because they share many second order co-occurrences
               apple  blood  cells  ibm   data  tissue  graphics  plasma
      disk      .76    .00    .01   1.3   2.1    .00      .91      .00
      linux     .96    .00    .16   1.7   2.7    .03      1.1      .13
  87. Relationship to LSA
     • Latent Semantic Analysis uses a feature by context first order representation
       - Indicates all the contexts in which a feature occurs
       - Use SVD to reduce dimensions (contexts)
       - Cluster features based on the similarity of the contexts in which they occur
       - Represent sentences using an average of feature vectors
  88. Feature by Context Representation
                       Cxt1   Cxt2   Cxt3   Cxt4
      black magic        1      1      0      1
      island curse       1      0      0      1
      military might     0      0      1      0
      voodoo child       1      1      0      1
      serious error      0      0      1      0
  89. References
     • Deerwester, S., Dumais, S.T., Furnas, G.W., Landauer, T.K., and Harshman, R., Indexing by Latent Semantic Analysis, Journal of the American Society for Information Science, vol. 41, 1990
     • Landauer, T. and Dumais, S., A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction and Representation of Knowledge, Psychological Review, vol. 104, 1997
     • Schütze, H., Automatic Word Sense Discrimination, Computational Linguistics, vol. 24, 1998
     • Berry, M.W., Drmac, Z., and Jessup, E.R., Matrices, Vector Spaces, and Information Retrieval, SIAM Review, vol. 41, 1999
  90. Clustering: Partitional Methods, Cluster Stopping, Cluster Labeling
  91. Many many methods…
     • CLUTO supports a wide range of different clustering methods
       - Agglomerative
         - Average, single, complete link…
       - Partitional
         - K-means (Direct)
       - Hybrid
         - Repeated bisections
     • SenseClusters integrates with CLUTO
       - http://www-users.cs.umn.edu/~karypis/cluto/
  92. General Methodology
     • Represent the contexts to be clustered in first or second order vectors
     • Cluster the context vectors directly
       - vcluster
     • … or convert to a similarity matrix and then cluster
       - scluster
  93. Agglomerative Clustering
     • Create a similarity matrix of the contexts to be clustered
       - Results in a symmetric "instance by instance" matrix, where each cell contains the similarity score between a pair of instances
       - Typically a first order representation, where similarity is based on the features observed in the pair of instances
  94. Measuring Similarity
     • Integer values
       - Matching Coefficient
       - Jaccard Coefficient
       - Dice Coefficient
     • Real values
       - Cosine
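These similarity measures might be sketched as follows, with binary feature sets for the integer-valued coefficients and real-valued vectors for the cosine; this is illustrative rather than the exact code used by SenseClusters or CLUTO.

```python
import math

def matching(a, b):
    """Matching coefficient: number of features the two contexts share."""
    return len(a & b)

def jaccard(a, b):
    return len(a & b) / len(a | b)

def dice(a, b):
    return 2 * len(a & b) / (len(a) + len(b))

def cosine(x, y):
    dot = sum(xi * yi for xi, yi in zip(x, y))
    norm = math.sqrt(sum(xi * xi for xi in x)) * math.sqrt(sum(yi * yi for yi in y))
    return dot / norm if norm else 0.0

c1 = {"black magic", "island curse", "voodoo child"}   # bigram features of Cxt1
c2 = {"black magic", "voodoo child"}                   # bigram features of Cxt2
print(matching(c1, c2), round(jaccard(c1, c2), 2), round(dice(c1, c2), 2))
print(round(cosine([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]), 2))
```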
  95. Agglomerative Clustering
     • Apply the agglomerative clustering algorithm to the similarity matrix
       - To start, each context is its own cluster
       - Form a cluster from the most similar pair of contexts
       - Repeat until the desired number of clusters is obtained
     • Advantages: high quality clustering
     • Disadvantages: computationally expensive; must carry out exhaustive pairwise comparisons
  96. Average Link Clustering (worked example)
      Start from a pairwise similarity matrix over contexts S1-S4 (S1-S3 = 4, S1-S2 = 3, S2-S3 = 2, S1-S4 = 2, S3-S4 = 1, S2-S4 = 0). Merge the most similar pair (S1 and S3), recompute the average similarities to the merged cluster, merge again with S2, and end with the clusters S1S3S2 and S4.
  97. Partitional Methods
     • Randomly create centroids equal to the number of clusters you wish to find
     • Assign each context to the nearest centroid
     • After all contexts are assigned, re-compute the centroids
       - The "best" location is decided by a criterion function
     • Repeat until stable clusters are found
       - Centroids don't shift from iteration to iteration
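A compact sketch of the partitional procedure just described, with randomly chosen initial centroids; this is a generic k-means illustration rather than CLUTO's implementation.

```python
import numpy as np

def kmeans(vectors, k, iterations=100, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray(vectors, dtype=float)
    # Randomly pick k contexts as the initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iterations):
        # Assign each context to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned contexts.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):   # centroids stopped shifting
            break
        centroids = new_centroids
    return labels, centroids

# Toy example: two obvious groups of 2-d "context vectors".
data = [[0, 0], [0, 1], [1, 0], [9, 9], [9, 10], [10, 9]]
labels, _ = kmeans(data, k=2)
print(labels)
```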
  98. Partitional Methods
     • Advantages: fast
     • Disadvantages
       - Results can be dependent on the initial placement of centroids
       - Must specify the number of clusters ahead of time
         - maybe not…
  99. Vectors to be clustered
  100. Random Initial Centroids (k=2)
  101. Assignment of Clusters
  102. Recalculation of Centroids
  103. Reassignment of Clusters
  104. Recalculation of Centroids
  105. Reassignment of Clusters
  106. Partitional Criterion Functions
     • Intra-Cluster (Internal) similarity/distance
       - How close together are the members of a cluster?
       - Closer together is better
     • Inter-Cluster (External) similarity/distance
       - How far apart are the different clusters?
       - Further apart is better
  107. Intra Cluster Similarity
     • Ball of String (I1)
       - How far is each member from each other member
     • Flower (I2)
       - How far is each member of the cluster from the centroid
  108. Contexts to be Clustered
  109. Ball of String (I1 Internal Criterion Function)
  110. Flower (I2 Internal Criterion Function)
  111. Inter Cluster Similarity
     • The Fan (E1)
       - How far is each centroid from the centroid of the entire collection of contexts
       - Maximize that distance
  112. The Fan (E1 External Criterion Function)
  113. Hybrid Criterion Functions
     • Balance internal and external similarity
       - H1 = I1/E1
       - H2 = I2/E1
     • Want internal similarity to increase, while external similarity decreases
     • Want internal distances to decrease, while external distances increase
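The flavor of these criterion functions can be sketched as follows. The definitions here (average member-to-centroid cosine for the internal score, centroid-to-collection-centroid cosine for the external score) are simplified stand-ins, not CLUTO's exact I1, I2, and E1 formulas.

```python
import numpy as np

def _cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def internal_similarity(X, labels):
    """Average similarity of each context to its own cluster centroid (higher = tighter clusters)."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    total = 0.0
    for j in np.unique(labels):
        members = X[labels == j]
        centroid = members.mean(axis=0)
        total += sum(_cos(m, centroid) for m in members)
    return total / len(X)

def external_similarity(X, labels):
    """Average similarity of each cluster centroid to the centroid of the whole
    collection (lower = clusters spread further apart, as in the fan)."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    overall = X.mean(axis=0)
    centroids = [X[labels == j].mean(axis=0) for j in np.unique(labels)]
    return sum(_cos(c, overall) for c in centroids) / len(centroids)

def hybrid(X, labels):
    # H-style score: reward tight clusters (internal) that sit far from the
    # collection centroid (external), analogous to H2 = I2 / E1.
    return internal_similarity(X, labels) / (external_similarity(X, labels) + 1e-12)
```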
  114. Cluster Stopping
  115. Cluster Stopping
     • Many clustering algorithms require that the user specify the number of clusters prior to clustering
     • But the user often doesn't know the number of clusters, and in fact finding that out might be the goal of clustering
  116. Criterion Functions Can Help
     • Run the partitional algorithm for k = 1 to deltaK
       - deltaK is a user estimated or automatically determined upper bound on the number of clusters
     • Find the value of k at which the criterion function does not significantly increase at k+1
     • Clustering can stop at this value, since no further improvement in the solution is apparent with additional clusters (increases in k)
  117. SenseClusters' Approach to Cluster Stopping
     • Will be the subject of a demo at EACL
       - Demo Session 2, 5th April, 14:30-16:00
       - Ted Pedersen and Anagha Kulkarni: Selecting the "Right" Number of Senses Based on Clustering Criterion Functions
  118. H2 versus k (T. Blair – V. Putin – S. Hussein)
  119. PK2
     • Based on Hartigan, 1975
     • When the ratio approaches 1, clustering is at a plateau
     • Select the value of k which is closest to but outside of the standard deviation interval
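One way such a stopping rule might be coded is sketched below, taking PK2(k) to be the ratio of the criterion function at k to its value at k-1, and selecting the k whose ratio is closest to, but outside, a one-standard-deviation interval around 1. The exact formulation used by SenseClusters is given in Pedersen and Kulkarni (2006); the H2 values here are purely illustrative.

```python
import numpy as np

def pk2(criterion):
    """criterion: dict mapping k -> criterion function value (e.g., H2) at that k.
    Returns the predicted number of clusters."""
    ks = sorted(criterion)
    # Ratio of the criterion at k to the criterion at k-1; values near 1 mean a plateau.
    ratios = {k: criterion[k] / criterion[k - 1] for k in ks if k - 1 in criterion}
    values = np.array(list(ratios.values()))
    lo, hi = 1 - values.std(), 1 + values.std()
    # Pick the k whose ratio is closest to, but still outside, the interval around 1.
    outside = {k: r for k, r in ratios.items() if r < lo or r > hi}
    return min(outside, key=lambda k: abs(ratios[k] - 1)) if outside else ks[-1]

# Hypothetical H2 values for k = 1..6 (illustrative numbers only):
h2 = {1: 0.40, 2: 0.58, 3: 0.71, 4: 0.72, 5: 0.73, 6: 0.73}
print(pk2(h2))   # -> 3
```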
  120. PK2 predicts 3 senses (T. Blair – V. Putin – S. Hussein)
  121. PK3
     • Related to Salvador and Chan, 2004
     • Inspired by the Dice Coefficient
     • Values close to 1 mean clustering is improving…
     • Select the value of k which is closest to but outside of the standard deviation interval
  122. PK3 predicts 3 senses (T. Blair – V. Putin – S. Hussein)
  123. References
     • Hartigan, J., Clustering Algorithms, Wiley, 1975
       - Basis for SenseClusters stopping method PK2
     • Mojena, R., Hierarchical Grouping Methods and Stopping Rules: An Evaluation, The Computer Journal, vol. 20, 1977
       - Basis for SenseClusters stopping method PK1
     • Milligan, G. and Cooper, M., An Examination of Procedures for Determining the Number of Clusters in a Data Set, Psychometrika, vol. 50, 1985
       - Very extensive comparison of cluster stopping methods
     • Tibshirani, R., Walther, G., and Hastie, T., Estimating the Number of Clusters in a Dataset via the Gap Statistic, Journal of the Royal Statistical Society (Series B), 2001
     • Pedersen, T. and Kulkarni, A., Selecting the "Right" Number of Senses Based on Clustering Criterion Functions, Proceedings of the Posters and Demo Program of the Eleventh Conference of the European Chapter of the Association for Computational Linguistics, 2006
       - Describes SenseClusters stopping methods
  124. Cluster Labeling
  125. Cluster Labeling
     • Once a cluster is discovered, how can you generate a description of the contexts of that cluster automatically?
     • In the case of contexts, you might be able to identify significant lexical features from the contents of the clusters, and use those as a preliminary label
  126. Results of Clustering
     • Each cluster consists of some number of contexts
     • Each context is a short unit of text
     • Apply measures of association to the contents of each cluster to determine the N most significant bigrams
     • Use those bigrams as a label for the cluster
  127. Label Types
     • The N most significant bigrams for each cluster will act as a descriptive label
     • The M most significant bigrams that are unique to each cluster will act as a discriminating label
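A sketch of producing descriptive and discriminating labels along these lines: score the bigrams inside each cluster (plain frequency is used below for brevity, where a measure of association such as the log-likelihood ratio would be used in practice), keep the top N overall, and keep the top M that no other cluster shares. Function names and thresholds are illustrative.

```python
from collections import Counter

def top_bigrams(contexts, n=5):
    """Most significant bigrams in a cluster (frequency used as a stand-in score)."""
    counts = Counter()
    for text in contexts:
        tokens = text.lower().split()
        counts.update(zip(tokens, tokens[1:]))
    return [bg for bg, _ in counts.most_common(n)]

def labels(clusters, n=5, m=3):
    """clusters: dict mapping cluster id -> list of member contexts (strings)."""
    descriptive = {cid: top_bigrams(ctxs, n) for cid, ctxs in clusters.items()}
    out = {}
    for cid, bigrams in descriptive.items():
        # A discriminating label keeps only bigrams not used to describe any other cluster.
        others = {bg for other, bgs in descriptive.items() if other != cid for bg in bgs}
        out[cid] = {"descriptive": bigrams,
                    "discriminating": [bg for bg in bigrams if bg not in others][:m]}
    return out
```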
  128. Evaluation Techniques: Comparison to Gold Standard Data
  129. Evaluation
     • If sense tagged text is available, it can be used for evaluation
       - But don't use the sense tags for clustering or feature selection!
     • Assume that the sense tags represent the "true" clusters, and compare these to the discovered clusters
       - Find the mapping of clusters to senses that attains maximum accuracy
  130. Evaluation
     • Pseudo words are especially useful, since it is hard to find data that is discriminated
       - Pick two words or names from a corpus, and conflate them into one name. Then see how well you can discriminate.
       - http://www.d.umn.edu/~tpederse/tools.html
     • Baseline algorithm – group all instances into one cluster; this will reach "accuracy" equal to the majority classifier
  131. Evaluation
     • Pseudo words are especially useful, since it is hard to find data that is discriminated
       - Pick two words or names from a corpus, and conflate them into one name. Then see how well you can discriminate.
       - http://www.d.umn.edu/~kulka020/kanaghaName.html
  132. Baseline Algorithm
     • Baseline algorithm – group all instances into one cluster; this will reach "accuracy" equal to the majority classifier
     • What if the clustering said everything should be in the same cluster?
  133. Baseline Performance (all instances placed in one cluster, C3)
             S1    S2    S3   Totals
      C1      0     0     0      0
      C2      0     0     0      0
      C3     80    35    55    170
      Totals 80    35    55    170
     • (0+0+80)/170 = .47 if C3 is labeled S1; (0+0+55)/170 = .32 if C3 is labeled S3
  134. Evaluation
     • Suppose that C1 is labeled S1, C2 as S2, and C3 as S3
     • Accuracy = (10 + 0 + 10) / 170 = 12%
     • The diagonal shows how many members of the cluster actually belong to the sense given on the column
     • Can the "columns" be rearranged to improve the overall accuracy?
       - Optimally assign clusters to senses
             S1    S2    S3   Totals
      C1     10    30     5     45
      C2     20     0    40     60
      C3     50     5    10     65
      Totals 80    35    55    170
  135. Evaluation
     • The assignment of C1 to S2, C2 to S3, and C3 to S1 results in 120/170 = 71%
     • Find the ordering of the columns in the matrix that maximizes the sum of the diagonal.
     • This is an instance of the Assignment Problem from Operations Research, or finding the Maximal Matching of a Bipartite Graph from Graph Theory.
             S2    S3    S1   Totals
      C1     30     5    10     45
      C2      0    40    20     60
      C3      5    10    50     65
      Totals 35    55    80    170
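Because this is the assignment problem, the optimal mapping can be sketched with the Hungarian algorithm as implemented in scipy (an external dependency, not something the slides use):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Cluster-by-sense confusion matrix from the slide (rows C1..C3, columns S1..S3).
confusion = np.array([
    [10, 30,  5],   # C1
    [20,  0, 40],   # C2
    [50,  5, 10],   # C3
])

rows, cols = linear_sum_assignment(confusion, maximize=True)
accuracy = confusion[rows, cols].sum() / confusion.sum()

for c, s in zip(rows, cols):
    print(f"C{c + 1} -> S{s + 1}")
print(f"accuracy = {accuracy:.0%}")   # C1->S2, C2->S3, C3->S1, 120/170 = 71%
```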
  136. Analysis
     • Unsupervised methods may not discover clusters equivalent to the classes learned in supervised learning
     • Evaluation based on assuming that sense tags represent the "true" clusters is likely a bit harsh. Alternatives?
       - Humans could look at the members of each cluster and determine the nature of the relationship or meaning that they all share
       - Use the contents of the cluster to generate a descriptive label that could be inspected by a human
  137. Practical Session: Experiments with SenseClusters
  138. Things to Try
     • Feature identification
       - Type of feature
       - Measures of association
     • Context representation (1st or 2nd order)
     • Automatic stopping (or not)
     • SVD (or not)
     • Clustering algorithm and criterion function
     • Evaluation
     • Labeling
  139. Experimental Data
     • Available on the web site
       - http://senseclusters.sourceforge.net
     • Available on the LIVE CD
     • Mostly "name conflate" data
  140. Creating Experimental Data
     • NameConflate program
       - Creates name conflated data from the English GigaWord corpus
     • Text2Headless program
       - Converts plain text into headless contexts
     • http://www.d.umn.edu/~tpederse/tools.html
  141. Headed Clustering
     • Name Discrimination
       - Tom Hanks
       - Russell Crowe
  146. Headless Contexts
     • Email / 20 newsgroups data
     • Spanish text
  149. Thank you!
     • Questions or comments on the tutorial or SenseClusters are welcome at any time: [email_address]
     • SenseClusters is freely available via LIVE CD, the web, and in source code form
       - http://senseclusters.sourceforge.net
     • SenseClusters papers are available at:
       - http://www.d.umn.edu/~tpederse/senseclusters-pubs.html
