Similarity based methods for word sense disambiguation


  1. Authors: Ido Dagan, Lillian Lee, Fernando Pereira
  2. The problem: how to estimate the sense (and probability) of word pairs that never appear in the training set.
     E.g.: "I want to be a scientist."
     "Robbed the bank."
  3. They compared four similarity-based estimation methods:
     - KL divergence
     - Total divergence to the average
     - L1 norm
     - Confusion probability
     against two well-established methods:
     - Katz's back-off scheme
     - Maximum likelihood estimation (MLE)
  4. Katz's back-off scheme (1987), widely used in bigram language modeling, estimates the probability of an unseen bigram from unigram estimates, scaled by a normalizing factor so that the conditional probabilities sum to one.
     E.g.: {make, take} plans.
  5. Because the estimate for an unseen bigram depends only on unigram frequencies, back-off has the undesirable property of assigning the same probability to all unseen bigrams built from unigrams of the same frequency.
     E.g.: {a b} and {c b}
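The flaw described in this slide can be shown with a toy calculation. All counts and the shared left-over mass `alpha` below are made up for illustration (in real Katz back-off, alpha is history-specific):

```python
# Toy illustration with hypothetical counts: Katz back-off estimates an
# unseen bigram (w1, w2) as alpha(w1) * P(w2), so the estimate depends
# only on unigram statistics.  Two unseen bigrams built from equally
# frequent unigrams therefore receive identical probabilities, however
# different the actual plausibility of the two combinations may be.

unigram_counts = {"a": 10, "c": 10, "b": 5}
total = sum(unigram_counts.values())

def p_unigram(w):
    return unigram_counts[w] / total

# Assume, for simplicity, the same left-over probability mass alpha for
# every history word.
alpha = 0.2

def p_backoff_unseen(w1, w2):
    # Back-off estimate for an unseen bigram: alpha(w1) * P(w2).
    return alpha * p_unigram(w2)

# (a, b) and (c, b) are both unseen; a and c have the same frequency,
# so back-off cannot distinguish the two bigrams.
print(p_backoff_unseen("a", "b"), p_backoff_unseen("c", "b"))
```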
  6. In class-based methods, words of similar meaning are grouped statistically into classes.
     - Each group of words has a single representative: its class.
     - A word is therefore modeled by the average behavior of many words.
     - When in doubt between two words, consult the training data for the other words in their classes.
     E.g.: {a, b, c, d, e} and {f, g, h, i}
  7. Drawbacks:
     - Because a word is modeled by the average behavior of many words, the unique aspects of its meaning are lost.
       E.g.: thanda
     - Initially the probability of unseen word pairs remains zero, which leads to extremely inaccurate estimates of word-pair probabilities.
       E.g.: periodic table
  8. In similarity-based methods, the estimates for the words most similar to a word w are combined; the evidence provided by each word w' is weighted by a function of its similarity to w.
     No word pair is dropped, however rare, unlike in Katz's back-off scheme.
  9. Similarity-based estimation involves three components:
     - A scheme for deciding which word pairs require similarity-based estimation.
     - A method for combining information from similar words.
     - A function measuring similarity between words.
  10. The strengths of Katz's back-off scheme and MLE are combined.
      Under MLE, the probability is
        P_ML(w2|w1) = c(w1, w2) / c(w1)
      For the similarity-based model:
        P(w2|w1) = P_d(w2|w1)             if c(w1, w2) > 0   (seen pair)
        P(w2|w1) = alpha(w1) P_r(w2|w1)   otherwise          (unseen pair)
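The two-case estimator on this slide can be sketched as follows. The function names are my own, and the discounting of the seen-pair estimate (the P_d of the slide) is omitted, so this is a simplification rather than the paper's exact model:

```python
def p_combined(w2, w1, bigram_counts, unigram_counts, p_r, alpha):
    """Sketch of the combined estimator: an MLE-style estimate for seen
    pairs, and a similarity-based redistribution alpha(w1) * P_r(w2|w1)
    for unseen pairs.  `p_r` and `alpha` are assumed to be supplied by
    the similarity model (hypothetical callables here)."""
    c12 = bigram_counts.get((w1, w2), 0)
    if c12 > 0:
        # P_ML(w2|w1) = c(w1, w2) / c(w1); the paper discounts this
        # slightly (P_d), which is omitted for clarity.
        return c12 / unigram_counts[w1]
    # Unseen pair: left-over mass alpha(w1) spread according to P_r.
    return alpha(w1) * p_r(w2, w1)
```

For example, with `bigram_counts = {("eat", "apples"): 3}` and `unigram_counts = {"eat": 10}`, the seen pair ("eat", "apples") gets 0.3 and any unseen pair falls through to the similarity-based branch.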
  11. Similarity-based models assume that if a word w1' is similar to w1, then w1' yields information about the probability of unseen word pairs involving w1.
      The intuition: w2 is more likely to occur with w1 if it tends to occur with the words most similar to w1.
      They use a weighted average of the evidence provided by similar words, where the weight given to a particular word depends on its similarity to w1.
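The weighted average described in this slide can be written compactly. The interfaces (`neighbors`, `cond_prob`, `similarity`) are hypothetical stand-ins for the model's components:

```python
def p_sim(w2, w1, neighbors, cond_prob, similarity):
    """Weighted-average estimate: evidence from each word w1' similar
    to w1 is weighted by its similarity to w1.

    neighbors:  iterable of words deemed similar to w1
    cond_prob:  cond_prob(w2, w1p) -> P(w2|w1'), from training counts
    similarity: similarity(w1, w1p) -> non-negative weight
    """
    norm = sum(similarity(w1, w1p) for w1p in neighbors)
    return sum(similarity(w1, w1p) * cond_prob(w2, w1p)
               for w1p in neighbors) / norm
```

Dividing by `norm` makes the weights sum to one, so the result is a proper convex combination of the neighbors' conditional probabilities.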
  12. The number of words considered similar to a word w1 is capped, because on a large training set using all words would consume a very large amount of resources.
      The number of similar words (k) and the dissimilarity threshold (t) are tuned experimentally.
  13. These word-similarity functions can be derived automatically from training-data statistics, as opposed to functions derived from manually constructed word classes:
      - KL divergence
      - Total divergence to the average
      - L1 norm
      - Confusion probability
  14. KL divergence is the standard measure of dissimilarity between two probability mass functions:
        D(w1 || w1') = sum over w2 of P(w2|w1) log [ P(w2|w1) / P(w2|w1') ]
      For D to be defined, P(w2|w1') > 0 whenever P(w2|w1) > 0.
      This condition often fails in practice, so smoothing is required, which is very expensive for large vocabularies.
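A minimal sketch of the KL measure just described, with distributions as dictionaries mapping w2 to probability (the names are my own). It returns infinity exactly in the undefined case that forces smoothing:

```python
import math

def kl_divergence(p, q):
    """D(p || q) = sum_x p(x) * log(p(x) / q(x)).

    p and q are dicts mapping outcomes to probabilities.  The result is
    infinite when q(x) = 0 for some x with p(x) > 0 -- precisely the
    case the slide says requires (expensive) smoothing to avoid.
    """
    total = 0.0
    for x, px in p.items():
        if px == 0.0:
            continue  # convention: 0 * log 0 = 0
        qx = q.get(x, 0.0)
        if qx == 0.0:
            return math.inf  # D is undefined (infinite) here
        total += px * math.log(px / qx)
    return total
```

Note that D is asymmetric and zero only when the two distributions agree on p's support.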
  15. Total divergence to the average is a relative measure based on the total KL divergence to the average of the two distributions:
        A(w1, w1') = D(w1 || avg) + D(w1' || avg),
      where avg = ( P(.|w1) + P(.|w1') ) / 2.
      This reduces to a sum involving only the w2 seen with w1 or w1'.
  16. A(w1, w1') is bounded, ranging between 0 and 2 log 2.
      Smoothed estimates are not required, because the average distribution is non-zero wherever either of the two distributions is.
      Computing A(w1, w1') requires summing only over those w2 for which P(w2|w1) and P(w2|w1') are both non-zero (the rest contributes a constant factor of log 2 per unit of unshared mass), which makes the computation quite fast.
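A direct sketch of the A measure defined above (names are my own). For clarity it sums over the full union of supports rather than using the shared-support shortcut the slide mentions; both give the same value:

```python
import math

def total_divergence_to_average(p, q):
    """A(w1, w1') = D(p || m) + D(q || m), with m = (p + q) / 2.

    p and q are dicts mapping w2 to P(w2|w1) and P(w2|w1').  Always
    finite (no smoothing needed), since m is non-zero wherever p or q
    is, and bounded above by 2 * log 2 (reached on disjoint supports).
    """
    total = 0.0
    for x in set(p) | set(q):
        px, qx = p.get(x, 0.0), q.get(x, 0.0)
        m = (px + qx) / 2.0
        if px > 0.0:
            total += px * math.log(px / m)
        if qx > 0.0:
            total += qx * math.log(qx / m)
    return total
```

Identical distributions give A = 0; completely disjoint ones give exactly 2 log 2, matching the stated bound.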
  17. The L1 norm is defined as
        L(w1, w1') = sum over w2 of | P(w2|w1) - P(w2|w1') |
      which can be rewritten as a sum over only those w2 seen with both w1 and w1':
        L(w1, w1') = 2 - 2 * sum over shared w2 of min( P(w2|w1), P(w2|w1') )
      It is also bounded, ranging between 0 and 2.
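The L1 measure is the simplest of the four; a sketch with the same dictionary convention as above (names are my own):

```python
def l1_distance(p, q):
    """L(w1, w1') = sum_x |P(x|w1) - P(x|w1')|.

    Ranges from 0 (identical distributions) to 2 (disjoint supports).
    """
    return sum(abs(p.get(x, 0.0) - q.get(x, 0.0))
               for x in set(p) | set(q))
```

Because sum |p - q| = 2 - 2 * sum min(p, q) for two distributions, the same value can be computed from the shared support alone, which is what makes the measure cheap on sparse data.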
  18. Confusion probability estimates the probability that a word w1' can be substituted for w1.
      Unlike D, A, and L, under this measure a word w1 may not be "closest" to itself: there may exist a word w1' such that P_c(w1'|w1) > P_c(w1|w1).
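A hedged sketch of the confusion probability, taking it to be P_c(w1'|w1) = sum over w2 of P(w2|w1) P(w2|w1') P(w1') / P(w2), i.e. the probability that w1' is substitutable for w1 given the right-hand words they share (the data-structure conventions are my own):

```python
def confusion_probability(w1p, w1, cond, p_word):
    """Estimate P_c(w1'|w1): how substitutable w1' is for w1.

    cond:   dict of dicts, cond[w][w2] = P(w2|w), from training counts
    p_word: dict of marginal word probabilities P(w)
    """
    total = 0.0
    for w2, p2_given_w1 in cond[w1].items():
        p2_given_w1p = cond[w1p].get(w2, 0.0)
        if p2_given_w1p > 0.0:
            # Each shared right-hand word w2 contributes evidence that
            # w1' can stand in for w1, weighted by w1's own prior mass.
            total += p2_given_w1 * p2_given_w1p * p_word[w1p] / p_word[w2]
    return total
```

A toy case shows the slide's caveat: if x and y occur with exactly the same right-hand words but y is far more frequent, then P_c(y|x) exceeds P_c(x|x), so x is not "closest" to itself.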
  19. Because the sense inventory provided by a dictionary may be too fine or too coarse, and obtaining correctly sense-tagged training data would take a large amount of resources, the experiments use pseudo-words.
      E.g.: {make, take} plans
            {make, take} action
      where {make, take} is a pseudo-word tested with "plans" and "action".
  20. Each test instance pairs a noun with two verbs, and the method decides which verb is more likely to take the noun as a direct object.
      The bigram language model was built from 587,833 bigrams.
      Testing used 17,152 unseen bigrams, divided into five equal parts, T1 to T5.
      Error rate was used as the performance metric.
  21. Back-off consistently performed worse than MLE, so it was excluded from further experiments.
      Because the experiments use only unsmoothed data, KL divergence was also excluded.
  22. Conclusions:
      - Similarity-based methods performed 40% better than the back-off and MLE methods.
      - Singletons should not be omitted from the training data for similarity-based methods.
      - The total-divergence-to-average method (A) performed best in all cases.