Defense Slides


  • In the case of a neural network, for example, the learned model is quite meaningless (hard to interpret).
  • Bigrams and unigrams: (interest rate) and (rate) suggest the financial sense of interest.
  • If we know the POS of certain words, pre-tagging such words can improve the overall quality of POS tagging by the automatic tagger. Note the tag of chair (NN changed to VB). Note that we are no longer confident of the quality of the tagging around chair now. We found a lot of such mis-taggings of the head words in the Senseval-1 and Senseval-2 data (5% of head words had radical mistags, and 20% had mistags in all, radical and subtle). So we decided to find out why this was happening and, hopefully, do something about it.
  • Notice the different tag sets on the right of turn. P0, P-2 etc. have similar meanings. By combination I mean one tree where the nodes may be any of the different POS features: P0 or P1 or P-2, and so on.
  • We wanted to utilize guaranteed pre-tagging for higher-quality parsing. Head and parent words are marked in red, and all four of them suggest a particular sense of hard and line. The hard work: the not easy, difficult sense. The hard surface: the not soft, physical sense. Fasten the line: the cord sense. Cross the line: the division sense. In the sentence fragments containing line, fasten and cross are the parents of the noun phrase "the line".
  • The Senseval-1 (2-24 senses) and Senseval-2 (2-32 senses) data were created such that target words with varying numbers of senses are represented. Senseval-1 is annotated with senses from HECTOR, Senseval-2 from WordNet. The interest data was created by Bruce and Wiebe from the Penn Treebank and WSJ (ACL/DCI version), annotated with 6 senses from LDOCE. The serve data was created by Leacock and Chodorow from the WSJ (1987-89) and APHB corpus, annotated with four senses from WordNet. The hard data was created by Leacock and Chodorow from the SJM corpus, annotated with three senses from WordNet. The line data was created by Leacock et al. from the WSJ (1987-89) and APHB corpus, annotated with 6 senses from WordNet.
  • The first occurrence of line/hard/serve/interest is chosen as the target word (assuming all occurrences in an instance have the same sense – one sense per discourse).
  • The surface form does not do much better than the baseline. Unigrams and bigrams both do significantly well (especially considering they are lexical features, easily captured).
  • We have improvements over the baseline (much is not expected, as we are using just individual POS tags). Interestingly, P1 is found to be best (we found this in all the data). The breakdown by part of speech shows that verbs and adjectives do best with P1: the verb-object relation is, in effect, getting captured. Nouns are helped by POS tags on either side: both the subject-verb and verb-object relations matter (hence both sides help).
  • Results similar to Senseval-1.
  • A simple combination of POS features does almost as well as unigrams and bigrams. Note the much lower number of features utilized as compared to unigrams and bigrams. (P0, P1) was found to be the most potent combination for Senseval-1 and 2. A larger context was found to be much more helpful for the line, hard, serve and interest data as compared to the Senseval data. We think this is because of the much larger amounts of training data.
  • Guaranteed pre-tagging was found to help WSD. There is not much effect, as Brill's contextual rule tagger is not being applied that much; that's because the tagger was trained on the Penn Treebank and the Senseval data comes from ??? But we see some improvement, and that's good.
  • The head feature was found to be best. Verbs are usually heads themselves, and hence the head feature is not very useful for them. The parent feature was found to do reasonably well.
  • Results similar to the last slide.
  • The optimal ensemble is the upper bound for the accuracy achievable by an ensemble technique. One tree with all features may yield even better results, but we cannot say much about that; it is beyond the scope of this work.
  • Note the reasonable amount of redundancy (Base): that was expected. Note that the simple ensemble does slightly better than the individual features; in the case of the line and hard data it does worse (not sure why). This suggests that a powerful ensemble technique is desirable. Note the large amount of complementarity suggested by the optimal ensemble values, which are around the best achieved so far. A combination of simple lexical and syntactic features can yield results close to the state of the art.

    1. Combining Lexical and Syntactic Features for Supervised Word Sense Disambiguation
       Masters Thesis: Saif Mohammad
       Advisor: Dr. Ted Pedersen
       University of Minnesota, Duluth
    2. Path Map
       • Introduction
       • Background
       • Data
       • Experiments
       • Conclusions
    3. Word Sense Disambiguation (WSD)
       • Harry cast a bewitching spell
       • Humans immediately understand spell to mean a charm or incantation.
         • Reading out letter by letter, or a period of time?
           • Words with multiple senses – polysemy, ambiguity!
         • We utilize background knowledge and context.
       • Machines lack background knowledge.
         • Automatically identifying the intended sense of a word in written text, based on its context, remains a hard problem.
         • Best accuracies in a recent international event: around 65%.
    4. Why do we need WSD!
       • Information Retrieval
         • Query: cricket bat
           • Documents pertaining to the insect and the mammal are irrelevant.
       • Machine Translation
         • Consider English to Hindi translation.
           • Is head to be translated as sar (upper part of the body) or adhyaksh (leader)?
       • Machine-human interaction
         • Instructions to machines.
           • Interactive home system: turn on the lights
           • Domestic android: get the door
       • Applications are widespread and will affect our way of life.
    5. Terminology
       • Harry cast a bewitching spell
       • Target word – the word whose intended sense is to be identified.
         • spell
       • Context – the sentence housing the target word and possibly 1 or 2 sentences around it.
         • Harry cast a bewitching spell
       • Instance – the target word along with its context.
       • WSD is a classification problem wherein the occurrence of the target word is assigned to one of its many possible senses.
    6. Corpus-Based Supervised Machine Learning
       • "A computer program is said to learn from experience … if its performance at tasks … improves with experience." – Mitchell
       • Task: Word Sense Disambiguation of given test instances.
       • Performance: the ratio of instances correctly disambiguated to the total test instances – accuracy.
       • Experience: manually created instances such that target words are marked with the intended sense – training instances.
         • Harry cast a bewitching spell / incantation
    7. Path Map
       • Introduction
       • Background
       • Data
       • Experiments
       • Conclusions
    8. Decision Trees
       • A kind of classifier.
         • Assigns a class by asking a series of questions.
         • Questions correspond to features of the instance.
         • The question asked depends on the answer to the previous question.
       • Inverted tree structure.
         • Interconnected nodes.
           • The topmost node is called the root.
         • Each node corresponds to a question / feature.
         • Each possible value of a feature has a corresponding branch.
         • Leaves terminate every path from the root.
           • Each leaf is associated with a class.
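The question-by-question walk described above can be sketched as a small hand-rolled tree. This is an illustrative toy, not the thesis system; the feature names and senses are hypothetical, loosely mirroring the "WSD Tree" slide:

```python
def classify(features, node):
    """Walk the tree: each internal node asks about one binary feature;
    a string node is a leaf carrying the assigned sense."""
    if isinstance(node, str):          # leaf reached: return its class
        return node
    feature, branches = node
    return classify(features, branches[features.get(feature, 0)])

# Toy tree: internal nodes test binary features, leaves assign senses.
tree = ("Feature 1", {
    0: ("Feature 2", {0: "SENSE 1", 1: "SENSE 2"}),
    1: ("Feature 4", {0: "SENSE 3", 1: "SENSE 4"}),
})

print(classify({"Feature 1": 1, "Feature 4": 0}, tree))  # SENSE 3
```

Each call asks one question; the answer selects the branch, exactly as in the inverted-tree description above.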
    9. Automating Toy Selection for Max
       [Decision tree figure: the root asks "Moving Parts?"; internal nodes ask "Color?", "Size?" and "Car?"; branches carry the answers (Yes/No, Blue/Red/Other, Big/Small); leaves are labeled LOVE, SO SO and HATE.]
    10. WSD Tree
        [Decision tree figure: nodes test binary features (Feature 1 through Feature 4, values 0/1); leaves assign SENSE 1 through SENSE 4.]
    11. Choice of Learning Algorithm
        • Why use decision trees for WSD?
          • They have drawbacks – training data fragmentation.
          • What about other learning algorithms such as neural networks?
        • Context is a rich source of discrete features.
        • The learned model is likely meaningful.
          • It may provide insight into the interaction of features.
        • Pedersen [2001]*: choosing the right features is of greater significance than the learning algorithm itself.
        • * "A Decision Tree of Bigrams is an Accurate Predictor of Word Sense", T. Pedersen, In the Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL-01), June 2-7, 2001, Pittsburgh, PA.
    12. Lexical Features
        • Surface form
          • A word as we observe it in text.
          • case (n): 1. object of investigation  2. frame or covering  3. a weird person
            • Surface forms: case, cases, casing
            • An occurrence of casing suggests sense 2.
        • Unigrams and Bigrams
          • One-word and two-word sequences in text.
          • The interest rate is low
          • Unigrams: the, interest, rate, is, low
          • Bigrams: the interest, interest rate, rate is, is low
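The unigram and bigram features above can be extracted in a few lines. A minimal sketch; a real feature extractor would also handle frequency cutoffs, punctuation, and so on:

```python
def unigrams_and_bigrams(sentence):
    """Return the one-word and two-word sequences in a sentence."""
    tokens = sentence.lower().split()
    # Pair each token with its right neighbor to form bigrams.
    bigrams = [" ".join(pair) for pair in zip(tokens, tokens[1:])]
    return tokens, bigrams

uni, bi = unigrams_and_bigrams("The interest rate is low")
print(uni)  # ['the', 'interest', 'rate', 'is', 'low']
print(bi)   # ['the interest', 'interest rate', 'rate is', 'is low']
```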
    13. Part of Speech Tagging
        • A pre-requisite for many natural language tasks.
          • Parsing, WSD, anaphora resolution
        • Brill Tagger* – the most widely used tool.
          • Accuracy around 95%.
          • Source code available.
          • Easily understood rules.
          • Harry/NNP cast/VBD a/DT bewitching/JJ spell/NN
          • NNP proper noun, VBD verb past, DT determiner, NN noun
        • *
    14. Pre-Tagging
        • Pre-tagging is the act of manually assigning tags to selected words in a text prior to tagging.
          • Mona will sit in the pretty chair//NN this time
          • chair is the pre-tagged word, NN is its pre-tag.
          • Pre-tags are reliable anchors or seeds around which tagging is done.
        • The Brill Tagger facilitates pre-tagging.
          • But the pre-tag is not always respected!
          • Mona/NNP will/MD sit/VB in/IN the/DT pretty/RB chair//VB this/DT time/NN
    15. The Brill Tagger
        • Initial state tagger – assigns the most frequent tag for a type based on entries in a Lexicon (pre-tag respected).
        • Final state tagger – may modify the tag of a word based on context (pre-tag not given special treatment).
        • Relevant Lexicon Entries

            Type     Most frequent tag   Other possible tags
            chair    NN (noun)           VB (verb)
            pretty   RB (adverb)         JJ (adjective)

        • Relevant Contextual Rules

            Current Tag   New Tag   When
            NN            VB        NEXTTAG DT
            RB            JJ        NEXTTAG NN
    16. Guaranteed Pre-Tagging*
        • A patch to the tagger is provided – BrillPatch.
          • Application of contextual rules to the pre-tagged words is bypassed.
          • Application of contextual rules to non pre-tagged words is unchanged.
            • Mona/NNP will/MD sit/VB in/IN the/DT pretty/JJ chair//NN this/DT time/NN
        • The tag of chair is retained as NN.
          • The contextual rule to change the tag of chair from NN to VB is not applied.
        • The tag of pretty is transformed.
          • The contextual rule to change the tag of pretty from RB to JJ is applied.
        • * "Guaranteed Pre-Tagging for the Brill Tagger", Mohammad, S. and Pedersen, T., In Proceedings of the Fourth International Conference on Intelligent Text Processing and Computational Linguistics, February 2003, Mexico.
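The behavior on this slide can be mimicked with a toy contextual-rule pass that skips pre-tagged positions. This is a hypothetical simplification for illustration, not the actual BrillPatch code:

```python
def apply_contextual_rules(tags, rules, pretagged=frozenset()):
    """Apply NEXTTAG-style contextual rules left to right,
    never modifying a guaranteed pre-tagged position."""
    tags = list(tags)
    for i in range(len(tags) - 1):
        if i in pretagged:
            continue  # guaranteed pre-tag: contextual rules bypassed
        for old, new, next_tag in rules:
            if tags[i] == old and tags[i + 1] == next_tag:
                tags[i] = new
    return tags

rules = [("NN", "VB", "DT"),  # NN -> VB when NEXTTAG is DT
         ("RB", "JJ", "NN")]  # RB -> JJ when NEXTTAG is NN

# "... the pretty chair this time", with chair (index 2) pre-tagged NN
tags = ["DT", "RB", "NN", "DT", "NN"]
print(apply_contextual_rules(tags, rules, pretagged={2}))
# ['DT', 'JJ', 'NN', 'DT', 'NN'] -- pretty fixed to JJ, chair kept NN
```

Without the `pretagged` set, the first rule fires on chair (its right neighbor is DT) and mis-tags it VB, exactly the failure shown on the Pre-Tagging slide.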
    17. Part of Speech Features
        • A word used in different senses is likely to have different sets of POS tags around it.
          • Why did jack turn/VB against/IN his/PRP$ team/NN
          • Why did jack turn/VB left/NN at/IN the/DT crossing
        • Features used
          • Individual word POS: P-2, P-1, P0, P1, P2
            • P1 = JJ implies that the word to the right of the target word is an adjective.
          • A combination of the above.
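Extracting the individual P-2 … P2 features from a tagged sentence is straightforward. A sketch; the tags for the first three words below are assumed, only the slide's own tags (turn/VB left/NN at/IN the/DT) come from the example:

```python
def pos_window(tags, target_index, offsets=(-2, -1, 0, 1, 2)):
    """P_i features: the POS tag at each offset from the target word
    (None when the offset falls outside the sentence)."""
    feats = {}
    for off in offsets:
        j = target_index + off
        feats["P%d" % off] = tags[j] if 0 <= j < len(tags) else None
    return feats

# "Why did jack turn left at the crossing", target word turn (index 3)
tags = ["WRB", "VBD", "NNP", "VB", "NN", "IN", "DT", "NN"]
print(pos_window(tags, 3))
# {'P-2': 'VBD', 'P-1': 'NNP', 'P0': 'VB', 'P1': 'NN', 'P2': 'IN'}
```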
    18. Parse Features
        • The Collins Parser* is used to parse the data.
          • Source code available.
          • Uses part of speech tagged data as input.
        • Head word of a phrase.
          • the hard work, the hard surface
          • The phrase itself: noun phrase, verb phrase and so on.
        • Parent: the head word of the parent phrase.
          • fasten the line, cross the line
          • The parent phrase.
        • *
    19. Sample Parse Tree
        [Parse tree figure: SENTENCE → NOUN PHRASE (Harry/NNP) + VERB PHRASE (cast/VBD + NOUN PHRASE (a/DT bewitching/JJ spell/NN)).]
    20. Path Map
        • Introduction
        • Background
        • Data
        • Experiments
        • Conclusions
    21. Sense-Tagged Data
        • Senseval-2 data
          • 4,328 instances of test data and 8,611 instances of training data, ranging over 73 different nouns, verbs and adjectives.
        • Senseval-1 data
          • 8,512 test instances and 13,276 training instances, ranging over 35 nouns, verbs and adjectives.
        • line, hard, serve, interest data
          • 4,149, 4,337, 4,378 and 2,476 sense-tagged instances with line, hard, serve and interest as the head words.
          • Around 50,000 sense-tagged instances in all!
    22. Data Processing
        • Packages to convert the line, hard, serve and interest data to the Senseval-1 and Senseval-2 data formats.
        • refine preprocesses data in the Senseval-2 data format to make it suitable for tagging.
          • Restores one sentence per line and one line per sentence, pre-tags the target words, splits long sentences.
        • posSenseval part of speech tags any data in the Senseval-2 data format.
          • The Brill Tagger, along with Guaranteed Pre-tagging, is utilized.
        • parseSenseval parses data in the format output by the Brill Tagger.
          • Restores XML tags, creating a parsed file in the Senseval-2 data format.
          • Uses the Collins Parser.
    23. Sample line Data Instance
        • Original instance:
          art} aphb 01301041:
          " There's none there . " He hurried outside to see if there were any dry ones on the line .
        • Senseval-2 data format:
          <instance id="} aphb 01301041: ">
          <answer instance="} aphb 01301041: " senseid="cord"/>
          <context>
          <s> " There's none there . " </s> <s> He hurried outside to see if there were any dry ones on the <head> line </head> . </s>
          </context>
          </instance>
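An instance in this format can be read with the standard library's XML parser. A minimal sketch over a simplified, hypothetical instance (the real data files wrap many instances and use the corpus's own ids):

```python
import xml.etree.ElementTree as ET

# A simplified instance in the Senseval-2 data format (made-up id).
xml = """<instance id="line-example">
  <answer instance="line-example" senseid="cord"/>
  <context> He hurried outside to see if there were any dry ones
  on the <head>line</head> . </context>
</instance>"""

inst = ET.fromstring(xml)
sense = inst.find("answer").get("senseid")   # the annotated sense
target = inst.find("context/head").text      # the marked target word
print(sense, target)  # cord line
```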
    24. Sample Output from parseSenseval
        <instance id="harry">
        <answer instance="harry" senseid="incantation"/>
        <context>
        Harry cast a bewitching <head> spell </head>
        </context>
        </instance>

        <instance id="harry">
        <answer instance="harry" senseid="incantation"/>
        <context>
        <P="TOP~cast~1~1"> <P="S~cast~2~2"> <P="NPB~Potter~2~2"> Harry
        <p="NNP"/> <P="VP~cast~2~1"> cast <p="VB"/> <P="NPB~spell~3~3">
        a <p="DT"/> bewitching <p="JJ"/> spell <p="NN"/> </P> </P> </P> </P>
        </context>
        </instance>
    25. Issues…
        • How is the target word identified in the line, hard and serve data?
        • How is the data tokenized for better quality POS tagging and parsing?
        • How is the data pre-tagged?
        • How is the parse output of the Collins Parser interpreted?
        • How is the parsed output XML'ized and brought back to the Senseval-2 data format?
        • What are the idiosyncrasies of the line, hard, serve, interest, Senseval-1 and Senseval-2 data, and how are they handled?
    26. Path Map
        • Introduction
        • Background
        • Data
        • Experiments
        • Conclusions
    27. Lexical: Senseval-1 & Senseval-2

        Data      Majority   Surface Form   Unigram   Bigram
        Sval-1    56.3%      62.9%          66.9%     66.9%
        Sval-2    47.7%      49.3%          55.3%     55.1%
        line      54.3%      54.3%          74.5%     72.9%
        hard      81.5%      81.5%          83.4%     89.5%
        serve     42.2%      44.2%          73.3%     72.1%
        interest  54.9%      64.0%          75.7%     79.9%
    28. Individual Word POS (Senseval-1)

                  Adj.    Verbs   Nouns   All
        Majority  64.3%   56.9%   57.2%   56.3%
        P-2       64.0%   58.6%   58.2%   57.5%
        P-1       64.3%   58.2%   62.2%   59.2%
        P0        64.3%   58.2%   62.5%   60.3%
        P1        66.2%   64.4%   65.4%   63.9%
        P2        65.2%   60.8%   60.0%   59.9%
    29. Individual Word POS (Senseval-2)

                  Adj.    Verbs   Nouns   All
        Majority  59.0%   39.7%   51.0%   47.7%
        P-2       57.9%   38.0%   51.9%   47.1%
        P-1       59.0%   40.2%   55.2%   49.6%
        P0        58.2%   40.6%   55.7%   49.9%
        P1        61.0%   49.1%   53.8%   53.1%
        P2        59.4%   43.2%   50.2%   48.9%
    30. Combining POS Features

        Data      Majority   P0,P1   P-1,P0,P1   P-2,P-1,P0,P1,P2
        Sval-1    56.3%      66.7%   68.0%       67.8%
        Sval-2    47.7%      54.3%   54.6%       54.6%
        line      54.3%      54.1%   60.4%       62.3%
        hard      81.5%      81.9%   84.8%       86.2%
        serve     42.2%      60.2%   73.0%       75.7%
        interest  54.9%      70.5%   78.8%       80.6%
    31. Effect of Guaranteed Pre-tagging on WSD

                            Senseval-1           Senseval-2
                            Reg. P.   Guar. P.   Reg. P.   Guar. P.
        P0,P1               66.7%     66.7%      53.8%     54.3%
        P-1,P0              62.1%     62.2%      50.9%     50.8%
        P-1,P0,P1           67.6%     68.0%      54.7%     54.6%
        P-2,P-1,P0,P1,P2    66.1%     67.8%      54.1%     54.6%
    32. Parse Features (Senseval-1)

                   Adj.    Verbs   Nouns   All
        Majority   64.3%   56.9%   57.2%   56.3%
        Head       66.9%   59.8%   70.9%   64.3%
        Parent     65.8%   60.3%   62.6%   60.6%
        Phrase     66.2%   57.2%   57.5%   58.5%
        Par. Phr.  66.2%   58.3%   58.1%   57.9%
    33. Parse Features (Senseval-2)

                   Adj.    Verbs   Nouns   All
        Majority   59.0%   39.7%   51.0%   47.7%
        Head       64.0%   39.8%   58.5%   51.7%
        Parent     59.3%   40.1%   56.1%   50.0%
        Phrase     59.5%   40.3%   51.7%   48.3%
        Par. Phr.  60.3%   39.1%   53.0%   48.5%
    34. Thoughts…
        • Both lexical and syntactic features perform comparably.
        • But do they get the same instances right?
          • How redundant are the individual feature sets?
        • Are there instances correctly disambiguated by one feature set and not by the other?
          • How complementary are the individual feature sets?
          • Is the effort to combine lexical and syntactic features justified?
    35. Measures
        • Baseline Ensemble: the accuracy of a hypothetical ensemble which predicts the sense correctly only if both individual feature sets do so.
          • Quantifies redundancy amongst feature sets.
        • Optimal Ensemble: the accuracy of a hypothetical ensemble which predicts the sense correctly if either of the individual feature sets does so.
          • The difference from the individual accuracies quantifies complementarity.
        • We used a simple ensemble which sums up the probabilities assigned to each sense by the individual feature sets to decide the intended sense.
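Given per-instance correctness of two feature sets, the baseline and optimal ensembles reduce to AND and OR over those lists. A sketch; the correctness values below are made up for illustration:

```python
def ensemble_bounds(correct_a, correct_b):
    """Baseline ensemble: both feature sets must be right (redundancy).
    Optimal ensemble: either being right suffices (complementarity)."""
    n = len(correct_a)
    baseline = sum(a and b for a, b in zip(correct_a, correct_b)) / n
    optimal = sum(a or b for a, b in zip(correct_a, correct_b)) / n
    return baseline, optimal

# Hypothetical correctness of two feature sets on five test instances
lexical = [True, True, False, True, False]
syntactic = [True, False, True, True, False]
print(ensemble_bounds(lexical, syntactic))  # (0.4, 0.8)
```

Any real ensemble over the two feature sets scores between these two bounds, which is why the optimal ensemble is the upper bound quoted in the results.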
    36. Best Combinations

        Data      Majority   Set 1            Set 2             Base    Ensemble   Opt. Ens.   Best
        Sval-1    56.3%      Unigrams 66.9%   P-1,P0,P1 68.0%   57.6%   71.1%      78.0%       81.1%
        Sval-2    47.7%      Unigrams 55.3%   P-1,P0,P1 55.3%   43.6%   57.0%      67.9%       66.7%
        line      54.3%      Unigrams 74.5%   P-1,P0,P1 60.4%   55.1%   74.2%      82.0%       88.0%
        hard      81.5%      Bigrams 89.5%    Head, Par 87.7%   86.1%   88.9%      91.3%       83.0%
        serve     42.2%      Unigrams 73.3%   P-1,P0,P1 73.0%   58.4%   81.6%      89.9%       83.0%
        interest  54.9%      Bigrams 79.9%    P-1,P0,P1 78.8%   67.6%   83.2%      90.1%       89.0%
    37. Path Map
        • Introduction
        • Background
        • Data
        • Experiments
        • Conclusions
    38. Conclusions
        • A significant amount of complementarity exists across lexical and syntactic features.
          • Combination of the two is justified.
        • We show that simple lexical and part of speech features can achieve state of the art results.
        • How best to capitalize on the complementarity is still an open issue.
    39. Conclusions (continued)
        • The part of speech of the word immediately to the right of the target word was found most useful.
          • The POS of words immediately to the right of the target word is best for verbs and adjectives.
          • Nouns are helped by tags on either side.
          • (P0, P1) was found to be most potent in the case of small amounts of training data per target word (Senseval data).
          • A larger POS context (P-2, P-1, P0, P1, P2) is beneficial when the training data per target word is large (line, hard, serve and interest data).
        • The head word of a phrase is particularly useful for adjectives.
          • Nouns are helped by both head and parent.
    40. Other Contributions
        • Converted the line, hard, serve and interest data into the Senseval-2 data format.
        • Part of speech tagged and parsed the Senseval-2, Senseval-1, line, hard, serve and interest data.
        • Developed the Guaranteed Pre-tagging mechanism to improve the quality of POS tagging.
          • Showed that guaranteed pre-tagging improves WSD.
          • The hard and serve data, part of speech tagged using Guaranteed Pre-tagging, is part of the NLTK data kit.
    41. Code, Data & Resources
        • SyntaLex: a system to do WSD using lexical and syntactic features. Weka's decision tree learning algorithm is utilized.
        • posSenseval: part of speech tags any data in the Senseval-2 data format. The Brill Tagger is used.
        • parseSenseval: parses data in the format output by the Brill Tagger. Output is in the Senseval-2 data format with part of speech and parse information as XML tags. Uses the Collins Parser.
        • Packages to convert the line, hard, serve and interest data to the Senseval-1 and Senseval-2 data formats.
        • BrillPatch: a patch to the Brill Tagger to employ Guaranteed Pre-Tagging.
    42. Documentation
        • "Combining Lexical and Syntactic Features for Supervised Word Sense Disambiguation", Mohammad, S. and Pedersen, T., To appear in the Proceedings of the Eighth Conference on Natural Language Learning at HLT-NAACL, May 2004, Boston.
        • "Guaranteed Pre-Tagging for the Brill Tagger", Mohammad, S. and Pedersen, T., In Proceedings of the Fourth International Conference on Intelligent Text Processing and Computational Linguistics, February 2003, Mexico.
        • "Combining Lexical and Syntactic Features for Supervised Word Sense Disambiguation", Mohammad, S., Masters Thesis, August 2003, University of Minnesota, Duluth.
    43. Senseval-3 (March 1 to April 15, 2004)
        • Around 8,000 training and 4,000 test instances.
        • Results expected shortly.

        Thank You