Wilson, MPQA chapter 8 presentation

    Presentation Transcript

    • 8.0 Recognizing Attitude Types: Sentiment and Arguing. Chapter 8 Presentation, 13.04.2011, 2011-22583 고민수
    • 8.0 Recognizing Attitude Types: Sentiment and Arguing
      • Recognizing sentiment/arguing attitudes
      • Attitude annotation
      • Attribution level: defined by the direct subjective and speech event (DSSE) expressions in the sentence.
      • Mixtures of positive and negative contextual polarity annotations
      • Attribution-level sentiment classification
      (Classification units range from single words to entire sentences.)
    • 8.1 Datasets
      • Attitude dataset : 284 MPQA corpus documents with attitude annotations
      • Full dataset : 494 MPQA corpus documents
      • Test folds
      • 1. The 4,499 sentences from the smaller attitude dataset are randomly assigned to the different folds.
      • 2. The 5,788 sentences from remaining documents in the full dataset are randomly assigned to the folds.
    • 8.2 Subjectivity Lexicon
      • Subjectivity lexicon: the same one used in Chapter 6.
      • Each clue in the lexicon is tagged
      • 1. reliability class – strongly subjective ( strongsubj ) or weakly subjective ( weaksubj )
      • 2. prior polarity – positive, negative, both, or neutral
      • Additional information : prior arguing polarity
      • intended to capture whether a word out of context seems like it would be used to argue for or against something, or to argue that something is or is not true
      Positive PAP: accuse, must, absolutely (2.6%)
      Negative PAP: deny, impossible, rather (1.8%)
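      A minimal sketch of how a lexicon clue and its tags might be represented in code; the field names, the dict-based lookup, and the two example entries are illustrative assumptions, not the MPQA lexicon's actual file format.

        from dataclasses import dataclass

        @dataclass
        class ClueEntry:
            word: str
            reliability: str      # "strongsubj" or "weaksubj"
            prior_polarity: str   # "positive", "negative", "both", or "neutral"
            prior_arguing: str    # "positive", "negative", or "none" (the added PAP tag)

        # Hypothetical entries echoing the examples above.
        LEXICON = {
            "accuse":     ClueEntry("accuse", "strongsubj", "negative", "positive"),
            "impossible": ClueEntry("impossible", "strongsubj", "negative", "negative"),
        }

        def lookup(word):
            """Return the lexicon entry for a word, or None if it is not a clue."""
            return LEXICON.get(word.lower())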
    • 8.3 Units of Classification
      • What units will be classified?
      • The attitude frames are linked to direct subjective frames, which raises the possibility of classifying attitudes at different attribution levels.
      • Each direct subjective or objective speech event annotation in a sentence represents an attribution level.
      • (8.1) [ implicit ] African observers generally approved of his victory while Western governments denounced it.
      Attribution levels in (8.1): the writer's speech event, the first direct subjective frame, and the second direct subjective frame.
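      To make the example concrete, the sketch below shows one plausible encoding of the three attribution levels in (8.1); the exact spans are this reader's interpretation, not corpus annotations.

        # Sentence (8.1) and one plausible reading of its attribution levels.
        sentence = ("African observers generally approved of his victory "
                    "while Western governments denounced it.")

        attribution_levels = [
            {"dsse": "<implicit>",  # the writer's (implicit) speech event
             "text": sentence},
            {"dsse": "approved",    # first direct subjective frame
             "text": "African observers generally approved of his victory"},
            {"dsse": "denounced",   # second direct subjective frame
             "text": "Western governments denounced it."},
        ]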
    • 8.3 Units of Classification
      • The challenge in working with levels of attribution is obtaining the text for each level automatically.
      • Problem 1: Identifying the DSSEs that correspond to the attribution levels.
      • Problem 2: Defining the full span of text to be included in the attribution level represented by the DSSE.
    • 8.3.1 Identifying DSSEs Automatically
      • The Breck et al. (2007) tagger is used for recognizing the combined set of direct subjective frames and expressive subjective elements.
      • Any contiguous sequence of words tagged as part of a DSSE is considered a DSSE phrase (see the sketch below).
      • Breck tagger correctly does not identify implicit speech events.
      • However, almost all implicit speech events are speech events for the writer of the sentence, which makes them trivial to identify automatically.
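      A small sketch of the phrase-grouping step: contiguous tokens tagged as part of a DSSE are merged into one DSSE phrase. The boolean per-token tags are an assumed input format; the Breck et al. (2007) tagger's real output format is not described on the slides.

        def dsse_phrases(tokens, tags):
            """tokens: words of the sentence; tags: booleans, True if the token
            was tagged as part of a DSSE. Each maximal run of tagged tokens
            becomes one DSSE phrase."""
            phrases, current = [], []
            for token, is_dsse in zip(tokens, tags):
                if is_dsse:
                    current.append(token)
                elif current:
                    phrases.append(" ".join(current))
                    current = []
            if current:
                phrases.append(" ".join(current))
            return phrases

        # Example: only "denounced" is tagged.
        print(dsse_phrases(["Western", "governments", "denounced", "it", "."],
                           [False, False, True, False, False]))  # -> ['denounced']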
    • 8.3.2 Defining Levels of Attribution
      • The text in the attribution levels represented by the DSSEs is not marked in the corpus, with the exception of DSSEs that are implicit.
      • For implicit DSSEs, the text for the attribution level is just the text of the entire sentence.
      • Defining the text for each attribution level
      • a. Using the dependency parse tree of the sentence, find the word corresponding to the DSSE phrase.
      • b. The text for the attribution level represented by the DSSE is then all the text in the subtree rooted at that word.
    • 8.3.2 Defining Levels of Attribution
      • In the MPQA corpus, a number of speech events are marked with the phrase according to.
      • In a dependency parse tree, “according” is typically a leaf node, even though in most cases the text for the attribution level is the entire sentence.
      • Thus, when a DSSE phrase includes the word “according”, the entire sentence is used as the text for the corresponding attribution level (see the sketch below).
      • It is not possible to evaluate the performance of the above heuristic specifically for identifying the text of attribution levels.
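      A hedged sketch of the two heuristics above, assuming the dependency parse is available as a head-index array (one head per token, -1 for the root); this illustrates the idea rather than reproducing the thesis implementation.

        def attribution_text(tokens, heads, dsse_indices):
            """tokens: words of the sentence; heads: head index per token (-1 for root);
            dsse_indices: set of token positions belonging to the DSSE phrase."""
            # "according to" heuristic: use the entire sentence.
            if any(tokens[i].lower() == "according" for i in dsse_indices):
                return " ".join(tokens)

            # Otherwise keep every token whose chain of heads passes through the DSSE,
            # i.e. the subtree rooted at the DSSE word.
            def in_subtree(i):
                while i != -1:
                    if i in dsse_indices:
                        return True
                    i = heads[i]
                return False

            return " ".join(t for i, t in enumerate(tokens) if in_subtree(i))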
    • 8.3.2 Defining Levels of Attribution
      • It is, however, possible to evaluate whether the attitude annotations linked to the DSSE frames are encompassed by the text of the corresponding attribution levels.
      • Table 8.1 shows the results for the 4,243 attitudes linked to non-implicit DSSEs in the attitude dataset.
      • This gives confidence that the pertinent information for identifying the different attitudes is at least being included.
    • 8.3.3 Defining the Gold Standard Classes
      • Defining the attitude classes for the attribution levels represented by the manual DSSE frames is straightforward.
      • Each DSSE that is not an objective speech event will be linked to one or more attitude frames.
      • If the DSSE for an attribution level is linked to an attitude with an intensity greater than low, then the gold class for that attitude, for that attribution level, is true.
      • Implicit and non-implicit DSSEs are handled separately; the gold-standard attitude classes come from the manual DSSEs.
      (Figure: automatic DSSEs are aligned with the corresponding non-automatic, i.e. manual, DSSEs; a sketch of the gold-class rule follows below.)
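      A minimal sketch of the gold-class rule just stated; the frame fields and the intensity ordering are assumptions modelled on the MPQA intensity values.

        INTENSITY_ORDER = ["low", "medium", "high", "extreme"]  # assumed ordering

        def gold_class(attitude_frames, attitude_type):
            """attitude_frames: frames linked to one DSSE, e.g.
            [{"type": "sentiment-neg", "intensity": "medium"}, ...].
            Returns True if any frame of the given type has intensity above low."""
            low = INTENSITY_ORDER.index("low")
            return any(f["type"] == attitude_type and
                       INTENSITY_ORDER.index(f["intensity"]) > low
                       for f in attitude_frames)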
    • 8.4 Expression-level Classifiers
      • Hypothesis : the low-level disambiguation of subjectivity clues is useful for higher-level classification tasks.
      • Using expression-level polarity and subjectivity classifiers to disambiguate clue instances will result in improved performance for attitude classification.
      • BoosTexter neutral-polar classifier : trained using all the neutral-polar features
      • One-step BoosTexter polarity classifier : trained using the combined set of neutral-polar and polarity features.
      • Subjective-expression classifier
    • 8.5 Features
      • Five types of features in the classification experiments:
      • 1. Bag-of-word features – just the words in the text for that attribution level
      • 2. Clueset features
      • 3. Clue synset features
      • 4. DSSE word features
      • 5. DSSE wordnet features
    • 8.5.1 Clueset Features
      • The cluesets are defined based on reliability class (strongsubj, weaksubj) and attitude class.
      • The reliability class for a clue instance comes from the clue’s entry in the lexicon.
      SENTIMENT RECOGNITION: strongsubj:sentiment-yes, strongsubj:sentiment-no, weaksubj:sentiment-yes, weaksubj:sentiment-no
      ARGUING RECOGNITION: strongsubj:arguing-yes, strongsubj:arguing-no, weaksubj:arguing-yes, weaksubj:arguing-no
      POSITIVE SENTIMENT CLASSIFICATION: strongsubj:pos-sentiment-yes, strongsubj:pos-sentiment-no, weaksubj:pos-sentiment-yes, weaksubj:pos-sentiment-no
      POSITIVE-ARGUING CLASSIFICATION: strongsubj:pos-arguing-yes, strongsubj:pos-arguing-no, weaksubj:pos-arguing-yes, weaksubj:pos-arguing-no
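      A sketch of clueset feature extraction for sentiment recognition: each clue instance found in the attribution level is counted under a clueset named by its reliability class plus a yes/no sentiment decision. The lookup function and the per-instance decision input are assumptions.

        from collections import Counter

        def clueset_features(tokens, instance_is_sentiment, lookup):
            """tokens: words in the attribution level.
            instance_is_sentiment: dict token_index -> bool, either from an
                expression-level classifier (disambiguated) or derived only
                from the lexicon's prior polarity.
            lookup: word -> lexicon entry with a .reliability field, or None."""
            feats = Counter()
            for i, tok in enumerate(tokens):
                entry = lookup(tok)
                if entry is None:
                    continue
                tag = "sentiment-yes" if instance_is_sentiment.get(i) else "sentiment-no"
                feats[entry.reliability + ":" + tag] += 1
            return feats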
    • 8.5.2 Clue Synset Features
      • The motivation is that there may be useful groupings of clues, beyond those defined in the subjectivity lexicon.
      • To define the clue synset features,
      • Extract the synsets for every clue from WordNet 2.0 and add this information to the lexicon for each clue (each synset is identified by a unique identifier).
      • The clue synset feature for a given attribution level is the union of the synsets of all the subjective clue instances that are found in the attribution level.
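      A sketch of the clue synset feature using NLTK's WordNet interface; note that NLTK bundles WordNet 3.0 rather than the 2.0 release used in the chapter, so the identifiers would differ, but the construction is the same.

        from nltk.corpus import wordnet as wn

        def clue_synset_features(clue_instances):
            """clue_instances: the subjective clue words found in the attribution level.
            Returns the union of their WordNet synset identifiers."""
            synsets = set()
            for word in clue_instances:
                for syn in wn.synsets(word):
                    synsets.add(syn.name())  # e.g. 'denounce.v.01'
            return synsets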
    • 8.5.3 DSSE Features
      • The motivation for the DSSE features: the DSSE itself is particularly important when it comes to recognizing attitude type.
      • Two types of features based on DSSEs
      • DSSE word features : just the set of words in the DSSE phrase.
      • DSSE wordnet features (DSSE synsets, DSSE hypernyms) : the union of the WordNet synsets for all the words in the DSSE phrase, with the exception of the words in the following stoplist (is, am, are, be, been, will, had, has, have, having, do, does)
      • If there is no DSSE phrase because the DSSE is implicit, then the value for these features is a special implicit token (see the sketch below).
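      A sketch of the two DSSE feature types; the stoplist is the one listed above, while the function names and the whitespace tokenization are illustrative assumptions.

        from nltk.corpus import wordnet as wn

        STOPLIST = {"is", "am", "are", "be", "been", "will", "had", "has",
                    "have", "having", "do", "does"}

        def dsse_word_features(dsse_phrase):
            # Implicit DSSEs have no phrase; use a special token instead.
            if dsse_phrase is None:
                return {"<implicit>"}
            return set(dsse_phrase.lower().split())

        def dsse_wordnet_features(dsse_phrase):
            if dsse_phrase is None:
                return {"<implicit>"}
            words = [w for w in dsse_phrase.lower().split() if w not in STOPLIST]
            return {syn.name() for w in words for syn in wn.synsets(w)}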
    • 8.6 Experiments
      • Goal : to test the two hypotheses.
      • Automatic systems for recognizing sentiment and arguing attitudes can be developed using the features, and these systems will perform better than baseline systems.
      • Disambiguating the polarity and subjectivity of clue instances used in attitude classification will result in improved performance.
      • All classifiers : binary classifiers.
      • BoosTexter and SVM-light – baseline versions use bag-of-words features.
      • Two rule-based classifiers:
      • RB-cluelex (baseline): uses only information about clue instances from the lexicon to make its prediction.
      • RB-clueauto: makes predictions using information about clue instances obtained from one of the expression-level classifiers (a rule-based sketch follows below).
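      A hedged sketch in the spirit of RB-cluelex for sentiment recognition: predict sentiment whenever the attribution level contains a lexicon clue with positive or negative prior polarity. The chapter's exact decision rule may differ; RB-clueauto would instead consult the output of an expression-level classifier for each clue instance.

        def rb_cluelex_sentiment(tokens, lookup):
            """lookup: word -> lexicon entry with a .prior_polarity field, or None."""
            for tok in tokens:
                entry = lookup(tok)
                if entry is not None and entry.prior_polarity in ("positive", "negative"):
                    return True
            return False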
    • 8.6 Experiments
      • To test whether disambiguating the clue instances helps to improve performance for attitude classification, the results for two sets of experiments are compared.
      • 1. The values of the clueset and clue synset features are determined based on the output of the three expression-level classifiers (expected to perform better).
      • 2. The expression-level classifiers are not used to disambiguate the clue instances, and the values of the clueset and clue synset features are determined using only information about the clues from the lexicon.
    • 8.6.1.1 Analysis of Sentiment Classification Results
      • RB-clueauto and the various boosting and SVM classifiers all improve nicely over their respective baselines.
      • The best performing sentiment classifier is the SVM classifier that uses all the features, SVM (6).
      • The best performing boosting sentiment classifier is Boosting (7), which uses all the features except for bag-of-words.
      • These results show that all three of the different types of features, clue synset, DSSE words, and DSSE wordnet, are useful for sentiment classification.
    • 8.6.1.2 Analysis of Arguing Classification Results
      • Arguing classification : performance in general is lower than for sentiment classification.
      • The high accuracies are due to the very skewed class distribution.
      • Fewer of the improvements are significant.
      • The DSSE wordnet features again seem to have an important role in achieving the best performance, particularly for SVM.
    • 8.6.1.3 Comparison with Upper Bound
      • To get a better understanding, the results are compared with the upper bounds provided by the inter-annotator agreement study.
      • To get a meaningful upper bound requires calculating agreement individually for sentiment and arguing for the set of DSSEs.
      • Although the agreement numbers for DSSE-level attitudes are higher than they would be in reality, they are still useful as an upper bound.
    • 8.6.1.4 Including Information from Nested Attribution Levels
      • How much does performance degrade when information from nested attribution levels is included?
      • Excluding nested information (Fig. 8.1) results in little or no information remaining at the outer attribution level for the writer of the sentence.
    • 8.6.2 Classification Results: Positive and Negative Sentiment
      • Only the improvements for positive are significant.
    • 8.6.3 Classification Results: Positive and Negative Arguing
      • The results are low, but the various classifiers do achieve significant improvements over their respective baselines.
    • 8.6.4 Benefit of Clue-Instance Disambiguation
      • Is there a benefit to disambiguating clues for the best attitude classifiers, when all the different features are combined and working together?
      • Clue-instance disambiguation has the potential to be useful for higher-level arguing classification.
    • 8.6.5 Results with Automatic DSSEs
      • How well will the attitude classifiers perform on attribution levels that are based on the automatic DSSEs?
      • 1. Tested the existing classifiers that were trained on the manual attribution levels on the automatic attribution levels.
      • 2. Retrained the classifiers on the automatic attribution levels and tested them on the automatic levels.
    • 8.6.6 Sentence-level Attitude Classification
      • Many NLP applications are interested in the attitude of sentences.
    • 8.7 Related Work
      • Attribution levels : Choi et al. (2006), Breck et al. (2007)
      • Additional level : the attribution level for the speaker of the sentence.
      • The most closely related research : the work on sentiment classification at the sentence level.
      • Contribution of this work.
      • Information from a lexicon is combined with other features that are new to sentiment classification.
      • The first research on automatically recognizing arguing attitudes at the sentence-level or below.
    • 8.8 Conclusions