Twitter Agreement Analysis


  1. Discovering Agreement and Disagreement between Users within a Twitter Conversation Thread. Presentation by: Arvind Krishnaa Jagannathan
  2. Objective: "Given the thread of conversation among multiple users on Twitter, based on the initiating statement (i.e., tweet) of a user (say, the initiator), automatically identify those responses which agree with the statement of the user and those that disagree."
  3. Phase 1: Baseline Setup
  4. The Experimental Setup. Phase 1: supervised classifier, the labor-intensive baseline. Training set: initial tweet + response pairs for all Twitter threads with between 10 and 15 responses, hand-annotated as "Agreement", "Disagreement", or "Neither"; around 10,000 manually annotated pairs. Test/development set: tweet + response pairs for threads with more than 15 responses; around 1,500 pairs. Classifier applied: a MIRA classifier, implemented in Python (a sketch follows below). Results: 81.47% accuracy on the test/development data, which will be the baseline for comparison. [Chart: baseline accuracy on the training and development data.]
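The baseline is a MIRA (Margin Infused Relaxed Algorithm) classifier in Python. Below is a minimal sketch of a binary MIRA-style learner over bag-of-words feature vectors; the class name, feature representation, and epoch count are illustrative assumptions, not the authors' exact implementation, and the actual task has three labels ("Agreement", "Disagreement", "Neither"), which a one-vs-rest wrapper around this update could handle.

```python
import numpy as np

class BinaryMIRA:
    """Minimal sketch of a MIRA-style online learner with hinge-loss updates."""

    def __init__(self, n_features, n_epochs=5):
        self.w = np.zeros(n_features)
        self.n_epochs = n_epochs

    def fit(self, X, y):
        # X: (n_samples, n_features) array; y: labels in {-1, +1}.
        for _ in range(self.n_epochs):
            for x_i, y_i in zip(X, y):
                loss = max(0.0, 1.0 - y_i * self.w.dot(x_i))
                norm_sq = x_i.dot(x_i)
                if loss > 0 and norm_sq > 0:
                    # Smallest-norm update that restores the margin on x_i.
                    self.w += (loss / norm_sq) * y_i * x_i
        return self

    def predict(self, X):
        return np.where(X.dot(self.w) >= 0, 1, -1)
```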
  5. Top 10 Lexical Features, by Weight Vector: f**k, completely_disagree, lol, roflmao, ROFL, K_RT, _RT, love_you, yeah_right, #truth
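A ranking like slide 5's can be read straight off a trained linear model's weight vector. A minimal sketch, assuming a vocab list mapping feature index to term (the mapping is an assumption about the feature encoding):

```python
import numpy as np

def top_features(w, vocab, k=10):
    # vocab[i] = the term for feature index i (assumed encoding).
    order = np.argsort(np.abs(w))[::-1][:k]
    return [(vocab[i], float(w[i])) for i in order]
```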
  6. Phase 2: Structural Correspondence Learning
  7. Structural Correspondence Learning: a domain adaptation technique that leverages abundant labeled data in one domain and utilizes it in a target domain with little or no labeled data. Source domain: 14 annotated meeting threads from the AMI meeting corpus, around 10k statement-response adjacency pairs. Target domain: initial tweet-response pairs from threads having 10-15 responses (~10k pairs).
  8. Structural Correspondence Learning Algorithm: Pivot Features. Phase 2, SCL implementation: choose m pivot features from the source and target domains such that they occur frequently in both domains and are characteristic of the task we want to achieve (i.e., they indicate agreement or disagreement); they are chosen using labeled source data plus unlabeled source and target data. Pivot features used: the 50 most frequently occurring terms in pairs annotated as "agreement", "disagreement", and "backchannel" (AMI) / "neither" (Twitter). A sketch of this selection appears below.
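A minimal sketch of that selection rule, assuming labeled pairs arrive as (response_tokens, label) tuples; the check that a candidate also occurs often in the unlabeled target corpus is included because the slide lists it as a criterion, but the data format and the min_target_count threshold are assumptions.

```python
from collections import Counter

def choose_pivots(labeled_pairs, target_token_counts, m=50, min_target_count=5):
    # labeled_pairs: iterable of (response_tokens, label);
    # target_token_counts: Counter over tokens in the unlabeled target corpus.
    counts = Counter()
    for tokens, label in labeled_pairs:
        if label in {"agreement", "disagreement", "backchannel", "neither"}:
            counts.update(tokens)
    frequent_in_both = [
        term for term, _ in counts.most_common()
        if target_token_counts[term] >= min_target_count
    ]
    return frequent_in_both[:m]
```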
  9. Structural Correspondence Learning Algorithm. Step 1: Construct the m pivot feature vectors for the source and target domains. Step 2: Construct one binary prediction problem per pivot feature over the adjacency pairs of the source domain; the binary question is: for the given adjacency pair, does the pivot feature m_i occur in the response? Train a classifier on the annotated AMI corpus to construct a weight vector W, where W_i is the weight assigned to the i-th feature of an adjacency pair for that particular pivot. For each pivot feature there will be one such weight vector W (Step 2 is sketched below).
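A sketch of Step 2, reusing the BinaryMIRA class from the earlier sketch as the pivot predictor; the choice of learner for the pivot predictors is an assumption, and any linear binary classifier would serve.

```python
import numpy as np

def train_pivot_predictors(X, response_tokens, pivots, n_epochs=5):
    # X: (n_pairs, n_features) non-pivot feature matrix for the adjacency
    # pairs; response_tokens: token lists aligned with the rows of X.
    weight_vectors = []
    for pivot in pivots:
        # Binary target: does this pivot occur in the response?
        y = np.array([1 if pivot in toks else -1 for toks in response_tokens])
        clf = BinaryMIRA(X.shape[1], n_epochs)  # from the earlier sketch
        weight_vectors.append(clf.fit(X, y).w)
    return np.array(weight_vectors)  # shape: (n_pivots, n_features)
```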
  10. Structural Correspondence Learning (overview diagram): the source domain (AMI annotated meeting corpus) supplies features that strongly correlate with agreement/disagreement, giving a source feature vector; an SVD (L = UDVᵀ) yields the mapping matrix Uᵀ, which projects the source and target feature vectors onto a common latent space; the projected Twitter-corpus (target) feature vector feeds a MIRA classifier to produce labels, establishing the structural correspondence.
  11. Structural Correspondence Learning Algorithm: Application in the Target Domain. Step 3: Construct a matrix L whose column vectors are the pivot-predictor weight vectors. Step 4: Perform SVD on L, i.e., L = UDVᵀ; take θ = Uᵀ, which is a projection from the original feature space to a latent space common to both source and target domains. Step 5: Apply the features from each row of θ to the data from the Twitter adjacency pairs and the AMI adjacency pairs. Step 6: Through Step 5, induce correspondences between features indicating agreement/disagreement in the AMI corpus and the Twitter corpus. (Steps 3-5 are sketched below.)
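A sketch of Steps 3-5 in NumPy. The slide takes θ = Uᵀ directly; keeping only the top h rows of Uᵀ, as below, is a common refinement and an assumption here (the slides do not state a value for h).

```python
import numpy as np

def scl_projection(weight_vectors, h=25):
    # Columns of L are the pivot-predictor weight vectors (Step 3).
    L = weight_vectors.T                       # (n_features, n_pivots)
    U, D, Vt = np.linalg.svd(L, full_matrices=False)
    return U.T[:h]                             # theta: (h, n_features)

def latent_features(theta, X):
    # Project original feature vectors (AMI or Twitter) into the latent space.
    return X @ theta.T                         # (n_pairs, h)
```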
  12. Results
  13. Visualizing the correspondences between source and target domains. AMI corpus, features strongly associated with the feature "disagree": disagree, wrong, incorrect, Uh, obviously, thought, end_to, um. Twitter corpus, corresponding features: disagree, completely, #stupid, ROFL, liar, have_to, hate, #WTF. (Shown alongside the baseline's top-10 lexical features for reference: f**k, completely_disagree, lol, roflmao, ROFL, K_RT, _RT, love_you, yeah_right, #truth.)
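One plausible way to produce such a table (an assumption; the slides do not say how the visualization was computed) is to treat each column of θ as a feature's latent embedding and rank all features by cosine similarity to an anchor feature such as "disagree":

```python
import numpy as np

def corresponding_features(theta, vocab, anchor_index, k=8):
    # Normalize each feature's latent column, then rank features by cosine
    # similarity to the anchor feature's column.
    cols = theta / (np.linalg.norm(theta, axis=0, keepdims=True) + 1e-12)
    sims = cols.T @ cols[:, anchor_index]
    nearest = np.argsort(sims)[::-1][1:k + 1]  # drop the anchor itself
    return [vocab[i] for i in nearest]
```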
  14. Results. Three instances of the target classifier were set up: (1) labeled source domain data plus unlabeled target domain data; (2) labeled source domain data plus unlabeled data from both source and target, where the unlabeled source data, adjacency pairs from 10 annotated meeting threads (~8k) used without their labels, augments the extraction of corresponding features; (3) labeled source domain data, unlabeled target and source domain data, and a small amount of labeled target domain data: Twitter conversation threads with exactly 10 responses (~2k pairs). In each of the three scenarios, the features extracted from the target domain are applied to a MIRA classifier and the accuracy is computed (a sketch of this evaluation appears below).
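A sketch of one scenario's evaluation, reusing BinaryMIRA and theta from the earlier sketches. Concatenating the latent features with the original ones is an assumption for illustration; the slides do not state how the projected features were combined.

```python
import numpy as np

def evaluate_scenario(theta, X_train, y_train, X_test, y_test, n_epochs=5):
    # Augment the original features with their latent projection, train the
    # MIRA classifier, and report accuracy on the Twitter test pairs.
    def augment(X):
        return np.hstack([X, X @ theta.T])
    clf = BinaryMIRA(X_train.shape[1] + theta.shape[0], n_epochs)
    clf.fit(augment(X_train), y_train)
    return float((clf.predict(augment(X_test)) == y_test).mean())
```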
  15. Results: Comparison with the Baseline. SCL accuracy on the Twitter test data, by scenario: labeled source + unlabeled target: 77.61%; labeled source + unlabeled target + unlabeled source: 80.74%; labeled source + unlabeled target + unlabeled source + labeled target (~2k): 83.54%.
  16. Results: Comparison with the Baseline, varying the size of the labeled target data. Accuracy by number of labeled target pairs: 500: 81.03%; 750: 82.16%; 1000: 82.79%; 1500: 83.24%; 2000: 83.54%.
  17. Discussions
  18. Salient Points of Discussion. Purely unlabeled data provides classification accuracy very close to the baseline. Compared with the gains from SCL applied to POS tagging, Blitzer et al.'s task drew on a significantly larger corpus. Conversations in both the AMI and Twitter corpora are generally short (AMI: around 10-12 words; Twitter: a maximum of 140 characters). Certain Twitter-specific constructs were not leveraged (especially retweets). The two domains use significantly differing lexicons to convey a similar sentiment (for instance, a single swear word followed by a retweet). The approach was able to beat the baseline with minimal annotated data from the target domain. The current implementation does not take the initial statement/tweet into account.
  19. Future Work. Use more unlabeled data to see if the baseline can be beaten without any labeled target-domain data. Incorporate the words used in the initial statement into the model. Restrict the categories of Twitter conversation to particular domains/personalities (which may lead to better results). Clean up the code and make it ready for public distribution!
