Sentiment analysis of tweets using Neural Networks



  1. Sentiment analysis of tweets using Neural Networks
     Adrián Palacios, Universidad Politécnica de Valencia, June 6th, 2013
  2. Introduction
     The objective of this work is:
     • To use Neural Networks (via the April toolkit) for the polarity classification of tweets.
     • To check how NNs behave when different preprocessing techniques are applied to the data.
     We are not aiming for good results; we are simply experimenting with these techniques.
  3. Preprocessing of tweets
     Prior to training the NNs, we need to obtain a feature-vector representation of the samples (tweets).
  4. Preprocessing techniques
     To achieve this, we create a bag of words after applying one of the following preprocessing techniques:
     1. Unigrams.
     2. Bigrams.
     3. Stemming.
     4. Lemmatization.
     5. Part-of-Speech tagging.
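As an illustration of the bag-of-words step (a minimal sketch in plain Python, not the code used in the experiments; the function name and example sentence are made up), unigram and bigram counts can be built like this:

```python
from collections import Counter

def bag_of_words(tokens, n=1):
    """Count n-gram occurrences in a token list (n=1: unigrams, n=2: bigrams)."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return Counter(grams)

tokens = "i love this phone".split()
unigrams = bag_of_words(tokens, n=1)   # 4 unigrams
bigrams = bag_of_words(tokens, n=2)    # 3 bigrams
```

Each tweet would then be mapped to a vector of these counts over a fixed vocabulary.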
  5. Stemming
     Stemming: a process that chops off the suffixes of a given word following some predefined rules.
     Examples:
     • Stem(run): run.
     • Stem(ran): ran.
     • Stem(running): run.
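A toy rule-based stemmer conveys the idea (this is only an illustration of suffix chopping, not the full Porter-style algorithm a real system would use):

```python
def toy_stem(word):
    """Chop a few common English suffixes following simple predefined rules.
    Illustration only; real stemmers have many more rules."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            stem = word[: -len(suffix)]
            # Collapse a doubled final consonant left over after suffix removal
            if len(stem) >= 2 and stem[-1] == stem[-2] and stem[-1] not in "aeiou":
                stem = stem[:-1]
            return stem
    return word
```

It reproduces the slide's examples: "running" → "run", while "ran" is left untouched because no rule matches the irregular past form.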
  6. Lemmatization
     Lemmatization: a process that determines the lemma (the canonical form of the lexeme) of a given word.
     Examples:
     • Lemma(run): run.
     • Lemma(ran): run.
     • Lemma(running): run.
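Unlike stemming, lemmatization handles irregular forms by consulting a lexical resource. A minimal dictionary-based sketch (the lookup table here is a made-up toy standing in for a real resource such as WordNet):

```python
# Tiny irregular-form table; a real lemmatizer uses a full lexicon.
LEMMAS = {"ran": "run", "running": "run", "better": "good", "mice": "mouse"}

def lemmatize(word):
    """Return the canonical form (lemma) of a word via dictionary lookup."""
    return LEMMAS.get(word, word)
```

This is why "ran" maps to "run" under lemmatization but not under the stemmer above.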
  7. PoS tagging
     PoS tagging: the assignment of Part-of-Speech tags to the words of a given sentence.
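In the simplest possible form, a tagger maps each token to a tag; the lexicon below is a hand-made toy (real taggers such as Freeling use trained statistical models, not a fixed dictionary):

```python
# Toy lexicon-based tagger: look each word up, fall back to NOUN.
LEXICON = {"the": "DET", "cat": "NOUN", "sat": "VERB", "on": "ADP", "mat": "NOUN"}

def pos_tag(tokens):
    """Assign a Part-of-Speech tag to every token of a sentence."""
    return [(tok, LEXICON.get(tok, "NOUN")) for tok in tokens]

tags = pos_tag("the cat sat on the mat".split())
```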
  8. Learning techniques
     The polarity classification will be made:
     • using a Multilayer Perceptron (MLP) with a single layer,
     • applying a 5-fold cross-validation scheme,
     • and building an ensemble of the resulting MLPs.
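The cross-validation part of this pipeline can be sketched as index bookkeeping (a schematic split only; training an MLP on each split with the April toolkit is omitted, and the round-robin fold assignment is an assumption, not necessarily what the authors did):

```python
def k_fold_splits(n_samples, k=5):
    """Yield (train, val) index pairs: each of the k folds is the
    validation set once; the remaining folds form the training set."""
    folds = [list(range(f, n_samples, k)) for f in range(k)]
    splits = []
    for f in range(k):
        val = folds[f]
        train = sorted(i for g in range(k) if g != f for i in folds[g])
        splits.append((train, val))
    return splits

splits = k_fold_splits(10, k=5)  # 5 (train, val) pairs over 10 samples
```

One MLP is trained per split, which is where the 5 classifiers of slide 10 come from.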
  9. Hyper-parameter search
     We will perform a random search for hyper-parameter optimization instead of a grid search.
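Random search simply draws each trial configuration from a distribution instead of walking a fixed grid. A sketch (the parameter names and ranges are illustrative guesses, not those used in the experiments):

```python
import random

random.seed(0)  # reproducible draws for the illustration

def sample_config():
    """Draw one random hyper-parameter configuration."""
    return {
        "hidden_units": random.choice([16, 32, 64, 128]),
        "learning_rate": 10 ** random.uniform(-4, -1),  # log-uniform draw
        "momentum": random.uniform(0.0, 0.9),
    }

trials = [sample_config() for _ in range(20)]  # evaluate each, keep the best
```

The usual motivation is that, for a fixed budget of trials, random search explores more distinct values per parameter than a grid does.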
  10. Ensemble methods
      After training is done, since we use 5-fold cross-validation, we obtain 5 MLPs for each set of parameters.
      To be consistent, we merge these 5 classifiers into a single one using the bootstrap aggregating (bagging) method, where votes have equal weight.
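The equal-weight voting step can be sketched as follows (the per-classifier predictions below are hypothetical, just to show the mechanics):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label lists with equal-weight voting."""
    return [Counter(labels).most_common(1)[0][0] for labels in zip(*predictions)]

# Five hypothetical classifiers voting on three tweets
preds = [
    ["pos", "neg", "neu"],
    ["pos", "neg", "pos"],
    ["neg", "neg", "neu"],
    ["pos", "pos", "neu"],
    ["pos", "neg", "neu"],
]
# majority_vote(preds) → ["pos", "neg", "neu"]
```

With five voters and an odd number of classes, ties are rare but possible; a real implementation would need a tie-breaking rule.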
  11. Corpus
      We will work with the corpus provided at the 2012 edition of the Workshop on Sentiment Analysis at SEPLN.

                 Training    Test
      Samples      7219      60798
  12. Training results
      Accuracy of the validation set classification:

                        3 levels   5 levels
      Unigrams            54.44      45.62
      Bigrams             54.09      39.99
      Stemming            62.34      47.49
      Lemmatization       61.60      46.75
      PoS-tagging         52.58      38.40
  13. Test results
      Accuracy of the test set classification (average and ensemble):

      Average:
                   3 levels   5 levels
      Unigrams       32.13      26.12
      Bigrams        32.39      28.21
      Stem.          32.34      26.81
      Lemma.         31.84      26.18
      PoS-tag.       35.22      35.22

      Ensemble:
                   3 levels   5 levels
      Unigrams       32.16      26.52
      Bigrams        32.32      29.32
      Stem.          32.23      27.16
      Lemma.         31.80      26.49
      PoS-tag.       35.22      35.22
  14. Conclusions
      Results are poor, but they could be improved by:
      • Using more complex techniques for preprocessing.
      • Using more complex models for learning.
      • Exploring more values in the random hyper-parameter search.
      • Learning from PoS-tagged tweets in a different way.
  15. Questions?
      The tools used for the experiments:
      • The NLTK
      • Freeling
      • The April toolkit