Mentor: Romil Bansal
Yash Shah (201101127)
Guided by : Dr. Vasudeva Varma
Problem Statement: To automatically classify tweets
from Twitter into various genres based on predefined categories.
o Twitter is a major social networking service with over 200
million tweets made every day.
o Twitter provides a list of trending topics in real time, but it
is often hard to understand what these trending topics are about.
o It is important and necessary to classify these topics into
general categories with high accuracy for better understanding.
o Input data is static / real-time data consisting of the tweets to be classified.
o Training dataset:
Fetched from Twitter with the twitter4j API.
o The system returns the list of all categories to which the input tweet belongs.
o It also reports the accuracy of the algorithm used for classification.
We took the following categories into consideration for
classifying Twitter data:
1) Business        5) Law         9) Politics
2) Education       6) Lifestyle   10) Sports
3) Entertainment   7) Nature      11) Technology
4) Health          8) Places
Concepts used for better performance
Bag-of-words pruning:
To remove very low-frequency and very high-frequency words
using the bag-of-words approach.
Stop-word removal:
To remove the most common words, such as the, is, at,
which, and on.
Stemming:
To reduce inflected words to their stem, base, or root
form using Porter stemming.
Cleaning crawled data
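The cleaning steps above can be sketched as follows. This is a minimal toy version: the stop-word list is only a small sample, and the stemmer strips just a few suffixes rather than implementing the full Porter algorithm the project used.

```python
# Toy tweet-preprocessing pipeline: stop-word removal plus a
# simplified suffix-stripping stemmer (not the full Porter stemmer).

STOP_WORDS = {"the", "is", "at", "which", "on", "and", "a", "an", "of", "for"}

def simple_stem(word):
    """Strip a few common inflectional suffixes (toy Porter-style rules)."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

def preprocess(tweet):
    """Lowercase, drop stop words, and stem the remaining tokens."""
    tokens = tweet.lower().split()
    return [simple_stem(t) for t in tokens if t not in STOP_WORDS]

print(preprocess("Playing games is good for the players"))
# → ['play', 'game', 'good', 'player']
```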
Other concepts used:
Spelling correction:
To correct spellings using the edit-distance method.
Named Entity Recognition:
For ranking the result categories and finding the most relevant one.
Query expansion:
If a feature (word) of the test query is not found as one of
the dimensions of the feature space, replace that word with
its synonym. Done using WordNet.
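The edit-distance spelling correction can be sketched with the classic Levenshtein dynamic program: pick the vocabulary word with the smallest distance to the misspelled token. The vocabulary here is a made-up example.

```python
# Levenshtein edit distance via dynamic programming, used to snap a
# misspelled word to the nearest word in a vocabulary.

def edit_distance(a, b):
    """Minimum number of insertions, deletions, substitutions a -> b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def correct(word, vocabulary):
    """Return the vocabulary word closest to `word` by edit distance."""
    return min(vocabulary, key=lambda v: edit_distance(word, v))

vocab = ["politics", "sports", "technology", "business"]
print(edit_distance("sprots", "sports"))  # → 2 (a swap costs two single edits)
print(correct("sprots", vocab))           # → sports
```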
Tweets Classification Algorithms
We used 3 algorithms for classification:
1) Naïve Bayes
2) SVM-based supervised
3) Rule-based
Training pipeline: stop-word removal → feature vector for each training tweet → index file of feature vectors.
Testing pipeline: stop-word removal → feature vector for the test tweet, matched against the index file of feature vectors.
Main idea for Supervised Learning
Assumption: the training set consists of instances of
different classes cj, described as conjunctions of
attribute values.
Task: classify a new instance d, given as a tuple of
attribute values, into one of the classes cj ∈ C.
Key idea: assign the most probable class using a
supervised learning algorithm.
Method 1 : Bayes Classifier
Bayes' rule states:
P(c | d) = P(d | c) · P(c) / P(d)
We used the WEKA machine-learning library's Bayes
classifier for our project.
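A toy multinomial Naïve Bayes classifier illustrating the rule above. The project used WEKA's implementation; the code and the tiny training documents here are invented for illustration. It scores each class by log P(c) + Σ_w log P(w | c) with Laplace smoothing.

```python
# Toy multinomial Naive Bayes with Laplace smoothing.
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns counts needed for Bayes' rule."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab, len(docs)

def classify_nb(tokens, model):
    """argmax_c  log P(c) + sum_w log P(w | c), with Laplace smoothing."""
    class_counts, word_counts, vocab, n_docs = model
    best, best_score = None, float("-inf")
    for c in class_counts:
        total = sum(word_counts[c].values())
        score = math.log(class_counts[c] / n_docs)  # log prior
        for w in tokens:
            score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = c, score
    return best

docs = [
    (["match", "goal", "player"], "sports"),
    (["election", "vote", "minister"], "politics"),
    (["player", "score", "match"], "sports"),
]
model = train_nb(docs)
print(classify_nb(["player", "match"], model))  # → sports
```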
Method 2 : SVM Classifier
(Support Vector Machine)
Given a new point x, we can score its projection
onto the hyperplane normal:
I.e., compute the score: w^T x + b = Σ_i α_i y_i x_i^T x + b
Decide the class based on whether the score is < or > 0.
We can also set a confidence threshold t:
Score > t: yes
Score < -t: no
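The scoring rule above can be sketched as follows; the weight vector and bias are made-up example values, not learned ones.

```python
# SVM decision rule: score(x) = w·x + b, classify by sign, with an
# optional confidence threshold t (|score| <= t means "abstain").

def svm_score(w, b, x):
    """Projection of x onto the hyperplane normal: w·x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def svm_decide(w, b, x, t=0.0):
    """Return 'yes', 'no', or 'abstain' using confidence threshold t."""
    s = svm_score(w, b, x)
    if s > t:
        return "yes"
    if s < -t:
        return "no"
    return "abstain"

w, b = [2.0, -1.0], 0.5
print(svm_decide(w, b, [1.0, 1.0]))         # score 1.5  → yes
print(svm_decide(w, b, [0.0, 2.0]))         # score -1.5 → no
print(svm_decide(w, b, [0.0, 0.7], t=1.0))  # score -0.2 → abstain
```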
Multi-class SVM Approaches
One-vs-rest: k SVMs are trained; each SVM separates a single
class from all remaining classes (Cortes and Vapnik, 1995).
Pair-wise: k(k-1)/2 SVMs are trained, where k = |Y|; each SVM
separates a pair of classes (Friedman, 1996).
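The one-vs-rest decision can be sketched as follows: each class gets its own linear scorer and the highest-scoring class wins. The per-class weights here are made-up, not trained.

```python
# One-vs-rest multi-class decision: k linear classifiers, pick argmax.

def linear_score(w, b, x):
    """w·x + b for one binary (class-vs-rest) classifier."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def one_vs_rest_predict(classifiers, x):
    """classifiers: {class_name: (w, b)}. Returns the argmax-score class."""
    return max(classifiers, key=lambda c: linear_score(*classifiers[c], x))

classifiers = {
    "sports":   ([1.0, 0.0], 0.0),
    "politics": ([0.0, 1.0], 0.0),
}
print(one_vs_rest_predict(classifiers, [0.2, 0.9]))  # → politics
```

The pair-wise alternative would instead train k(k-1)/2 classifiers and let each pair vote.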
Advantages of SVM
High dimensional input space
Few irrelevant features (dense concept)
Sparse document vectors (sparse instances)
Text categorization problems are typically linearly separable.
For linearly inseparable data, we can use kernels to map the
data into a high-dimensional space, so that it becomes
linearly separable by a hyperplane.
Method 3 : Rule Based
We defined a set of rules to classify a tweet based on term frequency:
a. Extract the features of the tweet.
b. Count the term frequency of each feature; the feature
having the maximum term frequency across all the categories
mentioned above gives our first classification.
c. As this cannot be right all the time, we also maintain a
count of the categories in which the tweet's features fall;
the category closest to the tweet gives our next classification.
Tweet = "sachin is a good player, who eats apple and
banana which is good for health."
Classification: map each feature to its category and count term frequencies.
The feature with the maximum term frequency is "sachin",
so our first category is Sports.
2nd approximation:
The most features (3) lie in Health,
so our second approximation is Health.
If both of these agree on the same category, we have only
one result; i.e., if the maximum number of features here had lain
in Sports, we would have only one result, Sports.
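The rule-based steps above can be sketched on this example tweet. The per-category keyword lists are made-up stand-ins for the term lists the project actually used.

```python
# Rule-based classification: (b) the category owning the single most
# frequent feature, and (c) the category matching the most features.
from collections import Counter

CATEGORY_TERMS = {
    "sports": {"sachin", "player", "match"},
    "health": {"eats", "apple", "banana", "health"},
}

def rule_classify(tweet):
    tokens = [t.strip(".,").lower() for t in tweet.split()]
    # Step b: find the single most frequent known feature.
    freq = Counter(t for t in tokens
                   if any(t in terms for terms in CATEGORY_TERMS.values()))
    top_term, _ = freq.most_common(1)[0]
    first = next(c for c, terms in CATEGORY_TERMS.items() if top_term in terms)
    # Step c: count how many features fall in each category.
    counts = {c: sum(tok in terms for tok in tokens)
              for c, terms in CATEGORY_TERMS.items()}
    second = max(counts, key=counts.get)
    return first, second

tweet = "sachin is a good player, who eats apple and banana which is good for health."
print(rule_classify(tweet))  # → ('sports', 'health')
```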
Steps for k-fold cross-validation :
Step 1: split the data into k subsets of equal size.
Step 2: use each subset in turn for testing, and the
remainder for training.
Often the subsets are stratified before the cross-
validation is performed
The error estimates are averaged to yield an
overall error estimate
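The splitting step can be sketched as follows (a plain, unstratified split; the project's stratified variant would additionally balance class proportions within each fold).

```python
# k-fold cross-validation splits: each of the k subsets is used once
# for testing while the remaining items form the training set.

def k_fold_splits(n_items, k):
    """Yield (train_indices, test_indices) pairs for k-fold CV."""
    indices = list(range(n_items))
    fold_size = n_items // k
    for i in range(k):
        start = i * fold_size
        # The last fold absorbs any remainder so every item is tested once.
        end = n_items if i == k - 1 else start + fold_size
        test = indices[start:end]
        train = indices[:start] + indices[end:]
        yield train, test

for train, test in k_fold_splits(10, 5):
    print(len(train), len(test))  # → 8 2, five times
```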
Accuracy Results (10 folds)
Accuracy of each algorithm, in %:

Category        SVM     Naïve Bayes   Rule-based
Business        86.6    81.44         98.30
Education       85.71   76.07         81.8
Entertainment   86.8    79.1          87.49
Health          95.67   84.62         90.93
Law             81.17   73.38         75.25
Lifestyle       93.27   89.71         82.42
Nature          87.0    78.64         84.24
Places          81.01   75.35         80.73
Politics        81.91   81.88         76.31
Sports          87.11   83.57         81.87
Technology      83.64   82.44         77.05
Worked on freshly crawled Twitter data using the twitter4j API.
Worked on eleven different categories.
Applied three different classification methods to classify
tweets into the different categories.
Achieved high performance, with accuracy in the range of
85 to 95%.
Performed tweet cleaning, stemming, and stop-word removal.
Used edit distance for spelling correction.
Used named entity recognition for ranking.
Used WordNet for query expansion and synonyms.
Validated using 10-fold cross-validation.