The Pulse of News in Social Media: Forecasting Popularity

Roja Bandari (UCLA), Sitaram Asur (HP Labs), Bernardo Huberman (HP Labs)

Abstract

News articles are extremely time sensitive by nature. There is also intense competition among news items to propagate as widely as possible. Hence, the task of predicting the popularity of news items on the social web is both interesting and challenging. Prior research has dealt with predicting eventual online popularity based on early popularity. It is most desirable, however, to predict the popularity of items prior to their release, fostering the possibility of appropriate decision making to modify an article and the manner of its publication. In this paper, we construct a multi-dimensional feature space derived from properties of an article and evaluate the efficacy of these features to serve as predictors of online popularity. We examine both regression and classification algorithms and demonstrate that despite randomness in human behavior, it is possible to predict ranges of popularity on Twitter with an overall 84% accuracy. Our study also serves to illustrate the differences between traditionally prominent sources and those immensely popular on the social web.

1 Introduction

News articles are very dynamic due to their relation to continuously developing events that typically have short lifespans. For a news article to be popular, it is essential for it to propagate to a large number of readers within a short time. Hence there exists a competition among different sources to generate content which is relevant to a large subset of the population and becomes virally popular.

Traditionally, news reporting and broadcasting has been costly, which meant that large news agencies dominated the competition. But the ease and low cost of online content creation and sharing has recently changed the traditional rules of competition for public attention. News sources now concentrate a large portion of their attention on online mediums where they can disseminate their news effectively and to a large population. It is therefore common for almost all major news sources to have active accounts in social media services like Twitter to take advantage of the enormous reach these services provide.

Due to the time-sensitive aspect and the intense competition for attention, accurately estimating the extent to which a news article will spread on the web is extremely valuable to journalists, content providers, advertisers, and news recommendation systems. This is also important for activists and politicians who are increasingly using the web to influence public opinion.

However, predicting the online popularity of news articles is a challenging task. First, context outside the web is often not readily accessible, and elements such as local and geographical conditions and various circumstances that affect the population make this prediction difficult. Furthermore, network properties such as the structure of the social networks that propagate the news, influence variations among members, and the interplay between different sections of the web add other layers of complexity to this problem. Most significantly, intuition suggests that the content of an article must play a crucial role in its popularity. Content that resonates with a majority of the readers, such as a major worldwide event, can be expected to garner wide attention, while specific content relevant only to a few may not be as successful.

Given the complexity of the problem due to the above mentioned factors, a growing number of recent studies [1], [2], [3], [4], [5] make use of early measurements of an item's popularity to predict its future success. In the present work we investigate a more difficult problem: prediction of social popularity without using early popularity measurements, by instead solely considering features of a news article prior to its publication. We focus this work on observable features in the content of an article as well as its source of publication. Our goal is to discover if any predictors relevant only to the content exist and if it is possible to make a reasonable forecast of the spread of an article based on content.
The news data for our study was collected from Feedzilla (a news feed aggregator), and measurements of the spread are performed on Twitter, an immensely popular microblogging social network. Social popularity for the news articles is measured as the number of times a news URL is posted and shared on Twitter.

To generate features for the articles, we consider four different characteristics of a given article, namely:

• The news source that generates and posts the article
• The category of news this article falls under
• The subjectivity of the language in the article
• Named entities mentioned in the article

We quantify each of these characteristics by a score, making use of different scoring functions. We then use these scores to generate predictions of the spread of the news articles using regression and classification methods. Our experiments show that it is possible to estimate ranges of popularity with an overall accuracy of 84% considering only content features. Additionally, by comparing with an independent rating of news sources, we demonstrate that there exists a sharp contrast between traditionally popular news sources and the top news propagators on the social web.

In the next section we provide a survey of recent literature related to this work. Section 3 describes the dataset characteristics and the process of feature score assignment. In Section 4 we present the results of the prediction methods. Finally, in Section 5 we conclude the paper and discuss future possibilities for this research.

2 Related Work

Stochastic models of information diffusion as well as deterministic epidemic models have been studied extensively in an array of papers, reaffirming theories developed in sociology such as diffusion of innovations [6]. Among these are models of viral marketing [7], models of attention on the web [8], cascading behavior in propagation of information [9], [10], and models that describe heavy tails in human dynamics [11]. While some studies incorporate factors for content fitness into their model [12], they only capture this in general terms and do not include detailed consideration of content features.

Salganik et al. performed a controlled experiment on music, comparing the quality of songs versus the effects of social influence [13]. They found that song quality did not play a role in the popularity of highly rated songs, and it was social influence that shaped the outcome. The effect of user influence on information diffusion motivates another set of investigations [14], [15], [16], [5]. On the subject of news dissemination, [17] and [18] study temporal aspects of the spread of news memes online, with [19] empirically studying the spread of news on the social networks of Digg and Twitter and [20] studying Facebook news feeds.

A growing number of recent studies predict the spread of information based on early measurements (using early votes on Digg, likes on Facebook, click-throughs, and comments on forums and sites). [1] found that the eventual popularity of items posted on YouTube and Digg has a strong correlation with their early popularity; [2] and [3] predict the popularity of a discussion thread using features based on early measurements of user comments. [4] propose the notion of a virtual temperature of weblogs using early measurements. [5] predict Digg counts using stochastic models that combine design elements of the site (that in turn lead to collective user behavior) with information from early votes.

Finally, recent work on variation in the spread of content has been carried out by [21] with a focus on categories of Twitter hashtags (similar to keywords). This work is aligned with ours in its attention to the importance of content in variations among popularity; however, they consider categories only, with news being one of the hashtag categories. [22] conduct similar work on social marketing messages.

3 Data and Features

This section describes the data, the feature space, and feature score assignment in detail.

3.1 Dataset Description

Data was collected in two steps: first, a set of articles was collected via a news feed aggregator; then the number of times each article was linked to on Twitter was found. In addition, for some of the feature scores, we used a 50-day history of posts on Twitter. The latter is explained in Section 3.2 on feature scoring.

Figure 1: Log-log distribution of tweets.
Online news feed aggregators are services that collect and deliver news articles as they are published online. Using the API of a news feed aggregator named Feedzilla, we collected news feeds belonging to all news articles published online during one week (August 8th to 16th, 2011). The feed for an article includes a title, a short summary of the article, its URL, and a time-stamp. In addition, each article is pre-tagged with a category either provided by the publisher or in some manner determined by Feedzilla. A fair amount of cleaning was performed to remove redundancies, resolve naming variations, and eliminate spam through the use of automated methods as well as manual inspection. As a result, over 2,000 out of a total of 44,000 items in the data were discarded.

The next phase of data collection was performed using Topsy, a Twitter search engine that searches all messages posted on Twitter. We queried for the number of times each news link was posted or reshared on Twitter (tweeted or retweeted). Earlier research [17] on news meme buildup and decay suggests that popular news threads take about 4 days until their popularity starts to plateau. Therefore, we allowed 4 days for each link to fully propagate before querying for the number of times it had been shared.

The first half of the data was used in category score assignment (explained in the next section). The rest we partitioned equally into 10,000 samples each for training and test data for the classification and regression algorithms. Figure 1 shows the log distribution of total tweets over all data, demonstrating a long-tail shape which is in agreement with other findings on the distribution of Twitter information cascades [23]. The graph also shows that articles with zero tweets lie outside the general linear trend of the graph because they did not propagate on the Twitter social network.

Our objective is to design features based on content to predict the number of tweets for a given article. In the next section we describe these features and the methods used to assign values or scores to them.

3.2 Feature Description and Scoring

The choice of features is motivated by the following questions: Does the category of news affect its popularity within a social network? Do readers prefer factual statements or do they favor personal tone and emotionally charged language? Does it make a difference whether famous names are mentioned in the article? Does it make a difference who publishes a news article?

These questions motivate the choice of the following characteristics of an article as the feature space: the category that the news belongs to (e.g. politics, sports, etc.), whether the language of the text is objective or subjective, whether (and which) named entities are mentioned, and the source that published the news. These four features are chosen based on their availability and relevance; although it is possible to add other available features in a similar manner, we believe the four features chosen in this paper to be the most relevant. We would like to point out that we use the terms article and link interchangeably, since each article is represented by its URL link.

3.2.1 Category Score

News feeds provided by Feedzilla are pre-tagged with category labels describing the content. We adopted these category labels and designed a score for them which essentially represents a prior distribution on the popularity of categories. Figure 2 shows a plot of categories and the number of article links in each category.

Figure 2: Normalized t-density scores for categories.

We observe that news related to Technology has a more prominent presence in our dataset, and most probably on Twitter as a whole. Furthermore, we can see categories (such as Health) with a low number of published links but higher rates of tweets per link.
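This "average tweets per link" quantity, which the paper formalizes as the t-density score, is simple bookkeeping over (group, tweet-count) observations. A minimal sketch, where the function name and the toy counts are ours rather than the paper's:

```python
from collections import defaultdict

def t_density_scores(history):
    """t-density = total tweets / number of article links, per group key.

    `history` is an iterable of (key, tweet_count) pairs; the key may be
    a category label, a source name, or a named entity.
    """
    tweets = defaultdict(int)  # total tweets seen per key
    links = defaultdict(int)   # number of article links per key
    for key, count in history:
        tweets[key] += count
        links[key] += 1
    return {key: tweets[key] / links[key] for key in links}

# Toy example: Technology has more links, Health fewer but well-shared.
scores = t_density_scores([("Technology", 120), ("Technology", 40), ("Health", 90)])
# scores == {"Technology": 80.0, "Health": 90.0}
```

The same helper covers the category, source, and named-entity scores described later, since all three are averages of tweets per link over some grouping of the history.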
These categories perhaps have a niche following and loyal readers who are intent on posting and retweeting their links.

Observing the variations in average tweets per link in Figure 2, we use this quantity to represent the prior popularity for a category. In order to assign a value (i.e. score) to each category, we use the first 22,000 points in the dataset to compute the average tweets per article link in that category. We call this average tweets per link the t-density score, and we will use this measure in score assignments for some other features as well.

3.2.2 Subjectivity

Another feature of an article that can affect the amount of online sharing is its language. We want to examine whether an article written in a more emotional, more personal, and more subjective voice can resonate more strongly with the readers. Accordingly, we design a binary feature for subjectivity, where we assign a zero or one value based on whether the news article or commentary is written in a more subjective voice, rather than using factual and objective language. We make use of a subjectivity classifier from LingPipe [24], a natural language toolkit. Since this requires training data, we use transcripts from well-known TV and radio shows belonging to Rush Limbaugh and Keith Olbermann as the corpus for subjective language. On the other hand, transcripts from C-SPAN as well as the parsed text of a number of articles from the website First Monday are used as the training corpus for objective language. The above two training sets provide a very high training accuracy of 99%, and manual inspection of the final results confirmed that the classification was satisfactory. Figure 3 illustrates the distribution of average subjectivity per source, showing that some sources consistently publish news in a more objective language and a somewhat lower number in a more subjective language.

Figure 3: Distribution of average subjectivity of sources.

3.2.3 Named Entities

In this paper, a named entity refers to a known place, person, or organization. Intuition suggests that mentioning well-known entities can affect the spread of an article, increasing its chances of success. For instance, one might expect articles on Obama to achieve a larger spread than those on a minor celebrity. And it has been well documented that fans are likely to share almost any content on celebrities like Justin Bieber, Oprah Winfrey, or Ashton Kutcher. We made use of the Stanford-NER entity extraction tool to extract all the named entities present in the title and summary of each article. We then assign scores to over 40,000 named entities by studying the historical prominence of each entity on Twitter over the timeframe of a month. The assigned score is the average t-density (tweets per link) of each named entity. To assign a score to a given article we use three different values: the number of named entities in the article, the highest score among all the named entities in the article, and the average score among the entities.

3.2.4 Source Score

The data includes articles from 1,350 unique sources on the web. We assign scores to each source based on the historical success of the source on Twitter. For this purpose, we collected the number of times articles from each source were shared on Twitter in the past. We used two different scores: first, the aggregate number of times articles from a source were shared, and second, the t-density of each source (the average number of times each article belonging to the source was shared). The latter proved to be a better score assignment compared to the aggregate.

To investigate whether it is better to use a smaller portion of more recent history, or a larger portion going back farther in time and possibly collecting outdated information, we start with the two most recent weeks prior to our data collection and increase the number of days, going back in time. Figure 5 shows the trend of correlation between the t-density of sources in historical data and their true t-density in our dataset. We observe that the correlation increases with more datapoints from the history until it begins to plateau near 50 days. Using this result, we take 54 days of history prior to the first date in our dataset. The score assigned in the above manner has a correlation of 0.7 with the t-density of the dataset.
Meanwhile, the correlation between the source score and the number of tweets of any given article is 0.35, suggesting that information about the source of publication alone is not sufficient for predicting popularity. Figure 4 shows the distribution of the log of source scores (t-density). Taking the log of source scores produces a more normal shape, leading to improvements in the regression algorithms.

Figure 4: Distribution of log of source t-density scores.

Figure 5: Correlation trend of source scores with t-density in data. Correlation increases with more days of historical data until it plateaus after 50 days.

We plot the timeline of t-densities for a few sources and find that the t-density of a source can vary greatly over time. Figure 6 shows the t-density values belonging to the technology blog Mashable and to Blog Maverick, a weblog of the prominent entrepreneur Mark Cuban. The t-density scores corresponding to these sources are 74 and 178 respectively. However, one can see that Mashable has a more consistent t-density compared to Blog Maverick.

Figure 6: Timeline of t-density (tweets per link) of two sources.

In order to improve the score to reflect consistency, we devise two methods. The first is to smooth the measurements for each source by passing them through a low-pass filter. The second is to weight the score by the percentage of times a source's t-density is above the mean t-density over all sources, penalizing sources that drop low too often. The mean value of t-densities over all sources is 6.4. Figure 7 shows the temporal variations of tweets and links over all sources. Notice that while both tweets and links have a weekly cycle, the t-density (tweets over links) does not have this periodic nature.

Figure 7: Temporal variations of tweets, links, and t-density over all sources.

3.2.5 Are top traditional news sources the most propagated?

As we assign scores to sources in our dataset, we are interested to know whether the sources successful in this dataset are those that are conventionally considered prominent.
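The two consistency adjustments for source scores described above can be sketched as follows. The paper does not specify which low-pass filter it uses, so exponential smoothing stands in for it here; the `alpha` constant and both function names are our assumptions:

```python
def lowpass(series, alpha=0.3):
    """Exponential smoothing as a stand-in for the paper's (unspecified)
    low-pass filter over a source's daily t-density measurements."""
    smoothed = [series[0]]
    for x in series[1:]:
        # Each output mixes the new measurement with the running estimate.
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

def consistency_weighted(series, global_mean=6.4):
    """Weight a source's average t-density by the fraction of measurements
    above the mean t-density over all sources (6.4 in the data), penalizing
    sources that drop low too often."""
    frac_above = sum(x > global_mean for x in series) / len(series)
    return (sum(series) / len(series)) * frac_above

smoothed = lowpass([4.0, 12.0, 6.0])
score = consistency_weighted([10.0, 10.0, 2.0])
```

A source that sits above the global mean on every day keeps its full average, while an erratic source like the last example is discounted in proportion to its lapses.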
Google News is one of the major aggregators and providers of news on the web. While inclusion in Google News results is free, Google uses its own criteria to rank the content and places some articles on its homepage, giving them more exposure. Freshness, diversity, and rich textual content are listed as the factors used by Google News to automatically rank each article as it is published. Because Google does not provide overall rankings for news sources, to get a rating of sources we use NewsKnife. NewsKnife is a service that rates top news sites and journalists based on analysis of articles' positions on the Google News homepage and sub-pages internationally. We would like to know whether the sources that are featured more often on Google News (and thus deemed more prominent by Google and rated more highly by NewsKnife) are also those that become most popular in our dataset.

Table 1: Correlation values between NewsKnife source scores and their performance on the Twitter dataset.

              Total Links   Total Tweets   t-density
Correlation      0.57          0.35          -0.05

Accordingly, we measure the correlation values for the 90 top NewsKnife sources that are also present in our dataset. The values are shown in Table 1. It can be observed that the ratings correlate positively with the number of links published by a source (and thus the sum of their tweets), but have no correlation (-0.05) with t-density, which reflects the number of tweets that each of their links receives. For our source scoring scheme this correlation was about 0.7.

Table 2 shows a list of top sources according to NewsKnife, as well as the most popular sources in our dataset. While NewsKnife rates more traditionally prominent news agencies such as Reuters and the Wall Street Journal higher, in our dataset the top ten sources (with highest t-densities) include sites such as Mashable, AllFacebook (the unofficial Facebook blog), the Google Blog, marketing blogs, as well as weblogs of well-known people such as Seth Godin's weblog and Mark Cuban's blog (Blog Maverick). It is also worth noting that there is a bias toward news and opinion on web marketing, indicating that these sites actively use their own techniques to increase their visibility on Twitter.

While traditional sources publish many articles, those more successful on the social web garner more tweets. A comparison shows that a NewsKnife top source such as The Christian Science Monitor received an average of 16 tweets in our dataset, with several of its articles not getting any tweets. On the other hand, Mashable gained an average of nearly 1,000 tweets, with its least popular article still receiving 360 tweets. Highly ranked news blogs such as The Huffington Post perform relatively well on Twitter, possibly due to their active Twitter accounts, which share any article published on the site.

Table 2: Highly rated sources on NewsKnife versus those popular on the Twitter dataset.

NewsKnife: Reuters, Los Angeles Times, New York Times, Wall Street Journal, USA Today, Washington Post, ABC News, Bloomberg, Christian Science Monitor, BBC News
Twitter dataset: Blog Maverick, Search Engine Land, Duct-tape Marketing, Seth's Blog, Google Blog, AllFacebook, Mashable, Search Engine Watch

4 Prediction

In this work, we evaluate the performance of both regression and classification methods on this problem. First, we apply regression to produce exact values of tweet counts, evaluating the results by the R-squared measure. Next, we define popularity classes and predict which class a given article will belong to. The following two sections describe these methods and their results.

Table 3: Feature set (prediction inputs).

Variable   Description
S          Source t-density score
C          Category t-density score
Subj       Subjectivity (0 or 1)
Ent_ct     Number of named entities
Ent_max    Highest score among named entities
Ent_avg    Average score of named entities

4.1 Regression

Once score assignment is complete, each point in the data (i.e. a given news article) corresponds to a point in the feature space defined by its category, subjectivity, named entity, and source scores. As described in the previous section, the category, source, and named entity scores take real values, while the subjectivity score takes a binary value of 0 or 1. Table 3 lists the features used as inputs to the regression algorithms.
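Assembling the Table 3 inputs for a single article might look like the sketch below; the function and its handling of entity-free articles are illustrative, not from the paper:

```python
def feature_vector(source_td, category_td, subjective, entity_tds):
    """Build the Table 3 feature set for one article.

    entity_tds: t-density scores of the named entities found in the
    article's title and summary (may be empty).
    """
    n = len(entity_tds)
    return {
        "S": source_td,                           # source t-density score
        "C": category_td,                         # category t-density score
        "Subj": 1 if subjective else 0,           # binary subjectivity
        "Ent_ct": n,                              # number of named entities
        "Ent_max": max(entity_tds, default=0.0),  # highest entity score
        "Ent_avg": sum(entity_tds) / n if n else 0.0,
    }

fv = feature_vector(74.0, 6.4, True, [12.0, 3.0])
# fv == {"S": 74.0, "C": 6.4, "Subj": 1, "Ent_ct": 2, "Ent_max": 12.0, "Ent_avg": 7.5}
```

Zero defaults for articles with no recognized entities are our choice; the paper does not state how such articles are encoded.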
We apply three different regression algorithms: linear regression, k-nearest neighbors (KNN) regression, and support vector machine (SVM) regression.

Table 4: Regression results (R²).

                  Linear   SVM
All Data           0.34    0.32
Tech Category      0.43    0.36
Within Twitter     0.33    0.25

Since the number of tweets per article has a long-tail distribution (as discussed previously in Figure 1), we performed a logarithmic transformation on the number of tweets prior to carrying out the regression. We also used the log of the source and category scores to normalize these scores further. Based on this transformation, we reached the following relationship between the final number of tweets and the feature scores:

ln(T) = 1.24 ln(S) + 0.45 ln(C) + 0.1 Ent_max - 3

where S is the source t-density score, C is the category t-density score, and Ent_max is the maximum t-density of all entities found in the article. Equivalently,

T = S^1.24 · C^0.45 · e^(0.1 Ent_max - 3)

with coefficient of determination R² = 0.258. Note that R², the coefficient of determination, relates to the mean squared error and variance:

R² = 1 - MSE / VAR

Alternatively, the following model provided improved results:

T^0.45 = 0.2 S - 0.1 Ent_ct - 0.1 Ent_avg + 0.2 Ent_max

with an improved R² = 0.34. Using support vector machine (SVM) regression [25], we reached similar values of R², as listed in Table 4.

In k-nearest neighbor regression, we predict the tweets of a given article using values from its nearest neighbors. We measure the Euclidean distance between two articles based on their positions in the feature space [26]. The parameter K specifies the number of nearest neighbors to be considered for a given article. Results with K = 7 and K = 3 for a 10k test set are R² = 0.05, with a mean squared error of 5101.695. We observe that KNN performs increasingly poorly as the dataset becomes larger.

4.1.1 Category-specific prediction

One of the weakest predictors in regression was the category score. One of the reasons for this is that there seems to be a lot of overlap across categories. For example, one would expect World News and Top News to have some overlap, and the category USA would feature articles that overlap with others as well. So the categories provided by Feedzilla are not necessarily disjoint, and this is the reason we observe a low prediction accuracy. To evaluate this hypothesis, we repeated the prediction algorithm for particular categories of content. Using only the articles in the Technology category, we reached an R² value of 0.43, indicating that when employing regression we can predict the popularity of articles within one category (i.e. Technology) with better results.

4.2 Classification

Feature scores derived from historical data on Twitter are based on articles that have been tweeted, not on articles which do not make it to Twitter. As discussed in Section 3.1, this is evident in how the zero-tweet articles do not follow the linear trend of the rest of the datapoints in Figure 1. Consequently, we do not include a zero-tweet class in our classification scheme, and we perform the classification by considering only those articles that were posted on Twitter.

Table 5 shows three popularity classes, A (1 to 20 tweets), B (20 to 100 tweets), and C (more than 100 tweets), and the number of articles in each class in the set of 10,000 articles. Table 6 lists the results of support vector machine (SVM) classification, decision trees, and bagging [27] for classifying the articles. All methods were performed with 10-fold cross-validation. We can see that classification can perform with an overall accuracy of 84% in determining whether an article will belong to the low-tweet, medium-tweet, or high-tweet class.

In order to determine which features play a more significant role in prediction, we repeat SVM classification leaving one of the features out at each step. We found that the publication source plays a more important role than the other predictors, while subjectivity, categories, and named entities do not provide much improvement in the prediction of news popularity on Twitter.

4.2.1 Predicting Zero-tweet Articles

We perform binary classification to predict which articles will be mentioned on Twitter at all (zero-tweet versus nonzero-tweet articles). Using SVM classification we can predict, with 66% accuracy, whether an article will be linked to on Twitter or whether it will receive zero tweets. We repeat this operation leaving out one feature at a time to see the change in accuracy.
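The fitted log-linear model from Section 4.1 can be evaluated directly. The coefficients below are the reported fit; the wrapper function and example inputs are ours:

```python
import math

def predict_tweet_count(S, C, ent_max):
    """Fitted model: ln(T) = 1.24 ln(S) + 0.45 ln(C) + 0.1 Ent_max - 3,
    i.e. T = S**1.24 * C**0.45 * exp(0.1 * Ent_max - 3).

    S and C are the source and category t-density scores and must be
    positive, since the fit was done on their logarithms.
    """
    ln_t = 1.24 * math.log(S) + 0.45 * math.log(C) + 0.1 * ent_max - 3.0
    return math.exp(ln_t)

# Example with t-density values mentioned earlier in the paper: a source
# score of 178 (Blog Maverick), the overall mean category score of 6.4,
# and no recognized named entities.
estimate = predict_tweet_count(178.0, 6.4, 0.0)
```

As the modest R² values indicate, such a point estimate is rough; the classification into popularity ranges in Section 4.2 is the more reliable output.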
by its category. Named entities and subjectivity did not provide more information for this prediction. So, despite what one might expect, we find that readers overall favor neither subjectivity nor objectivity of language in a news article.

It is interesting to note that while the category score does not contribute to predicting popularity within Twitter, it does help us determine whether an article will be mentioned on this social network at all. This could be due to a large bias toward sharing technology-related articles on Twitter.

  Class name   Range of tweets   Number of articles
  A            1–20              7,600
  B            20–100            1,800
  C            100–2400          600

Table 5: Article Classes

  Method               Accuracy
  Bagging              83.96%
  J48 Decision Trees   83.75%
  SVM                  81.54%
  Naive Bayes          77.79%

Table 6: Classification Results

5 Discussion and Conclusion

In this work we predicted the popularity of news items on Twitter using features extracted from the content of news articles. We have taken into account four features that cover the spectrum of the information that can be gleaned from the content: the source of the article, its category, the subjectivity of the language, and the named entities mentioned. Our results show that while these features may not be sufficient to predict the exact number of tweets that an article will garner, they can be effective in providing a range of popularity for the article on Twitter. We achieved an overall accuracy of 84% using classifiers. It is important to bear in mind that while it is intriguing to pay attention to the most popular articles, those that become viral on the web, a great number of articles spread in medium numbers. These medium levels can target highly interested and informed readers, and thus the mid-ranges of popularity should not be dismissed.

Interestingly, we have found that in terms of number of retweets, the top news sources on Twitter are not necessarily the conventionally popular news agencies; various technology blogs such as Mashable and the Google Blog are very widely shared in social media. Overall, we discovered that one of the most important predictors of popularity was the source of the article. This is in agreement with the intuition that readers are likely to be influenced by the news source that disseminates the article. On the other hand, the category feature did not perform well. One reason for this is that we are relying on categories provided by Feedzilla, many of which overlap in content. Thus a future task is to extract categories independently and ensure little overlap. Combining other layers of complexity described in the introduction opens up the possibility of better prediction. It would be interesting to incorporate network factors such as the influence of individual propagators into this work.

References

[1] Gábor Szabó and Bernardo A. Huberman, "Predicting the popularity of online content," Commun. ACM, vol. 53, no. 8, pp. 80–88, 2010.
[2] Jong Gun Lee, Sue Moon, and Kavé Salamatian, "An approach to model and predict the popularity of online contents with explanatory factors," in Web Intelligence, 2010, pp. 623–630, IEEE.
[3] Alexandru Tatar, Jérémie Leguay, Panayotis Antoniadis, Arnaud Limbourg, Marcelo Dias de Amorim, and Serge Fdida, "Predicting the popularity of online articles based on user comments," in Proceedings of the International Conference on Web Intelligence, Mining and Semantics, New York, NY, USA, 2011, WIMS '11, pp. 67:1–67:8, ACM.
[4] Su-Do Kim, Sung-Hwan Kim, and Hwan-Gue Cho, "Predicting the virtual temperature of web-blog articles as a measurement tool for online popularity," in IEEE 11th International Conference on Computer and Information Technology (CIT), 2011, pp. 449–454.
[5] Kristina Lerman and Tad Hogg, "Using a model of social dynamics to predict popularity of news," in WWW, 2010, pp. 621–630, ACM.
[6] E. M. Rogers, Diffusion of Innovations, Free Press, 1995.
[7] Jure Leskovec, Lada A. Adamic, and Bernardo A. Huberman, "The dynamics of viral marketing," TWEB, vol. 1, no. 1, 2007.
[8] Fang Wu and Bernardo A. Huberman, "Novelty and collective attention," Proceedings of the National Academy of Sciences, vol. 104, no. 45, pp. 17599–17601, 2007.
[9] Daniel Gruhl, David Liben-Nowell, Ramanathan V. Guha, and Andrew Tomkins, "Information diffusion through blogspace," SIGKDD Explorations, vol. 6, no. 2, pp. 43–52, 2004.
[10] Jure Leskovec, Mary Mcglohon, Christos Faloutsos, Natalie Glance, and Matthew Hurst, "Cascading behavior in large blog graphs," in SDM, 2007.
[11] Alexei Vázquez, João Gama Oliveira, Zoltán Dezső, Kwang-Il Goh, Imre Kondor, and Albert-László Barabási, "Modeling bursts and heavy tails in human dynamics," Phys. Rev. E, vol. 73, pp. 036127, Mar. 2006.
[12] M. V. Simkin and V. P. Roychowdhury, "A theory of web traffic," EPL (Europhysics Letters), vol. 82, no. 2, pp. 28006, Apr. 2008.
[13] Matthew J. Salganik, Peter Sheridan Dodds, and Duncan J. Watts, "Experimental study of inequality and unpredictability in an artificial cultural market," Science, vol. 311, no. 5762, pp. 854–856, 2006.
[14] David Kempe, Jon M. Kleinberg, and Éva Tardos, "Maximizing the spread of influence through a social network," in KDD, 2003, pp. 137–146, ACM.
[15] Dan Cosley, Daniel Huttenlocher, Jon Kleinberg, Xiangyang Lan, and Siddharth Suri, "Sequential influence models in social networks," in 4th International Conference on Weblogs and Social Media, 2010.
[16] Nitin Agarwal, Huan Liu, Lei Tang, and Philip S. Yu, "Identifying the influential bloggers in a community," in Proceedings of the International Conference on Web Search and Web Data Mining, New York, NY, USA, 2008, WSDM '08, pp. 207–218, ACM.
[17] Jure Leskovec, Lars Backstrom, and Jon M. Kleinberg, "Meme-tracking and the dynamics of the news cycle," in KDD, 2009, pp. 497–506, ACM.
[18] Jaewon Yang and Jure Leskovec, "Patterns of temporal variation in online media," in WSDM, 2011, pp. 177–186, ACM.
[19] Kristina Lerman and Rumi Ghosh, "Information contagion: An empirical study of the spread of news on Digg and Twitter social networks," in ICWSM, 2010, The AAAI Press.
[20] Eric Sun, Itamar Rosenn, Cameron Marlow, and Thomas M. Lento, "Gesundheit! Modeling contagion through Facebook News Feed," in ICWSM, 2009, The AAAI Press.
[21] Daniel M. Romero, Brendan Meeder, and Jon Kleinberg, "Differences in the mechanics of information diffusion across topics: idioms, political hashtags, and complex contagion on Twitter," in Proceedings of the 20th International Conference on World Wide Web, New York, NY, USA, 2011, WWW '11, pp. 695–704, ACM.
[22] Bei Yu, Miao Chen, and Linchi Kwok, "Toward predicting popularity of social marketing messages," in SBP, 2011, vol. 6589 of Lecture Notes in Computer Science, pp. 317–324, Springer.
[23] Zicong Zhou, Roja Bandari, Joseph Kong, Hai Qian, and Vwani Roychowdhury, "Information resonance on Twitter: watching Iran," in Proceedings of the First Workshop on Social Media Analytics, New York, NY, USA, 2010, SOMA '10, pp. 123–131, ACM.
[24] Alias-i, "LingPipe 4.1.0," 2008.
[25] Chih-Chung Chang and Chih-Jen Lin, "LIBSVM: A library for support vector machines," ACM Transactions on Intelligent Systems and Technology, vol. 2, pp. 27:1–27:27, 2011. Software available at cjlin/libsvm.
[26] T. Hastie, R. Tibshirani, and J. H. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer Series in Statistics, Springer, 2008.
[27] Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten, "The WEKA data mining software: an update," SIGKDD Explor. Newsl., vol. 11, pp. 10–18, November 2009.