# R & Text Analytics

Daniel Fennelly
Portland R User Group
Portland, Oregon
15 January 2013
Following are some notes on the usage of the R package TopicWatchr. TopicWatchr is designed to neatly access the [LuckySort API](http://luckysort.com/products/api/docs/intro). TopicWatchr was authored by Homer Strong and is currently maintained and updated by Daniel Fennelly.
```r
> library(TopicWatchr)
Loading required package: RJSONIO
Loading required package: RCurl
Loading required package: bitops
Welcome to TopicWatchr! Remember to check for updates regularly.
Found TopicWatch account file in ~/.tw
Welcome email@example.com
```

Credentials can be stored in `~/.tw`:

```
firstname.lastname@example.org
hunter2
```

Or you can authenticate in the interactive shell...

```r
> clearCredentials()
> setCredentials()
Enter username: email@example.com
Enter password:
```

Note: Be careful about the password prompt in ESS. It seems ESS hides the password in the minibuffer before displaying it in the *R* buffer.
## Package Summary

1. Formulate and send API requests according to task
2. Receive and parse JSON response
3. Page through multiple requests, offer quick visualization tools, other utilities

Other end-user tools to access this data include the [TopicWatch](https://studio.luckysort.com/) web interface and the tw.py python client.
## The Basics

The data we work with at LuckySort, and which we'll be talking about here, have a few specific qualities:

1. Text Sources
2. Terms
3. Time
### Text Sources

- Hourly: Twitter Data, StockTwits, Consumer Facebook statuses, Wordpress posts and comments...
- Daily: RSS news sources, Amazon.com product reviews, Benzinga News Updates
- your data? (talk with us!)

Let's fetch our personal list of sources.

```r
> my.sources <- getSources()
> head(my.sources)
                                            name                      id
1              Wordpress Intense Debate comments       wp_en_comments-id
2                                     StockTwits             stock_twits
3                          Benzinga News Updates benzinga_news_updates_1
4                                      AngelList                 angelco
5          Amazon.com Shoes best sellers reviews  amzn-bestsellers-shoes
6 Amazon.com Home & Kitchen best sellers reviews   amzn-bestsellers-home
> dim(my.sources)
[1] 35  2
```
Let's get some more specific metadata.

```r
> twitter.info <- getSourceInfo("twitter_sample")
> names(twitter.info)
[1] "metrics"           "resolutions"       "users"
[4] "name"              "finest_resolution" "owner"
[7] "aggregate_type"    "type"              "id"
> twitter.info$finest_resolution
[1] 3600
> twitter.info$metrics
[1] "documentcounts"
```

Sources have specific resolutions available to them, given in seconds. The finest resolution for Twitter is one hour. The metrics are almost always going to just be "documentcounts", although we're working on making available numeric sources like stock market or exchange rate information.
### Terms in Time

Our most basic analysis is that of a term occurring within a streaming document source: how are `<term>` occurrences in `<document source>` changing over time from `<start>` to `<finish>`?

```r
> end <- Sys.time()
> start <- ISOdate(2012, 12, 01, tz="PST")
> start; end
[1] "2012-12-01 12:00:00 PST"
[1] "2013-01-14 23:10:19 PST"
> terms <- c("obama", "mayan", "newtown", "iphone")
> resolution <- 3600 * 24
> recent.news <- metric.counts(terms, src="twitter_sample", start=start,
+                              end=end, resolution=resolution, freq=T, debug=T)
get: https://api.luckysort.com/v1/sources/twitter_sample/metrics/documentcounts?start=2012-12-01T12:00:00Z&end=2013-01-15T07:10:19Z&grams=obama,mayan,newtown,iphone&limit=300&resolution=86400&offset=0&freq=TRUE
```
Let's plot our data and see what it looks like! The function `plotSignal` just wraps some handy ggplot2 code. For anything sophisticated you'll probably want to tailor your plotting to your own needs.

```r
> png("news.png", width=1280, height=720)
> plotSignal(recent.news)
> dev.off()
```

![news_plot](http://github.com/danielfennelly/prug-topicwatchr/raw/master/images/news.png)
Of course, one's choice of resolution is going to change the look of the data. At the daily resolution there's no way to disambiguate between sustained daily usage of a term and rapid usage within a short time span. Take a look at these plots of the same terms over the same time span collected at hourly and daily resolution.
![tech words daily resolution](http://github.com/danielfennelly/prug-topicwatchr/raw/master/images/techWordsDaily.png)
![tech words hourly resolution](http://github.com/danielfennelly/prug-topicwatchr/raw/master/images/techWordsHourly.png)
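To see the effect for yourself, you can roll an hourly series up to daily resolution in a few lines of base R. This is a minimal sketch on synthetic data (in practice the hourly series would come back from `metric.counts`):

```r
# Synthetic hourly counts for one term over 7 days: mostly quiet,
# with one concentrated 3-hour burst. (Illustrative data, not API output.)
set.seed(1)
hourly <- rpois(7 * 24, lambda = 2)
hourly[50:52] <- c(40, 60, 35)   # a short-lived spike on day 3

# Rolling up to daily resolution sums away the burst's shape.
day <- rep(1:7, each = 24)
daily <- tapply(hourly, day, sum)

# At daily resolution the burst day just looks like heavy sustained usage,
# while the hourly series shows it was three extreme hours.
daily
max(hourly)
```

The total counts are identical at both resolutions; only the hourly view can tell a spike from a plateau.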
### Term Co-occurrences

Moving beyond simple word counts, we're often interested in the subset of a text source mentioning a specific term. We also might want to compact the occurrence of several related terms into a single signal. This is where *filters* and *indices* come in handy! An index like `~bullish` is just a weighted sum of terms. For example, the terms `buy`, `upgrade`, `longterm` and `added` are all contained within the `~bullish` index. We've created several public indices like these which we feel are useful in certain applications like stock market or consumer sentiment analysis. (Of course users can also create their own indices too.)

Let's look at the behavior of the `~bullish` and `~bearish` indices on StockTwits, a twitter-like community around the stock market. We filter on documents containing Apple's ticker symbol "$aapl" so that the only signals we're looking at are in some way related to Apple.
![aapl sentiment](http://github.com/danielfennelly/prug-topicwatchr/raw/master/images/aapl_sentiment.png)
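Computed by hand, a weighted-sum index is nothing more than a dot product between term weights and term counts. A minimal sketch with made-up weights and counts (the real `~bullish` weights live server-side and are not part of TopicWatchr):

```r
# Hypothetical weights for a tiny "~bullish"-style index (illustrative only).
bullish.weights <- c(buy = 1.0, upgrade = 0.8, longterm = 0.5, added = 0.4)

# Counts of each term in some filtered document stream, one row per hour
# (made-up numbers; columns must be in the same order as the weights).
counts <- rbind(
  c(buy = 12, upgrade = 3, longterm = 5, added = 7),
  c(buy = 20, upgrade = 6, longterm = 4, added = 9)
)

# The index signal is just the weighted sum of term counts per time bucket.
bullish.signal <- counts %*% bullish.weights
bullish.signal
```

Collapsing related terms this way trades per-term detail for a single, less noisy signal.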
## Prototyping Event Analysis

How do we identify transient spikes corresponding to real-world events? Suppose we want to use only these document count time series and that we have a sliding history window. We might start with example data of events and try the performance of a couple of different algorithms.

```
source.id,datetime,gram,event,n
twitter_sample,2012-09-12 05:45:00 -0700,apple,true,2
twitter_sample,2012-08-24 15:45:00 -0700,patent,true,1
twitter_sample,2012-10-29 08:00:00 -0700,#sandy,true,3
stock_twits,2012-10-02 08:15:00 -0700,$CMG,true,1
stock_twits,2012-09-13 09:30:00 -0700,fed,true,2
stock_twits,2012-04-11 07:00:00 -0700,lawsuit,true,1
...
```

Let's look more specifically at the case of the term "fed" on StockTwits. From here on we're going to be looking at some code I used to prototype the alerts feature on TopicWatch. This prototyping code is not part of TopicWatchr, but is an example application of the package.
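The actual prototyping code lives in the linked repo; as a stand-in, here is one simple heuristic of the kind you might try first against labeled events like these: flag any point that sits several standard deviations above a trailing window's mean. This is a sketch on synthetic data, and `window` and `k` are arbitrary choices of mine, not TopicWatch's:

```r
# Flag indices whose value exceeds mean + k*sd of the preceding `window` points.
spike.indices <- function(x, window = 24, k = 4) {
  hits <- integer(0)
  for (i in (window + 1):length(x)) {
    hist <- x[(i - window):(i - 1)]
    if (x[i] > mean(hist) + k * sd(hist)) hits <- c(hits, i)
  }
  hits
}

set.seed(7)
counts <- rpois(200, lambda = 5)   # synthetic hourly counts for "fed"
counts[150] <- 60                  # an injected event-like spike

spike.indices(counts)
```

A trailing window adapts to slow drift in baseline volume, but a single huge spike inflates the window's standard deviation afterwards, temporarily desensitizing the detector; that is exactly the kind of behavior you would evaluate against the labeled event data above.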
## Source Statistics

```r
> twitter.docs <- document.submatrix("twitter_sample", end=Sys.time(),
+                                    hours=8, to.df=FALSE)
> length(twitter.docs)
[1] 225
> twitter.docs[]
best reality best reality show reality love
   1            1            1           1
flava best flava of love show
   1          1       1    1
reality show
   1
> twitter.docterm <- submatrix.to.dataframe(twitter.docs, max.n=1)
> dim(twitter.docterm)
[1]  225 1280
> term.sums <- colSums(twitter.docterm)
> mean(term.sums)
[1] 1.283594
> max(term.sums)
[1] 14
```

Now we have some information about our sampling of twitter documents. We have 225 documents, with 1280 unique terms. Right now the above function is simply grabbing 25 twitter documents per hour over the past 8 hours.
[Zipf's Law](http://en.wikipedia.org/wiki/Zipfs_law) is a classic finding in the field of lexical analysis.

```r
> term.sums <- sort(term.sums, decreasing=TRUE)
> qplot(x=log(1:length(term.sums)), y=log(term.sums))
```

![twitter zipf](http://github.com/danielfennelly/prug-topicwatchr/raw/master/images/twitter_zipf.png)
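You can reproduce the shape of that plot without any API access by sampling from an ideal Zipf distribution and fitting the log-log slope, which should come out near -1. A minimal sketch in base R (no ggplot2 needed; the simulation parameters are arbitrary):

```r
# Sample 50,000 tokens from a 1,000-word vocabulary where P(rank r) is
# proportional to 1/r, i.e. an ideal Zipf distribution.
set.seed(42)
n.types <- 1000
zipf.probs <- (1 / (1:n.types)) / sum(1 / (1:n.types))
tokens <- sample(n.types, size = 50000, replace = TRUE, prob = zipf.probs)

# Rank-frequency table, most frequent term first.
term.sums <- sort(table(tokens), decreasing = TRUE)
ranks <- seq_along(term.sums)

# Under Zipf's law, log(frequency) is linear in log(rank) with slope near -1.
fit <- lm(log(as.numeric(term.sums)) ~ log(ranks))
slope <- coef(fit)[[2]]
slope
```

Replacing the simulated `term.sums` with the column sums from the real document-term matrix gives the Twitter plot above.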
## Feeling Adventurous?

Last time at LuckySort HQ: We're looking for beta testers for the R package! In Shackleton's words, what to expect:

**...BITTER COLD, LONG MONTHS OF COMPLETE DARKNESS, CONSTANT DANGER, SAFE RETURN DOUBTFUL...**

This time around we're in a slightly more stable place. There's more data, more options, and more opportunities to maybe discover some cool stuff! (Expect some darkness, minimal danger, and a shrinking population of software bugs.)

### prug-topicwatchr

See also these notes, used at the Portland R Users Group meeting on 15 January 2013, on GitHub. They cover basic usage of the TopicWatchr package to pull time series text data from the LuckySort API, with some examples of prototyping event detection heuristics with R. (https://github.com/danielfennelly/prug-topicwatchr)

Talk with me about it, or get in touch later at firstname.lastname@example.org