
How to create a word cloud in R



  1. How to create a Word Cloud in R
     Mona Swarnakar
     Date: 9th January 2014
     Email id:
     Blog:
  2. I have seen word clouds in many places, such as magazines, websites, and blogs, but I had never thought of making one myself. I would like to thank R for teaching me such a wonderful technique, one that everybody would like to learn. I don't know whether any other free options are available for creating word clouds. First, a quick explanation of R: R is free, open-source software that is very useful for statistical analysis, and we can use it for various purposes, from data mining to data visualization. A word cloud is a visualization that highlights, at a glance, the words that have been used most often. If you know the correct procedure, creating a word cloud in RStudio is simple. A package called "wordcloud" is available in R, which will help us create one. You can follow my four simple steps below. For those new to R or word clouds, I would suggest first installing RStudio from the link. The following packages are also required to create a word cloud in R, so install them as well: library(twitteR) library(tm) library(SnowballC) library(wordcloud) Note: On the right side of the image you can see the option for the packages you want to install.
  3. Step 1 -> First install and load the package below in R: library(twitteR). Once installation is done, load the Tweets data (that you saved earlier on your drive) with the code below: > load("F:/Mona/Mona R/Tweets.RData") To convert the tweets into a data frame, write: > df <-"rbind", lapply(tweets, > dim(df) Step 2 -> Now install and load the package: library(tm). A corpus is a collection of text documents. VectorSource is a very useful command with which we can create a corpus from a character vector: > mydata = Corpus(VectorSource(df$text)) Transformations: Once we have a corpus, we can modify the documents (for example stopword removal, stemming, etc.). Transformations are done via the tm_map() function, which applies a transformation to every text document in the corpus. Various commands are used to clean the data, listed below: To eliminate extra white space: > mydata = tm_map(mydata, stripWhitespace) To convert to lower case: > mydata = tm_map(mydata, tolower) To remove punctuation: > mydata = tm_map(mydata, removePunctuation) To remove numbers: > mydata = tm_map(mydata, removeNumbers)
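The tm_map() transformations above can be mimicked in base R on a plain character string, which makes it easy to see what each cleaning step does. This is a minimal sketch: clean_text() is an illustrative helper of my own, not part of the tm package.

```r
# Base-R sketch of the cleaning steps applied with tm_map() above.
# clean_text() is an illustrative helper, not a tm function.
clean_text <- function(x) {
  x <- tolower(x)                    # tm_map(mydata, tolower)
  x <- gsub("[[:punct:]]", "", x)    # tm_map(mydata, removePunctuation)
  x <- gsub("[[:digit:]]", "", x)    # tm_map(mydata, removeNumbers)
  x <- gsub("[[:space:]]+", " ", x)  # tm_map(mydata, stripWhitespace)
  trimws(x)
}

clean_text("R is   FUN!!! 100% of the time...")
# -> "r is fun of the time"
```

Each line maps one-to-one onto a tm_map() call from the slide, so you can check the effect of every transformation on a single example string before running it over the whole corpus.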
  4. Stopwords: A further preprocessing step is the removal of stopwords. Stopwords are words so common in a language that their information value is almost zero; in other words, their entropy is very low. It is therefore usual to remove them before further analysis. First we set up a small stopword list; here we add "R" and "online" to the words to be removed: > my_stopwords = c(stopwords('english'), c('R', 'online')) > mydata = tm_map(mydata, removeWords, my_stopwords) Stemming: Stemming is the process of removing suffixes from words to get to their common origin; for example, removing "ing" or "ed". Another example: we would like to count "stopped" and "stopping" as the same word, derived from "stop". Step 3 -> Now install and load the package: library(SnowballC) > mydata = tm_map(mydata, stemDocument) Term-document matrix: A common approach in text mining is to create a term-document matrix from a corpus. In the tm package, the classes TermDocumentMatrix (tdm) and DocumentTermMatrix (dtm) (depending on whether you want terms as rows and documents as columns, or vice versa) use sparse matrices for corpora. > tdm <- TermDocumentMatrix(mydata) Frequent terms: Now we can look at the popular words in the term-document matrix. > wordfreq = findFreqTerms(tdm, lowfreq = 70) > termFrequency = rowSums(as.matrix(tdm[wordfreq, ]))
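The stopword-removal and frequency-counting steps can also be sketched in base R, without tm, to show what findFreqTerms() and the rowSums() call compute. This is an illustration only: term_frequencies() is a made-up helper, and its lowfreq argument mirrors the threshold passed to findFreqTerms().

```r
# Base-R sketch of counting term frequencies and keeping only frequent
# terms, mirroring findFreqTerms(tdm, lowfreq = ...). Illustrative only.
my_stopwords <- c("the", "a", "in", "is", "r", "online")

term_frequencies <- function(texts, stopwords, lowfreq = 2) {
  words <- unlist(strsplit(tolower(texts), "[^a-z]+"))  # crude tokeniser
  words <- words[nzchar(words) & !(words %in% stopwords)]
  freq <- table(words)
  freq[freq >= lowfreq]  # keep only terms at or above the threshold
}

tweets <- c("R is fun", "mining text in R is fun", "text mining")
term_frequencies(tweets, my_stopwords)
# keeps "fun", "mining" and "text", each with frequency 2
```

The result is a named frequency vector, which is exactly the shape that the wordcloud() function in the next step expects.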
  5. Step 4 -> Word cloud: After building the term-document matrix and the frequent terms, we can show the importance of words with a word cloud. Now install and load the packages below: library(wordcloud) library(RColorBrewer) pal2 <- brewer.pal(8, "Dark2") There are three options; you can apply any one of them for a different word-cloud colour scheme: > wordcloud(words = names(termFrequency), freq = termFrequency, min.freq = 5, max.words = 50, random.order = F, colors = "red") > wordcloud(words = names(termFrequency), freq = termFrequency, scale = c(5, .2), min.freq = 3, max.words = 200, random.order = F, rot.per = .15, colors = brewer.pal(8, "Dark2")) > wordcloud(words = names(termFrequency), freq = termFrequency, scale = c(5, .2), min.freq = 3, max.words = Inf, random.order = F, rot.per = .15, random.color = TRUE, colors = rainbow(7)) To get multiple colours in the word cloud we use pal2 <- brewer.pal(8, "Dark2"); if you want only one colour, you can simply pass "red" or "blue" in the colors option. The word cloud above clearly shows that "data", "example" and "research" are the three most important words, which confirms that these words were used most often in the tweets.
  6. Word-cloud parameters:
     o words: the words
     o freq: their frequencies
     o scale: a vector of length 2 indicating the range of word sizes
     o min.freq: words with frequency below min.freq will not be plotted
     o max.words: maximum number of words to be plotted; least frequent terms are dropped
     o random.order: plot words in random order; if false, they are plotted in decreasing frequency
     o random.color: choose colors randomly from the colors; if false, the color is chosen based on frequency
     o rot.per: proportion of words with 90-degree rotation
     o colors: color words from least to most frequent
     o ordered.colors: if true, colors are assigned to words in order
     Hope this helps. Thanks for reading!
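Putting the parameters above together, here is a compact, self-contained sketch. Hedged assumptions: it requires the wordcloud and RColorBrewer packages to be installed, and it uses a small made-up frequency vector in place of the termFrequency built from the tweet data.

```r
# Self-contained word-cloud sketch; the frequency values below are
# made up for illustration, not taken from the tweet data.
termFrequency <- c(data = 90, example = 75, research = 70,
                   mining = 40, text = 35, cloud = 20)

if (requireNamespace("wordcloud", quietly = TRUE) &&
    requireNamespace("RColorBrewer", quietly = TRUE)) {
  pal2 <- RColorBrewer::brewer.pal(8, "Dark2")
  wordcloud::wordcloud(words = names(termFrequency),
                       freq  = termFrequency,
                       scale = c(5, .2),      # largest vs smallest word size
                       min.freq = 3,          # hide terms below this count
                       max.words = 200,       # cap on plotted words
                       random.order = FALSE,  # most frequent words in the centre
                       rot.per = .15,         # 15% of words rotated 90 degrees
                       colors = pal2)         # least to most frequent
}
```

Because random.order = FALSE, the highest-frequency terms ("data", "example", "research") land in the centre of the cloud, which is usually what you want for a readable plot.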