2005 Web Content Mining 4



  1. Sentiment Classification and Analysis of Consumer Reviews
  2. Word of Mouth on the Web
     The Web has dramatically changed the way consumers express their opinions: one can post reviews of products at merchant sites, Web forums, discussion groups, and blogs. Techniques are being developed to exploit these sources to help companies and individuals gain market intelligence.
     Benefits:
     - Potential customer: no need to read many reviews
     - Product manufacturer: market intelligence, product benchmarking
  3. Introduction
     Sentiment classification:
     - whole reviews
     - sentences
     Consumer review analysis:
     - going inside each sentence to find what exactly consumers praise or complain about
     - extraction of product features commented on by consumers
     - determining whether the comments are positive or negative (semantic orientation)
     - producing a feature-based summary (not text summarization)
  4. Sentiment Classification of Reviews
     Classify reviews (or other documents) based on the overall sentiment expressed by the authors, i.e.,
     - positive or negative
     - recommended or not recommended
     This problem has been studied mainly in the natural language processing (NLP) community. It is related to, but different from, traditional text classification, which classifies documents into different topic categories.
  5. Unsupervised Review Classification (Turney, ACL-02)
     Data: reviews from epinions.com on automobiles, banks, movies, and travel destinations.
     The approach has three steps.
     Step 1: Part-of-speech tagging. Extract two consecutive words (two-word phrases) from reviews if their tags conform to given patterns, e.g., (1) JJ, (2) NN.
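Step 1 can be sketched as a filter over tagger output. This is a minimal illustration covering only the "JJ followed by a noun" case; the tagged input is assumed to come from an external POS tagger, and Turney's full pattern table has more cases.

```python
# Sketch of Turney's Step 1 on a pre-tagged sentence: extract two-word
# phrases whose tags match the pattern (JJ, NN).
def extract_phrases(tagged):
    """tagged: list of (word, POS-tag) pairs from some tagger (assumed)."""
    phrases = []
    for (w1, t1), (w2, t2) in zip(tagged, tagged[1:]):
        if t1 == "JJ" and t2.startswith("NN"):
            phrases.append(f"{w1} {w2}")
    return phrases

tagged = [("this", "DT"), ("direct", "JJ"), ("deposit", "NN"),
          ("is", "VBZ"), ("very", "RB"), ("handy", "JJ")]
print(extract_phrases(tagged))  # ['direct deposit']
```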
  6. Step 2: Estimate the semantic orientation of the extracted phrases
     Use pointwise mutual information (PMI):
     PMI(w1, w2) = log2( P(w1 & w2) / (P(w1) P(w2)) )
     Semantic orientation (SO): SO(phrase) = PMI(phrase, "excellent") - PMI(phrase, "poor")
     Use the AltaVista NEAR operator to obtain the hit counts needed to compute PMI and SO.
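With hit counts in hand, SO reduces to a log-ratio. A minimal sketch, assuming stubbed hit counts in place of real AltaVista queries (the `HITS` dictionary and its numbers are made up for illustration):

```python
import math

# Sketch of Turney's SO estimate from search-engine hit counts.
# hits(q) would come from a search engine; HITS is a stub (assumption).
HITS = {
    "excellent": 1_000_000, "poor": 1_000_000,
    "direct deposit NEAR excellent": 320,
    "direct deposit NEAR poor": 80,
}

def so(phrase, hits=HITS):
    """SO(phrase) = PMI(phrase, 'excellent') - PMI(phrase, 'poor').
    The joint/marginal probabilities cancel down to this hit-count ratio."""
    pos = hits[f"{phrase} NEAR excellent"]
    neg = hits[f"{phrase} NEAR poor"]
    return math.log2((pos * hits["poor"]) / (neg * hits["excellent"]))

print(round(so("direct deposit"), 2))  # 2.0
```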
  7. Step 3: Compute the average SO of all phrases
     Classify the review as recommended if the average SO is positive, not recommended otherwise.
     Final classification accuracy:
     - automobiles: 84%
     - banks: 80%
     - movies: 65.83%
     - travel destinations: 70.53%
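Step 3 is a one-liner over the per-phrase SO values from Step 2:

```python
# Turney's Step 3: recommend iff the average SO of all phrases is positive.
def classify(so_values):
    avg = sum(so_values) / len(so_values)
    return "recommended" if avg > 0 else "not recommended"

print(classify([2.0, -0.5, 1.1]))  # recommended
print(classify([-1.0, 0.2]))       # not recommended
```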
  8. Sentiment Classification Using Machine Learning Methods
     Apply several machine learning techniques to classify movie reviews as positive or negative. Three classification techniques were tried:
     - naïve Bayes
     - maximum entropy
     - support vector machines (SVM)
     Pre-processing settings: negation tag, unigrams (single words), bigrams, POS tags, position.
     SVM gave the best accuracy: 83% (unigrams).
  9. Review Classification by Scoring Features (Dave, Lawrence and Pennock, WWW-03)
  10. Feature Selection
     - Sentences are split into single-word tokens.
     - Metadata and statistical substitutions: "I called Nikon" and "I called Kodak" are substituted by "I called X"; numerical tokens are substituted by NUMBER.
     - Linguistic substitutions: WordNet to find similarities; collocations as Word(part-of-speech):Relation:Word(part-of-speech), e.g., "This stupid ugly piece of garbage" → (stupid(A):subj:piece(N)).
     - Language-based modification: stemming; negating phrases, e.g., "not good", "not useful".
     - N-grams and proximity: N adjacent tokens.
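Two of the substitutions above can be sketched directly. This is an illustration, not the authors' implementation: `PRODUCT_NAMES` is an assumed stand-in for their metadata-driven name list, and the negation rule here only folds the literal token "not" into the following word.

```python
import re

# Metadata/statistical substitution: product names -> X, numbers -> NUMBER.
PRODUCT_NAMES = {"nikon", "kodak"}  # assumption: supplied from metadata

def substitute(tokens):
    out = []
    for t in tokens:
        if t.lower() in PRODUCT_NAMES:
            out.append("X")
        elif re.fullmatch(r"\d+", t):
            out.append("NUMBER")
        else:
            out.append(t)
    return out

def negate(tokens):
    """Fold 'not w' into a single negated token NOT_w."""
    out, i = [], 0
    while i < len(tokens):
        if tokens[i].lower() == "not" and i + 1 < len(tokens):
            out.append("NOT_" + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

print(substitute("I called Nikon 3 times".split()))
# ['I', 'called', 'X', 'NUMBER', 'times']
print(negate("this is not good".split()))
# ['this', 'is', 'NOT_good']
```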
  11. Evaluation
     The technique does well for review classification, with accuracy of 84-88%. It does not do as well for classifying review sentences: maximum accuracy is 68%, even after removing hard and ambiguous cases. Sentence classification is much harder.
  12. Other Related Work
     - Estimating the semantic orientation of words and phrases (Hatzivassiloglou and Wiebe, COLING-00; Hatzivassiloglou and McKeown, ACL-97; Wiebe, Bruce and O'Hara, ACL-99)
     - Generating semantic timelines by tracking online discussion of movies and displaying a plot of the number of positive and negative messages (Tong, 2001)
     - Determining subjectivity and extracting subjective sentences (Wilson, Wiebe and Hwa, AAAI-04; Riloff and Wiebe, EMNLP-03)
     - Mining product reputation (Morinaga et al., KDD-02)
     - Classifying people into opposite camps in newsgroups (Agrawal et al., WWW-03)
     - More ...
  13. Consumer Review Analysis
     - Going inside each sentence to find what exactly consumers praise or complain about
     - Extraction of product features commented on by consumers
     - Determining whether the comments are positive or negative (semantic orientation)
     - Producing a feature-based summary (not text summarization)
  14. Mining and Summarizing Reviews
     Sentiment classification is useful, but can we go inside each sentence to find what exactly consumers praise or complain about? That is:
     - Extract product features commented on by consumers.
     - Determine whether the comments are positive or negative (semantic orientation).
     - Produce a feature-based summary (not a text summary).
  15. In online shopping, more and more people write reviews to express their opinions. With so many reviews, it is time-consuming and tedious to read them all.
     Benefits:
     - Potential customer: no need to read many reviews
     - Product manufacturer: market intelligence, product benchmarking
  16. Different Types of Consumer Reviews (Hu and Liu, KDD-04; Liu et al., WWW-05)
     - Format (1), Pros and Cons: the reviewer is asked to describe Pros and Cons separately. Cnet.com uses this format.
     - Format (2), Pros, Cons and detailed review: the reviewer is asked to describe Pros and Cons separately and also write a detailed review. Epinions.com uses this format.
     - Format (3), free format: the reviewer can write freely, i.e., no separation of Pros and Cons. Amazon.com uses this format.
  17. Feature-Based Summarization
     - Extract product features (called opinion features) that have been commented on by customers.
     - Identify opinion sentences in each review and decide whether each opinion sentence is positive or negative.
     - Summarize and compare the results.
     Note: a wrapper can be used to extract reviews from Web pages, as reviews are all regularly structured.
  18. The Problem Model
     Product feature: a product component, function feature, or specification.
     Model:
     - Each product has a finite set of features, F = {f1, f2, ..., fn}.
     - Each feature fi in F can be expressed with a finite set of words or phrases Wi.
     - Each reviewer j comments on a subset Sj of F, i.e., Sj ⊆ F.
     - For each feature fk ∈ F that reviewer j comments on, he/she chooses a word/phrase w ∈ Wk to represent the feature.
     - The system has no information about F or Wi beforehand.
     This simple model covers most but not all cases.
  19. Example 1: Format 3
  20. Example 2: Format 2
  21. Example 3: Format 1
  22. Visual Summarization & Comparison
  23. Observations
     Each sentence segment contains at most one product feature. Sentence segments are separated by ',', '.', 'and', and 'but'.
     5 segments in Pros:
     - great photos <photo>
     - easy to use <use>
     - good manual <manual>
     - many options <option>
     - takes videos <video>
     3 segments in Cons:
     - battery usage <battery>
     - included software could be improved <software>
     - included 16MB is stingy <16MB> ⇒ <memory>
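The segmentation rule above can be sketched as a single regex split on the four separators:

```python
import re

# Split Pros/Cons text into sentence segments on ',', '.', 'and', 'but'.
def segments(text):
    parts = re.split(r",|\.|\band\b|\bbut\b", text)
    return [p.strip() for p in parts if p.strip()]

print(segments("great photos, easy to use, good manual and many options"))
# ['great photos', 'easy to use', 'good manual', 'many options']
```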
  24. Analyzing Reviews of Formats 1 and 3
     Reviews are usually full sentences.
     - "The pictures are very clear." Explicit feature: picture
     - "It is small enough to fit easily in a coat pocket or purse." Implicit feature: size
     Synonyms: different reviewers may use different words for the same product feature. For example, one reviewer may use "photo" while another uses "picture". Synonyms of a feature should be grouped together.
     Granularity of features: "battery usage", "battery size", and "battery weight" could be individual features, but that would generate too many features with insufficient comments for each. They are grouped together into one feature, "battery".
     Frequent and infrequent features:
     - frequent features (commented on by many users)
     - infrequent features
  25. Step 1: Mining Product Features
     1. Part-of-speech tagging - in this work, features are nouns and noun phrases (which is insufficient!).
     2. Frequent feature generation (unsupervised):
        - association mining to generate candidate features
        - feature pruning
     3. Infrequent feature generation:
        - opinion word extraction
        - finding infrequent features using opinion words
  26. Part-of-Speech Tagging
     - Segment the review text into sentences.
     - Generate POS tags for each word.
     - Syntactic chunking recognizes the boundaries of noun groups and verb groups.
     <S> <NG> <W C='PRP' L='SS' T='w' S='Y'> I </W> </NG> <VG> <W C='VBP'> am </W> <W C='RB'> absolutely </W> </VG> <W C='IN'> in </W> <NG> <W C='NN'> awe </W> </NG> <W C='IN'> of </W> <NG> <W C='DT'> this </W> <W C='NN'> camera </W> </NG> <W C='.'> . </W> </S>
  27. Frequent Feature Identification
     Frequent features: those talked about by many customers. Use association (frequent itemset) mining.
     Why association mining?
     - Different reviewers tell different stories (irrelevant text).
     - When people discuss product features, they use similar words.
     - Association mining finds frequent phrases.
     Let I = {i1, ..., in} be a set of items and D a set of transactions, each consisting of a subset of items in I. An association rule is an implication of the form X → Y, where X ⊂ I, Y ⊂ I, and X ∩ Y = ∅. The rule X → Y holds in D with confidence c if c% of the transactions in D that support X also support Y. The rule has support s in D if s% of the transactions in D contain X ∪ Y.
     Note: only nouns/noun groups are used to generate frequent itemsets (features).
     Some example rules:
     - <N1>, <N2> → [feature]
     - <V>, easy, to → [feature]
     - <N1> → [feature], <N2>
     - <N1>, [feature] → <N2>
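Treating each sentence's nouns as a transaction, candidate frequent features fall out of itemset counting. A minimal sketch limited to 1- and 2-itemsets (a real system would use a full Apriori implementation; the example sentences are made up):

```python
from itertools import combinations
from collections import Counter

# Count 1- and 2-itemsets over noun "transactions" (one per sentence)
# and keep those meeting the minimum support threshold.
def frequent_itemsets(transactions, min_support):
    counts = Counter()
    for t in transactions:
        items = set(t)
        for k in (1, 2):
            for combo in combinations(sorted(items), k):
                counts[combo] += 1
    n = len(transactions)
    return {c: s / n for c, s in counts.items() if s / n >= min_support}

sentences = [["picture", "battery"], ["picture", "zoom"],
             ["picture"], ["battery", "life"]]
freq = frequent_itemsets(sentences, min_support=0.5)
print(sorted(freq.items()))  # [(('battery',), 0.5), (('picture',), 0.75)]
```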
  28. Generating Extraction Patterns
     Rule generation:
     - <NN>, <JJ> → [feature]
     - <VB>, easy, to → [feature]
     Considering word sequence:
     - <JJ>, <NN> → [feature]
     - <NN>, <JJ> → [feature] (pruned, low support/confidence)
     - easy, to, <VB> → [feature]
     Generating language patterns, e.g., from
     - <JJ>, <NN> → [feature]
     - easy, to, <VB> → [feature]
     to
     - <JJ> <NN> [feature]
     - easy to <VB> [feature]
  29. Feature Extraction Using Language Patterns
     - Length relaxation: a language pattern does not need to match a sentence segment of the same length as the pattern. For example, the pattern "<NN1> [feature] <NN2>" can match the segment "size of printout".
     - Ranking of patterns: if a sentence segment satisfies multiple patterns, use the pattern with the highest confidence.
     - If no pattern applies: use nouns or noun phrases.
  30. Feature Refinement
     Correct some mistakes made during extraction. Two main cases:
     - Feature conflict: two or more candidate features in one sentence segment.
     - Missed feature: there is a more likely feature in the sentence segment, but it was not extracted by any pattern.
     E.g., "slight hum from subwoofer when not in use." ("hum" was found to be the feature.)
     What is the true feature, "hum" or "subwoofer", and how does the system know? Use the candidate feature "subwoofer", as it appears elsewhere:
     - "subwoofer annoys people"
     - "subwoofer is bulky"
     while "hum" is not used in other reviews.
     An iterative algorithm can deal with this problem by remembering occurrence counts.
  31. Feature Pruning
     Not all candidate frequent features generated by association mining are genuine features.
     Compactness pruning: remove non-compact feature phrases. For "digital camera":
     - "I had searched a digital camera for months." -- compact
     - "This is the best digital camera on the market." -- compact
     - "This camera does not have a digital zoom." -- not compact
     A feature phrase that is compact in at least two sentences is a compact feature phrase, so "digital camera" is one.
     p-support (pure support):
     - manual (sup = 12), manual mode (sup = 5): p-support of manual = 7
     - life (sup = 5), battery life (sup = 4): p-support of life = 1
     Set a minimum p-support value to do pruning: with minimum p-support 4, "life" is pruned while "manual" is not.
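The p-support arithmetic above can be sketched as raw support minus the support of superset phrases (a simplification; the original definition counts sentences where the feature appears outside any superset phrase, which coincides with this subtraction when each sentence contains at most one of the phrases):

```python
# p-support sketch: raw support minus the support of superset phrases.
def p_support(feature, support, supersets):
    """support: raw support counts; supersets: phrases containing feature."""
    return support[feature] - sum(support[s] for s in supersets)

support = {"manual": 12, "manual mode": 5, "life": 5, "battery life": 4}
print(p_support("manual", support, ["manual mode"]))  # 7
print(p_support("life", support, ["battery life"]))   # 1

MIN_P_SUPPORT = 4  # "life" (1) is pruned, "manual" (7) is kept
```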
  32. Infrequent Feature Generation
     How do we find the infrequent features?
     Observation: one opinion word can be used to describe different objects.
     - "The pictures are absolutely amazing."
     - "The software that comes with it is amazing."
     Hence, nouns/noun phrases appearing near known opinion words can be extracted as infrequent features.
  33. Step 2: Identify the Orientation of an Opinion Sentence
     Use the dominant orientation of the opinion words (e.g., adjectives) as the sentence orientation.
     The semantic orientation of an adjective:
     - positive orientation: desirable states (e.g., beautiful, awesome)
     - negative orientation: undesirable states (e.g., disappointing)
     - no orientation, e.g., external, digital
     Use a seed set to grow a set of positive and negative words using WordNet synonyms and antonyms.
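The seed-growing idea can be sketched as a breadth-first walk over synonym and antonym links: synonyms inherit the orientation, antonyms flip it. WordNet is assumed in the original; the tiny `SYN`/`ANT` dictionaries here are toy stand-ins for its links.

```python
from collections import deque

# Toy synonym/antonym links standing in for WordNet (assumption).
SYN = {"good": ["great"], "great": ["awesome"], "bad": ["poor"]}
ANT = {"good": ["bad"]}

def grow(pos_seeds, neg_seeds):
    pos, neg = set(pos_seeds), set(neg_seeds)
    queue = deque([(w, "+") for w in pos] + [(w, "-") for w in neg])
    while queue:
        w, sign = queue.popleft()
        same, opp = (pos, neg) if sign == "+" else (neg, pos)
        for s in SYN.get(w, []):       # synonyms share orientation
            if s not in same:
                same.add(s)
                queue.append((s, sign))
        for a in ANT.get(w, []):       # antonyms flip orientation
            if a not in opp:
                opp.add(a)
                queue.append((a, "-" if sign == "+" else "+"))
    return pos, neg

pos, neg = grow(["good"], [])
print(sorted(pos), sorted(neg))  # ['awesome', 'good', 'great'] ['bad', 'poor']
```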
  34. Feature Extraction Evaluation
     Let n be the total number of reviews of a particular product, ECi the number of correctly extracted features from review i, Ci the number of actual features in review i, and Ei the number of extracted features from review i. Then
     Recall = (1/n) Σi ECi / Ci
     Precision = (1/n) Σi ECi / Ei
     Opinion sentence extraction (avg): recall 69.3%, precision 64.2%.
     Opinion orientation accuracy: 84.2%.
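The averaged per-review recall/precision above can be computed directly; the per-review counts in this sketch are made-up illustration numbers, not results from the paper.

```python
# Averaged per-review recall and precision from (EC_i, C_i, E_i) tuples.
def avg_recall_precision(reviews):
    n = len(reviews)
    recall = sum(ec / c for ec, c, _ in reviews) / n
    precision = sum(ec / e for ec, _, e in reviews) / n
    return recall, precision

r, p = avg_recall_precision([(4, 5, 8), (3, 6, 4)])
print(round(r, 3), round(p, 3))  # 0.65 0.625
```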
  35. Summary
     Automatic opinion analysis has many applications. Some techniques have been proposed, but the current work is still preliminary:
     - Other supervised or unsupervised learning methods should be tried.
     - Additional NLP is likely to help.
     Much future work is needed: accuracy is not yet good enough for industrial use, especially for reviews in full sentences. Analyzing blogspace is also a promising direction (Gruhl et al., WWW-04). Trust and distrust on the Web is an important issue too (Guha et al., WWW-04).
  36. Partition authors into opposite camps within a given topic in the context of newsgroups, based on their social behavior (Agrawal et al., WWW-03).
     - A typical newsgroup posting consists of one or more quoted lines from another posting, followed by the opinion of the author.
     - This social behavior gives rise to a network in which the vertices are individuals and the links represent "responded-to" relationships.
     - An interesting characteristic of many newsgroups is that people respond to a message more often when they disagree than when they agree.
     - This behavior is the opposite of the WWW link graph, where linkage is an indicator of agreement or common interest.
  37. Interactions between individuals have two components:
     - the content of the interaction (text)
     - the choice of person an individual interacts with (link)
     The structure of newsgroup postings:
     - Newsgroup postings tend to be largely "discussion" oriented.
     - A newsgroup discussion on a topic typically consists of some seed postings and a large number of additional postings that are responses to a seed posting, or responses to responses.
     - Responses typically quote explicit passages from earlier postings.
  38. A "social network" between individuals participating in the newsgroup can be generated.
     Definition 1 (Quotation Link): there is a quotation link between person i and person j if i has quoted from an earlier posting written by j.
     Characteristics of quotation links:
     - They are created without mutual concurrence: the person quoting the text does not need the permission of the author.
     - In many newsgroups, quotation links are usually "antagonistic": it is more likely that a quotation is made by a person challenging or rebutting the quoted posting than by someone supporting it.
  39. Consider a graph G(V, E) where the vertex set V has one vertex per participant in the newsgroup discussion, so the total number of vertices equals the number of distinct participants. An edge e = (v1, v2) ∈ E, vi ∈ V, indicates that person v1 has responded to a posting by person v2.
     Unconstrained graph partitioning - optimum partitioning:
     Consider any bipartition of the vertices into two sets F and A, representing those for and those against an issue. We assume F and A are disjoint and complementary, i.e., F ∪ A = V and F ∩ A = ∅. Such a pair of sets can be associated with the cut function f(F, A) = |E ∩ (F × A)|, the number of edges crossing from F to A. If most edges in a newsgroup graph G represent disagreements, then the following holds:
     Proposition 1: the optimum choice of F and A maximizes f(F, A). This problem is known as maximum cut.
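The cut function is a direct count over the edge list. A minimal sketch on a toy "responded-to" graph (the edges and partition are made up for illustration):

```python
# f(F, A) = |E ∩ (F × A)|: the number of edges crossing from F to A.
def cut(edges, F, A):
    return sum(1 for u, v in edges if u in F and v in A)

edges = [("a", "b"), ("c", "b"), ("a", "d"), ("c", "a")]
print(cut(edges, F={"a", "c"}, A={"b", "d"}))  # 3
```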
  40. Consider the co-citation matrix of the graph G. This graph, D = G G^T, is a graph on the same set of vertices as G. There is a weighted edge e = (u1, u2) in D of weight w if and only if there are exactly w vertices v1, ..., vw such that each edge (u1, vi) and (u2, vi) is in G. In other words, w measures the number of people that u1 and u2 have both responded to.
     Observation 1 (EV algorithm): the second eigenvector of D = G G^T is a good approximation of the desired bipartition of G.
     Observation 2 (EV+KL algorithm): the Kernighan-Lin heuristic on top of spectral partitioning can improve the quality of the partitioning.
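The co-citation weights of D = G G^T can be built directly from the directed edge list: w(u1, u2) is the number of common response targets. A sketch on a toy graph (edge data is made up; the spectral step itself is omitted):

```python
from itertools import combinations
from collections import defaultdict

# Build co-citation weights: w(u1, u2) = |targets(u1) ∩ targets(u2)|.
def cocitation(edges):
    targets = defaultdict(set)
    for u, v in edges:
        targets[u].add(v)          # u responded to v
    weights = {}
    for u1, u2 in combinations(sorted(targets), 2):
        w = len(targets[u1] & targets[u2])
        if w:
            weights[(u1, u2)] = w
    return weights

edges = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y"), ("c", "y")]
print(cocitation(edges))  # {('a', 'b'): 2, ('a', 'c'): 1, ('b', 'c'): 1}
```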
  41. Experimental Data
     - Abortion: 13,642 postings in talk.abortion that contain the words "Roe" and "Wade".
     - Gun control: 12,029 postings in talk.politics.guns that include the words "gun", "control", and "opinion".
     - Immigration: 10,285 postings in alt.politics.immigration that include the word "jobs".
  42. Constrained Graph Partitioning
