1. Features
22 features were chosen based on their presence in product descriptions and their frequency in consumer reviews; ‘price’ and ‘screen’ were the most frequently mentioned
The Customer is Always Right:
Analyzing Existing Market Feedback to Improve TVs
Jose Valderrama¹, Laurel Rawley², Simon Smith³ Research Advisor: Mark Whiting⁴
¹Jose Valderrama, University of Central Florida
josevalderrama@knights.ucf.edu
Abstract
The growth of user-generated content online is
evident in social media and product review sites.
Analyzing the reviews and ratings on these sites
can reveal new information about a marketplace.
Our method of analysis suggests improvements for
existing products and exposes openings for new
products with unique feature sets
Selected reviews (excerpts):
“coby is a really awful brand and their stuff breaks literally all the time” - 0.0
sentiment score
“The audio isn't bad it just isn't amazing” - 2.375 sentiment score
“The quality of the screen is very crisp and clear; tinkering with the settings will
make the picture as clear as you want it...good blacks and crisp whites.” - 2.984
sentiment score
“I have to say the picture quality is definetly [sic] good for the price paid.” - 4.25
sentiment score
“Added to the manufacturer's 2-year warranty that gives you a 3-year warranty on
your TV (a great peace of mind).” - 4.5 sentiment score
Background
Online reviews are a valuable source of user-generated content
Consumers pay more for higher-rated products
24% of Internet users report consulting online reviews
prior to paying for a service delivered offline
Results
Correlation between screen sentiment score and average star rating increases with
screen size
Consumers are neutral about brands initially, but express preference for brands with
a longer lifecycle
Consumers respond positively to ‘price’ as a distinct feature but give an overall
neutral star rating
Methods
Specific set of products and features chosen for
analysis: midrange TVs from Amazon.com
Specs: 33” - 43” screen, price < $500
1,000 reviews collected across 25 TVs
Reviews extracted using import.io
Reviews and product data parsed using NLTK
Sentiment analysis performed using TextBlob
Sentiment analysis scores were converted to a 5-point
scale so they could be compared with star ratings
Higher scores indicate positive favorability; lower
scores indicate negative favorability
Sentiment analysis was performed for individual
features by focusing only on specific sentences that
contained those features
Using these results, favorability for each feature
could be represented
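The per-feature pipeline above can be sketched as follows. This is a minimal stand-in using only the standard library: a naive regex splitter replaces NLTK's sentence tokenizer, and a tiny illustrative lexicon replaces TextBlob's polarity model (which returns values in [-1.0, 1.0]); the feature names and review text here are invented for illustration.

```python
import re

# Tiny illustrative lexicon standing in for TextBlob's polarity model,
# which scores text in [-1.0, 1.0] via TextBlob(text).sentiment.polarity.
LEXICON = {"awful": -1.0, "bad": -0.5, "good": 0.5,
           "crisp": 0.7, "clear": 0.6, "great": 0.8}

def polarity(sentence):
    """Average lexicon score of known words; 0.0 (neutral) if none match."""
    words = re.findall(r"[a-z']+", sentence.lower())
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def to_five_point(p):
    """Map polarity in [-1, 1] onto a 0-5 scale comparable to star ratings."""
    return (p + 1.0) / 2.0 * 5.0

def feature_scores(reviews, features):
    """Score each feature using only the sentences that mention it."""
    scores = {f: [] for f in features}
    for review in reviews:
        # Naive splitter standing in for nltk.tokenize.sent_tokenize.
        for sentence in re.split(r"(?<=[.!?])\s+", review):
            for f in features:
                if f in sentence.lower():
                    scores[f].append(to_five_point(polarity(sentence)))
    return {f: sum(v) / len(v) for f, v in scores.items() if v}

reviews = [
    "The screen is crisp and clear. The price was great.",
    "Awful screen. But a good price for the size!",
]
print(feature_scores(reviews, ["screen", "price"]))
```

Note that a fully negative sentence maps to 0.0 and a fully positive one to 5.0, consistent with the range of sentiment scores shown in the review excerpts.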
Acknowledgements
Thank you to our advisor Mark Whiting for all of his
help, support, and advice
Thank you to Mike Depew, Courtney Loder, Lauren
Kilgour, and everyone at the i3 Program
²Laurel Rawley, University of Houston
llrawley@uh.edu
³Simon Smith, University of Wisconsin-Madison
spsmith5@wisc.edu
⁴Mark Whiting, Carnegie Mellon University
mwhiting@andrew.cmu.edu
Future research
Gathering a larger data set or applying these
methods to a different product category may reveal
new patterns
More complex models may construct an optimal set
of features for the best possible market performance
Figure: feature covariance matrix. Covariance of features shows whether they are
positively or negatively correlated; each square represents the covariance between
two features, on a color scale from -1.0 to 1.0
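A matrix entry like the ones described above can be computed as follows. Since the scale runs from -1.0 to 1.0, this sketch uses covariance normalized by the standard deviations (Pearson correlation); the per-product feature scores are made up for illustration.

```python
import math

def pearson(xs, ys):
    """Covariance of xs and ys, normalized to [-1.0, 1.0] (Pearson r)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

# Hypothetical per-product sentiment scores for two features.
screen = [4.1, 3.2, 2.5, 4.8, 3.9]
price = [3.8, 3.0, 2.9, 4.5, 3.6]

r = pearson(screen, price)  # one square of the matrix
```

Computing `pearson` for every pair of the 22 features yields the full matrix; a value near 1.0 means consumers tend to rate the two features together, near -1.0 means they trade off.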