The sites in the top-sites lists are ordered by their 1-month Alexa traffic rank. The 1-month rank is calculated from a combination of average daily visitors and pageviews over the past month. The site with the highest combination of visitors and pageviews is ranked #1.
Facebook – updated March 2011; Twitter – June 2010; YouTube – May 2010
Note that table entries can also be filled in using implicit feedback
Outside the box – similar users can recommend surprising items, in contrast to content-based approaches, which are limited to items that fit the user's profile
What a Difference a Group Makes: Web-Based Recommendations for Interrelated Users (Mason & Smith)
Explaining collaborative filtering
Tagsplanations: Explaining Recommendations Using Tags (Vig, IUI 2009)
User Evaluation Framework of Recommender Systems, Chen (SRS 2010)
Social Recommender Systems – Ido Guy, David Carmel, IBM Research – Haifa, Israel. WWW 2011, March 28th–April 1st, Hyderabad, India
Recommender Systems are an augmentation of the social process in which we rely on advice or suggestions from other people [Resnick97]
Any CF system has social characteristics. However, here we focus on the social media domain
Social Media and Recommender Systems can mutually benefit each other:
Social media introduces new types of data and metadata that can be leveraged by RS (tags, comments, votes, explicit social relationships)
RS can significantly impact the success of social media by ensuring each user is presented with the most relevant items that suit her personal needs
Want to know what movie you should see this weekend? Ask Facebook. What should you eat for dinner tonight? Yelp knows! Should you wear the pink sweater or the blue one? You don’t have to decide – you can just ask Twitter!
“The Culture of Recommendation: Has It Gone Too Far?” Susan Murphy at SuzeMuse Blog, Feb 22nd, 2011
Fundamental Recommendation Approaches
Collaborative filtering: aggregate ratings of objects from users and generate recommendations based on inter-user similarity
Demographic: categorize users based on personal attributes (age, gender, income, …) and make recommendations based on demographic classes
Content-based: a user profile is constructed from the features of the items the user has rated/consumed; this profile is used to identify new items of interest that match the user's profile
Knowledge-based: compute the utility of each item with respect to the user and the user's needs
Hybrid: combine several approaches together
Recommendation techniques (Burke, 2002):

Technique        Background                            Input                             Process
CF               User-item matrix                      Ratings from u to items           Identify similar users, extrapolate from their ratings
Demographic      Demographic information about users   Demographic information about u   Identify similar users, extrapolate from their ratings
Knowledge-based  Features of items                     User needs                        Infer a match between items and u's needs
Content-based    Features of items                     Ratings from u to items           Generate a classifier based on u's ratings, use it to classify new items
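As a rough sketch of the content-based approach described above — all item features, names, and the simple overlap scoring below are invented for illustration, not taken from the tutorial:

```python
# Content-based sketch: build a user profile from the features of
# items the user liked/disliked, then score a new item by how well
# its features overlap with that profile.
liked = [{"comedy", "animation"}, {"comedy", "family"}]
disliked = [{"superhero", "action"}]

profile = {}  # feature -> weight learned from the user's history
for features in liked:
    for f in features:
        profile[f] = profile.get(f, 0) + 1
for features in disliked:
    for f in features:
        profile[f] = profile.get(f, 0) - 1

def score(item_features):
    """Higher score = better match with the user's profile."""
    return sum(profile.get(f, 0) for f in item_features)

print(score({"comedy", "animation"}))  # positive: matches the profile
print(score({"superhero", "action"}))  # negative: conflicts with it
```

A real system would use many more features (genres, actors, text terms) and a learned classifier, but the profile-then-match structure is the same.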
Shall we recommend Superman to Jon?

         Shrek   Snow-white   Superman
Jon      Like    Like         ?
Alice    Like    Like         Dislike
Bob      ?       Dislike      Like
Chris    Like    Like         Dislike

Jon’s taste is similar to both Chris’s and Alice’s tastes, and both dislike Superman → do not recommend Superman to Jon.
Baatarjav, E.A., Phithakkitnukoon S., & Dantu R. Group recommendation system for Facebook. Proc. OTM ’08 , 211-219.
Chen W.Y., Chu J.C., Luan J., Bai H., Wang, Y., & Chang E.Y. Collaborative filtering for Orkut communities: discovery of user latent behavior. Proc. WWW '09 , 681-690.
Official LinkedIn Blog – The engineering behind LinkedIn’s “Products You May Like”: http://blog.linkedin.com/2011/03/02/linkedin-products-you-may-like/
Saha & Getoor. Group Proximity Measure for Recommending Groups in Online Social Networks. Proc. SNA-KDD ’08 .
Spertus, E., Sahami, M., & Buyukkokten O. Evaluating similarity measures: a large-scale study in the Orkut social network. Proc. KDD '05 , 678-684.
Zheng N., Li Q., Liao S., & Zhang L. Flickr group recommendation based on tensor decomposition. Proc. SIGIR '10 , 737-738.
Recommendations for Groups
Recommendations for a group rather than for individuals
Friends collaborating with a recommender system to design the perfect vacation
Family selecting a movie or TV show to watch together
Group of colleagues choosing a restaurant for an evening out
Or looking for a recipe for a joint meal
Issues in recommendation for groups:

Phase of recommendation                                  Differences from individuals                        General issues
Members specify their preferences                        Members can examine each other                      Benefits/drawbacks for the group/system
System generates recommendations                         Aggregating preferences/results must be applied     Aggregation methods
Members decide which recommendation (if any) to accept   Negotiation may be required                         How to support the process of arriving at a final decision
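The aggregation-methods issue admits several strategies; a minimal sketch of two common ones — averaging and least misery — with invented movie names and ratings:

```python
# Two common preference-aggregation strategies for group
# recommendation; movie names and ratings are invented.
group_ratings = {
    "Movie A": [4, 4, 4],  # everyone moderately happy
    "Movie B": [5, 5, 3],  # two fans, one lukewarm member
}

def average(scores):
    """Maximize average satisfaction across the group."""
    return sum(scores) / len(scores)

def least_misery(scores):
    """The group is only as happy as its least happy member."""
    return min(scores)

# Averaging prefers Movie B (4.33 vs 4.0);
# least misery prefers Movie A (4 vs 3).
for item, scores in group_ratings.items():
    print(item, round(average(scores), 2), least_misery(scores))
```

The two strategies can rank the same items differently, which is exactly why choosing an aggregation method is a design decision for group recommenders.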
Explicit specification of preferences: MusicFX (McCarthy, CSCW 2000)
Rating scale: 1 = Hate … 3 = Don’t mind … 5 = Love
  Alternative rock: 1 2 3 4 5
  Hot country:      1 2 3 4 5
  50’s oldies:      1 2 3 4 5
  …90 more genres
Collaborative specification of preferences in the Travel Decision Forum [Jameson 04]
Use the distrust set to filter out ‘unwanted’ individuals from collaborative recommendation processes
Ignore users from the distrust set R_Dist in the CF process
Distrust as an indicator to reverse deviations
consider distrust scores as negative weights
Reduce the recommendation score of items recommended by distrusted neighbors
Using distrust for recommendation is still an open challenge
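A sketch of the two strategies above — filtering the distrust set out of the neighborhood versus treating distrust as a negative weight that reduces the score of items distrusted neighbors recommend. The ratings, trust scores, and user names are invented for illustration:

```python
# Two ways to use distrust in a weighted CF prediction.
ratings = {"ann": 5, "bob": 4, "carl": 5}        # neighbors' ratings of one item
trust = {"ann": 0.9, "bob": 0.7, "carl": 0.8}    # trust in each neighbor
distrusted = {"carl"}                            # the distrust set R_Dist

def predict_filtering(ratings, trust, distrusted):
    """Strategy 1: simply ignore users in the distrust set."""
    num = sum(trust[u] * r for u, r in ratings.items() if u not in distrusted)
    den = sum(trust[u] for u in ratings if u not in distrusted)
    return num / den

def predict_negative_weights(ratings, weights):
    """Strategy 2: distrusted neighbors get negative weights,
    pulling the score away from the items they recommend."""
    num = sum(w * ratings[u] for u, w in weights.items())
    den = sum(abs(w) for w in weights.values())
    return num / den

# Distrust expressed as a negative weight for carl.
weights = {"ann": 0.9, "bob": 0.7, "carl": -0.8}

print(predict_filtering(ratings, trust, distrusted))   # carl simply ignored
print(predict_negative_weights(ratings, weights))      # carl's praise lowers the score
```

Because carl (distrusted) rates the item highly, filtering leaves the score high, while the negative-weight variant actively pushes it down — illustrating why the two strategies can behave very differently.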
Trust in recommendation (via explanations): good explanations can help inspire user trust and loyalty, increase satisfaction, make it quicker and easier for users to find what they want, and persuade them to try or purchase a recommended item. MoviExplain: A Recommender System with Explanations (Symeonidis, 2009)
Open competition for the best collaborative filtering algorithm to predict user ratings for films
based on previous movie ratings (1-5 scale)
Netflix provided a training data set of ratings by 480,189 users for 17,770 movies
Contestants were judged according to their predictions for 3 million withheld ratings
The $1 million prize was awarded to the team that achieved a 10 percent improvement over the accuracy of Netflix’s own movie recommendation system
Evaluating prediction accuracy – the user’s opinions/ratings over items. Root Mean Squared Error (RMSE) is the most popular metric for evaluating the accuracy of predicted ratings. A popular alternative is Mean Absolute Error (MAE).
Both measure the average error of the system predictions
RMSE disproportionately penalizes large errors, compared to MAE
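Both metrics can be computed in a few lines; the toy data below (invented) shows how a single large error inflates RMSE relative to MAE:

```python
# RMSE and MAE over a handful of (predicted, actual) rating pairs.
from math import sqrt

actual = [4, 3, 5, 2]
predicted = [3.5, 3, 4, 4]  # the last prediction is off by 2

errors = [p - a for p, a in zip(predicted, actual)]
rmse = sqrt(sum(e * e for e in errors) / len(errors))
mae = sum(abs(e) for e in errors) / len(errors)

# Squaring makes the single large error dominate RMSE but not MAE.
print(rmse, mae)
```

Here RMSE ≈ 1.15 versus MAE = 0.875: the same four errors, but the one prediction that is off by 2 points pulls RMSE up much more than MAE.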