Collaborative Filtering


Collaborative Filtering, Web 2.0 and developments regarding the new media.

Published in: Technology, Education


  1. Collaborative Filtering
     • Tayfun Şen
     • 18 December 2006
     • You can reach the author at: stayfun{at}
  2. What is the problem?
     • In a nutshell: life is too short!
     • We don't have time to watch all the movies, listen to all the music, read every book, and so on.
  3. • There is an overwhelming quantity of information on the web.
     • We all ask our friends for recommendations.
     • We read newspapers and websites, and watch TV, to form our own opinions.
     • We want to be sure that the activities we spend our time on are worthwhile.
     • We take into account recommendations made by people we trust.
  4. Time Person of the Year 2006: You? Yes, you. You control the Information Age. Welcome to your world.
  5. From Time (25 Dec. 2006 edition): “It's a story about community and collaboration on a scale never seen before. It's about the cosmic compendium of knowledge Wikipedia and the million-channel people's network YouTube and the online metropolis MySpace. It's about the many wresting power from the few and helping one another for nothing and how that will not only change the world, but also change the way the world changes.”
  6. Futurism
     • The Semantic Web?
     • In his seminal paper in Scientific American [1], Tim Berners-Lee, creator of the WWW, describes the semantic web. Adding meaning to the Internet looks like a groundbreaking idea, but when will it be implemented?
     • It requires standard ontologies, mappings between them, and some form of acceptance by the web community.
     • Perhaps 10-15 years away?
     • Collaborative filtering saves the day.
  7. Implications of Recommendation on the Internet
     • There are basically two types of filtering techniques in use on the Internet today:
       • Content-based filtering
       • Collaborative filtering
  8. Examples on the Internet
     • Netflix, Amazon, ...
     • It is natural for Web 2.0 too: Digg, Flickr, StumbleUpon, etc.
     • All these websites rely on their users' interaction to generate content relevant to each user. That's what Web 2.0 means: user interaction.
  9. Content-based algorithms
     • These rely on implicit data about the domain.
     • In a movie recommendation site, for example, this could be the director, movie length, PG rating, cast, etc.
     • For song recommendation it could be the release date, other albums/songs by the same group, or the genre (jazz, classical, rock, etc.).
     • This implicit data is used to generate recommendations.
     • For example: a user has rated Brad Pitt movies highly, so you recommend her Babel.
  10. Collaborative Filtering algorithms
      • In CF it is a little different: other users have an impact on the recommendations. Users generate recommendations implicitly.
      • Users similar to the active user (the user recommendations are prepared for) are found.
      • By weighting these users, a recommendation list is prepared from the other users' data.
  11. CF Example
      • It is found that many users who like Ayumi Hamasaki songs also like Ai Otsuka songs.
      • In that case, if the active user likes Ayumi Hamasaki but does not know Ai Otsuka, then Ai Otsuka is recommended to her.
  12. CF Example (continued)
      • In the movie domain there is a user-movie-rating table.
      • It is very sparse: for most user-movie pairs, no rating exists.
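Such a sparse rating table can be sketched in Python as a dict of dicts. This is a hypothetical toy dataset of my own, not from the slides; a real table has millions of mostly empty cells:

```python
# Hypothetical toy user-movie-rating table, stored sparsely as a dict of dicts.
# A missing entry means "not rated" -- in practice most cells are empty.
ratings = {
    "alice": {"Babel": 5, "Fight Club": 4},
    "bob":   {"Fight Club": 5, "Se7en": 4, "Babel": 2},
    "carol": {"Se7en": 3},
}

def density(table, all_items):
    """Fraction of (user, item) cells that actually hold a rating."""
    filled = sum(len(user_ratings) for user_ratings in table.values())
    return filled / (len(table) * len(all_items))

items = {movie for user_ratings in ratings.values() for movie in user_ratings}
print(round(density(ratings, items), 2))  # -> 0.67 (6 ratings in a 3x3 table)
```

Even this toy table is only two-thirds full; real systems sit far below one percent, which is why CF algorithms must cope with missing entries everywhere.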
  13. CF Algorithms
      • Two types of algorithms exist for CF:
        • Model based algorithms
        • Memory based algorithms
      • In model based algorithms, you create a model of the domain. Most of the work is done offline.
      • In memory based algorithms, you use the whole database when creating recommendations. Most of the work is done online.
  14. Model based CF
      • Model based algorithms are efficient (fast when recommending) and quite accurate (predictions are quite good).
      • But they rely on long offline computations, so they are harder to maintain and update. On the Internet, new users need to be added all the time, which is a setback for model based algorithms.
      • An example is Bayesian networks:
  15. Bayesian Networks
  16. Memory based CF
      • Many memory based CF algorithms exist; the best known, described by Herlocker [4], is the neighborhood based algorithm.
      • In neighborhood based algorithms, the users most similar to the active user are selected as that user's neighborhood.
      • Once the neighborhood is found, predictions are made using a weighted sum of the ratings by those neighboring users.
  17. Neighborhood based algorithms
      • To find the neighbors, several correlation methods can be used.
      • One such method is Pearson's correlation coefficient.
  18. Neighborhood based algorithms
      • w(a,u) = Σ_i (r_{a,i} − r̄_a)(r_{u,i} − r̄_u) / (n · σ_a · σ_u)
      • σ is the standard deviation; a is the subscript for the active user, u for the user considered as a neighbor.
      • After the similarity weights are found, one selects the most similar users and generates a prediction.
      • The neighborhood used in prediction can be selected in several ways:
        • Top-n method
        • Thresholding method
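The Pearson similarity weight between the active user and a candidate neighbor, computed over co-rated items only, can be sketched in Python (the function name and toy ratings are illustrative assumptions, not from the slides):

```python
from math import sqrt

def pearson(ra, ru):
    """Pearson correlation between two users' rating dicts,
    computed only over the items both users have rated."""
    common = list(set(ra) & set(ru))   # co-rated items
    if len(common) < 2:
        return 0.0                     # not enough overlap to correlate
    xs = [ra[i] for i in common]
    ys = [ru[i] for i in common]
    mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y) if sd_x and sd_y else 0.0

# Perfectly agreeing tastes give weight 1; opposite tastes give -1.
print(round(pearson({"a": 1, "b": 2, "c": 3}, {"a": 2, "b": 4, "c": 6}), 2))  # -> 1.0
```

Restricting the sum to co-rated items is what makes this workable on a sparse table; users with fewer than two items in common simply get weight 0.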
  19. Neighborhood based algorithms
      • After selecting the neighbors to be considered, you weight these users and generate a prediction.
      • Z-scores are used to normalize the ratings.
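A minimal sketch of that z-score weighted prediction step, assuming the standard form in which each neighbor's rating is normalized by that neighbor's mean and standard deviation and the result is mapped back onto the active user's scale (all names and the toy data here are my own):

```python
from math import sqrt

def mean_std(values):
    """Mean and (population) standard deviation of a rating list."""
    m = sum(values) / len(values)
    return m, sqrt(sum((v - m) ** 2 for v in values) / len(values))

def predict(active_ratings, item, neighbors):
    """Z-score weighted prediction of `item` for the active user.
    `neighbors` is a list of (similarity_weight, ratings_dict) pairs."""
    mean_a, std_a = mean_std(list(active_ratings.values()))
    num = den = 0.0
    for weight, ru in neighbors:
        if item not in ru:
            continue                       # neighbor has not rated this item
        mean_u, std_u = mean_std(list(ru.values()))
        if std_u == 0:
            continue                       # z-score undefined for flat raters
        num += weight * (ru[item] - mean_u) / std_u   # neighbor's z-score
        den += abs(weight)
    if den == 0:
        return mean_a                      # no usable neighbors: fall back
    return mean_a + std_a * num / den      # back to the active user's scale
```

Normalizing by each user's own mean and spread compensates for raters who are systematically generous or harsh before their opinions are averaged.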
  20. Cluster based algorithms
      • The naive neighborhood based algorithm is computationally too complex: it is O(mn), where m and n are the numbers of items and users respectively.
      • In the clustering approach, with a constant number of clusters, the complexity is O(m).
      • It is also easier to compute predictions for new users.
      • Details follow.
  21. Cluster based algorithms
      • Users are members of clusters.
      • Clusters can be formed using many different algorithms, described in detail in the review by Jain et al., "Data Clustering: A Review" [7].
      • The goal is to group similar users together and use these clusters to choose the neighborhood of the active user. Very efficient, scalable, and easy to update.
      • If the number of clusters equals the number of users n, it degrades into the neighborhood based algorithm.
      • There are accuracy considerations.
  22. Cluster based algorithms
      • If you choose a small number of clusters, your predictions get worse.
      • There is a trade-off between speed and accuracy.
      • The best cluster size and number are determined empirically.
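The core saving of the cluster based approach is that the active user is compared against a constant number of cluster centroids instead of all n users. A minimal sketch of that assignment step, assuming precomputed centroids and unrated items treated as 0 (the function names and toy data are hypothetical):

```python
def assign_to_clusters(users, centroids, item_list):
    """Assign each user to the nearest centroid (squared Euclidean distance
    over a fixed item list, unrated items treated as 0)."""
    def as_vector(user_ratings):
        return [user_ratings.get(item, 0) for item in item_list]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    clusters = {c: [] for c in range(len(centroids))}
    for name, user_ratings in users.items():
        vec = as_vector(user_ratings)
        nearest = min(range(len(centroids)), key=lambda c: dist2(vec, centroids[c]))
        clusters[nearest].append(name)
    return clusters

users = {"u1": {"a": 5}, "u2": {"a": 4}, "u3": {"b": 5}}
print(assign_to_clusters(users, [[5, 0], [0, 5]], ["a", "b"]))
# -> {0: ['u1', 'u2'], 1: ['u3']}
```

With a fixed number of centroids this lookup is independent of n, which is where the O(m) per-prediction cost mentioned on slide 20 comes from; the centroids themselves are refreshed offline.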
  23. CF Metrics
      • The two main metrics for CF algorithms are accuracy and complexity.
      • For accuracy, the mean absolute error (MAE) is frequently used: the absolute errors of the predictions are averaged.
      • For complexity, one can use big-O analysis.
      • Other qualities also matter for predictions: coverage, novelty and serendipity, confidence, and user feedback.
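MAE as described above fits in a few lines of Python (the held-out predicted/actual rating lists here are illustrative):

```python
def mae(predicted, actual):
    """Mean absolute error: average absolute gap between
    predicted and actual ratings on a held-out test set."""
    assert len(predicted) == len(actual)
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

print(mae([4.5, 3.0, 2.0], [5, 3, 1]))  # -> 0.5
```

A lower MAE means the recommender's rating predictions sit closer, on average, to what users actually gave.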
  24. CF Metrics
      • Coverage is the percentage of items for which the system is able to make a prediction.
      • Novelty and serendipity refer to how novel (and pleasantly surprising) the recommendations are.
      • Confidence measures how sure the system is when making a recommendation.
      • User feedback is important for fine-tuning the system, so it should also be used.
  25. Conclusion
      • CF is already in use on the Internet, although its history dates back only several years.
      • It still has development potential.
      • It offers great improvements to user enjoyment.
      • Thanks for your attention. Any questions?
  26. References
      [1] Tim Berners-Lee, James Hendler, Ora Lassila. The Semantic Web. Scientific American, May 2001.
      [2] For more information about Web 2.0, see the Wikipedia article.
      [3] Jon Herlocker, Joseph Konstan, John Riedl. An Empirical Analysis of Design Choices in Neighborhood-Based Collaborative Filtering Algorithms. Information Retrieval, 2002.
      [4] Jon Herlocker, Joseph A. Konstan, Al Borchers, John Riedl. An Algorithmic Framework for Performing Collaborative Filtering. SIGIR '99.
      [5] K. Goldberg, T. Roeder, D. Gupta, C. Perkins. Eigentaste: A Constant Time Collaborative Filtering Algorithm.
      [6] Al Mamunur Rashid, Shyong K. Lam, George Karypis, John Riedl. ClustKNN: A Highly Scalable Hybrid Model- & Memory-Based CF Algorithm. WebKDD '06, 2006.
      [7] A. K. Jain, M. N. Murty, P. J. Flynn. Data Clustering: A Review. ACM Computing Surveys, 1999.