
Diversity and Novelty for Recommender Systems

A simple survey of diversity and novelty metrics for recommender systems

  1. A simple survey of diversity and novelty metrics for recommender systems. Reporter: 孙建凯, 2012.07.11.
  2. Move beyond accuracy metrics: while the majority of algorithms proposed in the recommender systems literature have focused on improving recommendation accuracy, other important aspects of recommendation quality, such as the diversity of recommendations, have often been overlooked. The recommendations that are most accurate according to the standard metrics are sometimes not the ones that are most useful to users [1].
  3. Diversity and novelty. Being Accurate is Not Enough: How Accuracy Metrics Have Hurt Recommender Systems [GroupLens Research, CHI '06] [1].
  4. Accuracy does not tell the whole story.
  5. Diversity: individual diversity vs. aggregate diversity.
  6. Individual diversity: diversity difficulty [3], and the average dissimilarity between all pairs of items recommended to a given user (intra-list similarity) [2,4].
  7. Diversity difficulty. What We Talk About When We Talk About Diversity [DDR '12, Northeastern University, USA] [3]. The notion is analogous to query difficulty in IR: for a specific query and corpus, query difficulty measures how successful the average search engine should be at ad-hoc retrieval.
  8. Diversity difficulty is defined with respect to a query and a corpus. It describes diversity (the number of subtopics covered by a list) and novelty (which is inversely proportional to the number of times a list repeats a subtopic).
  9. Finding needles in the haystack. Imagine a query with 10 subtopics, 1,000 documents relevant to only the first subtopic, and each of the remaining subtopics covered by a single, unique document; any ranked list will struggle to cover the rare subtopics, so producing a diverse list is hard. On the other hand, if there are large numbers of documents relevant to multiple subtopics, it is easy to produce a diverse list.
  10. Diversity difficulty function. Two ingredients: d_max, the maximum amount of diversity achievable by any ranked list, and d_mean, the ease with which a system can produce a diverse ranked list. The two are combined with a harmonic function.
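      The slide only names a "harmonic function" of the two quantities; a natural reading (an assumption here, since the formula itself is not reproduced on the slide) is the harmonic mean, which is low whenever either ingredient is low:

          dd = \frac{2 \, d_{\max} \, d_{\mathrm{mean}}}{d_{\max} + d_{\mathrm{mean}}}

      Under this reading, a query is diversity-hard when little diversity is achievable at all (low d_max) or when diverse rankings are rare among plausible ones (low d_mean).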
  11. Examples.
  12. Improving Recommendation Lists Through Topic Diversification [4]. Introduces the intra-list similarity metric to assess the topical diversification of recommendation lists (based on the similarity between all pairs of items recommended to a given user), and a topic diversification approach for decreasing intra-list similarity.
  13. Intra-list similarity.
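      The metric from [4] sums the pairwise similarities c_o between all items on a user's list P_{w_i}, with each unordered pair counted once; higher scores denote lower diversity:

          ILS(P_{w_i}) = \frac{\sum_{b_k \in P_{w_i}} \sum_{b_e \in P_{w_i},\, b_e \neq b_k} c_o(b_k, b_e)}{2}

      A minimal Python sketch, assuming some symmetric item-item similarity function c is available (the function name and list representation are illustrative, not from the paper):

          def intra_list_similarity(items, c):
              """Sum of c(a, b) over all unordered pairs of recommended items.

              Higher values mean a more homogeneous (less diverse) list;
              c can be, e.g., the taxonomy-driven similarity of [5].
              """
              total = 0.0
              for idx, a in enumerate(items):
                  for b in items[idx + 1:]:
                      total += c(a, b)
              return total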
  14. Taxonomy-based similarity metrics: the similarity c is instantiated with the metric for taxonomy-driven filtering from [5].
  15. Topic diversification algorithm: a brief textual sketch.
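      The textual sketch in [4] re-ranks an accuracy-ordered candidate list greedily: at each position, every remaining candidate's relevance rank is blended with its dissimilarity rank (how dissimilar it is to the items already chosen) using a diversification factor Theta_F. A hedged Python sketch of that idea (the linear blend of the two rank positions is an assumption; the paper's exact merge may differ):

          def topic_diversification(candidates, c, theta, n):
              """Greedy topic diversification after Ziegler et al. [4] (sketch).

              candidates: items sorted by predicted relevance, best first.
              c(a, b):    item-item similarity; theta in [0, 1] is the
                          diversification factor (0 = pure accuracy ranking).
              """
              result = [candidates[0]]          # the top item is always kept
              while len(result) < n:
                  remaining = [b for b in candidates if b not in result]
                  # dissimilarity rank: least similar to the chosen set first
                  by_dissim = sorted(
                      remaining, key=lambda b: sum(c(b, a) for a in result))
                  best = min(
                      remaining,
                      key=lambda b: (1 - theta) * candidates.index(b)
                                    + theta * by_dissim.index(b))
                  result.append(best)
              return result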
  16. Experiments: the precision vs. diversity trade-off.
  17. Aggregate diversity. Improving Recommendation Diversity Using Ranking-Based Techniques [IEEE Transactions '12] [2]. Uses the total number of distinct items recommended across all users as an aggregate diversity measure, defined as follows:
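      The measure from [2], often called diversity-in-top-N, is the number of distinct items appearing in at least one user's top-N list:

          \text{diversity-in-top-}N = \Big| \bigcup_{u \in U} L_N(u) \Big|

      where L_N(u) is the list of N items recommended to user u. A system that recommends the same blockbusters to everyone scores close to N; one that covers the long tail scores close to the catalog size.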
  18. General overview of ranking-based approaches for improving diversity.
  19. Re-ranking approach.
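      In [2], the standard approach ranks by predicted rating; the re-ranking variants reorder items whose predicted rating clears a ranking threshold T_R by some other criterion, e.g. ascending popularity. A sketch under those assumptions (names and data layout here are illustrative):

          def popularity_reranking(preds, popularity, t_r, n):
              """Item-popularity re-ranking for one user, after [2] (sketch).

              preds:      item -> predicted rating for this user.
              popularity: item -> number of known ratings in the data.
              Items predicted at or above t_r are re-ranked by ascending
              popularity (long-tail first); the rest keep the standard
              accuracy-based order and fill up the list if needed.
              """
              above = sorted((i for i in preds if preds[i] >= t_r),
                             key=lambda i: popularity[i])
              below = sorted((i for i in preds if preds[i] < t_r),
                             key=lambda i: preds[i], reverse=True)
              return (above + below)[:n]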
  20.-25. Other re-ranking approaches (slides 20-25 each present a further ranking-function variant).
  26. Combining ranking approaches. There are many possible ways to combine several ranking functions; this paper uses a linear combination. Open issue: learning to rank (LETOR)? Neural networks?
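      Illustratively (the paper's exact parameterization is not reproduced on the slide), a linear combination assigns each item a weighted sum of the ranks it receives from the individual ranking functions:

          \mathrm{rank}_{\mathrm{combined}}(i) = \sum_{k} w_k \, \mathrm{rank}_k(i), \qquad w_k \ge 0, \; \sum_k w_k = 1

      Learning the weights w_k (e.g. with learning-to-rank methods or a neural network) is exactly the open issue raised above.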
  27. Entropy. A Study of Heterogeneity in Recommendations for a Social Music Service [6].
  28. Open issue: probability.
  29. Entropy: aggregate entropy and individual entropy. Open questions: item popularity or subtopic popularity? Entropy between lists?
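      A sketch of the aggregate variant, assuming the usual Shannon entropy over how often each item occurs across all users' lists (the slide does not reproduce the exact definitions from [6]; the individual variant applies the same formula within a single user's list):

          import math
          from collections import Counter

          def aggregate_entropy(all_lists):
              """Shannon entropy of the item distribution over all lists.

              Uniform exposure across the catalog maximizes entropy; a few
              dominant blockbusters drive it toward zero.
              """
              counts = Counter(item for lst in all_lists for item in lst)
              total = sum(counts.values())
              return -sum((c / total) * math.log2(c / total)
                          for c in counts.values())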
  30. Bipartite networks. Bipartite Network Projection and Personal Recommendation [Tao Zhou et al., Physical Review E] [7]; Solving the Apparent Diversity-Accuracy Dilemma of Recommender Systems [Tao Zhou et al.] [8].
  31. Illustration of the resource-allocation process in a bipartite network.
  32. Solving the apparent diversity-accuracy dilemma: heat spreading (HeatS) vs. probability spreading (ProbS).
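      In [8], both methods spread resource across the user-item bipartite graph. With a the user-item adjacency matrix, k_u a user's degree, and k_alpha an item's degree, the item-item transition matrices are

          \mathrm{ProbS:}\quad W^{P}_{\alpha\beta} = \frac{1}{k_\beta} \sum_{u=1}^{m} \frac{a_{\alpha u}\, a_{\beta u}}{k_u}
          \qquad
          \mathrm{HeatS:}\quad W^{H}_{\alpha\beta} = \frac{1}{k_\alpha} \sum_{u=1}^{m} \frac{a_{\alpha u}\, a_{\beta u}}{k_u}

      ProbS (mass diffusion) favors popular items and high accuracy; HeatS (heat diffusion) favors low-degree, niche items and high diversity.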
  33. Hybrid methods: a weighted hybrid of HeatS and ProbS.
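      The hybrid in [8] interpolates between the two normalizations with a single parameter lambda:

          W^{H+P}_{\alpha\beta} = \frac{1}{k_\alpha^{1-\lambda}\, k_\beta^{\lambda}} \sum_{u=1}^{m} \frac{a_{\alpha u}\, a_{\beta u}}{k_u}

      lambda = 1 recovers pure ProbS and lambda = 0 pure HeatS; intermediate values trade accuracy against diversity, and the paper's point is that a suitable lambda improves both at once.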
  34. Diversity measure.
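      [8] measures diversity across users (personalization) by the Hamming distance between two users' top-L lists:

          h_{ij}(L) = 1 - \frac{q_{ij}(L)}{L}

      where q_{ij}(L) is the number of items the two lists have in common; h = 0 means identical lists, h = 1 completely personalized ones.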
  35. Surprisal / novelty.
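      Novelty in [8] is the mean self-information (surprisal) of the recommended items: with m users in total, an item alpha collected by k_alpha of them carries

          I_\alpha = \log_2 \frac{m}{k_\alpha}

      so rarely-collected items contribute high surprisal and blockbusters contribute little; a list's novelty is the average of I over its top L items.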
  36. Results: why is the hybrid better?
  37. Surprise me. Tangent: A Novel, 'Surprise Me', Recommendation Algorithm [KDD '09] [9].
  38. Framework of the Tangent algorithm. Suggest items that are not only relevant to the user's preferences but also have large connectivity to other groups. It consists of three parts: (1) calculate a relevance score (RS) for each node; (2) calculate a bridging score (BRS) for each node; (3) compute the Tangent score by merging the two criteria above (the merge is left unspecified here).
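      A minimal Python sketch of step 3, using a product as the assumed merge (the actual combination in the KDD '09 paper may differ):

          def tangent_scores(relevance, bridging):
              """Merge relevance (RS) and bridging (BRS) scores per node.

              A product rewards items that are both close to the user's
              taste and well connected to other groups; an item weak on
              either criterion scores low overall.
              """
              return {node: relevance[node] * bridging[node]
                      for node in relevance}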
  39. Case study.
  40. Case study (continued).
  41. Call for papers: September 20, 2012.
  42. References. [1] Being Accurate is Not Enough: How Accuracy Metrics Have Hurt Recommender Systems. [2] Improving Recommendation Diversity Using Ranking-Based Techniques. [3] What We Talk About When We Talk About Diversity. [4] Improving Recommendation Lists Through Topic Diversification. [5] Taxonomy-Driven Computation of Product Recommendations.
  43. References (continued). [6] A Study of Heterogeneity in Recommendations for a Social Music Service. [7] Bipartite Network Projection and Personal Recommendation. [8] Solving the Apparent Diversity-Accuracy Dilemma of Recommender Systems. [9] Tangent: A Novel, 'Surprise Me', Recommendation Algorithm.
  44. Thanks.
