[RecSys '13] Pairwise Learning: Experiments with Community Recommendation on LinkedIn

  1. Pairwise Learning: Experiments with Community Recommendation on LinkedIn. Amit Sharma*, Baoshi Yan (asharma@cs.cornell.edu, byan@linkedin.com)
  2. Typical online recommendation interfaces
  3. Community Recommendation on LinkedIn. Observed preference: user u joins a community y, giving a tuple (u, y). The recommendation problem: given a set of (u, y) tuples, predict for each user u a set R(u) of communities to recommend. A content-based approach: owing to the rich profile data for users, we use a content-based model that computes similarity between users and groups.
  4. An intuitive logistic model (pointwise). f_u, f_y: features of user u and community y; w_i: parameters of the model. Communities that a user has joined are treated as relevant.
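     The model's formula was shown as an image and did not survive extraction. A minimal LaTeX reconstruction, assuming the standard pointwise logistic form over a joint user-community feature map \phi (the map \phi is an assumption; the slide only names f_u, f_y, and w_i):

         P(u \text{ joins } y) = \sigma\Big( \sum_i w_i \, \phi_i(f_u, f_y) \Big), \qquad \sigma(x) = \frac{1}{1 + e^{-x}}

     Joined communities supply the positive (relevant) examples, as the slide notes.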
  5. Understanding implicit feedback from users. [Figure: a ranked list of five recommendations, positions 1-5, with item 2 clicked.] A click on item 2 beneath the unclicked item 1 signals that 2 is better than 1.
  6. Can pairwise learning help for community recommendation? ● A reliable technique used in search engines. [Joachims 01] ● Has been proposed for some collaborative filtering models. [Rendle et al. 09, Pessiot et al. 07] ● Empirical evidence shows promising results. [Balakrishnan and Chopra 10] Caveat: learning time is quadratic in the number of communities. How fast is the inference?
  7. Outline ● Propose pairwise models for content-based recommendation ● Augment pairwise learning with a latent preference model ● Show both offline and online evaluation on LinkedIn data for our proposed models
  8. Expressing pairwise preference. We establish a pair (y_i, y_j) if y_i was ranked higher than y_j but only y_j was selected by the user; clicking the lower-ranked community signals that the user prefers y_j over y_i. We can then define a ranking function h such that the preferred community always scores higher:
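     The defining property of h was an image on the slide; reconstructed from the surrounding text, the intended constraint is presumably:

         h(u, y_j) > h(u, y_i) \quad \text{for every observed pair } (y_i, y_j)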
  9. Building a pairwise logistic recommender. Maximize the likelihood of the observed preferences among pairs:
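     The likelihood itself was an image. A standard reconstruction, assuming a logistic (Bradley-Terry style) link on score differences, consistent with the logistic models on the neighboring slides:

         \max_{w} \prod_{(y_i, y_j)} P(y_j \succ y_i \mid u) = \prod_{(y_i, y_j)} \sigma\big( h(u, y_j) - h(u, y_i) \big)

     which is equivalent to minimizing \sum_{(y_i, y_j)} \log\big(1 + e^{-(h(u, y_j) - h(u, y_i))}\big).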
  10. Model 1: Feature Difference Model. Assuming h to be a linear function, the model is equivalent to logistic classification with features (f_{y_j} - f_{y_i}). Ranking: can simply rank by computing the weighted feature sum w^T f_y for each community.
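     A minimal sketch of the feature-difference idea in Python (all names and the synthetic features are illustrative stand-ins, not the paper's pipeline):

         import numpy as np
         from sklearn.linear_model import LogisticRegression

         # Toy data: each community is represented by a feature vector f_y.
         # An observed pair (y_i, y_j) means y_j was preferred over y_i.
         rng = np.random.default_rng(0)
         n_pairs, n_features = 1000, 16
         f_yi = rng.normal(size=(n_pairs, n_features))         # skipped communities
         f_yj = rng.normal(size=(n_pairs, n_features)) + 0.3   # preferred communities

         # Feature Difference Model: logistic classification on (f_yj - f_yi),
         # labeled 1; the mirrored differences (f_yi - f_yj) are labeled 0 so the
         # data is antisymmetric and no intercept is needed.
         X = np.vstack([f_yj - f_yi, f_yi - f_yj])
         y = np.concatenate([np.ones(n_pairs), np.zeros(n_pairs)])
         clf = LogisticRegression(fit_intercept=False).fit(X, y)

         # Ranking: with h linear, no pairs are needed at inference time; each
         # community is scored by the weighted feature sum w . f_y.
         w = clf.coef_.ravel()
         candidates = rng.normal(size=(50, n_features))  # candidate communities
         ranking = np.argsort(-(candidates @ w))         # best-scoring first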
  11. Model 2: Logistic Loss Model. Assuming a more general ranking function: Ranking: as long as we choose h to be a non-decreasing function of the weighted feature sum, we can still rank by computing that sum for each community.
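     The slide's ranking function was an image. One plausible instantiation, stated as an assumption: h(u, y) = g(w^\top f_y) for a non-decreasing g (e.g., g = \sigma). Since g is monotone,

         h(u, y_j) > h(u, y_i) \iff w^\top f_{y_j} > w^\top f_{y_i}

     so ranking by the weighted feature sum w^\top f_y is order-equivalent to ranking by h.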
  12. Pairwise learning improves the classification of pairs. Task: for each pair, predict which community the user prefers. ...but the gains are only slight.
  13. Digging deeper: joining statistics for LinkedIn communities (random sample, 1M users). FACT: most users join different types of groups. Possible hypothesis: there are different reasons for joining different types of groups.
  14. Digging deeper: the effect of group types. [Figure: User1 prefers the ML Group (interest feature) over Cornell Alumni (education feature); User2 prefers Cornell Alumni (education feature) over the ML Group (interest feature).] When learning a single weight for each feature, the varying preferences of users may cancel out the effects.
  15. Different reasons for joining a community can be treated as a set of latent preferences within a user. [Diagram: user → core preference → pair of communities.]
  16. Model 3: Pairwise PLSI model Extend the Probabilistic Latent Semantic Indexing recommendation model for pairwise learning [Hofmann 02] We assume users are composed of a set of latent preferences. Each user differs in how she combines the available latent preferences.
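     The mixture formula was an image; reconstructing from the stated structure (per-user mixtures over latent preferences z), the natural pairwise PLSI decomposition is:

         P(y_j \succ y_i \mid u) = \sum_{z=1}^{Z} P(z \mid u) \, P(y_j \succ y_i \mid z)

     where P(z | u) is the user's multinomial mixture over latent preferences and P(y_j \succ y_i | z) is a per-preference pairwise model, matching the choices listed on slide 18.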
  17. Latent preferences over pairs help retain differing user preferences. [Figure: latent preference z1 ranks the ML Group (interest feature) above Cornell Alumni (education feature); z2 ranks Cornell Alumni above the ML Group.] User1 puts more weight on z1's preference; User2 puts more weight on z2's preference.
  18. Some details about the model. Number of core preferences (Z) is small, ~{2, 4, 8}. Choosing probability models: use logistic loss or feature difference for the conditional preference; a multinomial model for the probability of a latent preference given a user.
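     The slides do not spell out training; Hofmann-style PLSI models are typically fit with EM, so the following is a hedged sketch, not the paper's stated procedure. The E-step computes responsibilities for each observed pair:

         q(z \mid u, (y_i, y_j)) = \frac{P(z \mid u) \, P(y_j \succ y_i \mid z)}{\sum_{z'} P(z' \mid u) \, P(y_j \succ y_i \mid z')}

     and the M-step re-fits the multinomial P(z | u) and each per-preference weight vector w_z on the pairs, weighted by q.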
  19. Ranking: thus, we can still rank communities individually (without constructing pairs).
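     The ranking formula was an image. A natural per-community score implied by the mixture, stated as an assumption consistent with slides 16-18:

         s(u, y) = \sum_{z=1}^{Z} P(z \mid u) \, w_z^\top f_y

     Each candidate community is scored individually, so inference stays linear in the number of candidates, addressing the caveat raised on slide 6.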
  20. Evaluation Offline evaluation: Evaluated on group join data on linkedin.com during the summer of 2012. Train-test data separated chronologically.
  21. Pairwise PLSI improves performance on learning pairwise preferences
  22. Pairwise PLSI leads to more successful recommendations
  23. Online evaluation ● Tested the Logistic Loss and Feature Difference models on 5% of LinkedIn users, and the baseline model on the rest ● Measured average click-through rate (CTR) over 2 weeks ● Feature Difference showed a 5% increase in CTR; Logistic Loss showed a 3% increase.
  24. Conclusion: Pairwise learning can be a useful addition; however, gains may depend on the context / domain. It is important to understand and model the special characteristics of a target domain. Thank you. Amit Sharma, @amt_shrma, www.cs.cornell.edu/~asharma