[RecSys '13] Pairwise Learning: Experiments with Community Recommendation on LinkedIn

  1. Pairwise Learning: Experiments with Community Recommendation on LinkedIn. Amit Sharma*, Baoshi Yan (asharma@cs.cornell.edu, byan@linkedin.com)
  2. Typical online recommendation interfaces
  3. Community Recommendation on LinkedIn. Observed preference: user u joins a community y, giving a tuple (u, y). The recommendation problem: given a set of (u, y) tuples, predict a set R(u) of recommended communities for each user u. A content-based approach: owing to the rich profile data for users, we use a content-based model that computes similarity between users and groups.
  4. An intuitive logistic model (pointwise). f_u, f_y: features of user u and community y; w_i: parameters of the model. Communities that a user has joined are treated as relevant.
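A minimal sketch of a pointwise logistic scorer in this spirit; the deck does not show the exact parameterization, so the weighted-similarity form and all numbers below are assumptions, not the paper's model.

```python
import numpy as np

def pointwise_score(w, f_u, f_y):
    """Pointwise relevance of community y for user u.

    Assumed form: sigmoid(sum_i w_i * f_u[i] * f_y[i]), i.e. a weighted
    similarity between user and community features (an illustrative guess
    at the parameterization, not the paper's exact model).
    """
    return 1.0 / (1.0 + np.exp(-np.dot(w, f_u * f_y)))

# Toy features and weights (made up for illustration).
f_u = np.array([1.0, 0.0])   # user features
f_y = np.array([0.5, 0.5])   # community features
w = np.array([0.4, 0.3])     # learned parameters w_i
score = pointwise_score(w, f_u, f_y)
```

A community a user has joined would be a positive example for fitting w; everything else here is a stand-in.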
  5. Understanding implicit feedback from users. [Figure: five ranked results, with only result 2 clicked.] Clicking 2 while skipping 1 suggests that 2 is better than 1.
  6. Can pairwise learning help for community recommendation? ● A reliable technique used in search engines [Joachims 01]. ● Has been proposed for some collaborative filtering models [Rendle et al. 09, Pessiot et al. 07]. ● Empirical evidence shows promising results [Balakrishnan and Chopra 10]. Caveat: learning time is quadratic in the number of communities. How fast is the inference?
  7. Outline ● Propose pairwise models for content-based recommendation ● Augment pairwise learning with a latent preference model ● Show both offline and online evaluation of our proposed models on LinkedIn data
  8. Expressing pairwise preference. We establish a pair (y_i, y_j) if y_i was ranked higher than y_j and only y_j was selected by the user. We can then define a ranking function h such that h(y_j) > h(y_i) for each observed pair.
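The pair-construction rule above (the skip-above intuition from slide 5) can be sketched as follows; the function and variable names are illustrative, not code from the paper.

```python
def preference_pairs(ranked, clicked):
    """Build pairs (y_i, y_j) where y_i was ranked above y_j but only
    y_j was selected: the user implicitly preferred y_j over y_i."""
    pairs = []
    for i, y_i in enumerate(ranked):
        for y_j in ranked[i + 1:]:
            if y_j in clicked and y_i not in clicked:
                pairs.append((y_i, y_j))
    return pairs

# Five communities shown in rank order; only the second was clicked,
# so the second is inferred to be preferred over the skipped first.
pairs = preference_pairs(["c1", "c2", "c3", "c4", "c5"], clicked={"c2"})
```

Note that no pair is generated against items ranked below the click; only skipped-above items yield a preference signal.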
  9. Building a pairwise logistic recommender. We train by maximizing the likelihood of the observed preferences among pairs, i.e. the product over observed pairs (y_i, y_j) of P(y_j preferred over y_i | u).
  10. Model 1: Feature Difference Model. Assuming h to be a linear function, the model is equivalent to logistic classification on the feature differences (y_j - y_i). Ranking: we can simply rank by computing the weighted feature sum for each community.
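A sketch of the feature-difference model's two uses, scoring a pair and ranking items individually at serving time; the weights and features below are made up for illustration.

```python
import numpy as np

def pair_prob(w, f_i, f_j):
    """P(y_j preferred over y_i) = sigmoid(w . (f_j - f_i)), i.e.
    logistic classification on the feature difference."""
    return 1.0 / (1.0 + np.exp(-np.dot(w, f_j - f_i)))

def rank_scores(w, features):
    """Linearity means communities can be ranked individually by
    w . f_y, with no pairs needed at serving time."""
    return features @ w

w = np.array([1.0, 0.0])                       # illustrative weights
f_a, f_b = np.array([0.9, 0.1]), np.array([0.2, 0.8])
p_a_over_b = pair_prob(w, f_b, f_a)            # P(a preferred over b)
scores = rank_scores(w, np.stack([f_a, f_b]))  # individual scores agree
```

The pairwise probability and the individual scores give the same ordering, which is exactly why serving stays linear in the number of communities even though training is quadratic.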
  11. Model 2: Logistic Loss Model. Assuming a more general ranking function h applied to each community's weighted feature sum. Ranking: as long as we choose h to be a nondecreasing function, we can still rank by computing the weighted sum of features for each community.
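The ranking claim here is that any nondecreasing h preserves the order induced by the weighted feature sums. A quick check, with tanh standing in for h (an assumption, since the deck does not name a particular h):

```python
import numpy as np

w = np.array([0.5, 1.0])  # illustrative weights
feats = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]

raw = [float(np.dot(w, f)) for f in feats]   # weighted feature sums
monotone = [float(np.tanh(s)) for s in raw]  # nondecreasing h = tanh

# Sorting by the raw sums and by h(sum) yields the same order,
# so ranking never requires constructing pairs.
order_raw = sorted(range(len(feats)), key=lambda k: raw[k])
order_h = sorted(range(len(feats)), key=lambda k: monotone[k])
```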
  12. Pairwise learning improves the classification of pairs. Task: for each pair, predict which community is more preferred by a user... but the gains are only slight.
  13. Digging deeper: joining statistics for LinkedIn communities. Random sample of 1M users. FACT: most users join different types of groups. Possible hypothesis: there are different reasons for joining different types of groups.
  14. Digging deeper: the effect of group types. User1 prefers the ML Group (interest feature) over the Cornell Alumni group (education feature), while User2 prefers Cornell Alumni over the ML Group. When learning a single weight for each feature, such varying preferences across users may cancel out the effects.
  15. Different reasons for joining a community can be treated as a set of latent preferences within a user. [Figure: a user's core preference mediates the choice between a pair of communities.]
  16. Model 3: Pairwise PLSI model. Extend the Probabilistic Latent Semantic Indexing recommendation model for pairwise learning [Hofmann 02]. We assume users are composed of a set of latent preferences; each user differs in how she combines the available latent preferences.
  17. Latent preferences over pairs help retain differing user preferences. Under latent preference z1, the ML Group (interest feature) is preferred over Cornell Alumni (education feature); under z2, Cornell Alumni is preferred over the ML Group. User1 puts more weight on z1's preference; User2 puts more weight on z2's.
  18. Some details about the model. Number of core preferences (Z): small, ~ {2, 4, 8}. Choosing probability models: use logistic loss or feature difference for modeling the conditional preference, and a multinomial model for the probability of a latent preference given a user.
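Putting these pieces together, scoring a pair under pairwise PLSI can be sketched as a mixture over latent preferences. Z = 2 and every weight and feature below are illustrative assumptions, chosen to mirror the User1/User2 example from the earlier slides.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def plsi_pair_prob(p_z_given_u, w_z, f_i, f_j):
    """P(y_j preferred over y_i | u) = sum_z P(z|u) * sigmoid(w_z . (f_j - f_i)):
    a mixture of feature-difference models, one per latent preference z."""
    diff = f_j - f_i
    return sum(pz * sigmoid(np.dot(wz, diff)) for pz, wz in zip(p_z_given_u, w_z))

# Two latent preferences: z1 rewards interest features, z2 education features.
w_z = [np.array([2.0, 0.0]), np.array([0.0, 2.0])]
f_ml = np.array([1.0, 0.0])      # interest-heavy community (e.g. ML Group)
f_alumni = np.array([0.0, 1.0])  # education-heavy (e.g. Cornell Alumni)

# User1 leans on z1 and so prefers the ML Group; User2 leans on z2.
p_user1 = plsi_pair_prob([0.9, 0.1], w_z, f_alumni, f_ml)
p_user2 = plsi_pair_prob([0.1, 0.9], w_z, f_alumni, f_ml)
```

Because the mixture weights P(z|u) differ per user, the two users' opposite preferences no longer cancel into a single shared weight vector.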
  19. Ranking. Thus, we can still rank communities individually (without constructing pairs).
  20. Evaluation. Offline evaluation: evaluated on group-join data from linkedin.com during the summer of 2012. Train and test data were separated chronologically.
  21. Pairwise PLSI improves performance on learning pairwise preferences
  22. Pairwise PLSI leads to more successful recommendations
  23. Online evaluation ● Tested the Logistic Loss and Feature Difference models on 5% of LinkedIn users, with the baseline model on the rest ● Measured average click-through rate (CTR) over 2 weeks ● The Feature Difference model showed a 5% increase in CTR; Logistic Loss showed a 3% increase.
  24. Conclusion: pairwise learning can be a useful addition. However, gains may depend on the context or domain, so it is important to understand and model the special characteristics of a target domain. Thank you. Amit Sharma, @amt_shrma, www.cs.cornell.edu/~asharma