Recommendation and
Information Retrieval:
Two sides of the same coin?
Prof.dr.ir. Arjen P. de Vries
arjen@acm.org
CWI, TU Delft, Spinque
Outline
• Recommendation Systems
– Collaborative Filtering (CF)
• Probabilistic approaches
– Language modelling for Information Retrieval
– Language modelling for log-based CF
– Brief: adaptations for rating-based CF
• Vector Space Model (“Back to the Future”)
– User and item spaces, orthonormal bases and
“the spectral theorem”
Recommendation
• Informally:
– Search for information “without a query”
• Three types:
– Content-based recommendation
– Collaborative filtering (CF)
• Memory-based
• Model-based
– Hybrid approaches
Recommendation
• Informally:
– Search for information “without a query”
• Three types:
– Content-based recommendation
– Collaborative filtering
• Memory-based
• Model-based
– Hybrid approaches
Today’s focus!
Collaborative Filtering
• Collaborative filtering (originally introduced by
Pattie Maes as “social information filtering”)
1. Compare user judgments
2. Recommend differences between
similar users
• Leading principle:
People’s tastes are not randomly
distributed
–A.k.a. “You are what you buy”
User-based CF
[Figure: a rating matrix with users as rows and items as columns; a similarity measure compares user profiles and item profiles, and the objective is to predict an unknown rating. Example: if user Boris watched Love Actually, how would he rate it?]
Collaborative Filtering
• Standard item-based formulation
(Adomavicius & Tuzhilin 2005)
rat(u, i) = \frac{\sum_{j \in I_u} sim(i, j)\, rat(u, j)}{\sum_{j \in I_u} sim(i, j)}
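As an illustration (not part of the original slides), a minimal Python sketch of this prediction rule; the `ratings` structure and the item-item `sim` function are hypothetical stand-ins:

```python
# Minimal sketch of item-based CF prediction (illustrative only).
# `ratings` maps user -> {item: rating}; `sim(i, j)` is any item-item similarity,
# e.g. cosine similarity between the two items' rating columns.

def predict_rating(user, item, ratings, sim):
    """rat(u, i): similarity-weighted average of the user's known ratings."""
    num, den = 0.0, 0.0
    for j, r_uj in ratings[user].items():   # j ranges over I_u, the items rated by u
        s = sim(item, j)
        num += s * r_uj
        den += s                            # in practice |s| is often used here
    return num / den if den != 0 else 0.0
```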
Collaborative Filtering
• Benefits over content-based approach
– Overcomes problems with finding suitable
features to represent e.g. art, music
– Serendipity
– Implicit mechanism for qualitative aspects like
style
• Problems: large groups, broad domains
Prediction vs. Ranking
• Original formulations focused on
modelling the users’ item ratings: rating
prediction
– Evaluation of algorithms (e.g., Netflix prize) by
Mean Absolute Error (MAE) or Root Mean
Square Error (RMSE) between predicted and
actual ratings
• However, for the end user, the ranking of
recommended items is the essential
problem: relevance ranking
– Evaluation by precision at fixed rank (P@N)
Relevance Ranking
• Core problem of Information Retrieval!
Generative Model
• A statistical model for generating data
– Probability distribution over samples in a
given ‘language’
[Figure: a model M generates a sequence of samples; by the chain rule,
P(s1 s2 s3 s4 | M) = P(s1 | M) · P(s2 | M, s1) · P(s3 | M, s1, s2) · P(s4 | M, s1, s2, s3)]
© Victor Lavrenko, Aug. 2002
Unigram models etc.
• Unigram Models:
P(w1 w2 w3 w4) = P(w1) · P(w2) · P(w3) · P(w4)
• N-gram Models (here, N = 2):
P(w1 w2 w3 w4) = P(w1) · P(w2 | w1) · P(w3 | w2) · P(w4 | w3)
© Victor Lavrenko, Aug. 2002
Fundamental Problem
• Usually we don’t know the model M
– But have a sample representative of that
model
• First estimate a model from a sample
• Then compute the observation probability
P(observation | M(sample))
© Victor Lavrenko, Aug. 2002
• Unigram Language Models (LM)
– Urn metaphor
Language Models…
• P(observed sequence) ~ the product of the single-draw probabilities from the urn,
e.g. = 4/9 · 2/9 · 4/9 · 3/9
© Victor Lavrenko, Aug. 2002
… for Information Retrieval
• Rank models (documents) by probability of
generating the query:
Q: (a four-term query)
P(Q | d1) = 4/9 · 2/9 · 4/9 · 3/9 = 96/9^4
P(Q | d2) = 3/9 · 3/9 · 3/9 · 3/9 = 81/9^4
P(Q | d3) = 2/9 · 3/9 · 2/9 · 4/9 = 48/9^4
P(Q | d4) = 2/9 · 5/9 · 2/9 · 2/9 = 40/9^4
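A minimal sketch (not from the slides) of this query-likelihood ranking over toy token “documents”; the documents and the query below are made up for illustration:

```python
from collections import Counter

# Rank documents by the probability that their unigram model generates the query.
def query_likelihood(query, doc):
    counts, n = Counter(doc), len(doc)
    p = 1.0
    for term in query:
        p *= counts[term] / n      # maximum-likelihood estimate; 0 if term unseen
    return p

docs = {"d1": list("aabacabaa"), "d2": list("abbcccbba")}
query = list("abc")
ranking = sorted(docs, key=lambda d: query_likelihood(query, docs[d]), reverse=True)
```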
Zero-frequency Problem
• Suppose some event not in our example
– Model will assign zero probability to that event
– And to any set of events involving the unseen event
• Happens frequently in natural language text, and
it is incorrect to infer zero probabilities
– Especially when dealing with incomplete samples
Smoothing
• Idea:
Shift part of probability mass to unseen
events
• Interpolate document-based model with a
background model (of “general English”)
– Reflects expected frequency of events
– Plays role of IDF
P(w | D) = λ · P_ml(w | D) + (1 − λ) · P(w | collection)
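A small sketch (assuming this linear interpolation, often called Jelinek-Mercer smoothing) of the smoothed term probability; the count dictionaries are hypothetical inputs:

```python
# Mix the document's maximum-likelihood estimate with a background
# ("general English") model, so unseen terms no longer get zero probability.
def smoothed_prob(term, doc_counts, doc_len, bg_counts, bg_len, lam=0.8):
    """P(term | D) = lam * P_ml(term | D) + (1 - lam) * P(term | collection)."""
    p_doc = doc_counts.get(term, 0) / doc_len
    p_bg = bg_counts.get(term, 0) / bg_len
    return lam * p_doc + (1 - lam) * p_bg
```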
Relevance Ranking
• Core problem of Information Retrieval!
– Question arising naturally:
Are CF and IR, from a modelling perspective,
really two different problems then?
Jun Wang, Arjen P. de Vries, Marcel JT Reinders, A User-Item
Relevance Model for Log-Based Collaborative Filtering, ECIR 2006
User-Item Relevance Models
• Idea: apply the probabilistic retrieval
model to CF
User-Item Relevance Models
• Idea: apply the probabilistic retrieval
model to CF
• Treat user profile as query and answer
the following question:
– “What is the probability that this item
is relevant to this user, given his or her
profile?”
• To this end, apply the language modelling
approach to IR as a formal model to
compute the user-item relevance
Implicit or explicit relevance?
• Rating-based CF:
– Users explicitly rate “items”
We use “items” to represent contents (movie, music,
etc.)
• Log-based CF:
– User profiles are gathered by logging the
interactions. Music play-list, web surf log, etc.
User-Item Relevance Models
• Existing User-based/Item-based
approaches
– Heuristic implementations of “word-of-mouth”
– Unclear how to best deal with the sparse data!
• User-Item Relevance Models
– Give probabilistic justification
– Integrate smoothing to tackle the problem of
sparsity
User-Item Relevance Models
[Figure: the relevance between the target user and the target item can be estimated along two routes: an item representation (other items that the target user liked) and a user representation (other users who liked the target item).]
User-Item Relevance Models
• We introduce the following random variables:

Users: U ∈ {u_1, …, u_K}    Items: I ∈ {i_1, …, i_M}
Relevance: R ∈ {r, \bar{r}}  (“relevant”, “not relevant”)

• Following the probabilistic approach to IR, we rank items by their log odds of relevance:

RSV_U(I) = \log \frac{P(R = r \mid U, I)}{P(R = \bar{r} \mid U, I)}
User-Item Relevance Model
• [Lafferty, Zhai 02]
RSV_Q(D) = \log \frac{P(r \mid Q, D)}{P(\bar{r} \mid Q, D)}
         = \log \frac{P(D \mid r, Q)}{P(D \mid \bar{r}, Q)} + \log \frac{P(r \mid Q)}{P(\bar{r} \mid Q)}      (the Robertson-Sparck Jones models: document generation, P(D \mid r, Q))

Or:
         = \log \frac{P(Q \mid r, D)}{P(Q \mid \bar{r}, D)} + \log \frac{P(r \mid D)}{P(\bar{r} \mid D)}      (the Language Models: query generation, P(Q \mid r, D))

D: Document;  Q: Query;  R ∈ {r, \bar{r}}: relevance
Item Representation
[Figure: the target user u_k is represented by the query items {i_b}, i.e. the other items that the target user liked; the relevance of the target item i_m (the unknown “i_m?”) is estimated from these.]
User-Item Relevance Models
• Item representation
– Use items that I liked to represent target user
– Assume the item “ratings” are independent
– Linear interpolation smoothing to address
sparsity
RSV_{u_k}(i_m) = \log \frac{P(r \mid i_m, u_k)}{P(\bar{r} \mid i_m, u_k)}
               = \log \frac{P(u_k \mid r, i_m)\, P(r \mid i_m)}{P(u_k \mid \bar{r}, i_m)\, P(\bar{r} \mid i_m)}
               = \ldots
               = \sum_{i_b \in L_{u_k},\; c(i_b, i_m) > 0} \log \Big( 1 + \frac{(1 - \lambda)\, P_{ml}(i_b \mid i_m, r)}{\lambda\, P(i_b \mid r)} \Big) + \log P(i_m \mid r)

with the smoothed estimate
P(i_b \mid i_m, r) = (1 - \lambda)\, P_{ml}(i_b \mid i_m, r) + \lambda\, P_{ml}(i_b \mid r)
where λ ∈ [0, 1] is a parameter to adjust the strength of smoothing
(the sum captures co-occurrence with the query items, the last term the popularity of the target item)
User-Item Relevance Models
• Probabilistic justification of Item-based CF
– The RSV of a target item is the combination of
its popularity and its co-occurrence with
items (query items) that the target user liked.
RSV_{u_k}(i_m) = \sum_{i_b \in L_{u_k},\; c(i_b, i_m) > 0} \log \Big( 1 + \frac{(1 - \lambda)\, P_{ml}(i_b \mid i_m, r)}{\lambda\, P(i_b \mid r)} \Big) + \log P(i_m \mid r)
User-Item Relevance Models
• Probabilistic justification of Item-based CF
– The RSV of a target item is the combination of
its popularity and its co-occurrence with
items (query items) that the target user liked
• Item co-occurrence should be emphasized if more
users express interest in both target & query item
• Item co-occurrence should be suppressed when
the popularity of the query item is high

RSV_{u_k}(i_m) = \sum_{i_b \in L_{u_k},\; c(i_b, i_m) > 0} \log \Big( 1 + \frac{(1 - \lambda)\, P_{ml}(i_b \mid i_m, r)}{\lambda\, P(i_b \mid r)} \Big) + \log P(i_m \mid r)

(P_{ml}(i_b \mid i_m, r): co-occurrence between target item and query item;  P(i_b \mid r): popularity of the query item)
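As a rough illustration (not from the slides), a Python sketch of this item-based RSV over a binary play/like matrix; the maximum-likelihood estimates below are simple stand-ins and only loosely follow the estimators of the ECIR 2006 paper:

```python
import math
import numpy as np

def rsv_item_based(X, user, item, lam=0.5):
    """Item-based RSV for a binary, log-based user-item matrix X (users x items)."""
    item_count = X[:, item].sum()
    if item_count == 0:
        return float("-inf")
    liked = np.flatnonzero(X[user])                 # L_{u_k}: items the user liked
    score = math.log(item_count / X.sum())          # ~ log P(i_m | r): target-item popularity
    for b in liked:
        cooc = float((X[:, item] * X[:, b]).sum())  # c(i_b, i_m): users who liked both
        if cooc == 0:
            continue
        p_cooc = cooc / item_count                  # ~ P_ml(i_b | i_m, r)
        p_pop = X[:, b].sum() / X.sum()             # ~ P(i_b | r): query-item popularity
        score += math.log(1 + (1 - lam) * p_cooc / (lam * p_pop))
    return score
```

The user-based formulation on the next slides is the mirror image, with the roles of users and items swapped.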
User Representation
[Figure: the target item i_m (the unknown “i_m?”) is represented by the users {u_b} who liked it; its relevance towards the target user u_k is estimated from these.]
User-Item Relevance Models
• User representation
– Represent target item by users who like it
– Assume the user profiles are independent
– Linear interpolation smoothing to address sparsity
RSV_{u_k}(i_m) = \log \frac{P(r \mid i_m, u_k)}{P(\bar{r} \mid i_m, u_k)}
               = \log \frac{P(i_m \mid r, u_k)\, P(r \mid u_k)}{P(i_m \mid \bar{r}, u_k)\, P(\bar{r} \mid u_k)}
               = \ldots
               = \sum_{u_b \in L_{i_m}} \log \Big( 1 + \frac{(1 - \lambda)\, P_{ml}(u_b \mid u_k, r)}{\lambda\, P(u_b \mid r)} \Big)

with the smoothed estimate
P(u_b \mid u_k, r) = (1 - \lambda)\, P_{ml}(u_b \mid u_k, r) + \lambda\, P_{ml}(u_b \mid r)
where λ ∈ [0, 1] is a parameter to adjust the strength of smoothing
User-Item Relevance Models
• Probabilistic justification of User-based CF
– The RSV of a target item towards a target user is
calculated from the target user's co-occurrence with
other users who liked the target item
• User co-occurrence is emphasized if more items liked by
the target user are also liked by the other user
• User co-occurrence should be suppressed when this user
liked many items

RSV_{u_k}(i_m) = \sum_{u_b \in L_{i_m}} \log \Big( 1 + \frac{(1 - \lambda)\, P_{ml}(u_b \mid u_k, r)}{\lambda\, P(u_b \mid r)} \Big)

(P_{ml}(u_b \mid u_k, r): co-occurrence between the target user and the other users;  P(u_b \mid r): popularity of the other users)
Empirical Results
• Data Set:
– Music play-lists from audioscrobbler.com
– 428 users and 516 items
– 80% users as training set and 20% users as test set.
– Half of items in test set as ground truth, others as user
profiles
• Measurement
– Recommendation precision:
(number of correctly recommended items) / (number of recommended items)
– Averaged over 5 runs
– Compared with the suggestion library developed by GroupLens
[Figure: P@N vs. lambda]
[Figure: Effectiveness (P@N)]
So far…
• User-Item relevance models
– Give a probabilistic justification for CF
– Deal with the problem of sparsity
– Provide state-of-the-art performance
Rating Prediction?
• The log-based CF method presented so far neither
predicts nor uses rating information
– Ranks items solely by usage frequency
– Appropriate for, e.g., music recommendation
in a service like Spotify or personalised TV
[Figure: a users × items rating matrix (users u_1 … u_A, items i_1 … i_B) with an unknown rating x_{a,b}.]

SIR
[Figure: Similar Item Ratings: user u_a's ratings on the items most similar to i_b are used to predict the unknown rating x_{a,b}.]

SUR
[Figure: Similar User Ratings: the ratings on item i_b by the users most similar to u_a are used to predict the unknown rating x_{a,b}.]
Sparseness
• Whether you choose SIR or SUR, in many
cases, the neighborhood extends to
include “not-so-similar” users and/or items
• Idea:
Take into consideration the similar item
ratings made by similar users as an extra
source for prediction
Jun Wang, Arjen P. de Vries, Marcel JT Reinders, Unifying user-based
and item-based collaborative filtering approaches by similarity
fusion, SIGIR 2006
[Figure: the unknown rating x_{a,b} can be predicted from three sources: SIR (similar item ratings), SUR (similar user ratings) and SUIR (ratings of similar items made by similar users).]

Similarity Fusion
[Figure: each rating x_{a,b} is generated from SIR, SUR or SUIR, selected by hidden binary indicator variables I_1 and I_2.]
Sketch of Derivation
P(x_{k,m} \mid SUR, SIR, SUIR)
 = \sum_{I_2} P(x_{k,m} \mid I_2, SUR, SIR, SUIR)\, P(I_2)
 = P(x_{k,m} \mid I_2 = 1, \cdot)\, P(I_2 = 1) + P(x_{k,m} \mid I_2 = 0, \cdot)\, (1 - P(I_2 = 1))
 = P(x_{k,m} \mid SUIR)\, \delta + P(x_{k,m} \mid SUR, SIR)\, (1 - \delta)
 = P(x_{k,m} \mid SUIR)\, \delta + \big( P(x_{k,m} \mid SUR)\, \lambda + P(x_{k,m} \mid SIR)\, (1 - \lambda) \big)\, (1 - \delta)
See SIGIR 2006 paper for more details
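Reading the mixture above as a weighted average of point predictions gives a very small sketch (illustrative only; the three predictor values would come from SIR, SUR and SUIR estimators such as similarity-weighted averages):

```python
# Fuse three predictions of the unknown rating x_{k,m}:
# p_sir  - prediction from similar item ratings
# p_sur  - prediction from similar user ratings
# p_suir - prediction from similar items rated by similar users
def fused_prediction(p_sir, p_sur, p_suir, lam=0.5, delta=0.3):
    """delta * SUIR + (1 - delta) * (lam * SUR + (1 - lam) * SIR)."""
    return delta * p_suir + (1 - delta) * (lam * p_sur + (1 - lam) * p_sir)
```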
User-Item Relevance Models
[Diagram: at the theoretical level, the user-item relevance models (item representation and user representation, combination rules such as Similarity Fusion, and individual predictors) connect to the Information Retrieval field (relevance feedback, query expansion, etc.) and to the Machine Learning field (latent predictor spaces, latent semantic analysis, manifold learning, etc.).]
Remarks
• The SIGIR 2006 paper estimates probabilities
directly from the given similarity distances
between users and items
• The TOIS 2008 paper below applies Parzen-window
kernel density estimation to the rating data itself,
to give a full probabilistic derivation
– Shows how the “kernel trick” lets us generalize the
distance measure, such that a cosine (projection)
kernel (length-normalized dot product) can be
chosen, while keeping Gaussian-kernel Parzen
windows
Jun Wang, Arjen P. de Vries, and Marcel J. T. Reinders. Unified
relevance models for rating prediction in collaborative filtering. ACM
TOIS 26 (3), June 2008
Relevance Feedback
• Relevance Models for query expansion in IR
– Language model estimated from known relevant
documents or from the top-k retrieved documents (pseudo-relevance feedback)
– Expand query with terms generated by the LM
• Application to recommendation
– User profile used to identify neighbourhood; a
Relevance Model estimated from that neighbourhood
used to expand the profile
– Deploy probabilistic clustering method PPC to
construct the neighbourhood
– Very good empirical results on P@N
Javier Parapar, Alejandro Bellogín, Pablo Castells, Álvaro
Barreiro. Relevance-Based Language Modelling for Recommender
Systems. Information Processing & Management 49 (4), pp. 966-980
CF =~ IR?
Follow-up question:
Can we go beyond “model level”
equivalences observed so far, and actually
cast the CF problem such that we can use
the full IR machinery?
Alejandro Bellogín, Jun Wang, and Pablo Castells. Text
Retrieval Methods for Item Ranking in Collaborative
Filtering. ECIR 2011
IR System
[Diagram: the query is processed and handed to a text retrieval engine, which matches it against an inverted index built from term occurrences (the term-document matrix) and produces the output ranking.]
CF RecSys?!
[Diagram: the user profile (as query) is processed and handed to the same text retrieval engine, which matches it against an inverted index built from the user profiles (the user-item matrix) and item similarities, and produces the recommendations as output.]
Collaborative Filtering
• Standard item-based formulation

rat(u, i) = \frac{\sum_{j \in I_u} sim(i, j)\, rat(u, j)}{\sum_{j \in I_u} sim(i, j)}

• More general, factored form

rat(u, i) = \sum_{j \in g(u)} f(u, i, j) = \sum_{j \in g(u)} f_1(u, j)\, f_2(i, j)
Text Retrieval
• In (Metzler & Zaragoza, 2009)
– In particular: factored form
s(q, d) = \sum_{t \in g(q)} s(q, d, t), \qquad s(q, d, t) = w_1(q, t)\, w_2(d, t)
Text Retrieval
• Examples
– TF:
w_1(q, t) = qf(t), \qquad w_2(d, t) = tf(t, d)
– TF-IDF:
w_1(q, t) = qf(t), \qquad w_2(d, t) = tf(t, d) \cdot \log \frac{N}{df(t)}
– BM25:
w_1(q, t) = \frac{(k_3 + 1)\, qf(t)}{k_3 + qf(t)}, \qquad
w_2(d, t) = \log \Big( \frac{N - df(t) + 0.5}{df(t) + 0.5} \Big) \cdot \frac{(k_1 + 1)\, tf(t, d)}{k_1 \big( (1 - b) + b \cdot dl(d) / \overline{dl} \big) + tf(t, d)}
IR =~ CF?
• In item-based Collaborative Filtering
• Apply different models
– With different normalizations and norms: sqd,
L1 and L2
tf(t, d) = sim(i, j)
qf(t) = rat(u, j)

with the correspondence t ≈ j, d ≈ i, q ≈ u

Normalization variants s_qd:

                       Document: no norm   Document: norm ( /|D| )
Query: no norm               s00                  s01
Query: norm ( /|Q| )         s10                  s11
IR =~ CF!
• TF L1 s01 is equivalent to item-based CF
rat(u, i) = \frac{\sum_{j \in I_u} sim(i, j)\, rat(u, j)}{\sum_{j \in I_u} sim(i, j)}

s(q, d) = \sum_{t \in g(q)} w_1(q, t)\, w_2(d, t) = \frac{\sum_{t \in g(q)} qf(t)\, tf(t, d)}{\sum_{t \in g(q)} tf(t, d)}

with tf(t, d) = sim(i, j) and qf(t) = rat(u, j)
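A tiny sketch (illustrative, under the mapping above) of why the TF-weighted, L1 document-normalized variant s01 reproduces the item-based CF prediction; the dictionaries are hypothetical inputs, and the document “length” is restricted to the query terms, matching the CF denominator:

```python
# Item i is a "document" whose terms are items j with tf = sim(i, j);
# the user profile is the "query" with qf = rat(u, j). L1-normalizing the
# document weights over the query terms (s01) recovers item-based CF.
def tf_l1_s01_score(user_ratings, item_sims):
    """user_ratings: {j: rat(u, j)}; item_sims: {j: sim(i, j)} for target item i."""
    norm = sum(item_sims.get(j, 0.0) for j in user_ratings)
    if norm == 0:
        return 0.0
    return sum(r * item_sims.get(j, 0.0) for j, r in user_ratings.items()) / norm
```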
Empirical Results
• Movielens 1M
– Movielens100k: comparable results
[Bar chart: nDCG on MovieLens 1M (y-axis 0.00–0.40) for each weighting scheme and normalization variant: TF, TF-IDF and BM25, each with L1 or L2 norm and the s00/s01/s10/s11 combinations.]
Vector Space Model
• Challenge: get users and items in
common space
– Intuition: the shared “words” are missing,
which in IR directly relate documents to
queries
– Two extreme settings:
• Project items into a space with dimensionality of
the number of items
• Project users into a space with dimensionality of
the number of users
A. Bellogín, J. Wang, P. Castells. Bridging Memory-Based
Collaborative Filtering and Text Retrieval.
Information Retrieval (to appear)
Item Space
• User
• Item
• Rank
• Predict rate
User space
• User
• Item
• Rank
• Predict rate
Linear Algebra
• Users and items in shared orthonormal
space:
• Consider covariance matrix
• Spectral theorem now states that an
orthonormal basis of eigenvectors exists
Linear Algebra
• Use this basis to represent items and
users:
• The dot product then has a remarkable
form (of the IR models discussed):
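A small numpy sketch (illustrative only, not the construction used in the paper) of the shared-basis idea: eigendecompose the item-item covariance, express users and items in that orthonormal eigenvector basis, and score by dot products:

```python
import numpy as np

R = np.random.rand(50, 20)          # toy rating matrix: 50 users x 20 items
C = np.cov(R, rowvar=False)         # item-item covariance matrix (20 x 20)
eigvals, V = np.linalg.eigh(C)      # spectral theorem: columns of V form an
                                    # orthonormal basis of eigenvectors

users_in_basis = R @ V              # each user's rating vector in the eigenbasis
items_in_basis = V                  # row k: coordinates of item k (of e_k) in it
scores = users_in_basis @ items_in_basis.T
# With this particular basis the score for (user u, item k) is simply R[u, k];
# the deck's point is that suitable basis choices make this dot product take
# the form of the IR scoring models discussed above.
```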
Subspaces…
• Number of items (n) vs. number of users
(m):
– If n < m, a linear dependency must exist
between users in terms of the item space
components
– In this case, it has been known empirically
that item-based algorithms tend to perform
better
• Dimension of sub-space key for the performance
of the algorithm?
• ~ better estimation (more data per item) in the
probabilistic versions
Subspaces…
• Matrix Factorization methods are captured
by assuming a lower-dimensionality space
to project items and users into (usually
considered “model-based” rather than
“memory-based”)
~ Latent Semantic Indexing (a VSM method
replicated as pLSA and variants)
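As an illustration (assumptions: missing ratings filled with zeros, an arbitrary rank k), a truncated-SVD sketch of the lower-dimensional projection, in the spirit of Latent Semantic Indexing:

```python
import numpy as np

# Project users and items into a shared k-dimensional latent space via truncated
# SVD of the (users x items) rating matrix; real MF methods fit only the observed
# entries, the zero-fill here is purely illustrative.
def truncated_svd_embeddings(R, k=10):
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    user_factors = U[:, :k] * s[:k]     # users in the latent space
    item_factors = Vt[:k, :].T          # items in the same latent space
    return user_factors, item_factors   # score(u, i) ~ user_factors[u] @ item_factors[i]
```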
Ratings into Inverted File
• Note: distribution of item occurrences not Zipfian
like text, so existing implementations (including
choice of compression etc.) may be sub-optimal
for CF runtime performance
[Slides with tables/figures only: Weighting schemes, Empirical results 1M, Empirical results 10M, Rating prediction]
Concluding Remarks
• The probabilistic models are elegant (often
deploying impressive maths), but what do
they really add in understanding IR & CF,
i.e., beyond the (often claimed to be
“ad-hoc”) approaches of the VSM?
Concluding Remarks
• Clearly, the models in CF & IR are closely
related
• Should these then really be studied in two
different (albeit overlapping) communities,
RecSys vs. SIGIR?
References
• Adomavicius, G., Tuzhilin, A.: Toward the next generation of recommender systems:
a survey of the state-of-the-art and possible extensions. IEEE TKDE 17(6), 734-749
(2005)
• Alejandro Bellogín, Jun Wang, and Pablo Castells. Text Retrieval Methods for Item
Ranking in Collaborative Filtering. ECIR 2011.
• Metzler, D., Zaragoza, H.: Semi-parametric and non-parametric term weighting for
information retrieval. ECIR 2009.
• Javier Parapar, Alejandro Bellogín, Pablo Castells, Álvaro Barreiro. Relevance-
Based Language Modelling for Recommender Systems. Information Processing &
Management 49 (4), pp. 966-980
• A. Bellogín, J. Wang, P. Castells. Bridging Memory-Based Collaborative Filtering
and Text Retrieval. Information Retrieval (to appear)
• Jun Wang, Arjen P. de Vries, Marcel JT Reinders, Unifying user-based and item-
based collaborative filtering approaches by similarity fusion, SIGIR 2006
• Jun Wang, Arjen P. de Vries, Marcel JT Reinders, A User-Item Relevance Model for
Log-Based Collaborative Filtering, ECIR 2006
• Jun Wang, Arjen P. de Vries, and Marcel J. T. Reinders. Unified relevance models
for rating prediction in collaborative filtering. ACM TOIS 26 (3), June 2008.
• Jun Wang, Stephen Robertson, Arjen P. de Vries, and Marcel J.T. Reinders.
Probabilistic relevance ranking for collaborative filtering. Information Retrieval 11
(6):477-497, 2008
Thanks
• Alejandro Bellogín
• Jun Wang
• Thijs Westerveld
• Victor Lavrenko
Editor's Notes
  1. Recommendation can be obtained by looking at the top-N similar peers' user preferences. The more similar a peer, the more chance its liked content will get recommended.
  2. Recently they introduced a general framework for the term weighting problem.
  3. If we take … as …, we obtain equivalent scoring functions.