
Context-aware similarities within the factorization framework (CaRR 2013 presentation)



This presentation covers a side project of my main research in recommender systems: a preliminary examination of context-aware similarities within the factorization framework.
This work lies at the intersection of the following areas: (1) implicit feedback based recommendations; (2) context / context awareness; (3) item-to-item recommendations; (4) matrix / tensor factorization. The aim is to examine whether context can be used to compute more accurate item similarities based on the items' feature vectors. Two levels of context-aware similarity are introduced: (1) context is used only during training, but not for computing the similarity; (2) context is used both during training and for the similarity computation.
This presentation was given at the 3rd workshop on Context-awareness in Retrieval and Recommendations (CaRR 2013) in Rome.


  1. Context-aware similarities within the factorization framework (CaRR 2013 presentation)

  2. OUTLINE
     • Background & scope
     • Levels of CA similarity
     • Experiments
     • Future work
  3. BACKGROUND
     • Implicit feedback
     • Context
     • I2I recommendation
     • Factorization
     • This work (intersection of the above)
  4. IMPLICIT FEEDBACK
     • User preference is not coded explicitly in the data (e.g. purchase history)
       - Presence → preference assumed
       - Absence → ???
     • Binary "preference" matrix → zeroes should be considered
     • Optimize for
       - Weighted RMSE
       - Partial ordering (ranking)
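
     A minimal numpy sketch of the setup described on this slide: a binary preference matrix in which the zeroes are kept, and a weighted RMSE in which observed cells get a higher confidence weight. The weight values are illustrative assumptions, not the ones used in the work.

        import numpy as np

        # Binary "preference" matrix: 1 = observed event (e.g. purchase), 0 = no event
        R = np.array([[1., 0., 1.],
                      [0., 1., 0.],
                      [1., 1., 0.]])

        # Confidence weights: observed cells weigh more, zeroes still count (small weight)
        W = np.where(R > 0, 1.0, 0.1)    # illustrative values

        def weighted_rmse(R, R_hat, W):
            # Weighted RMSE over all cells, zeroes included
            return np.sqrt((W * (R - R_hat) ** 2).sum() / W.sum())

        print(weighted_rmse(R, np.full_like(R, 0.5), W))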
  5. CONTEXT
     • Can mean anything
     • Here: event context (user U has an event on item I while the context is C)
     • E.g. time, weather, mood of U, freshness of I, etc.
     • In the experiments: seasonality (time of the day or time of the week)
       - Time period: week / day
  6. FACTORIZATION I
     • Preference data can be organized into a matrix
       - Size of the dimensions is high
       - Data is sparse
     • Approximate this matrix by the product of two low-rank matrices
       - Each item and user has a feature vector
       - Predicted preference is the scalar product of the appropriate user and item feature vectors:
         \hat{r}_{u,i} = U_u^T I_i
     • Here we optimize for wRMSE (implicit case)
       - Learning features with ALS
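
     As an illustration of the prediction rule and the ALS learning mentioned on this slide, here is a toy numpy sketch of weighted ALS for the implicit case: the prediction is the scalar product U_u^T I_i, and one half-step recomputes the user features with the item features fixed. The sizes, weights and regularization constant are assumptions made for the example.

        import numpy as np

        rng = np.random.default_rng(0)
        n_users, n_items, k = 3, 4, 2
        R = rng.integers(0, 2, size=(n_users, n_items)).astype(float)  # binary preferences
        W = np.where(R > 0, 1.0, 0.1)                                  # confidence weights
        U = rng.normal(scale=0.1, size=(n_users, k))                   # user feature vectors
        I = rng.normal(scale=0.1, size=(n_items, k))                   # item feature vectors
        lam = 0.01                                                     # regularization

        def predict(u, i):
            # Predicted preference = scalar product of user and item feature vectors
            return U[u] @ I[i]

        def als_user_half_step(U, I, R, W, lam):
            # Recompute each user vector in closed form, item features fixed (wRMSE objective)
            for u in range(R.shape[0]):
                Wu = np.diag(W[u])
                A = I.T @ Wu @ I + lam * np.eye(I.shape[1])
                b = I.T @ Wu @ R[u]
                U[u] = np.linalg.solve(A, b)
            return U

        U = als_user_half_step(U, I, R, W, lam)
        print(predict(0, 1))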
  7. FACTORIZATION II
     • Context introduced → additional context dimension
     • Matrix → tensor (table of records)
     • Models for preference prediction:
       - Elementwise product model (weighted scalar product):
         \hat{r}_{u,i,c} = 1^T (U_u \circ I_i \circ C_c)
       - Pairwise model (context-dependent user/item bias):
         \hat{r}_{u,i,c} = U_u^T I_i + U_u^T C_c + I_i^T C_c
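
     A short sketch of the two context-aware prediction models written out on this slide, using random vectors in place of learned features (the dimension is chosen arbitrarily for the example).

        import numpy as np

        k = 4
        U_u, I_i, C_c = np.random.rand(k), np.random.rand(k), np.random.rand(k)

        # Elementwise product model: weighted scalar product of the three feature vectors
        r_elementwise = np.sum(U_u * I_i * C_c)          # 1^T (U_u o I_i o C_c)

        # Pairwise model: user-item score plus context-dependent user and item biases
        r_pairwise = U_u @ I_i + U_u @ C_c + I_i @ C_c

        print(r_elementwise, r_pairwise)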
  8. ITEM-TO-ITEM RECOMMENDATIONS
     • Items similar to the current item
     • E.g.: user cold start, related items, etc.
     • Approaches: association rules, similarity between item consumption vectors, etc.
     • In the factorization framework: similarity between the feature vectors
       - Scalar product: s_{i,j} = I_i^T I_j
       - Cosine similarity: s_{i,j} = \frac{I_i^T I_j}{\|I_i\|_2 \|I_j\|_2}
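
     The two feature-based similarities on this slide translate directly into code; a small sketch follows (the item feature matrix and its dimensions are made up for the example).

        import numpy as np

        I = np.random.rand(5, 3)                     # item feature matrix: 5 items, 3 features

        def dot_sim(I_i, I_j):
            return I_i @ I_j                         # s_ij = I_i^T I_j

        def cosine_sim(I_i, I_j):
            return (I_i @ I_j) / (np.linalg.norm(I_i) * np.linalg.norm(I_j))

        sims = I @ I[0]                              # scalar-product similarity of every item to item 0
        sims[0] = -np.inf                            # exclude the item itself
        print(np.argsort(-sims)[:2])                 # the two most similar items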
  9. SCOPE OF THIS PROJECT
     • Examination of whether item similarities can be improved
       - using context-aware learning or prediction
       - compared to the basic feature-based solution
     • Motivation: if factorization models are used anyway, it would be good to use them for I2I recommendations as well
     • Out of scope: comparison with other approaches (e.g. association rules)
  10. CONTEXT-AWARE SIMILARITIES: LEVEL 1
     • The form of computing the similarity remains the same:
       s_{i,j} = I_i^T I_j,   s_{i,j} = \frac{I_i^T I_j}{\|I_i\|_2 \|I_j\|_2}
     • The similarity itself is NOT context-aware; context-aware learning is used
       - Assumption: item models will be more accurate
       - Reason: during learning, context is modelled separately
     • Elementwise product model: context-related effects are coded in the context feature vector
     • Pairwise model: context-related biases are removed
  11. CONTEXT-AWARE SIMILARITIES: LEVEL 2
     • Incorporating the context features into the similarity itself
     • Elementwise product model: similarities reweighted by the context feature
       - Assumption: will be sensitive to the quality of the context
       - s_{i,j,c} = 1^T (I_i \circ C_c \circ I_j),   s_{i,j,c} = \frac{1^T (I_i \circ C_c \circ I_j)}{\|I_i\|_2 \|I_j\|_2}
     • Pairwise model: context-dependent promotions/demotions for the participating items
       - Assumption: minor improvements over the basic similarity
       - s_{i,j,c} = I_i^T I_j + I_i^T C_c + I_j^T C_c
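
     A minimal sketch of the two level-2 similarities, directly following the formulas above (the vectors stand in for learned item and context features):

        import numpy as np

        def sim_elementwise_l2(I_i, I_j, C_c, cosine=False):
            # Similarity reweighted by the context feature: 1^T (I_i o C_c o I_j)
            s = np.sum(I_i * C_c * I_j)
            if cosine:
                s /= np.linalg.norm(I_i) * np.linalg.norm(I_j)
            return s

        def sim_pairwise_l2(I_i, I_j, C_c):
            # Basic similarity plus context-dependent promotion/demotion of the two items
            return I_i @ I_j + I_i @ C_c + I_j @ C_c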
  12. CONTEXT-AWARE SIMILARITIES: LEVEL 2 - NORMALIZATION
     • Normalization of the context vector; only relevant for cosine similarity
     • Elementwise product model: makes no difference in the ordering
       - (recommendation in a given context to a given user)
     • Pairwise model: might affect results
       - Controls the importance of the item promotions/demotions
       - Without normalizing the context vector:
         s_{i,j,c} = \frac{I_i^T I_j + I_i^T C_c + I_j^T C_c}{\|I_i\|_2 \|I_j\|_2}
       - With the context vector normalized:
         s_{i,j,c} = \frac{I_i^T I_j}{\|I_i\|_2 \|I_j\|_2} + \frac{I_i^T C_c}{\|I_i\|_2 \|C_c\|_2} + \frac{I_j^T C_c}{\|I_j\|_2 \|C_c\|_2}
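
     The effect of normalizing the context vector in the pairwise cosine similarity can be sketched as follows; this follows my reading of the two formulas reconstructed above, so treat it as an assumption rather than the exact definition used in the work.

        import numpy as np

        def sim_pairwise_cosine(I_i, I_j, C_c, normalize_context=False):
            ni, nj = np.linalg.norm(I_i), np.linalg.norm(I_j)
            if not normalize_context:
                # Context vector left unnormalized: only the item norms appear
                return (I_i @ I_j + I_i @ C_c + I_j @ C_c) / (ni * nj)
            nc = np.linalg.norm(C_c)
            # Context vector normalized: each term divided by the norms of its own two factors
            return (I_i @ I_j) / (ni * nj) + (I_i @ C_c) / (ni * nc) + (I_j @ C_c) / (nj * nc)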
  13. EXPERIMENTS - SETUP
     • Four implicit feedback datasets
       - LastFM 1K (music)
       - TV1, TV2 (IPTV)
       - Grocery (online grocery shopping)
     • Context: seasonality
       - Both with manually and automatically determined time bands
     • Evaluation: recommend items similar to the user's previous item
       - Recall@20, MAP@20, Coverage@20
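
     A generic sketch of two of the ranking metrics named on this slide (Recall@20 and Coverage@20); the exact evaluation protocol of the work, recommending items similar to the user's previous item, is only summarized, so take this as an illustration rather than the evaluation code used.

        def recall_at_k(recommended, relevant, k=20):
            # Fraction of the relevant items that appear in the top-k recommendations
            top_k = set(recommended[:k])
            return len(top_k & set(relevant)) / max(len(relevant), 1)

        def coverage_at_k(all_top_lists, n_items, k=20):
            # Fraction of the catalogue that appears in at least one top-k list
            shown = set()
            for rec in all_top_lists:
                shown.update(rec[:k])
            return len(shown) / n_items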
  14. EXPERIMENTS - RESULTS
     [Bar charts: relative improvement in Recall@20 and Coverage@20 over the baseline on the Grocery, LastFM, TV1 and TV2 datasets. Bars from left to right: L1 elementwise, L1 pairwise, L2 elementwise, L2 pairwise, L2 pairwise (norm).]
  15. EXPERIMENTS - CONCLUSION
     • Context awareness generally helps
     • Improvement greatly depends on the method and on context quality
     • All but the elementwise level-2 method:
       - Minor improvements
       - Tolerant of context quality
     • Elementwise product, level 2:
       - Possibly huge improvements
       - Or a huge decrease in recommendation quality
       - Depends on the context/problem
  16. (POSSIBLE) FUTURE WORK
     • Experimentation with different contexts
     • Different similarity measures between feature vectors
     • Predetermination of whether the context is useful for
       - User bias
       - Item bias
       - Reweighting
     • Predetermination of context quality
     • Different evaluation methods
       - E.g. recommend to the session
  17. THANKS FOR YOUR ATTENTION!
     For more of my recommender systems related research, visit my website: