Fast ALS-Based Tensor Factorization for Context-Aware Recommendation from Implicit Feedback - slides of our ECML PKDD paper


Executive summary: The paper addresses the implicit feedback problem for recommender systems and proposes a fast context-aware tensor factorization method that can integrate any kind of contextual information.

Paper abstract: Albeit the implicit feedback based recommendation problem (when only the user history is available but there are no ratings) is the most typical setting in real-world applications, it is much less researched than the explicit feedback case. State-of-the-art algorithms that are efficient on the explicit case cannot be straightforwardly transformed to the implicit case if scalability should be maintained. There are few implicit feedback benchmark datasets, therefore new ideas are usually experimented on explicit benchmarks. In this paper, we propose a generic context-aware implicit feedback recommender algorithm, coined iTALS. iTALS applies a fast, ALS-based tensor factorization learning method that scales linearly with the number of non-zero elements in the tensor. The method also allows us to incorporate various contextual information into the model while maintaining its computational efficiency. We present two context-aware implementation variants of iTALS. The first incorporates seasonality and enables us to distinguish user behavior in different time intervals. The other views the user history as sequential information and has the ability to recognize usage patterns typical of certain groups of items, e.g. to automatically tell apart product types that are typically purchased repetitively or only once. Experiments performed on five implicit datasets (LastFM 1K, Grocery, VoD, and “implicitized” Netflix and MovieLens 10M) show that by integrating context-aware information with our factorization framework into the state-of-the-art implicit recommender algorithm the recommendation quality improves significantly.



  1. iTALS: Fast ALS-based tensor factorization for context-aware recommendation from implicit feedback
     Balázs Hidasi – balazs.hidasi@gravityrd.com
     Domonkos Tikk
  2. Overview
     • Implicit feedback problem
     • Context-awareness
       ▫ Seasonality
       ▫ Sequentiality
     • iTALS
       ▫ Model
       ▫ Learning
       ▫ Prediction
     • Experiments
  3. Feedback types
     • Feedback: user-item interaction (events)
     • Explicit:
       ▫ Preferences explicitly coded
       ▫ E.g.: ratings
     • Implicit:
       ▫ Preferences not coded explicitly
       ▫ E.g.: purchase history
  4. Problems with implicit feedback
     • Noisy positive preferences
       ▫ E.g.: bought & disappointed
     • No negative feedback available
       ▫ E.g.: no information on an item
     • Usually evaluated by ranking metrics
       ▫ Cannot be directly optimized
  5. Why use implicit feedback?
     • Every user provides it
     • Orders of magnitude more information than explicit feedback
     • More important in practice
       ▫ Explicit-feedback algorithms are viable only for the biggest players
  6. Context-awareness
     • Context: any information associated with events
     • Context state: a possible value of a context dimension
     • Context-awareness:
       ▫ Use of context information
       ▫ Incorporates additional information into the method
       ▫ Different predictions for the same user under different context states
       ▫ Can greatly outperform context-unaware methods if the context segments items/users well
  7. Seasonality as context
     • Season: a time period
       ▫ E.g.: a week
     • Timeband: a given interval within the season
       ▫ These are the context states
       ▫ E.g.: days
     • Assumptions:
       ▫ Aggregated behavior in a given timeband is similar across seasons
       ▫ and different for different timebands
       ▫ E.g.: daily/weekly routines

       User | Item | Date       | Context
       -----|------|------------|--------
       1    | A    | 12/07/2010 | 1
       2    | B    | 15/07/2010 | 3
       1    | B    | 15/07/2010 | 3
       …    | …    | …          | …
       1    | A    | 19/07/2010 | 1
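The timestamp-to-timeband mapping above can be sketched in a few lines. The week-as-season, day-as-timeband granularity and the function name are illustrative assumptions, not the paper's exact configuration:

```python
from datetime import datetime

def timeband(ts: datetime, bands_per_day: int = 1) -> int:
    # Season = one week; the context state is the timeband inside it.
    # With bands_per_day=1 the state is simply the day of the week;
    # larger values split each day into equal-length intra-day bands.
    day = ts.weekday()                    # 0 = Monday ... 6 = Sunday
    band = ts.hour * bands_per_day // 24  # which intra-day band
    return day * bands_per_day + band

# Events one week apart on the same weekday share a context state:
timeband(datetime(2010, 7, 12, 9)) == timeband(datetime(2010, 7, 19, 17))  # True
```

This matches the table on the slide: 12/07/2010 and 19/07/2010 fall on the same weekday and therefore get the same context state.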
  8. Sequentiality
     • Bought A after B:
       ▫ B is the context state of the user's event on A
     • Usefulness:
       ▫ Some items are bought together
       ▫ Some items are bought repetitively
       ▫ Some are not
     • Association-rule-like information incorporated into the model as context
       ▫ Here: into factorization methods
     • Can learn negated rules
       ▫ If C then not A
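Deriving the sequential context state from an ordered event log can be sketched as follows; the function name and the handling of a user's first event are assumptions made for illustration:

```python
def sequential_context(events):
    # events: (user, item) pairs in chronological order.
    # The context state of each event is the item the same user
    # consumed previously; None marks "no previous item".
    last = {}
    contextualized = []
    for user, item in events:
        contextualized.append((user, item, last.get(user)))
        last[user] = item
    return contextualized

events = [(1, "B"), (1, "A"), (2, "B"), (1, "A")]
sequential_context(events)
# [(1, 'B', None), (1, 'A', 'B'), (2, 'B', None), (1, 'A', 'A')]
```

Note how a repeat purchase shows up as item A in context A, which is exactly the signal that lets the model tell repetitively bought product types apart from one-off ones.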
  9. iTALS – Model
     • Binary tensor
       ▫ D-dimensional
       ▫ User – item – context(s)
       ▫ (Slide figure: tensor T with factor matrices M1, M2, M3 for the user, item, and context dimensions)
     • Importance weights
       ▫ Lower weights on zeroes (NIF)
       ▫ Higher weights on cells with more events
     • Cells approximated by the sum of the values in the elementwise product of D vectors
       ▫ One vector per dimension
       ▫ Low-dimensional vectors
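The cell approximation described above (sum over the elementwise product of one feature vector per tensor dimension) can be written directly; the vector values below are arbitrary illustrations:

```python
import numpy as np

def cell_value(vectors):
    # Predicted value of one tensor cell: take one K-dimensional
    # feature vector per tensor dimension, multiply them elementwise,
    # and sum the K resulting values.
    prod = np.ones_like(vectors[0])
    for v in vectors:
        prod = prod * v
    return float(prod.sum())

user    = np.array([1.0, 2.0])
item    = np.array([3.0, 4.0])
context = np.array([1.0, 0.5])
cell_value([user, item, context])  # 1*3*1 + 2*4*0.5 = 7.0
```

With only two dimensions this reduces to the ordinary scalar product of matrix factorization; each extra context dimension just contributes one more factor to the elementwise product.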
 10. iTALS – Learning
     • (Equation slide: the ALS update formulas are given on the slide.)
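Since the learning slide is equation-only, here is a dense toy sketch of the alternating-least-squares structure it describes: with the item and context features fixed, each user vector is refit in closed form by importance-weighted ridge regression. The paper's actual algorithm reaches linear scaling in the non-zero entries through sparsity tricks this naive version omits; all names are illustrative:

```python
import numpy as np

def als_refit_users(T, W, U, I, C, reg=0.1):
    # One ALS half-step on a small dense 3-way tensor T with
    # importance weights W: item features I and context features C
    # stay fixed, and each user vector U[u] is recomputed as the
    # exact minimizer of the weighted squared error plus L2 penalty.
    K = U.shape[1]
    for u in range(T.shape[0]):
        A = reg * np.eye(K)
        b = np.zeros(K)
        for i in range(T.shape[1]):
            for c in range(T.shape[2]):
                x = I[i] * C[c]                 # elementwise product of fixed factors
                A += W[u, i, c] * np.outer(x, x)
                b += W[u, i, c] * T[u, i, c] * x
        U[u] = np.linalg.solve(A, b)
    return U
```

Each refit is the closed-form ridge solution, so the regularized objective never increases; iTALS cycles such steps over the user, item, and every context dimension in turn.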
 11. iTALS – Prediction
     • Sum of the values in the elementwise product of the vectors
       ▫ User-item: scalar product of the feature vectors
       ▫ User-item-context: weighted scalar product of the feature vectors
     • Context-state-dependent reweighting of features
     • E.g.:
       ▫ Third feature = horror movie
       ▫ Context state 1 = Friday night → third feature weighted high
       ▫ Context state 2 = Sunday afternoon → third feature weighted low
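The Friday-night example from the slide in code form; all feature values are made up for illustration:

```python
import numpy as np

user = np.array([0.2, 0.1, 0.9])   # third feature: affinity for horror
item = np.array([0.3, 0.2, 0.8])   # a horror movie
friday_night     = np.array([1.0, 1.0, 1.5])   # context boosts feature 3
sunday_afternoon = np.array([1.0, 1.0, 0.3])   # context dampens feature 3

# The context vector turns the plain user-item scalar product into a
# feature-wise reweighted one, so the same user-item pair scores
# differently in different context states.
score_friday = float((user * item * friday_night).sum())      # 1.16
score_sunday = float((user * item * sunday_afternoon).sum())  # 0.296
```

The same horror movie is thus ranked higher for the same user on Friday night than on Sunday afternoon, with no change to the user or item features themselves.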
 12. Experiments
     • 5 datasets
       ▫ 3 implicit
         - Online grocery shopping
         - VoD consumption
         - Music listening habits
       ▫ 2 “implicitized” explicit
         - Netflix
         - MovieLens 10M
     • Recall@20 as the primary evaluation metric
     • Baseline: a context-unaware method trained in every context state
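Recall@20 measures what fraction of a user's withheld test items appear in the top-20 recommendation list; a minimal sketch (function name assumed):

```python
def recall_at_k(recommended, relevant, k=20):
    # recommended: ranked list of item ids; relevant: the user's
    # held-out test items. Recall@k = (#test items in the top k) /
    # (#test items), typically averaged over all users or events.
    hits = len(set(recommended[:k]) & set(relevant))
    return hits / len(relevant)

recall_at_k(["a", "b", "c", "d"], relevant={"b", "x"}, k=3)  # 0.5: "b" hit, "x" missed
```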
 13. Scalability
     • Chart: running times (sec) on the Grocery dataset vs. number of factors (K = 10…100), comparing the epoch and matrix-computation times of iALS, iTALS (time bands), and iTALS (sequence).
 14. Results – Recall@20
     • Chart: Recall@20 on VoD, Grocery, LastFM, Netflix, and MovieLens for iALS (MF method), the iCA baseline, iTALS (seasonality), and iTALS (sequence).
 15. Results – Precision-recall curves
     • Charts: recall-precision curves on Grocery and VoD for iALS, the iCA baseline, iTALS (season), and iTALS (sequence).
 16. Summary
     • iTALS is a
       ▫ scalable
       ▫ context-aware
       ▫ factorization method
       ▫ for implicit feedback data
     • The problem is modeled
       ▫ by approximating the values of a binary tensor
       ▫ with the elementwise product of short vectors
       ▫ using importance weighting
     • Learning is done efficiently using
       ▫ ALS
       ▫ and other tricks that reduce computation time
     • Recommendation accuracy is significantly better than the iCA baseline
     • Introduced a new context type: sequentiality
       ▫ association-rule-like information in a factorization framework
 17. Thank you for your attention!