
Paper abstract: Albeit the implicit feedback based recommendation problem—when only the user history is available but there are no ratings—is the most typical setting in real-world applications, it is much less researched than the explicit feedback case. State-of-the-art algorithms that are efficient on the explicit case cannot be straightforwardly transformed to the implicit case if scalability should be maintained. There are few implicit feedback benchmark datasets, therefore new ideas are usually experimented on explicit benchmarks. In this paper, we propose a generic context-aware implicit feedback recommender algorithm, coined iTALS. iTALS applies a fast, ALS-based tensor factorization learning method that scales linearly with the number of non-zero elements in the tensor. The method also allows us to incorporate various contextual information into the model while maintaining its computational efficiency. We present two context-aware implementation variants of iTALS. The first incorporates seasonality and enables distinguishing user behavior in different time intervals. The other views the user history as sequential information and has the ability to recognize usage patterns typical to certain groups of items, e.g. to automatically tell apart product types that are typically purchased repetitively or once. Experiments performed on five implicit datasets (LastFM 1K, Grocery, VoD, and “implicitized” Netflix and MovieLens 10M) show that by integrating context-aware information with our factorization framework into the state-of-the-art implicit recommender algorithm the recommendation quality improves significantly.


- 1. iTALS: Fast ALS-based tensor factorization for context-aware recommendation from implicit feedback. Balázs Hidasi – balazs.hidasi@gravityrd.com; Domonkos Tikk – domonkos.tikk@gravityrd.com
- 2. Overview• Implicit feedback problem• Context-awareness ▫ Seasonality ▫ Sequentiality• iTALS ▫ Model ▫ Learning ▫ Prediction• Experiments
- 3. Feedback types• Feedback: user-item interactions (events)• Explicit: ▫ Preferences explicitly encoded ▫ E.g.: ratings• Implicit: ▫ Preferences not explicitly encoded ▫ E.g.: purchase history
- 4. Problems with implicit feedback• Noisy positive preferences ▫ E.g.: bought & disappointed• No negative feedback available ▫ E.g.: no information on an item• Usually evaluated by ranking metrics ▫ Cannot be directly optimized
- 5. Why use implicit feedback?• Every user provides it• Orders of magnitude more information than explicit feedback• More important in practice ▫ Explicit-feedback algorithms are viable only for the biggest services
- 6. Context-awareness• Context: any information associated with events• Context state: a possible value of the context dimension• Context-awareness ▫ Usage of context information ▫ Incorporates additional information into the method ▫ Different predictions for the same user under different context states ▫ Can greatly outperform context-unaware methods when the context segments items/users well
- 7. Seasonality as context• Season: a time period ▫ E.g.: a week• Timeband: a given interval within the season ▫ These are the context states ▫ E.g.: days• Assumption: ▫ aggregated behaviour in a given timeband is similar between seasons ▫ and differs across timebands ▫ E.g.: daily/weekly routines• Example events (User, Item, Date → Context): (1, A, 12/07/2010 → 1); (2, B, 15/07/2010 → 3); (1, B, 15/07/2010 → 3); …; (1, A, 19/07/2010 → 1)
- 8. Sequentiality• Bought A after B ▫ B is the context state of the user’s event on A• Usefulness ▫ Some items are bought together ▫ Some items are bought repetitively ▫ Some are not• Association-rule-like information incorporated into the model as context ▫ Here: into factorization methods• Can learn negated rules ▫ If C then not A
- 9. iTALS - Model• Binary tensor T ▫ D-dimensional ▫ User – item – context(s)• Importance weights ▫ Lower weights on zeroes (NIF) ▫ Higher weights on cells with more events• Cells approximated by the sum of the values in the elementwise product of D vectors ▫ One vector per dimension ▫ Low-dimensional vectors• (Figure: the tensor T and its factor matrices M1, M2, M3, one per dimension)
- 10. iTALS - Learning
- 11. iTALS - Prediction• Sum of the values in the elementwise product of vectors ▫ User-item: scalar product of feature vectors ▫ User-item-context: weighted scalar product of feature vectors• Context-state-dependent reweighting of features• E.g.: ▫ Third feature = horror movie ▫ Context state 1 = Friday night → third feature weighted high ▫ Context state 2 = Sunday afternoon → third feature weighted low
- 12. Experiments• 5 datasets ▫ 3 implicit: online grocery shopping, VoD consumption, music listening habits ▫ 2 implicitized explicit: Netflix, MovieLens 10M• Recall@20 as the primary evaluation metric• Baseline: a context-unaware method applied in every context state
- 13. Scalability• (Chart: running times on the Grocery dataset vs. the number of factors K = 10–100; per-epoch and matrix-computation times for iALS, iTALS (time bands), and iTALS (sequence); running time up to ~350 sec)
- 14. Results – Recall@20• (Chart: Recall@20 on VoD, Grocery, LastFM, Netflix, and MovieLens for iALS (MF method), the iCA baseline, iTALS (seasonality), and iTALS (sequence); values in the 0.00–0.12 range)
- 15. Results – Precision-recall curves• (Charts: recall-precision curves on Grocery and VoD for iALS, the iCA baseline, iTALS (season), and iTALS (sequence); recall axis 0.00–0.20)
- 16. Summary• iTALS is a ▫ scalable ▫ context-aware ▫ factorization method ▫ for implicit feedback data• The problem is modelled ▫ by approximating the values of a binary tensor ▫ with the elementwise product of short vectors ▫ using importance weighting• Learning can be done efficiently using ▫ ALS ▫ and other tricks that reduce computation time• Recommendation accuracy is significantly better than the iCA baseline• Introduced a new context type: sequentiality ▫ association-rule-like information in a factorization framework
- 17. Thank you for your attention!
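As a concrete illustration of slide 7, a one-week season with daily timebands can be derived directly from event timestamps. This is a minimal sketch; the 0-based, Monday-first numbering is an assumption and need not match the context IDs printed on the slide.

```python
from datetime import datetime

def timeband(ts: datetime) -> int:
    # One-week season split into daily timebands: the context state
    # is the day of the week (0 = Monday ... 6 = Sunday).
    return ts.weekday()

# Events in the style of slide 7: (user, item, date)
events = [
    (1, "A", datetime(2010, 7, 12)),
    (2, "B", datetime(2010, 7, 15)),
    (1, "B", datetime(2010, 7, 15)),
    (1, "A", datetime(2010, 7, 19)),
]
contextual = [(u, i, timeband(d)) for (u, i, d) in events]
```

Note how the two events on item A (both Mondays) land in the same timeband, matching the assumption that behaviour in a given timeband is similar between seasons.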
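The sequential context of slide 8 reduces to tagging each event with the same user's previous item. A minimal sketch, assuming the events are given in chronological order (the function name is illustrative):

```python
def sequential_context(events):
    # events: chronologically ordered (user, item) pairs.
    # The context state of each event is the user's previous item;
    # the user's first event gets None as its context state.
    last_item = {}
    out = []
    for user, item in events:
        out.append((user, item, last_item.get(user)))
        last_item[user] = item
    return out
```

A repeated purchase shows up as an item being its own context state, which is exactly the signal that lets the model tell repetitively bought product types from one-off ones.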
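Slide 9's cell approximation is the sum, over the K latent features, of the elementwise product of the D factor vectors, one per tensor dimension (user, item, context). A NumPy sketch (function name and shapes are illustrative):

```python
import numpy as np

def predict_cell(factor_vectors):
    # Approximate value of one tensor cell: take the D factor vectors
    # selected by the cell's indices (each of length K), multiply them
    # elementwise, and sum the K resulting values.
    prod = np.ones_like(factor_vectors[0], dtype=float)
    for v in factor_vectors:
        prod = prod * v
    return float(prod.sum())
```

For D = 2 this reduces to the ordinary scalar product of a user and an item feature vector; a third (context) vector acts as a per-feature reweighting, which is the mechanism slide 11 describes.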
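Slide 10's equations did not survive the export, but the ALS idea it builds on can be sketched for the two-dimensional (matrix, iALS-style) special case: with the item factors fixed, each user vector is the closed-form solution of an importance-weighted ridge regression. This is a simplification, not the paper's exact tensor update, and it omits the trick that makes the computation linear in the number of non-zero cells; names and the regularization value are illustrative.

```python
import numpy as np

def als_vector_update(Q, prefs, weights, reg=0.1):
    # Recompute one feature vector (e.g. one user's) with the other
    # dimension fixed, solving the importance-weighted ridge regression
    #   min_p  sum_i weights[i] * (prefs[i] - p . Q[i])^2 + reg * ||p||^2
    # Q: (n_items, K) item factors; prefs, weights: length-n_items vectors.
    K = Q.shape[1]
    A = (Q.T * weights) @ Q + reg * np.eye(K)
    b = (Q.T * weights) @ prefs
    return np.linalg.solve(A, b)
```

Cycling such updates over every vector of every dimension is one ALS epoch; the per-epoch times in the slide 13 chart correspond to sweeps of this kind.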
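Recall@20, the primary metric from slide 12, is simple to compute per user; the reported number is its average over users. A minimal sketch:

```python
def recall_at_k(ranked_items, relevant_items, k=20):
    # Fraction of the user's held-out relevant items that appear
    # among the top-k recommended items.
    hits = set(ranked_items[:k]) & set(relevant_items)
    return len(hits) / len(relevant_items)
```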
