Incremental collaborative filtering via evolutionary co-clustering
A novel incremental CF method via co-clustering.

Presentation Transcript

  • INCREMENTAL COLLABORATIVE FILTERING VIA EVOLUTIONARY CO-CLUSTERING
    Authors / Mohammad Khoshneshin and W. Nick Street
    Source / RecSys '10
    Affiliation / University of Iowa
    Presenter / Allen Wu
  • OUTLINE
    • Introduction
    • Incremental CF
    • Incremental evolutionary co-clustering
    • Experimental results
    • Conclusion
  • INTRODUCTION (1/3)
    • Recommender systems suggest items of interest to users.
    • Collaborative filtering (CF) uses rating information to recommend items based on similarity.
      • Drawback: it is more appropriate for static settings.
    • With real-world data, new users and items should be incorporated into model recommendations in an online manner. Incremental CF handles this need.
  • INTRODUCTION (2/3)
    • A few published approaches to incremental CF:
      • Sarwar et al. proposed an online CF strategy using singular value decomposition (SVD).
      • Das et al. proposed scalable online CF using MinHash clustering, PLSI, and co-visitation counts.
      • In k-NN, similarity parameters such as correlation can be updated incrementally during the online phase.
      • George and Merugu used Bregman co-clustering as a scalable incremental CF approach for dynamic settings (ICDM '05).
  • INTRODUCTION (3/3)
    • This paper proposes an incremental CF method that is both scalable and accurate.
    • The main contributions of this paper:
      • An evolutionary Bregman co-clustering algorithm.
      • An ensemble strategy that gives better predictions.
  • INCREMENTAL CF (1/3)
    • In a CF problem, there are U users and V items.
    • Users have provided a number of explicit ratings for items; r_ui is the rating of user u for item i.
    • There are two phases in a CF algorithm:
      • Offline phase: training based on known ratings.
      • Online phase: unknown ratings are estimated using the output of the offline phase.
    • In incremental CF, data that arrives during the online phase is incorporated into future predictions.
  • BASELINE ALGORITHM
    • The simplest way to predict a rating is the global average r̄ of all ratings.
    • However, some users tend to rate higher and some items are more popular. Including the user bias and item bias, the prediction (Eq. 1) weights each bias by its support, as sketched below:
      • r̄_u: the average of ratings by user u.
      • r̄_i: the average of ratings for item i.
      • n_u: the number of ratings by user u.
      • n_i: the number of ratings for item i.
      • S_{n_u,w} and S_{n_i,w}: the support functions for user u and item i.
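The equation itself is an image in the original slides. A plausible form of Eq. (1), reconstructed from the definitions above under the assumption that S_{n,w} = n/(n+w) is a shrinkage support function (the paper's exact form may differ): r̂_ui = r̄ + S_{n_u,w}(r̄_u - r̄) + S_{n_i,w}(r̄_i - r̄). A minimal Python sketch under that assumption:

```python
# Baseline predictor sketch: global mean plus support-weighted user and item
# biases. The shrinkage support S(n, w) = n / (n + w) is an assumption; the
# paper's exact support function may differ.

def support(n, w=25.0):
    """Down-weight a bias backed by few ratings; approaches 1 as n grows."""
    return n / (n + w)

def baseline_predict(r_bar, r_u, n_u, r_i, n_i, w=25.0):
    """r-hat_ui = r_bar + S(n_u, w)(r_u - r_bar) + S(n_i, w)(r_i - r_bar)."""
    return (r_bar
            + support(n_u, w) * (r_u - r_bar)
            + support(n_i, w) * (r_i - r_bar))

# Example: a generous user (mean 4.2 over 30 ratings) and a popular item
# (mean 3.9 over 120 ratings) against a global mean of 3.5.
print(baseline_predict(3.5, 4.2, 30, 3.9, 120))
```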
  • INCREMENTAL CF VIA CO-CLUSTERING (ICDM '05) (1/2)
    • Clustering refers to partitioning similar objects into groups, while co-clustering partitions two different kinds of objects simultaneously.
    • As suggested in George's paper, the prediction (Eq. 2) combines the co-cluster block average with user and item biases, where:
      • k = ρ(u) is the user cluster assigned to user u.
      • l = γ(i) is the item cluster assigned to item i.
      • r̄_kl: the average of ratings belonging to users in user cluster k and items in item cluster l.
      • (r̄_u - r̄_k): the bias of user u.
      • (r̄_i - r̄_l): the bias of item i.
    A sketch follows.
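The prediction equation is an image in the original slides; from the terms listed above it matches the standard Bregman co-clustering predictor, r̂_ui = r̄_kl + (r̄_u - r̄_k) + (r̄_i - r̄_l). A minimal sketch:

```python
# Co-clustering predictor sketch: the co-cluster block average corrected by
# the user's and item's deviations from their cluster averages.

def cocluster_predict(r_kl, r_u, r_k, r_i, r_l):
    """r-hat_ui = r_kl + (r_u - r_k) + (r_i - r_l)."""
    return r_kl + (r_u - r_k) + (r_i - r_l)

# Example: block average 3.6; the user rates 0.4 above their cluster, the
# item is rated 0.2 below its cluster.
print(cocluster_predict(3.6, 4.0, 3.6, 3.4, 3.6))  # 3.8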
  • INCREMENTAL CF VIA CO-CLUSTERING (ICDM '05) (2/2)
    • George used the Bregman co-clustering algorithm, which alternates between two phases, updating user clusters and updating item clusters, to produce the co-clustering.
    • In the online phase, predictions use the same formula with incrementally updated parameters.
    • Incremental training is achieved by using new ratings to update the average parameters (r̄_kl, r̄_u, r̄_k, r̄_i, r̄_l); a running-mean sketch follows.
    • However, new users or items are not assigned to clusters during the online phase.
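Updating each stored average from a new rating is a constant-time running-mean update; a minimal sketch (the helper name is illustrative, not from the paper):

```python
# Incremental-training sketch: every average the model keeps (r_kl, r_u,
# r_k, r_i, r_l) can be refreshed in O(1) per new rating by storing a
# (mean, count) pair per parameter.

def update_mean(mean, count, new_rating):
    """Fold one new rating into a running average."""
    count += 1
    mean += (new_rating - mean) / count
    return mean, count

# Example: a user average of 3.8 over 10 ratings absorbs a new rating of 5.
print(update_mean(3.8, 10, 5.0))  # (3.909..., 11)
```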
  • INCREMENTAL EVOLUTIONARY CO-CLUSTERING (1/4)
    • If the support S_{v,w} (number of available ratings) for a user or item is low, the co-clustering approach will not provide good predictions for it.
    • As a strategy, users and items with low support are removed from the training phase, so training is both more effective and more efficient.
    • The drawbacks of Eq. (3):
      • It incorporates (r̄_kl, r̄_k, r̄_l) from a co-clustering solution that is not necessarily reliable (r̄_k and r̄_l are close to r̄).
      • Using only the block average r̄_kl for prediction ignores user and item bias, which results in poor accuracy as well.
  • INCREMENTAL EVOLUTIONARY CO-CLUSTERING (2/4)
    • The revised rating prediction models the rating with a co-clustering residual: r_ui = r̄_u + r̄_i - r̄ + ε_ui (Eq. 5).
      • Eq. (5) comes from Eq. (1) with the support functions set to 1.
      • ε_ui is the correction parameter for Eq. (1).
    • For a known rating, Eq. (5) can be rewritten as ε_ui = r_ui - (r̄_u + r̄_i - r̄), so ε_ui can be interpreted as the residual of the prediction via Eq. (1).
    • For implementing co-clustering, it is enough to work with the objective function Σ_{u,i} w_ui (ε_ui - ε̄_{ρ(u)γ(i)})², where:
      • w_ui is 1 if rating r_ui exists in the training data and 0 otherwise.
      • ε̄_{ρ(u)γ(i)}: the block average of residuals for user cluster ρ(u) and item cluster γ(i).
    A sketch of the residuals and this objective follows.
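A minimal sketch of the residual matrix and the objective above, with NumPy arrays standing in for the rating data (variable names are illustrative):

```python
# Residual co-clustering sketch: co-cluster the residuals
# eps_ui = r_ui - (r_u + r_i - r_bar) and score a solution by
# sum of w_ui * (eps_ui - block_mean)^2 over the observed cells.
import numpy as np

def residuals(R, mask, r_bar, r_user, r_item):
    """eps_ui for observed entries (mask is 1 where r_ui exists)."""
    return mask * (R - (r_user[:, None] + r_item[None, :] - r_bar))

def objective(eps, mask, rho, gamma, n_k, n_l):
    """Squared deviation of each observed residual from its block average."""
    total = 0.0
    for k in range(n_k):
        for l in range(n_l):
            block = (rho[:, None] == k) & (gamma[None, :] == l) & (mask == 1)
            if block.any():
                e = eps[block]
                total += float(((e - e.mean()) ** 2).sum())
    return total
```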
  • INCREMENTAL EVOLUTIONARY CO-CLUSTERING (3/4)
    • For an old user and old item, the prediction adds the block average of residuals to the baseline: r̂_ui = r̄_u + r̄_i - r̄ + ε̄_{ρ(u)γ(i)}; otherwise a reduced form of the prediction is used.
    • Ensembles improve the accuracy of a method by combining a group of predictors, while increasing the running time only linearly in the number of ensemble elements.
    • Let p denote a co-clustering solution and P the number of co-clustering solutions used in the model. The ensemble prediction averages over the P solutions (see the sketch below), where:
      • z_{ul}^p is the average error of prediction for user u and item cluster l in solution p.
      • z_{ik}^p is the average error of prediction for item i and user cluster k in solution p.
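A sketch of the old-user/old-item ensemble case, averaging the block residual correction over P solutions (this form is inferred from the slide's description rather than quoted from it):

```python
# Ensemble prediction sketch for an old user and old item:
# r-hat_ui = r_u + r_i - r_bar + (1/P) * sum_p block_residual_p(u, i)

def ensemble_predict(r_bar, r_u, r_i, block_residuals):
    """block_residuals: the block residual average of (u, i) in each of
    the P co-clustering solutions."""
    correction = sum(block_residuals) / len(block_residuals)
    return r_u + r_i - r_bar + correction

# Example with P = 3 co-clustering solutions.
print(ensemble_predict(3.5, 4.2, 3.9, [0.3, 0.1, -0.05]))
```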
  • INCREMENTAL EVOLUTIONARY CO-CLUSTERING (4/4)
    • In this paper's model, it is trivial to find an appropriate cluster for a new user or new item.
    • Let u be a new user who has provided some ratings.
    • If a sufficient number of the rated items exist in the current co-clustering solution (sub-matrix), the new user's cluster g can be found by matching the user's residual profile against each user cluster's (sketched below), where:
      • n_uh is the number of times user u has rated items belonging to item cluster h during the online phase.
      • ε̄_uh is the average of residuals for those ratings.
      • g: the chosen user cluster.
    • A similar procedure finds the cluster of a new item.
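A sketch of the assignment rule; the count-weighted squared-error criterion is an assumption consistent with the objective function above, not quoted from the slide:

```python
# New-user cluster assignment sketch: choose the user cluster g whose block
# residual averages best match the new user's per-item-cluster residuals,
# weighting each comparison by how many ratings support it.

def assign_user_cluster(n_uh, eps_uh, eps_gh):
    """n_uh[h]: the user's rating count in item cluster h; eps_uh[h]: the
    user's average residual there; eps_gh[g][h]: cluster g's block residual
    averages. Returns the best-matching user cluster index g."""
    def cost(g):
        return sum(n * (e - eps_gh[g][h]) ** 2
                   for h, (n, e) in enumerate(zip(n_uh, eps_uh)) if n > 0)
    return min(range(len(eps_gh)), key=cost)

# Example: the user's residuals track cluster 1 more closely than cluster 0.
print(assign_user_cluster([3, 0, 2], [0.4, 0.0, -0.1],
                          [[-0.2, 0.1, 0.3], [0.5, 0.0, -0.2]]))  # 1
```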
  • INCREMENTAL TRAINING ALGORITHM
    • numberIn() is the number of ratings a user u (item i) has inside the current co-clustering solution, defined by Σ_h n_uh (Σ_g n_ig); the information is trusted for incorporating a new user or new item only when this count is sufficient.
    • New users and items that cannot yet be assigned to a cluster receive no co-clustering prediction; they are predicted by Eq. (1). A sketch of this flow follows.
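A sketch of the routing logic, with an assumed trust threshold and illustrative model methods (none of these names come from the paper):

```python
# Incremental-training flow sketch: update averages on every new rating,
# assign a new user/item to a cluster only once it has enough ratings inside
# the co-clustering solution, and fall back to the Eq. (1) baseline otherwise.
MIN_SUPPORT = 5  # assumed threshold for trusting numberIn()

def handle_new_rating(model, u, i, r):
    model.update_averages(u, i, r)            # O(1) running-mean updates
    for obj, kind in ((u, "user"), (i, "item")):
        if model.cluster_of(obj, kind) is None \
                and model.number_in(obj, kind) >= MIN_SUPPORT:
            model.assign_cluster(obj, kind)   # e.g. assign_user_cluster above

def predict(model, u, i):
    if model.cluster_of(u, "user") is None or model.cluster_of(i, "item") is None:
        return model.baseline(u, i)           # Eq. (1) fallback
    return model.cocluster_predict(u, i)
```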
  • EVOLUTIONARY ALGORITHM
    • A population-based search approach: a group of co-clustering solutions is randomly generated and locally optimized via Bregman co-clustering.
    • Goal: find better solutions by combining the current solutions.
    • Every evolutionary algorithm has three main steps:
      • Selection
      • Crossover
      • Replacement (of the worst solution)
    A sketch of the loop follows.
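A sketch of one plausible realization of the loop (function names are placeholders; the paper's selection and replacement details may differ):

```python
# Evolutionary-loop sketch: pick two parents, cross them, locally optimize
# the child with Bregman co-clustering, and replace the worst member of the
# population if the child improves on it (lower objective is better).
import random

def evolve(population, objective, crossover, local_optimize, generations=50):
    for _ in range(generations):
        p1, p2 = random.sample(population, 2)       # selection
        child = local_optimize(crossover(p1, p2))   # crossover + local search
        worst = max(population, key=objective)
        if objective(child) < objective(worst):     # replacement
            population[population.index(worst)] = child
    return population
```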
  • CROSSOVER ALGORITHM
    • Let X be an N×K assignment matrix:
      • An element x_uk is 1 if object u is assigned to cluster k and 0 otherwise.
    • For a cluster q of one parent and a cluster r of the other, their intersection is the set of objects assigned to both.
    • Each cluster k is matched to the cluster of the other parent with which it has the largest intersection (sketched below).
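A sketch of the largest-intersection alignment; how the aligned parents are then merged into one child is not fully specified in the transcript, so the merge rule here (keep agreements, pick randomly on disagreements) is an assumption:

```python
# Crossover sketch: relabel parent 2's clusters so each maps to the parent-1
# cluster it overlaps most (the largest-intersection mapping), then combine
# the two aligned assignments into a child.
import random
from collections import Counter

def align_labels(assign1, assign2, n_clusters):
    """Relabel each object's parent-2 cluster r as the parent-1 cluster q
    maximizing the intersection size |cluster_r & cluster_q|."""
    overlap = Counter(zip(assign2, assign1))  # (r, q) -> intersection size
    return [max(range(n_clusters), key=lambda q: overlap[(r, q)])
            for r in assign2]

def crossover(assign1, assign2, n_clusters):
    aligned2 = align_labels(assign1, assign2, n_clusters)
    return [a if a == b else random.choice((a, b))
            for a, b in zip(assign1, aligned2)]

# Example: 4 objects, 2 clusters per parent.
print(crossover([0, 0, 1, 1], [1, 1, 1, 0], 2))
```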
  • ILLUSTRATION EXAMPLE
    [Figure: a worked crossover example showing the rating blocks of co-clustering solutions p = 1 through p = 4 produced by Bregman co-clustering, and the assignment matrices X1 and X2 combined by crossover via the largest-intersection mapping σ(k).]
  • EXPERIMENTAL RESULTS (1/3)
    • Dataset: the MovieLens dataset, consisting of 100,000 ratings (1-5) by 943 users on 1,682 movies.
    • Evaluation metric: Mean Absolute Error (MAE).
    • Comparison methods:
      • Baseline
      • COCL: George's method (ICDM '05)
      • ECOCL: evolutionary co-clustering without ensembles
      • ECOCLE: evolutionary co-clustering with ensembles
      • IKNN: incremental k-NN
      • SVD
  • EXPERIMENTAL RESULTS (2/3)
    • The experiments use 5-fold cross-validation to obtain the average MAE.
    • Incremental training was evaluated with three different split strategies.
    • "20%-80%" means 20% of the data was used for offline training and 80% for incremental training.
  • EXPERIMENTAL RESULTS (3/3)
    • The offline phase of ECOCLE needs more time due to the evolutionary algorithm.
    • Online time is the sum of incremental training and prediction.
    • ECOCL and IKNN have similar online speeds, while the accuracy of ECOCLE is much higher.
  • CONCLUSION
    • Online CF methods that can incorporate new data in real time are advantageous in many practical situations.
    • However, this problem has not been adequately addressed.
    • This paper extended the idea of CF via co-clustering to satisfy this need.
    • The empirical results showed that the proposed ECOCLE achieves very good accuracy compared to other incremental methods.
    • Training time was comparatively slow, but still manageable.
  • THANK YOU FOR LISTENING! Q & A