- Building a Recommendation Engine - ... by NYC Predictive An... 64221 views
- Your own recommendation engine with... by Christophe Willemsen 359 views
- Building a Recommendation Engine wi... by Spring by Pivotal 2390 views
- Build Your Own Recommendation Engine by Sri Ambati 1470 views
- Recommendation Engine Demystified by DKALab 4514 views
- Buidling large scale recommendation... by Keeyong Han 11297 views

License: CC Attribution License


- 1. Recommendation Engine Powered by Hadoop
  Pranab Ghosh
  pkghosh@yahoo.com
  August 11th 2011 Meetup
- 2. About me
  Started with numerical computation on mainframes, followed by many years of C and C++ systems and real-time programming, then many years of Java, JEE, and enterprise apps.
  Worked for Oracle, HP, Yahoo, Motorola, and many startups and mid-size companies.
  Currently a Big Data consultant using Hadoop and other cloud-related technologies.
  Interested in distributed computation, Big Data, NoSQL databases, and data mining.
- 3. Hadoop
  The power of functional programming and parallel processing join hands to create Hadoop.
  Basically a parallel processing framework running on a cluster of commodity machines.
  Stateless functional programming: processing of each row of data does not depend on any other row or any state.
  Divide-and-conquer parallel processing: data gets partitioned, and each partition gets processed by a separate mapper or reducer task.
- 4. More About Hadoop
  Data locality, at least for the mapper: code gets shipped to where the data partition resides.
  Data is replicated, partitioned, and resides in the Hadoop Distributed File System (HDFS).
  Mapper output: {k -> v}. Reducer input: {k -> List(v)}. Reducer output: {k -> v}.
  Many-to-many shuffle between mapper output and reducer input; lots of network I/O.
  A simple paradigm, but it surprisingly solves an incredible array of problems.
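The map/shuffle/reduce flow described above can be sketched as a small in-memory simulation in plain Python (illustrative only, not the Hadoop API; the example data is made up):

```python
from collections import defaultdict

def run_map_reduce(rows, mapper, reducer):
    """Simulate map -> shuffle -> reduce on an in-memory list of rows."""
    shuffled = defaultdict(list)           # after the shuffle: {k -> List(v)}
    for row in rows:
        for k, v in mapper(row):           # mapper emits {k -> v} pairs
            shuffled[k].append(v)
    # reducer turns {k -> List(v)} back into {k -> v}
    return {k: reducer(k, vs) for k, vs in shuffled.items()}

# Example: count how many users rated each item
rows = [("u1", "i1", 4), ("u2", "i1", 5), ("u1", "i2", 3)]
counts = run_map_reduce(rows,
                        mapper=lambda r: [(r[1], 1)],
                        reducer=lambda k, vs: sum(vs))
# counts == {"i1": 2, "i2": 1}
```

In real Hadoop the shuffle moves data across the network between mapper and reducer tasks, which is the I/O cost the slide mentions.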
- 5. Recommendation Engine
  Does not require an introduction; you know it if you have visited Amazon or Netflix. We love it when they get it right, hate it otherwise.
  Very computationally intensive, ideal for Hadoop processing.
  In memory-based recommendation engines, the entire data set is used directly, e.g., collaborative filtering, content-based recommendation.
  In model-based recommendation, a model is built first by training on the data, and then predictions are made, e.g., Bayesian, clustering.
- 6. Content Based Recommendation
  A memory-based system, based purely on the attributes of an item.
  An item with p attributes is considered as a point in a p-dimensional space.
  Uses a nearest-neighbor approach: similar items are found using distance measurement in the p-dimensional space.
  Useful for addressing the cold start problem, i.e., a new item is introduced into the inventory.
  Computationally intensive; not very useful for real-time recommendation.
- 7. Model Based Recommendation
  Based on the traditional machine learning approach.
  In contrast to memory-based algorithms, creates a learning model using the ratings as training data.
  The model is built offline as a batch process and saved. The model needs to be rebuilt when significant change in the data is detected.
  Once the trained model is available, making recommendations is quick. Effective for real-time recommendation.
- 8. Collaborative Filtering
  In a collaborative filtering based recommendation engine, recommendations are based not only on the user's ratings but also on other users' ratings for the same item and some other items. Hence the name collaborative filtering.
  Requires social data, i.e., a user's interest level for an item. It could be explicit, e.g., a product rating, or implicit, based on the user's interaction and behavior on a site.
  A more appropriate name might be user-intent-based recommendation engine.
  Two approaches: in user-based, similar users are found first; in item-based, similar items are found first.
- 9. Item Based or User Based?
  Item-based CF is generally preferred. The similarity relationship between items is relatively static and stable, because items naturally map into genres.
  User-based CF is less preferred, because we humans are more complex than a laptop or smart phone (although some marketing folks may disagree). As we grow and go through life experiences, our interests change. Our similarity relationship with other humans, in terms of common interests, is more dynamic and changes over time.
- 10. Utility Matrix
  A matrix of users and items. Each cell contains a value indicative of the user's interest level for that item, e.g., a rating. The matrix is sparse.
  The purpose of the recommendation engine is to predict the values for the empty cells based on the available cell values.
  The denser the matrix, the better the quality of recommendation. But generally the matrix is sparse.
  If I have rated item A and I need recommendations, enough users must have rated A as well as other items.
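A sparse utility matrix like the one described can be sketched in Python as a dict of dicts, where missing cells are simply absent. The user/item ids and ratings below are made up for illustration:

```python
# Sparse utility matrix: {user_id -> {item_id -> rating}}; empty cells absent.
# Illustrative data only, not from the talk.
ratings = {
    "u1": {"i1": 4, "i2": 5, "i5": 3},
    "u2": {"i2": 2, "i4": 4, "i5": 5},
    "u3": {"i2": 5, "i4": 3},   # the (u3, i5) cell is empty -- to be predicted
}

def density(matrix, num_items):
    """Fraction of filled cells; denser matrices give better recommendations."""
    filled = sum(len(row) for row in matrix.values())
    return filled / (len(matrix) * num_items)
```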
- 11. Example Utility Matrix
- 12. Rating Prediction Example
  Let's say we are interested in predicting r35, i.e., the rating of item i5 for user u3.
  Item-based CF: r35 = (c52 x r32 + c54 x r34) / (c52 + c54), where items i2 and i4 are similar to i5.
  User-based CF: r35 = (c31 x r15 + c32 x r25) / (c31 + c32), where users u1 and u2 are similar to u3.
  cij = similarity coefficient between items i and j, or users i and j; rij = rating of item j by user i.
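The weighted-average formula above can be written as a small Python helper (function and variable names are illustrative):

```python
def predict_rating(neighbor_ratings, similarities):
    """Similarity-weighted average: sum(c * r) / sum(c)."""
    num = sum(c * r for c, r in zip(similarities, neighbor_ratings))
    den = sum(similarities)
    return num / den

# Item-based example from the slide: r35 = (c52*r32 + c54*r34) / (c52 + c54),
# with made-up values r32=4, r34=2, c52=0.8, c54=0.5
r35 = predict_rating([4, 2], [0.8, 0.5])
```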
- 13. Rating Estimation
  In the previous slide, we assumed rating data for each (item, user) pair was already available through some rating mechanism, a.k.a. explicit rating.
  However, a site may not have a product rating feature.
  Even if the rating feature is there, many users may not use it. Even if many users rate, explicit ratings by users tend to be biased.
  We need a way to estimate ratings based on user behavior on the site and some heuristics, a.k.a. implicit rating.
- 14. Heuristics for Rating: An Example
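The example table from this slide did not survive extraction. As a hedged sketch of what such a heuristic might look like, site events could be mapped to rating scores; the event names and scores below are assumptions, not the talk's actual table:

```python
# Hypothetical event-to-rating heuristic for implicit rating.
# These event names and scores are illustrative assumptions.
EVENT_RATING = {
    "viewed_item": 1,
    "added_to_wishlist": 2,
    "added_to_cart": 4,
    "purchased": 5,
}

def implicit_rating(events):
    """Estimate a rating from the strongest signal observed for a (user, item) pair."""
    return max((EVENT_RATING.get(e, 0) for e in events), default=0)
```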
- 15. Similarity Computation
  For item-based CF, the first step is finding similar items. For user-based CF, the first step is finding similar users.
  We will use the Pearson correlation coefficient. It indicates how well a set of data points lies along a straight line. In the 2-dimensional space of 2 items, the ratings of the 2 items by a user form a data point.
  There are other similarity measures, e.g., Euclidean distance, cosine distance.
- 16. Pearson Correlation Coefficient
  c(i,j) = cov(i,j) / (stddev(i) * stddev(j))
  cov(i,j) = sum((r(u,i) - av(r(i))) * (r(u,j) - av(r(j)))) / n
  stddev(i) = sqrt(sum((r(u,i) - av(r(i))) ** 2) / n)
  stddev(j) = sqrt(sum((r(u,j) - av(r(j))) ** 2) / n)
  The covariance can also be expressed in this alternative form, which we will be using: cov(i,j) = sum(r(u,i) * r(u,j)) / n - av(r(i)) * av(r(j))
  c(i,j) = Pearson correlation coefficient between products i and j
  cov(i,j) = covariance of ratings for products i and j
  stddev(i) = standard deviation of ratings for product i
  stddev(j) = standard deviation of ratings for product j
  r(u,i) = rating by user u for product i
  av(r(i)) = average rating for product i over all users that rated it
  sum = sum over all users
  n = number of data points
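The formulas above, using the alternative covariance form, translate directly into a small Python function (a sketch; it assumes both rating lists cover the same users in the same order):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation via cov = E[xy] - E[x]E[y], as on the slide."""
    n = len(x)
    avx, avy = sum(x) / n, sum(y) / n
    cov = sum(xi * yi for xi, yi in zip(x, y)) / n - avx * avy
    sdx = sqrt(sum((xi - avx) ** 2 for xi in x) / n)
    sdy = sqrt(sum((yi - avy) ** 2 for yi in y) / n)
    return cov / (sdx * sdy)
```

Perfectly linearly related ratings give +1 (same direction) or -1 (opposite direction).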
- 17. Map Reduce
  We are going to have 2 MR jobs working in tandem for item-based CF. Additional preprocessing MR jobs are also necessary to process click stream data.
  The first MR calculates the correlation for all item pairs, based on rating data. Essentially it finds similar items.
  The second MR takes the output of the first MR and the rating data for the user in question. The output is a list of items ranked by predicted rating.
- 18. Correlation Map Reduce
  It takes two kinds of input. The first kind has an item id pair and the mean and std dev values for the two items' ratings. This is generated by a preprocessor MR.
  The second kind has item ratings for all users. This is generated by another preprocessor MR analyzing click stream data. Each row is for one user, with a variable number of product ratings by that user.
- 19. Correlation Mapper Input
- 20. Correlation Mapper Output
  The mapper produces two kinds of output.
  The first kind contains {pid1,pid2,0 -> m1,s1,m2,s2}. It's the mean and std dev for a pid pair.
  The second kind contains {pid1,pid2,1 -> r1 x r2}. It's the product of the ratings for the pid pair for some user.
  We append 0 and 1 to the mapper output key for secondary sorting, which ensures that for a given pid pair, the reducer receives the value of the first kind of record followed by multiple values of the second kind of mapper output.
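The two output kinds can be sketched as a plain-Python simulation of the mapper (in-memory only, not Hadoop code; the record shapes follow the slide, the data layout is an assumption):

```python
def correlation_mapper(pair_stats, rating_rows):
    """Emit the two record kinds, with 0/1 appended to the key for secondary sort.

    pair_stats:  {(pid1, pid2) -> (m1, s1, m2, s2)} from the preprocessor MR
    rating_rows: one {pid -> rating} dict per user, from click stream analysis
    """
    out = []
    # Kind 0: precomputed mean and std dev for each item pair
    for (p1, p2), (m1, s1, m2, s2) in pair_stats.items():
        out.append(((p1, p2, 0), (m1, s1, m2, s2)))
    # Kind 1: rating product for each item pair occurring in a user's row
    for user_ratings in rating_rows:
        items = sorted(user_ratings)
        for i, p1 in enumerate(items):
            for p2 in items[i + 1:]:
                out.append(((p1, p2, 1), user_ratings[p1] * user_ratings[p2]))
    return out
```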
- 21. Correlation Mapper Output
- 22. Correlation Reducer
  Partitioner based on the first two tokens of the key (pid1,pid2), so that the values for the same pid pair go to the same reducer.
  Grouping comparator on the first two tokens of the key (pid1,pid2), so that all the mapper output for the same pid pair is treated as one group and passed to the reducer in one call.
  The reducer output is a pid pair and the corresponding correlation coefficient: {pid1,pid2 -> c12}.
  For a pid pair, the reducer has at its disposal all the data for the Pearson correlation computation.
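The reducer's arithmetic can be sketched in plain Python, assuming the secondary sort has placed the (mean, std dev) record first in the group, followed by the rating products:

```python
def correlation_reducer(pid_pair, grouped_values):
    """Compute Pearson correlation for one item pair.

    grouped_values[0]:  (m1, s1, m2, s2) -- the kind-0 record
    grouped_values[1:]: rating products r1*r2, one per user -- kind-1 records
    Uses c = (E[r1*r2] - m1*m2) / (s1*s2), the alternative covariance form.
    """
    m1, s1, m2, s2 = grouped_values[0]
    products = grouped_values[1:]
    mean_product = sum(products) / len(products)
    return pid_pair, (mean_product - m1 * m2) / (s1 * s2)
```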
- 23. Correlation Reducer Output
- 24. Prediction Map Reduce
  This is the second MR; it takes the item correlation data, which is the output of the first MR, and the rating data for the target user.
  We run this MR to make rating predictions and ultimately recommendations for a user. The user rating data is passed to Hadoop as so-called "side data".
  The mapper output consists of the pid of an item as the key, and as the value, the rating of the related item multiplied by the correlation coefficient, along with the correlation coefficient: {pid1 -> rating(pid3) x c13, c13}.
- 25. Prediction Mapper Input
- 26. Prediction Mapper Output
- 27. Prediction Reducer
  The reducer gets a pid as the key and a list of tuples as the value. Each tuple consists of the weighted rating of a related item and the corresponding correlation coefficient: {pid1 -> [(rating(pid3) x c31, c31), (rating(pid5) x c51, c51), ...]}.
  The reducer sums up the weighted ratings and divides the sum by the sum of the correlation values. This is the final predicted rating for the item.
  The reducer output is an item pid and the predicted rating for that item. All that remains is to sort the predicted ratings and use the top n items for making recommendations.
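The prediction reducer's arithmetic is a one-liner in plain Python (a sketch; each input tuple is (weighted rating, correlation), as on the slide):

```python
def prediction_reducer(pid, weighted_tuples):
    """Predicted rating = sum of weighted ratings / sum of correlations."""
    num = sum(wr for wr, _ in weighted_tuples)
    den = sum(c for _, c in weighted_tuples)
    return pid, num / den

# Illustrative values: related-item ratings 4 and 2, correlations 0.8 and 0.5
pid, predicted = prediction_reducer("i1", [(4 * 0.8, 0.8), (2 * 0.5, 0.5)])
```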
- 28. Realtime Prediction
  We would like to make a recommendation when there is a significant event, e.g., an item gets put in a shopping cart.
  But Hadoop is an offline batch processing system. How do we circumvent that? We have to do pre-computation and cache the results.
  There are 2 MR jobs: the Correlation MR to calculate item correlations and the Prediction MR to predict ratings.
  We should re-run the 2 MR jobs as necessary when significant change in user item ratings is detected.
- 29. Pre Computation
  As mentioned earlier, item correlation is relatively stable and only needs to be recomputed when there is significant change in the utility matrix.
  The Correlation MR for item similarity should be run only after significant overall change in the utility matrix has been detected since the last run.
  For a given user, which is basically a row in the utility matrix, if significant change is detected, e.g., a new rating by the user for a product is available, we should re-run the rating Prediction MR for that user.
- 30. Cold Start Problem
  How do we make recommendations when a new item is introduced into the inventory or a new user visits the site?
  For a new item, although we have no user interest data available, we can use content-based recommendation. Essentially, it's similarity computation based on the attributes of the item only.
  For a new user (cold user?) the problem is much harder, unless detailed user profile data is available.
- 31. Some Temporal Issues
  When does an item have enough rating data to be accurately recommendable? How to define the threshold?
  When are there enough user ratings to be able to get good recommendations? How to define the threshold?
  How to deal with old ratings, as user interest shifts with passing time?
  When is there enough data in the utility matrix to bootstrap the recommendation system?
- 32. Resources
  My 2-part blog post on this topic at http://pkghosh.wordpress.com
  "Programming Collective Intelligence" by Toby Segaran, O'Reilly
  "Mining of Massive Datasets" by Anand Rajaraman and Jeffrey Ullman
