Recommender Systems: The Art and Science of Matching Items to Users - a LinkedIn open data talk by Deepak Agarwal from Yahoo! Research

Abstract: Algorithmically matching items to users in a given context is essential for the success and profitability of large-scale recommender systems such as content optimization, computational advertising, search, shopping, movie recommendation, and many more. The objective is to maximize some utility of interest (e.g., total revenue, total engagement) over a long time horizon. This is a bandit problem, since there is positive utility in displaying items that may have a low mean but high variance. A key challenge in such bandit problems is the curse of dimensionality. Bandit problems are also difficult when responses are observed with considerable delay (e.g., return visits, confirmation of a purchase). One approach is to optimize multiple competing objectives in the short term to achieve the best long-term performance. For instance, in serving content to users on a website, one may want to optimize some combination of clicks and downstream advertising revenue in the short term to maximize revenue in the long run. In this talk, I will discuss some of the technical challenges by focusing on a concrete application: content optimization on the Yahoo! front page. I will also briefly discuss response prediction techniques for serving ads on the RightMedia Ad Exchange.

Bio: Deepak Agarwal is a statistician at Yahoo! who is interested in developing statistical and machine learning methods to enhance the performance of large-scale recommender systems. Deepak and his collaborators significantly improved article recommendation on several Yahoo! websites, most notably on the Yahoo! front page (a 200+% improvement in click rates). He also works closely with teams in computational advertising to deploy elaborate statistical models on the RightMedia Ad Exchange, yet another large-scale recommender system. He currently serves as associate editor for the Journal of the American Statistical Association (JASA) and the IEEE Transactions on Knowledge and Data Engineering (TKDE).

Slide note: Focus on the Today module. It publishes trendy, eclectic articles on a broad range of topics including sports, finance, and entertainment. For each visit, select 4 articles to display from an inventory of K. Hundreds of millions of visits per day, ~600M visitors per month.
1. Recommender Systems: The Art and Science of Matching Items to Users
   Deepak Agarwal, dagarwal@yahoo-inc.com
   LinkedIn, 7th July, 2011
2. Recommender Systems
   Serve the "right" item to users in an automated fashion to optimize long-term business objectives.
3. Content Optimization: Match articles to users
4. Advertising: Recommend Ads on Pages
   Display/graphical ads; contextual advertising.
5. Shopping: Recommend Related Items to Buy
6. Recommend Movies
7. Recommend People
8. Problem Definition
   Item inventory: articles, web pages, ads, …
   Example applications: content, movies, advertising, shopping, …
   Context: page, previous item viewed, …
   - Construct an automated algorithm to select item(s) to show to the user.
   - Get feedback (click, time spent, rating, buy, …).
   - Refine the parameters of the algorithm.
   - Repeat (a large number of times).
   - Optimize metric(s) of interest (total clicks, total revenue, …).
   The marginal cost per serve is low, so efficient and intelligent systems can provide significant improvements.
9. Data Mining -> Clever Algorithms
   So much data: is it enough to process it all, and process it fast?
   Ideally, we want to learn every user-item interaction, but the number of things to learn increases with data size, and the dynamic nature of the problem exacerbates this; we want to learn things quickly in order to react fast.
10. Simple Approach: Segment Users/Items
    Estimate the CTR of items in each user segment: CTR_ij = clicks_ij / views_ij for user segment i and item (segment) j.
    Serve the most popular item in each segment.
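A minimal sketch of this segment-level approach, assuming precollected click/view counts; the data structures and names are illustrative, not from the talk:

```python
from collections import defaultdict

counts = defaultdict(lambda: [0, 0])  # (segment, item) -> [clicks, views]

def record(segment, item, clicked):
    c = counts[(segment, item)]
    c[0] += int(clicked)
    c[1] += 1

def ctr(segment, item):
    clicks, views = counts[(segment, item)]
    return clicks / views if views else 0.0

def serve(segment, items):
    # Serve the most popular item within the user's segment.
    return max(items, key=lambda j: ctr(segment, j))
```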
11. Example Application: Yahoo! front page
    Recommend the most popular article in slot F1 of the Today module (out of 30-40 editorially programmed articles; slots F1-F4 link to news).
    Data can be collected every 5 minutes.
    Should be simple, just count clicks and views, right? Not quite!
12. Simple algorithm we began with
    - Initialize the CTR of every new article to some high number; this ensures a new article has a chance of being shown.
    - Show the highest-CTR article (breaking ties randomly) for each user visit in the next 5 minutes.
    - Recompute the global article CTRs after 5 minutes and show the new most popular article for the next 5 minutes.
    - Keep updating article popularity over time.
    Quite intuitive. It did not work; performance was bad. Why?
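The naive loop above, as a sketch. The optimistic initial CTR guarantees every new article gets shown; the 5-minute scheduling is left out, and all constants are illustrative assumptions:

```python
import random

OPTIMISTIC_CTR = 1.0  # new articles start "most popular" so they get a chance

class NaiveMostPopular:
    def __init__(self, articles):
        self.stats = {a: [0, 0] for a in articles}  # article -> [clicks, views]
        self.leader = self.recompute()

    def ctr(self, a):
        clicks, views = self.stats[a]
        return clicks / views if views else OPTIMISTIC_CTR

    def recompute(self):
        # Run every 5 minutes: cumulative CTRs, ties broken at random.
        best = max(self.ctr(a) for a in self.stats)
        self.leader = random.choice([a for a in self.stats if self.ctr(a) == best])
        return self.leader

    def record(self, a, clicked):
        self.stats[a][0] += int(clicked)
        self.stats[a][1] += 1
```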
13. Bias in the data: article CTR decays over time
    This is what an article CTR curve looked like. We were computing CTR by accumulating clicks and views.
    Were we missing the decay dynamics? We fit a dynamic growth model using a Kalman filter. The new model tracked the decay very well, but performance was still bad.
    And the plot thickens, my dear Watson!
14. Explanation of decay: repeat exposure
    User fatigue -> CTR decay.
15. Clues to solve the mystery
    Users seeing an article for the first time have a higher CTR; those already exposed have a lower one, yet we were using the same CTR estimate for all. Other sources of bias? How do we adjust for them?
    A simple idea to remove bias: display articles at random to a small, randomly chosen population; call this the random bucket. Randomization removes bias in the data (Charles Peirce, 1877; R. A. Fisher, 1935).
    Some other observations:
    - Sticking with one article for a complete 5 minutes was degrading performance; many bad articles were displayed too many times.
    - Reaction time to display good articles was slower.
16. CTR of the same article with/without randomization
    (Figure: the serving-bucket curve shows decay; the random-bucket curve mainly shows time-of-day variation.)
17. CTR of articles in the random bucket
    (Figure: tracked CTR curves.) The CTR is unbiased, but it is dynamic; simply counting clicks and views still wouldn't work well.
18. New algorithm
    - Create a small random bucket that selects one of the K existing articles uniformly at random for each user visit.
    - Learn unbiased article popularity from the random-bucket data by tracking it through a non-linear Kalman filter.
    - Serve the most popular article in the serving bucket.
    - Override rules: don't show an article to a user after a few previous exposures; other rules (diversity, voice), …
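A simplified sketch of the random-bucket design. The production system tracks CTR with a non-linear Kalman filter; the exponentially weighted average below is a stand-in to show the data flow, not the actual model, and the bucket fraction and decay constants are assumed:

```python
import random

class RandomBucketServer:
    def __init__(self, articles, random_frac=0.05, decay=0.9):
        self.articles = list(articles)
        self.random_frac = random_frac   # fraction of traffic randomized (assumed)
        self.decay = decay
        self.ctr = {a: 0.0 for a in articles}  # tracked, unbiased CTR estimates

    def select(self):
        if random.random() < self.random_frac:
            return random.choice(self.articles), True   # random bucket
        best = max(self.ctr.values())
        return random.choice([a for a in self.articles
                              if self.ctr[a] == best]), False  # serving bucket

    def feedback(self, article, clicked, from_random_bucket):
        # Only random-bucket data updates the tracker, keeping it unbiased.
        if from_random_bucket:
            self.ctr[article] = (self.decay * self.ctr[article]
                                 + (1 - self.decay) * float(clicked))
```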
19. Other advantages
    The random bucket ensures a continuous flow of data for all articles; we quickly discard bad articles and converge to the best one.
    This saved the day; the project was a success! The initial click lift was 40% (Agarwal et al., NIPS 2008); after 3 years it is 200+% (fully deployed on the Yahoo! front page and elsewhere on Yahoo!), and we are still improving the system.
20. More Details
    Agarwal, Chen, Elango, Ramakrishnan, Motgi, Roy, Zachariah. Online Models for Content Optimization, NIPS 2008.
    Agarwal, Chen, Elango. Spatio-Temporal Models for Estimating Click-through Rate, WWW 2009.
21. Lessons learnt
    It is OK to start with simple models that learn a few things, but beware of the biases inherent in your data.
    Example of things gone wrong when learning article popularity: data collected from 5am-8am PST was used to serve from 10am-1pm PST, a bad idea if an article is popular on the East Coast but not on the West.
    Randomization is a friend; use it when you can. Update the models fast: user visit patterns close in time are similar, so fast updates may reduce the bias.
    What if we can't afford complete randomization? Learn how to gamble.
22. Why learn how to gamble?
    Consider a slot machine with two arms with unknown payoff probabilities p1 > p2.
    The gambler has 1000 plays; what is the best way to experiment, to maximize total expected reward?
    This is called the "bandit" problem and has been studied for a long time. Optimal solution: play the arm that has the maximum potential of being good.
23. Recommender Problems: Bandits?
    Two items: item 1 CTR = 2/100; item 2 CTR = 250/10000.
    Greedy: show item 2 to everyone. Not a good idea: the item 1 CTR estimate is noisy (its posterior density is much wider than item 2's), so item 1 could potentially be better. Invest in item 1 for better overall performance on average.
    This is also referred to as the explore/exploit problem: exploit what is known to be good, explore what is potentially good.
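The slide's point in code. This is Thompson sampling, one standard explore/exploit scheme, not the Bayes-optimal allocation derived later in the talk: each item keeps a Beta posterior over its CTR, and we serve the argmax of one posterior draw, so the noisy 2/100 item still wins some traffic against the settled 250/10000 item:

```python
import random

posteriors = {"item1": (2, 98), "item2": (250, 9750)}  # (clicks, non-clicks)

def thompson_select():
    draws = {item: random.betavariate(a + 1, b + 1)  # Beta(1, 1) prior assumed
             for item, (a, b) in posteriors.items()}
    return max(draws, key=draws.get)

def update(item, clicked):
    a, b = posteriors[item]
    posteriors[item] = (a + clicked, b + (1 - clicked))
```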
24. Bayes optimal solution for the next 5 minutes: 2 articles, 1 uncertain
    (Figure: optimal allocation of traffic to the uncertain article as a function of the uncertainty in its CTR, measured in pseudo #views.)
25. More Details on the Bayes Optimal Solution
    Agarwal, Chen, Elango. Explore/Exploit Schemes for Web Content Optimization, ICDM 2009 (Best Research Paper Award).
26. Recommender Problems: bandits in a casino
    Items are arms of bandits; ratings/CTRs are unknown payoffs. The goal is to converge to the best-CTR item quickly.
    But this assumes one size fits all (no personalization). With personalization, each user is a separate bandit: hundreds of millions of bandits, a huge casino.
    There is a rich literature (several tutorials on the topic); broadly, the answer is clever/adaptive randomization. Our random bucket is one solution, and often a good one in practice.
27. Back to the number of things to learn (curse of dimensionality)
    Pros of learning things at granular resolutions:
    - Better estimates of affinities at the event level (e.g., "ad 77 has high CTR on publisher 88" instead of "ad 77 has good CTR on sports publishers").
    - Bias becomes less problematic: the finer we chop, the less prone we are to aggregating dissimilar things, and the less biased our estimates.
    Challenges:
    - Too much sparsity to learn everything at granular resolutions; we don't have that much traffic (e.g., many ads are never shown on many publishers).
    - Explore/exploit helps, but we cannot do that much experimentation.
    - In advertising, response rates (conversion, click) are very low, which further exacerbates the problem.
28. Solution: Go granular, but with back-off
    With too little data at the granular level, we need to borrow from coarse resolutions with abundant data (smoothing, shrinkage). Example with two hierarchies over the leaf cell, combined as a weighted sum (see the sketch below):
    CTR(1) = w1 (0/5) + w11 (2/200) + w12 (40/1000) + w111 (400/10000) + w121 (200/5000)
    where node 1 = (pub-id=88, ad-id=77, zip=Palo Alto): 0/5; node 11 = Palo Alto: 2/200; node 12 = (pub-id=88, adv-id=9): 40/1000; node 111 = Bay Area: 400/10000; node 121 = adv-id=9: 200/5000.
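The back-off estimate from this slide as a sketch. The weights would be learned (see the following slides); here they are illustrative constants that sum to 1:

```python
def backoff_ctr(path, weights):
    """path: list of (clicks, views) from leaf to root; weights sum to 1."""
    return sum(w * (c / v) for w, (c, v) in zip(weights, path) if v > 0)

# Leaf (pub-id=88, ad-id=77, zip=Palo Alto) up through its two hierarchies:
path = [(0, 5), (2, 200), (40, 1000), (400, 10000), (200, 5000)]
weights = [0.1, 0.2, 0.2, 0.25, 0.25]  # assumed values, not learned here
print(backoff_ctr(path, weights))
```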
29. Sometimes there is plenty of data at the granular level
    Then there is no need to back off: e.g., for node 1 = (pub-id=88, ad-id=80, zip=Arizona), CTR(1) = 100/50000.
30. How much to borrow from ancestors?
    Learning the weights when there is little data depends on the heterogeneity in the CTRs of the small cells: ancestors whose child nodes have similar CTRs are more credible. E.g., if all zip codes in the Bay Area (Palo Alto, Mtn View, Los Gatos, …) have similar CTRs, more weight is given to the Bay Area node.
    Pool similar cells, separate dissimilar ones.
31. Crucial issue
    We must obtain grouping structures to perform effective back-off. BUT how do we detect such groupings when dealing with high-dimensional data, with billions or trillions of possible attribute combinations?
    Statistical modeling to the rescue. This is art and science and requires experience; it is important to understand the business, the problem, and the data.
32. How do we estimate heterogeneity for a group?
    Simple example: CTR of an ad in different zip codes, with counts (s_i, t_i), i = 1, …, K, and empirical CTRs emCTR_i = s_i / t_i.
    Is Var(emCTR_i) a good measure of heterogeneity? Not quite: the empirical estimates are poor for small t_i and/or s_i. Use a model instead; the variance among the true CTRs can be estimated in a better way using MLE/MOM (Agarwal & Chen, Latent OLAP, SIGMOD 2011). A simplified sketch follows.
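One way to de-noise the empirical variance, as a method-of-moments sketch. The talk cites MLE/MOM estimators; this simplified version assumes s_i ~ Binomial(t_i, p_i) and subtracts an estimate of the binomial sampling noise from the raw variance to recover Var(p_i):

```python
def mom_heterogeneity(counts):
    """counts: list of (s_i, t_i) per zip code; returns an estimate of Var(p_i)."""
    rates = [s / t for s, t in counts]
    mu = sum(rates) / len(rates)
    raw_var = sum((r - mu) ** 2 for r in rates) / (len(rates) - 1)
    # Average binomial sampling noise p(1-p)/t, estimated with the plug-in rates.
    noise = sum((s / t) * (1 - s / t) / t for s, t in counts) / len(counts)
    return max(raw_var - noise, 0.0)

print(mom_heterogeneity([(2, 200), (5, 300), (1, 150), (8, 400)]))
```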
33. Two examples of learning granular models with back-off
34. Online Advertising: Matching ads to opportunities
    (Diagram: a publisher page visited by a user generates an opportunity; the ad network picks the best ads from the advertisers' inventory.)
    Examples: Yahoo!, Google, MSN, ad exchanges (networks of "networks"), …
35. How to select the "best" ads
    (Diagram: the ad network combines advertisers' bids with response rates (click, conversion, ad-view) estimated by a statistical model, and the auction selects argmax f(bid, rate).)
36. The Ad Exchange: Unified Marketplace
    (Diagram: a publisher with an ad impression to sell runs an auction; networks such as Ad.com and AdSense bid $0.50 and $0.60; a $0.75 bid via another network becomes a $0.45 bid after fees; the $0.65 bid wins.)
    Transparency and value.
37. Advertising example
    In f(bid, rate), the rate is unknown and needs to be estimated. Goal: maximize revenue and advertiser ROI.
    This is high-dimensional rate estimation: the response arises through interactions among a few heavy-tailed categorical variables (publisher, user, and ad), whose numbers of levels can be in the millions and change over time.
38. Data
    Features are available for both the opportunity and the ad:
    - Publisher: publisher content type
    - User: demographics, geo, …
    - Ad: industry, text/video, text (if any)
    Features are hierarchically organized:
    - Publisher hierarchy: URL -> domain -> publisher type
    - Geo hierarchy for users
    - Ad hierarchy: ad -> campaign -> advertiser
    Past empirical analysis (Agarwal et al., KDD 2007) showed that hierarchical grouping provides homogeneity in rates. Here, the groupings are available through domain knowledge.
39. Model Setup
    p_iuj = B(x_i, x_u, x_j) * λ_ij, where B is a baseline over the publisher, user, and ad features, and λ_ij is the residual for the (publisher i, ad j) cell.
    E_ij = Σ_u B(x_i, x_u, x_j) (expected successes), and S_ij ~ Poisson(E_ij λ_ij).
    The MLE, S_ij / E_ij, does not work well.
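A sketch of why the raw S/E ratio is poor and what smoothing buys. The actual model shares parameters along the hierarchies (next slides); the standalone Gamma prior here is a simplifying assumption that shrinks the ratio toward a prior mean:

```python
def smoothed_rate(S, E, a=1.0, b=1.0):
    """Posterior mean of lambda under lambda ~ Gamma(a, b), S ~ Poisson(E * lambda)."""
    return (a + S) / (b + E)

print(smoothed_rate(S=0, E=3.2))   # rare cell: pulled toward the prior mean a/b = 1
print(smoothed_rate(S=90, E=100))  # data-rich cell: close to the MLE 0.9
```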
40. Hierarchical smoothing of residuals
    (Diagram, assuming two hierarchies: publisher (pub-class -> pub-id) and advertiser (advertiser -> campaign -> conv-id/ad-id); each cell (i, j) at the intersection carries (S_ij, E_ij, λ_ij).)
41. Back-off Model
    Back-off is achieved through parameter sharing: a cell (i, j) with data (S_ij, E_ij, λ_ij) has neighbors along both hierarchies (in the diagram, 7 neighbors: 3 blue nodes from one hierarchy, 4 green nodes from the other), and the blue and green ancestor nodes are shared as neighbors by several red cells.
42. Ad Exchange (RightMedia)
    Advertisers participate in different ways: CPM (pay per ad-view), CPC (pay per click), CPA (pay per conversion).
    To conduct an auction, normalize across pricing types by computing the eCPM (expected CPM):
    - Click-based eCPM = click-rate × CPC
    - Conversion-based eCPM = conv-rate × CPA
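The normalization on this slide, directly as code. The rates come from the response-rate models; the per-1000-impressions scaling and the restriction to these three pricing types are assumptions of this sketch:

```python
def ecpm(pricing_type, bid, click_rate=0.0, conv_rate=0.0):
    """Expected revenue per 1000 ad-views, used to rank bids in the auction."""
    if pricing_type == "CPM":
        return bid                       # already priced per 1000 ad-views
    if pricing_type == "CPC":
        return click_rate * bid * 1000   # click-based eCPM
    if pricing_type == "CPA":
        return conv_rate * bid * 1000    # conversion-based eCPM
    raise ValueError(pricing_type)

bids = [("CPM", ecpm("CPM", 0.60)),
        ("CPC", ecpm("CPC", 0.50, click_rate=0.002)),
        ("CPA", ecpm("CPA", 5.00, conv_rate=0.0002))]
print(max(bids, key=lambda b: b[1]))  # winner of the normalized auction
```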
43. Data
    Two kinds of conversion rates:
    - Post-click conv-rate = click-rate × conversions/click
    - Post-view conv-rate = conversions/ad-view
    Three response-rate models: click rate (CLICK), conversions/click (PCC), and post-view conversions/view (PVC).
44. Datasets: RightMedia
    - CLICK: ~90B training events, ~100M parameters
    - Post-Click Conversion (PCC): ~0.5B training events, ~81M parameters
    - Post-View Conversion (PVC): ~7B events, ~6M parameters (the cookie is augmented with a pixel that triggers a conversion when the user visits the landing page)
    Features: age, gender, ad size, pub class, user fatigue; two hierarchies (publisher and advertiser).
    Two baselines: pub-id × ad-id [FINE] (no hierarchical information) and pub-id × advertiser [COARSE] (collapsed cells).
45. Accuracy: average test log-likelihood (figure)
46. More Details
    Agarwal, Kota, Agrawal, Khanna. Estimating Rates of Rare Events with Multiple Hierarchies through Scalable Log-linear Models, KDD 2010.
47. Back to the Yahoo! front page
    Recommend articles (image; title and summary; links to other pages). For each user visit, pick 4 out of a pool of K. This routes traffic to other pages.
48. Data
    User i, with user features x_i (demographics, browse history, search history, …), visits; the algorithm selects article j, with item features x_j (keywords, content categories, …); we observe the response y_ij for the pair (i, j) (a rating, or click/no-click).
49. Bipartite graph completion problem
    (Diagram: from the observed user-article graph of clicks and no-clicks, predict the complete CTR graph.)
50. Factor model to estimate CTR at granular levels
    (Diagram: user i with latent factor u_i and a user-popularity term; article j with latent factor v_j and an item-popularity term.)
51. Estimating granular latent factors via back-off
    If a user/item has high degree, good estimates of its factors are available; otherwise we need back-off. Back-off: we predict user/item factors from features through regressions. For example, for a user with age=old, geo=Mtn-View, interest=Ski:
    u_ik = G_1k 1(age_i = old) + G_2k 1(geo_i = Mtn-View) + G_3k 1(int_i = Ski)
    This captures the weights of 2^3 = 8 different fallbacks using only 3 parameters.
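A sketch of this regression back-off for a user factor: the cold-start factor is a sum of learned per-feature weights G over the user's binary indicators. The feature names come from the slide; the matrix G and the factor dimension are illustrative assumptions (G would be learned, not random):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 4))  # 3 binary features x 4 latent dimensions (assumed)

def user_factor(x_user):
    """x_user: binary indicators [age=old, geo=Mtn-View, interest=Ski]."""
    return x_user @ G  # cold-start estimate of u_i from features alone

u_i = user_factor(np.array([1, 1, 1]))
v_j = rng.standard_normal(4)   # item factor (assumed known here)
score = float(u_i @ v_j)       # predicted affinity feeds the CTR model
```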
52. Estimates with back-off
    For a new user/article, the factor estimates are based on features alone. For an old user/article, the factor estimates are a linear combination of the regression prediction and the user's "ratings".
53. Estimating the back-off regression function
    Maximize the marginal likelihood. The integral cannot be computed in closed form; it is approximated by Monte Carlo using Gibbs sampling.
54. Data Example
    2M binary observations by 30K heavy users on 4K articles (a heavy user made at least 30 visits to the portal in the last 5 months).
    - Article features: editorially labeled category information (~50 binary features)
    - User features: demographics, browse behavior (~1K features)
    - Training/test split by event timestamp (75/25)
    Methods compared:
    - Factor model with regression, no online updates
    - Factor model with regression + online updates
    - Online model based on user-user similarity (Online-UU)
    - Online probabilistic latent semantic indexing (Online-PLSI)
55. ROC curves (figure, comparing the factor model with regression + online updates against the regression-only factor model)
56. More Details
    Agarwal and Chen. Regression-Based Latent Factor Models, KDD 2009.
57. Computation
    Both models run on Hadoop and scale to large datasets. For the factor models, we are also working on online EM (collaboration with Andrew Cron, Duke University).
58. Multi-Objectives: Beyond Clicks
59. Post-click utilities
    (Diagram: clicks on front-page links influence the downstream supply distribution across properties such as News, Sports, Finance, and OMG, and hence editorial content, the ad server (premium/guaranteed display and network-plus/non-guaranteed inventory), and downstream engagement such as time spent.)
60. Serving content on the front page: click shaping
    What do we want to optimize? The usual answer: maximize clicks (which maximizes downstream supply from the front page). But consider the following:
    - Article 1: CTR = 5%, utility per click = 5
    - Article 2: CTR = 4.9%, utility per click = 10
    By promoting article 2, we lose 1 click per 1000 visits but gain 240 utils (490 vs. 250 utils per 1000 visits). If we do this over a large number of visits, we lose some clicks but obtain significant gains in utility, e.g., lose 5% relative CTR, gain 20% in utility (revenue, engagement, etc.).
61. How are clicks being shaped?
    (Figure: the supply distribution across properties before and after click shaping.)
    Shaping can happen with respect to multiple downstream metrics (engagement, revenue, …).
62. Multi-Objective Optimization
    Setup: n articles, K properties (news, finance, omg, …), m user segments (S1, …, Sm). The decision variables x_ij allocate segment i's traffic to article j (A1, …, An); the CTR of user segment i on article j, p_ij, and the time duration of i on j, d_ij, are known.
63-65. Multi-Objective Program
    - Scalarization
    - Goal programming
    - Pareto-optimal solutions (more in KDD 2011)
    A scalarization sketch follows.
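A minimal scalarization sketch using scipy.optimize.linprog: x[i, j] is the fraction of segment i's traffic sent to article j, and a single weight alpha trades expected clicks against expected time spent. The problem sizes, random inputs, and constraint set (only "each segment's shares sum to 1") are illustrative; the talk's full program has property-level constraints:

```python
import numpy as np
from scipy.optimize import linprog

m, n, alpha = 3, 4, 0.7                      # segments, articles, trade-off weight
rng = np.random.default_rng(0)
p = rng.random((m, n)) * 0.05                # CTR p_ij (assumed known)
d = rng.random((m, n)) * 100                 # time duration d_ij (assumed known)
score = alpha * p + (1 - alpha) * p * d      # scalarized clicks + time per visit

c = -score.flatten()                         # linprog minimizes, so negate
A_eq = np.kron(np.eye(m), np.ones((1, n)))   # each segment's shares sum to 1
b_eq = np.ones(m)
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * (m * n))
x = res.x.reshape(m, n)                      # serving plan per segment
print(x)
```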
66. More Details
    Agarwal, Chen, Elango, Wang. Click Shaping to Optimize Multiple Objectives, KDD 2011 (forthcoming).
67. Can we do it with advertising revenue?
    Yes, but we need to be careful: interventions can cause undesirable long-term impact, and they require communication between two complex distributed systems. Display advertising at Yahoo! is also sold as long-term guaranteed contracts, and we intervene to change supply when a contract is at risk of under-delivering. Research to be shared in the future.
68. Summary
    - Simple models that learn a few parameters are fine to begin with, BUT beware of bias in the data.
    - Use small amounts of randomization plus fast model updates, and clever randomization via explore/exploit techniques.
    - Granular models are more effective, but we need good statistical algorithms to provide back-off estimates.
    - Considering multi-objective optimization is often important.
69. A modeling strategy
    (Diagram: feature engineering (content via IR, clustering, taxonomy, entities; user profiles via clicks, views, social, community) feeds an offline model (logistic regression, GBDT, …) that captures coarse, slowly changing components; this initializes an online model that makes fine-resolution corrections at the item/user level with quick updates, driven by explore/exploit through adaptive sampling.)
70. Indexing for fast retrieval at runtime
    Retrieving the top k items in a few milliseconds when the item inventory is large can be challenging for complex models.
    Current work (joint with Maxim Gurevich): approximate the model by an index-friendly synthetic model. The index-friendly model retrieves the top K very fast, and a second-stage evaluation on those K retrieves the top k (K > k). Research to be shared in a forthcoming paper; a sketch of the two-stage idea follows.
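A sketch of the two-stage retrieval pattern: a cheap, index-friendly score narrows the inventory to K candidates, then the full model rescores only those K to pick the top k. Both scoring functions here are placeholders (hash-based stand-ins), not the actual models:

```python
def cheap_score(user, item):
    # Placeholder for the index-friendly synthetic model.
    return hash((user, item, "cheap")) % 100

def full_score(user, item):
    # Placeholder for the expensive full model.
    return (hash((user, item, "full")) % 1000) / 1000.0

def retrieve(user, inventory, K=200, k=10):
    # Stage 1: fast approximate top-K; Stage 2: exact rescoring of just K items.
    candidates = sorted(inventory, key=lambda j: cheap_score(user, j), reverse=True)[:K]
    return sorted(candidates, key=lambda j: full_score(user, j), reverse=True)[:k]

print(retrieve("user42", [f"item{i}" for i in range(10000)]))
```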
71. Collaborators
    Bee-Chung Chen (Yahoo! Research, CA), Pradheep Elango (Yahoo! Labs, CA), Liang Zhang (Yahoo! Labs, CA), Nagaraj Kota (Yahoo! Labs, India), Xuanhui Wang (Yahoo! Labs, CA), Rajiv Khanna (Yahoo! Labs, India), Andrew Cron (Duke University), and the engineering and product teams (CA, India).
    Special thanks to the Yahoo! Labs senior leadership for their support: Andrei Broder, Preston McAfee, Prabhakar Raghavan, Raghu Ramakrishnan.
72. Thank you! E-mail: dagarwal@yahoo-inc.com
