Context Aware Recommendations at Netflix

At Netflix, we try to provide the best personalized video recommendations to our members. To do this, we need to adapt our recommendations for each contextual situation, which depends on information such as time or device. In this talk, I will describe how state of the art Contextual Recommendations are used at Netflix. A first example of contextual adaptation is the model that powers the Continue Watching row. It uses a feature-based approach with a carefully constructed training set to learn how to adapt to the context of the member. Next, I will dive into more modern approaches such as Tensor Factorization and LSTMs and share some results from deployments of these methods. I will highlight lessons learned and some common pitfalls of using these powerful methods in industrial scale systems. Finally, I will touch upon system reliability, choice of optimization metrics, hidden costs, risks and benefits of using highly adaptive systems.


  1. Context Aware Recommendations at Netflix. Linas Baltrunas, DMBI, May 10, 2018, @LinasTw
  2. Netflix is Entertainment. Product defines macro-context. Goal: maximize member satisfaction and retention.
  3. Contents. ● Why Context Matters ● Contextual Models ○ Feature Based Model ○ Sequence Models ● Conclusions
  4. Context Free vs. Context Aware
  5. Importance of Context. Signal strength is domain specific: ● Tourism ● E-commerce ● Movies
  6. Contextual Dimensions at Netflix. ● Explicit ○ Location (country, region) ○ Time (day, season, hour) ○ Device ○ Language ● Inferred ○ Binging state ○ Companion
  7. Technical Definition of Context: features that describe the user experience and can rapidly change states
  8. Examples of Contextual Signals
  9. Location: Japan vs. Mexico
  10. Time: 9AM in the UK vs. 11PM in the UK
  11. Device
  12. Language: Dutch in Belgium vs. French in Belgium
  13. How to Train your Dragon: First Context Aware Model
  14. User Modes of Watching. ● Continuation ● Discovery ● Play from My List ● Rewatch ● Search
  15. Feature Based Context-Aware Model ● Continue Watching row ○ Time ○ Device ● Title ranking ● Row ranking
  16. Title Ranking Model ● P(titleX=continue_watch | current_time, current_device, some_play_happens) (Timeline diagram: past plays t1,iOS; t2,web; t3,web; t4,iOS; today: ?)
  17. Title Ranking Model ● P(titleX=continue_watch | current_time, current_device, some_play_happens) ● Construction of the data set and feature extraction is the key ● Model matters, but it is a secondary concern (Timeline diagram: the same past plays, labeled Continue vs. Discovery)
  18. Data Set Construction ● t3,web,user1,item1 ● t3,web,user1,item2 ● t4,iOS,user1,item3 ● t4,iOS,user1,item1 ● t4,iOS,user1,item2 (Today at time t3 on web for a continuation title; today at time t4 on iOS for a discovery title)
  19. Feature Extraction ● morning,web ● morning,web ● evening,iOS ● evening,iOS ● evening,iOS (Today at time t3 on web for a continuation title; today at time t4 on iOS for a discovery title)
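The data set construction above can be sketched in a few lines. This is a hypothetical reading of the slide (the function and field names are mine, not Netflix's): every play event becomes one labeled example, with the label "continuation" if the user already had the title in progress and "discovery" otherwise.

```python
from datetime import datetime

def build_examples(plays, in_progress):
    """plays: time-ordered list of (timestamp, device, user, item).
    in_progress: set of (user, item) pairs already partially watched."""
    examples = []
    for ts, device, user, item in plays:
        # The played title's label depends on whether it was already started.
        label = "continuation" if (user, item) in in_progress else "discovery"
        examples.append({"time": ts, "device": device,
                         "user": user, "item": item, "label": label})
    return examples

plays = [
    (datetime(2018, 5, 10, 9, 3), "web", "user1", "item1"),
    (datetime(2018, 5, 10, 21, 40), "iOS", "user1", "item3"),
]
examples = build_examples(plays, in_progress={("user1", "item1")})
```

The key point from the slide survives in the sketch: the model itself is secondary, and most of the care goes into how these labeled (context, title) rows are assembled.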
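A possible reading of the feature extraction step, turning the raw timestamp and device of each example into the coarse contextual buckets shown on the slide (the cutoff hours here are my own guesses, not from the talk):

```python
def time_bucket(hour):
    # Assumed bucket boundaries for illustration only.
    if 5 <= hour < 12:
        return "morning"
    if 12 <= hour < 18:
        return "afternoon"
    return "evening"

def context_features(hour, device):
    """Map raw context to the (time_of_day, device) feature pair."""
    return (time_bucket(hour), device)

features = [context_features(9, "web"), context_features(21, "iOS")]
```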
  20. Performance
  21. Time Machines: Distributed Time Travel for Feature Generation. (Diagram: observed labels and training input data are collected as of training time; serving input data is collected at serving time.) DeLorean image by JMortonPhoto.com & OtoGodfrey.com
  22. Sequence Prediction with Context
  23. Representation Learning ● Representation (Deep) Learning promises to do feature engineering for you ● Time is a complex contextual dimension that needs special attention ● Time exhibits many periodicities ○ Daily ○ Weekly ○ Seasonally ○ … and even longer: Olympics, elections, etc. ● Generalizing to future behaviors through temporal extrapolation
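One standard way to expose these periodicities to a model (a common technique, not something the talk prescribes) is to encode each periodic value as a point on the unit circle, so the wrap-around is smooth and hour 23 lands close to hour 1:

```python
import math

def cyclical(value, period):
    """Encode a periodic feature (hour-of-day, day-of-week, day-of-year)
    as (sin, cos) coordinates on the unit circle."""
    angle = 2 * math.pi * value / period
    return (math.sin(angle), math.cos(angle))

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# 23:00 and 01:00 are neighbors on the circle; 00:00 and 12:00 are opposite.
near = dist(cyclical(23, 24), cyclical(1, 24))
far = dist(cyclical(0, 24), cyclical(12, 24))
```

A plain integer hour feature would treat 23 and 1 as maximally far apart, which is exactly the kind of distortion a daily periodicity should avoid.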
  24. Sequence Prediction ● Treat recommendations as a sequence classification problem ○ Input: sequence of user actions ○ Output: next action ● E.g. GRU4Rec [Hidasi et al., 2016] ○ Input: sequence of items in a session ○ Output: next item in the session
  25. Contextual Sequence Prediction ● Input: sequence of contextual user actions, plus current context ● Output: probability of next action ● E.g. “Given all the actions a user has taken so far, what’s the most likely video they’re going to play right now?” ● E.g. [Smirnova & Vasile, 2017], [Beutel et al., 2018]
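To make the problem shape concrete, here is a deliberately simple stand-in with the same interface as the models cited above (sequence plus current context in, most likely next item out). This is not the RNN approach from the talk, just a context-conditioned bigram baseline I am using for illustration:

```python
from collections import Counter, defaultdict

class ContextualBigram:
    """Count how often each item follows (previous item, current context)."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def fit(self, sequences):
        # sequences: per-user, time-ordered lists of (item, context) pairs
        for seq in sequences:
            for (prev, _), (nxt, ctx) in zip(seq, seq[1:]):
                self.counts[(prev, ctx)][nxt] += 1

    def predict(self, prev_item, context):
        """Most likely next item given the last item and the current context."""
        dist = self.counts[(prev_item, context)]
        return dist.most_common(1)[0][0] if dist else None

model = ContextualBigram()
model.fit([
    [("A", "evening"), ("B", "evening"), ("A", "morning"), ("C", "morning")],
    [("A", "evening"), ("B", "evening")],
])
```

Even this toy model shows why context matters: the prediction after item "A" differs depending on whether the request arrives in the morning or the evening.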
  26. Contextual Sequence Data (Diagram: a per-user, time-ordered sequence of context+action events at 2017-12-10 15:40:22, 2017-12-23 19:32:10, 2017-12-24 12:05:53, 2017-12-27 22:40:22, 2017-12-29 19:39:36, 2017-12-30 20:42:13, with ? at the next time step)
  27. Training Contextual RNN (Diagram: the same per-user sequence of context+action events, with the next action as the prediction target)
  28. Extrapolating Time: 9AM in the UK vs. 11PM in the UK
  29. Time-Sensitive Sequence Prediction ● Experiment on a Netflix internal dataset ○ Context: ■ Discrete time ● Day-of-week: Sunday, Monday, … ● Hour-of-day ■ Continuous time (timestamp) ■ Device ■ Country ○ Predict next play (temporal split of the data)
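The contextual inputs listed in the experiment can be derived from a raw timestamp plus the device and country of the request. A minimal sketch (field names are mine, chosen to mirror the slide):

```python
from datetime import datetime, timezone

def context(ts, device, country):
    """Derive the contextual features listed on the slide from one event."""
    return {
        "day_of_week": ts.strftime("%A"),   # discrete time
        "hour_of_day": ts.hour,             # discrete time
        "timestamp": ts.timestamp(),        # continuous time
        "device": device,
        "country": country,
    }

ctx = context(datetime(2017, 12, 10, 15, 40, 22, tzinfo=timezone.utc),
              "iOS", "UK")
```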
  30. Results
  31. The Price to Pay
  32. The Price of Contextual Models ● Increased computational cost ○ Models cannot be precomputed ● Modeling ○ Harder to build intuition ○ Higher time and memory complexity ○ Testing methodology is complicated ● Model gets stale easily ● Deep models can overfit the offline metric
  33. Takeaways
  34. Experimental Design ● Be careful when splitting the dataset ○ Don’t overfit the past ○ Predict the future ● May need to train/test at multiple distinct time points to see generalization across time (e.g. [Lathia et al., 2009]) ● Not all offline metrics make sense for contextual recommendations (Diagram: train period followed by test period on the timeline)
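One way to implement the split this slide warns about: pick a cutoff time, train strictly on the past, and evaluate strictly on the future, never shuffling events across time. A small sketch:

```python
def temporal_split(events, cutoff):
    """events: list of (timestamp, payload) tuples.
    Returns (train, test): everything before the cutoff vs. from it on."""
    events = sorted(events, key=lambda e: e[0])
    train = [e for e in events if e[0] < cutoff]
    test = [e for e in events if e[0] >= cutoff]
    return train, test

train, test = temporal_split(
    [(3, "play-c"), (1, "play-a"), (2, "play-b"), (5, "play-d")], cutoff=3)
```

A random shuffle would leak future behavior into training, which is exactly the "overfitting the past" failure mode the slide describes; repeating this split at several cutoff times gives the multiple evaluation points the slide recommends.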
  35. Takeaways from Deep Learning ● Think beyond solving existing problems with new tools and instead think what new problems the new tools can solve ● Deep Learning can work well for recommendations... ○ When you go beyond the classic problem definition ○ Use more complex data such as contextual factors ● Lots of open areas to improve recommendations using deep learning
  36. Final Note ● Contextual signals can be as strong as personal preferences ○ Model them as such ○ Evaluate them as such ○ Make them central to your system and infrastructure
  37. Thank you.
  38. Credits. Justin Basilico, Yves Raimond, Sudeep Das, Hossein Taghavi, and the whole Algorithm Engineering team. Read a more in-depth discussion on the topic: ● Other relevant presentations ● Blog post on the continue watching model
