The Netflix prize: yet another million dollar problem

Slides from an introductory talk on machine learning, and why mathematicians should take interest in it.

This is a very basic introduction, for math undergraduates & other curious minds.


Presentation Transcript

  • The Netflix Prize: yet another million dollar problem
    David Bessis, École Normale Supérieure, 27/01/2010
  • 7 + 1 Million Dollar Problems
    Millennium Prize Problems:
    - Funded in 2000 by the Clay Mathematical Institute.
    - Seven classical open problems in Mathematics.
    - Solutions must "be published in a refereed mathematics publication of worldwide repute" and "have general acceptance in the mathematics community two years after". Fuzzy rules.
    - The Poincaré conjecture was solved by Perelman in 2003. No award yet.
    Netflix Prize:
    - Funded in 2006 by the DVD rental company Netflix.
    - A problem in Applied Mathematics / Computer Science / Psychology (do we really care?); in short, a problem in Some Funny New Science.
    - Reasonably clear rules.
    - Prize awarded in September 2009.
  • Context
    - Netflix has an "all-you-can-eat" pricing model.
    - They need their users to watch a lot of movies.
    - Beyond a few obvious choices, people don't know what they want to watch.
    - Collaborative filtering: recommending products based on prior evaluations by other users (just like Amazon does).
    - The Netflix Prize is a collaborative filtering competition:
      based on a huge dataset of actual ratings by Netflix users;
      open to almost everyone;
      endowed with a $1,000,000 prize.
  • The Dataset
    - The user space U consists of 480,189 users (identified by a meaningless non-sequential integer id).
    - The movie space M consists of 17,770 movies (identified by the integers 1, ..., 17770; the associated list of titles and release years is provided, and this data is meaningful and minable).
    - The date space D spans the period Oct. 1998 – Dec. 2005 (extremely meaningful data; no time of day is provided).
    - The rating space R is {1, 2, 3, 4, 5} ("stars").
    - The training dataset T contains 100,480,507 quadruples (u, m, d, r) ∈ U × M × D × R.
    - The qualifying dataset Q contains 2,817,131 triples (u, m, d) ∈ U × M × D.
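To make these objects concrete, here is a minimal Ruby sketch (the deck mentions Ruby for exploring the data) of training quadruples as in-memory records. The sample records and ids are invented for illustration, not taken from the actual dataset.

```ruby
require 'date'

# A rating is a quadruple (u, m, d, r) from the training set T.
Rating = Struct.new(:user, :movie, :date, :stars)

# Invented sample records (real user ids are arbitrary integers).
training = [
  Rating.new(1488844, 1, Date.new(2005, 9, 6), 3),
  Rating.new(822109,  1, Date.new(2005, 5, 13), 5),
  Rating.new(885013,  2, Date.new(2005, 10, 19), 4),
]

# Group ratings by movie, the natural access pattern for per-movie statistics.
by_movie = training.group_by(&:movie)
by_movie.each do |movie, ratings|
  avg = ratings.sum(&:stars).to_f / ratings.size
  puts "movie #{movie}: #{ratings.size} ratings, average #{avg.round(2)}"
end
```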
  • The Challenge
    - Open to everyone, except Netflix employees and their relatives, and residents of Cuba, Iran, Syria, North Korea, Myanmar and Sudan.
    - Participants can join efforts in teams.
    - They can upload their predictions up to once a day.
    - Predictions are maps from the qualifying set Q to the interval [1, 5].
    - The metric used to benchmark predictions is the RMSE ("root mean square error"):
      $\mathrm{RMSE} = \sqrt{\frac{1}{|Q|} \sum_{q \in Q} |\text{predicted rating for } q - \text{actual rating for } q|^2}$
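The benchmark metric is a one-liner in code. A small Ruby sketch of the RMSE above, applied to invented predicted/actual pairs:

```ruby
# RMSE as defined on the slide: the root mean square of per-rating errors.
def rmse(predicted, actual)
  sq = predicted.zip(actual).sum { |p, a| (p - a)**2 }
  Math.sqrt(sq.to_f / predicted.size)
end

# Three qualifying items, predicted vs. actual stars (illustrative values):
puts rmse([3.5, 4.0, 2.0], [4, 4, 1])  # sqrt((0.25 + 0 + 1) / 3) ≈ 0.6455
```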
  • Typical RMSEs
    - Theoretically, the RMSE need never be greater than 2 (always predicting the midpoint, 3, bounds every error by 2).
    - Users tend to view and rate movies they like, so they typically give 3, 4 or 5 stars rather than 1 or 2 (the above upper bound is unrealistically pessimistic).
    - A basic prediction consists of mapping a triple (u, m, d) to the average rating obtained by the movie m. It achieves 1.0540.
    - At the beginning of the Challenge, Netflix's in-house prediction system Cinematch achieved 0.9514 (roughly a 10% improvement).
    - Netflix set the following target: obtain a further 10% improvement over Cinematch.
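The basic movie-average predictor is easy to sketch in Ruby: ignore the user and the date, and answer with the movie's average training rating. The training rows and the 3.6 fallback (used for movies absent from training) are illustrative assumptions, not values from the real dataset.

```ruby
train = [
  # [user, movie, stars] -- invented sample data
  [1, 10, 4], [2, 10, 5], [3, 10, 3],
  [1, 20, 2], [2, 20, 2],
]

# Accumulate (sum, count) per movie, then divide to get the average.
sums = Hash.new { |h, k| h[k] = [0.0, 0] }
train.each do |_user, movie, stars|
  sums[movie][0] += stars
  sums[movie][1] += 1
end
movie_avg = sums.transform_values { |total, count| total / count }

# The predictor discards the user and date entirely.
predict = ->(_user, movie, _date) { movie_avg.fetch(movie, 3.6) }
puts predict.call(42, 10, nil)  # 4.0
```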
  • Very Smart Rules 1: a Cryptographic Trick
    - Netflix secretly partitioned the qualifying set Q = Q1 ⊔ Q2 into two subsets of (approximately) equal size.
    - The RMSE achieved on Q1 is revealed to participants (there is a public leaderboard).
    - The RMSE achieved on Q2 is used to determine the winner.
    - This prevented participants from "learning from the oracle".
    - The goal was to achieve 0.8572.
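The split can be sketched in a few lines of Ruby. The seeded shuffle below is a stand-in assumption for Netflix's secret, fixed partition; the integers stand in for qualifying triples.

```ruby
# Partition Q once into a public "quiz" half (leaderboard) and a private
# "test" half (decides the winner). Participants only ever see quiz scores.
q = (1..10).to_a  # stand-ins for the qualifying triples

shuffled = q.shuffle(random: Random.new(42))  # fixed seed: the split never changes
quiz, test = shuffled.each_slice(q.size / 2).to_a
# Only the RMSE on `quiz` is published; the RMSE on `test` stays hidden.
```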
  • Very Smart Rules 2: Crowd Psychology Tricks
    - The Challenge opened on October 2, 2006.
    - Annual $50,000 prizes were offered to current leaders, provided they made their current methodology public.
    - The Challenge was to last for 30 more days after the goal was achieved.
    - The winner would be the team with the best RMSE after this 30-day period (no backstabbing arXiv-style "I posted first" effect).
    - Every detail was carefully anticipated (even the possibility of a tie).
    - These smart rules, together with the $1,000,000 prize, attracted thousands of participants.
  • Timeline
    - October 2006: Cinematch RMSE = 0.9514.
    - October 2007: team KorBell leads with 0.8712 (8.43% improvement).
    - October 2008: team "BellKor in BigChaos" (two teams merging efforts) leads with 0.8616 (9.44% improvement).
    - June 26, 2009: the goal is achieved.
    - July 26, 2009: Netflix stops gathering solutions.
    - The winner is announced on September 18, 2009.
  • The winning team
    Three teams combined their results to win the competition:
    - BellKor: Bob Bell (AT&T), Yehuda Koren (Yahoo), Chris Volinsky (AT&T)
    - BigChaos: Michael Jahrer, Andreas Töscher (Commendo research and consulting)
    - Pragmatic Theory: Martin Chabbert, Martin Piotte
    Their winning submission achieved an RMSE of 0.8567 (a 10.06% improvement over Cinematch). Another team, The Ensemble, achieved the same RMSE... and lost because their submission was posted 24 minutes later!
• Computer implementation
Memory requirements:
Movies can be encoded on 2 bytes (17770 < 256²).
Viewers can be encoded on 3 bytes (480189 < 256³).
Dates can be encoded on 2 bytes.
A triple (m, v, d) can be encoded on 7 bytes.
700 MB suffice to store the dataset.
It is possible (and in fact necessary) to work in RAM.
Commodity hardware is sufficient.
(I have some Ruby code to interactively play with the dataset.)
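The packing described above can be sketched in a few lines. This is an illustrative layout (2 + 3 + 2 bytes, big-endian), not the speaker's actual Ruby code; the function names are made up for the example.

```python
import struct

def pack_triple(movie, viewer, day):
    """Pack (movie, viewer, day) into 7 bytes: 2 + 3 + 2."""
    assert movie < 256**2 and viewer < 256**3 and day < 256**2
    # '>H' is a 2-byte big-endian unsigned int; the viewer id gets 3 bytes
    return struct.pack(">H", movie) + viewer.to_bytes(3, "big") + struct.pack(">H", day)

def unpack_triple(buf):
    movie = int.from_bytes(buf[0:2], "big")
    viewer = int.from_bytes(buf[2:5], "big")
    day = int.from_bytes(buf[5:7], "big")
    return movie, viewer, day

# The largest ids in the dataset still fit
triple = (17769, 480188, 2242)
buf = pack_triple(*triple)
assert len(buf) == 7
assert unpack_triple(buf) == triple
```

At 7 bytes per rating, 100,000,000 ratings come to roughly 700 MB, matching the figure on the slide.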
• Remarks
About 200 ratings per user.
This is likely caused by Cinematch's data gathering procedure: users sometimes rate tens of movies on a single day.
This causes an insanely huge bias within the dataset (movies are perceived differently when rated individually or within a rating spree), not fully exploited by the winners.
Netflix, do you read me?
Some movies were rated by hundreds of thousands of viewers, some by just a few (long-tail distribution).
Similarly, a few users rated a huge number of movies, and many just a few.
Let F be the set of all final 9 ratings for all individual users.
Then F = Q ⊔ P, with P ⊂ T publicly tagged by Netflix.
Q is a random draw of 2/3 of F.
Q resembles P but is very dissimilar from T.
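The split of F into Q and P can be sketched as follows. This is an illustrative stand-in (9 integers for a user's final 9 ratings, a fixed seed), not Netflix's actual procedure.

```python
import random

# Toy sketch: split a set F into Q (a random 2/3 draw) and the rest P,
# so that F is the disjoint union of Q and P.
random.seed(0)
F = list(range(9))           # stand-in for a user's final 9 ratings
k = (2 * len(F)) // 3        # 2/3 of F
Q = set(random.sample(F, k))
P = set(F) - Q

assert len(Q) == 6 and len(P) == 3
assert Q | P == set(F) and not (Q & P)   # disjoint union
```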
• Algorithms
The machine learning toolbox consists of many methods:
Clustering methods.
Regressions.
Latent factor methods (SVD).
Neural networks.
Support vector machines (SVM).
...
• Beginner's mistakes
Underestimate the volume effect.
Think conceptually and discretely rather than globally and continuously.
Put users and movies into categories (clustering introduces unwanted discontinuities).
Learn from the probe.
Dealing with 100,000,000 data points isn't a logic puzzle.
It resembles thermodynamics.
• Linear regression
Suppose all viewers in X have rated all movies in Y: the rating matrix is (r_{x,y}) for (x,y) ∈ X × Y.
Suppose you want to model the ratings given to a particular movie y0 based on the ratings given to the movies in Y' = Y − {y0}.
A linear regression is a natural way to do that.
Write (r_{x,y}) = (C_y) for y ∈ Y, where the C_y are the column vectors.
Performing the linear regression consists of approximating C_{y0} by its orthogonal projection Ĉ_{y0} onto the hyperplane generated by the (C_y) for y ∈ Y'.
Clearly, there exists a unique solution. It optimizes RMSE.
Write the formula!
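The projection can be computed by solving the normal equations. A minimal sketch with two predictor columns and made-up ratings (pure Python, Cramer's rule for the 2×2 Gram system); the data and function names are illustrative only.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(x, y1, y2):
    # Normal equations: Gram matrix G, right-hand side b, solve G λ = b
    g11, g12, g22 = dot(y1, y1), dot(y1, y2), dot(y2, y2)
    b1, b2 = dot(x, y1), dot(x, y2)
    det = g11 * g22 - g12 * g12
    l1 = (b1 * g22 - b2 * g12) / det
    l2 = (g11 * b2 - g12 * b1) / det
    return [l1 * a + l2 * b for a, b in zip(y1, y2)], (l1, l2)

# Ratings of movie y0 by four viewers, modeled from movies y1, y2
x  = [5.0, 3.0, 4.0, 1.0]
y1 = [4.0, 3.0, 5.0, 1.0]
y2 = [5.0, 2.0, 3.0, 2.0]
xhat, (l1, l2) = project(x, y1, y2)

# The residual is orthogonal to both predictors: that is the projection
res = [a - b for a, b in zip(x, xhat)]
assert abs(dot(res, y1)) < 1e-9 and abs(dot(res, y2)) < 1e-9
```

Because the projection minimizes the Euclidean distance to the span, it minimizes the sum of squared errors, i.e., it optimizes RMSE as stated on the slide.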
• Real life problems 1: missing data
Not all viewers have seen all movies.
Worse, there are virtually no complete rectangular blocks within the dataset.
Regression by viewers or by movies? It is better to do regression by movies.
Normalize ratings: replace the rating r_{v,m} by the meaningful signal, i.e., the difference r̄_{v,m} between r_{v,m} and the average rating for m.
Then it becomes natural to set r̄_{v,m} to 0 when v hasn't rated m.
Actually, whether or not v has rated m is meaningful information!
Add normalized bit columns to account for that.
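The normalization step can be sketched as follows: subtract each movie's average rating, and let a missing rating become 0, i.e., "average opinion". The data here is invented for illustration.

```python
# viewer -> {movie: rating}; toy data, most entries missing as in real life
ratings = {
    "v1": {"m1": 5, "m2": 3},
    "v2": {"m1": 3},
    "v3": {"m2": 1, "m3": 4},
}
movies = sorted({m for r in ratings.values() for m in r})

# Average rating per movie, over the viewers who actually rated it
movie_avg = {
    m: sum(r[m] for r in ratings.values() if m in r)
       / sum(1 for r in ratings.values() if m in r)
    for m in movies
}

# Normalized matrix: r̄_{v,m} = r_{v,m} − avg(m), and 0 where v didn't rate m
normalized = {
    v: {m: r.get(m, movie_avg[m]) - movie_avg[m] for m in movies}
    for v, r in ratings.items()
}

assert movie_avg["m1"] == 4.0            # (5 + 3) / 2
assert normalized["v1"]["m1"] == 1.0     # 5 - 4
assert normalized["v2"]["m2"] == 0.0     # missing rating -> 0
```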
• Real life problems 2: the curse of dimensionality
We all know that Lagrange interpolators should not be used on noisy data. Rather, one should look at best-fitting polynomials of a given low degree.
Similarly, the curse of dimensionality asserts that with high-dimensional datasets, one will always find stupid predictors that make perfect predictions on the dataset and fail to generalize.
By looking at my audience today, what should I be able to infer?
That having long hair is a reasonably good gender predictor?
That wearing a grey sweater is a reasonably good gender predictor?
Dilemma: overlearning vs. underlearning.
• Ridge regression (aka Tikhonov regularization)
Linear regression: given vectors x, y_1, ..., y_n ∈ R^m, find λ_1, ..., λ_n that minimize ||x − Σ_i λ_i y_i||².
When n is large (with respect to m), the linear system is underdetermined. Overfitting occurs.
A telltale sign of overfitting is the presence of λ_i's with huge norms compensating each other.
Ridge regression (Tikhonov regularization): find λ_1, ..., λ_n that minimize ||x − Σ_i λ_i y_i||² + ε Σ_i |λ_i|², where ε is a well-adjusted (small) penalty term.
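A minimal sketch of the effect of the penalty term, on toy data: with the penalty, the normal equations become (G + εI) λ = b, so a small ε shrinks the coefficients toward 0. The two predictor columns below are nearly collinear on purpose; the value ε = 0.1 is chosen arbitrarily for illustration.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def ridge(x, y1, y2, eps):
    # Normal equations with ridge penalty: (G + eps*I) λ = b
    g11 = dot(y1, y1) + eps
    g12 = dot(y1, y2)
    g22 = dot(y2, y2) + eps
    b1, b2 = dot(x, y1), dot(x, y2)
    det = g11 * g22 - g12 * g12
    return (b1 * g22 - b2 * g12) / det, (g11 * b2 - g12 * b1) / det

x  = [5.0, 3.0, 4.0, 1.0]
y1 = [4.0, 3.0, 5.0, 1.0]
y2 = [4.1, 3.0, 5.0, 1.0]   # nearly collinear with y1: overfitting danger

l1, l2 = ridge(x, y1, y2, eps=0.0)   # plain least squares
r1, r2 = ridge(x, y1, y2, eps=0.1)   # ridge regression
assert abs(l1) > 10 * abs(r1)        # the penalty shrinks the solution
```

With ε = 0 the two nearly collinear columns produce huge coefficients of opposite sign (roughly −14.9 and +15.7 here), exactly the telltale sign described on the slide; with ε = 0.1 they shrink to roughly (−0.02, 0.99).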
• Assigning attributes to movies
Assume that movies differ by their amount of certain qualities:
Violence.
Sex.
Humor.
This actor or that actress or some director...
Victorian costumes.
3D.
Beautiful Japanese landscapes with Mount Fuji.
Whatever.
...
Assigning attributes to movies 2

  Maybe we could construct a map φ : M → R^N from the space of movies
  to an abstract parameter space.
  Maybe we could construct a map ψ : V → R^N from the space of viewers
  to the same abstract parameter space, expressing the viewers'
  appetite for this and that attribute.
  A good estimate of r_{v,m} should then be (some calibrated,
  normalized variant of) the scalar product φ(m)·ψ(v).
  But how can we construct good φ and ψ?

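The scalar-product estimate above can be sketched in a few lines of Python. The attribute names, titles, and numeric values are invented for illustration:

```python
def predict(phi_m, psi_v):
    """Estimated rating: scalar product of a movie's attribute
    vector phi(m) and a viewer's appetite vector psi(v)."""
    return sum(a * b for a, b in zip(phi_m, psi_v))

# Attributes: (violence, humor, victorian costumes) -- toy values.
phi = {"Kill Bill": [1.0, 0.5, 0.0],
       "Pride and Prejudice": [0.0, 0.25, 1.0]}
psi = {"viewer_1": [0.5, 1.0, 3.75]}   # loves costume dramas

print(predict(phi["Pride and Prejudice"], psi["viewer_1"]))  # → 4.0
```

In a real system these vectors would of course be calibrated and clipped to the 1-to-5 rating scale; the point is only the bilinear form of the estimate.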
Assigning attributes to movies 3

  The Golden Rule of Machine Learning: "Learn everything from thy dataset!"
  Fix N. Look for φ and ψ minimizing
      Σ_{(v,m)} |r_{v,m} − φ(m)·ψ(v)|²
  or rather, minimizing
      Σ_{(v,m)} |r_{v,m} − φ(m)·ψ(v)|² + ε(||φ||² + ||ψ||²).
  Phrase this as a convex problem with a unique solution.
  Tens of millions of parameters to adjust.
  Approximate the solution by stochastic gradient descent (an
  iterative first-order optimization method).
  It really works well!

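A minimal pure-Python sketch of the regularized minimization by stochastic gradient descent. The hyperparameters (N, learning rate, ε, epoch count) are illustrative guesses, not values from Koren's paper:

```python
import random

def factorize(ratings, N=2, epochs=500, lr=0.02, eps=0.02, seed=0):
    """Fit phi (movies) and psi (viewers) by stochastic gradient
    descent, minimizing sum |r - phi.psi|^2 + eps*(|phi|^2 + |psi|^2)
    over the observed (viewer, movie) pairs."""
    rng = random.Random(seed)
    movies = {m for _, m in ratings}
    viewers = {v for v, _ in ratings}
    phi = {m: [rng.uniform(-0.1, 0.1) for _ in range(N)] for m in movies}
    psi = {v: [rng.uniform(-0.1, 0.1) for _ in range(N)] for v in viewers}
    pairs = list(ratings.items())
    for _ in range(epochs):
        rng.shuffle(pairs)
        for (v, m), r in pairs:
            # Residual on this single rating drives the update.
            err = r - sum(a * b for a, b in zip(phi[m], psi[v]))
            for k in range(N):
                pm, pv = phi[m][k], psi[v][k]
                phi[m][k] += lr * (err * pv - eps * pm)
                psi[v][k] += lr * (err * pm - eps * pv)
    return phi, psi
```

On the real dataset this loop runs over 100 million ratings per epoch; the structure, however, is exactly this simple.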
Assigning attributes to movies 4

  Afterwards, one may try to make sense of the attributes (PCA).
  An objective basis for categorizing movies.
  N itself can be "learnt" from the dataset.
  (Disclaimer: my account is a naive oversimplification of Yehuda
  Koren's paper.)
  - N = 50  ⇒ RMSE = 0.9046
  - N = 100 ⇒ RMSE = 0.9025
  - N = 200 ⇒ RMSE = 0.9009
  A natural way to define a "cognitive dimension" of the space of movies?

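RMSE, the contest's figure of merit used in these comparisons, is simply the root mean squared error between predicted and actual ratings:

```python
def rmse(predicted, actual):
    """Root mean squared error over paired predictions and ratings."""
    sq = [(p - a) ** 2 for p, a in zip(predicted, actual)]
    return (sum(sq) / len(sq)) ** 0.5

print(rmse([3.5, 4.0, 2.0], [4.0, 4.0, 1.0]))  # ≈ 0.645
```

The gaps between the N = 50, 100, 200 rows look tiny, but the grand prize hinged on a 10% improvement over Netflix's own Cinematch RMSE, so such third-decimal gains mattered.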
Latent Semantic Analysis

  Let W be a set of words and D a set of documents.
  Look at the frequency matrix M = (m_{w,d}).
  Singular Value Decomposition.
  ⇒ An abstract space of concepts.

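A toy sketch of the idea: power iteration on MᵀM recovers the dominant right singular vector of the word-by-document count matrix, i.e. the strongest latent "concept". The words, documents, and counts here are invented:

```python
def top_concept(M, iters=100):
    """Dominant right singular vector of M by power iteration on
    M^T M: the direction in document space carrying the most
    co-occurrence signal."""
    n = len(M[0])          # number of documents (columns)
    v = [1.0] * n
    for _ in range(iters):
        # w = M v : project the current document mix onto words.
        w = [sum(row[j] * v[j] for j in range(n)) for row in M]
        # v = M^T w, then normalize to unit length.
        v = [sum(M[i][j] * w[i] for i in range(len(M))) for j in range(n)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v

# Rows = words, columns = documents (toy counts):
M = [[2, 2, 0],   # "movie"
     [1, 1, 0],   # "actor"
     [0, 0, 3]]   # "theorem"
print(top_concept(M))
```

The first two documents share the movie/actor vocabulary, so the dominant concept loads equally on them and ignores the third; a full SVD would peel off the remaining concepts one by one.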
Tuning and Blending

  This talk only mentions particular approaches.
  BellKor have a nice composite model (latent factors + regression +
  presence or absence of ratings, everything tuned simultaneously).
  Baseline: global mean + movie offset + user offset (offsets are learnt).
  BigChaos have filtered out many factors (even the impact of the
  character length of the movie title, or the day of the week).
  BellKor have subtle ways to filter out time signals.
  Any two models can be combined through a regression (calibrated on
  the probe set).
  The winning solution is a sophisticated blend.

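The baseline mentioned above can be sketched as follows. Here the offsets are plain averages of residuals, a simplification of the regularized, jointly learned offsets BellKor actually use:

```python
from collections import defaultdict

def baseline(ratings):
    """Baseline predictor: global mean + movie offset + user offset.
    Offsets are computed as simple residual averages (no
    regularization), so this is only a sketch of the learned version."""
    mu = sum(ratings.values()) / len(ratings)
    movie_res, user_res = defaultdict(list), defaultdict(list)
    for (v, m), r in ratings.items():
        movie_res[m].append(r - mu)        # how this movie deviates
    b_m = {m: sum(xs) / len(xs) for m, xs in movie_res.items()}
    for (v, m), r in ratings.items():
        user_res[v].append(r - mu - b_m[m])  # how this user deviates
    b_v = {v: sum(xs) / len(xs) for v, xs in user_res.items()}
    # Unseen movies or users fall back to an offset of zero.
    return lambda v, m: mu + b_m.get(m, 0.0) + b_v.get(v, 0.0)
```

Subtracting this baseline first is what lets the latent-factor and neighborhood models focus on genuine taste signal rather than on generous raters and popular movies.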
What are statisticians paid to do?

  Historically, the typical real-world statistical question was:
  given a certain hypothesis...
  ...and a tiny dataset...
  ...so tiny that one cannot be sure about anything...
  ...prove that the hypothesis is correct!

The high-volume effect

  With big datasets, the nature of the game changes. "Big" can mean
  millions, tens of millions (Netflix), billions (advertising
  campaigns), trillions, or even scarier amounts.
  It is no longer about checking a given hypothesis (forget about χ²).
  It is about handling huge dataflows and automatically building
  millions of models.
  Our concept-based intuition tends to underestimate the predictive
  power of simple algorithms on big datasets.

What is going on?

  New learning algorithms (e.g., semantic search of images).
  Hardware is cheap enough.
  Programming languages are pleasant enough.
  Parallel computing is easy enough (Hadoop, ...).
  Google-style problem-solving is no longer reserved for big corporations.
  This is changing the way science is done.
  Induction-Deduction-Transduction.

The Netflix dataset, beyond collaborative filtering

  Like any big dataset, the Netflix dataset is a world in miniature.
  Its interest reaches beyond collaborative filtering.
  Basic metrics easily cluster movies by genre or director.
  Clear social, psychological, and cultural significance.
  Play with the data!
  One example: ratings for certain movies are harder to predict, yet
  even this is meaningful:
  - Napoleon Dynamite (see the New York Times article).
  - Wes Anderson's movies (do I even know whether I like them?).
  What's the minimal RMSE? Does this question even make sense?

Surprises and questions

  So far, the mathematics is trivial.
  The effectiveness of machine learning is very counter-intuitive to me.
  Beautiful concepts (the cognitive dimension of the space of movies,
  the concept of a concept...).
  No one has a clue about the theoretical bounds.
  No one knows where the added value lies:
  - Software quality?
  - Fine-tuning of models?
  - Intuition about the dataset and the problem?
  - Global architecture of the solutions?
  Maybe not serious math problems, but serious problems for
  mathematical minds.

Beyond theorems

  Should we be satisfied with the fuzziness of the rules for the
  Millennium Prize Problems?
  Wasn't axiomatic set theory supposed to have solved the problem of
  objectivity in mathematics?
  The Netflix Prize is strikingly objective, strikingly mathematical.
  Yet I cannot see any real theorem in the winners' solution.
  This isn't depressing; it's very exciting!

Suggested readings

  http://www.netflixprize.com/
  Yehuda Koren, "Factorization Meets the Neighborhood: a Multifaceted
  Collaborative Filtering Model", Proceedings of KDD '08.
  Clive Thompson, "If You Liked This, You're Sure to Love That", The
  New York Times, November 21, 2008.
  Ian Ayres, Super Crunchers.
  Play with the data!