
Robert Moakler, Data Science Intern, Integral Ad Science at MLconf SEA - 5/01/15

Efficient Measurement of Causal Impact in Digital Advertising Using Online Ad Viewability: Online display ads offer a level of granularity in observable metrics that is impossible to achieve for traditional, non-digital advertisers. However, as advertising budgets comprise an increasing amount of marketing spend, true return on investment (ROI) is increasingly important but often goes unmeasured. An important question to answer is how much incremental revenue was generated by an online campaign. In general, there are two common approaches to measuring the causal impact of a campaign: (1) a randomized experiment and (2) using observational data. The first technique is preferred due to its ability to give an unbiased estimate of a campaign’s effect, but is usually prohibitively costly. The second requires no additional ad spend, but is plagued by complex modeling choices and biases. Using a unique position in the online advertising pipeline to create a “natural experiment”, we propose a novel approach to measuring campaign effectiveness that utilizes detailed measurements of whether ads were actually viewed by a user. Treating users that have never been exposed to a viewable ad as a control group, we are able to mimic the setup of a randomized experiment without any additional cost while avoiding the biases that are typical when using observational data.
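
To make the control-group construction described above concrete, here is a minimal sketch (with hypothetical field names and toy data, not the authors' code): among users who were served ads, those with at least one viewable impression form the treated group, those whose impressions were never viewable form the control group, and lift is the relative difference in their conversion rates.

```python
# Minimal sketch of the natural-experiment grouping described in the abstract.
# Field names and data are hypothetical; this is not the authors' code.
import pandas as pd

# One row per user who was served at least one ad impression.
users = pd.DataFrame({
    "user_id":              [1, 2, 3, 4, 5, 6],
    "served_impressions":   [3, 5, 2, 8, 1, 4],
    "viewable_impressions": [2, 0, 1, 0, 0, 3],
    "converted":            [1, 1, 0, 0, 0, 1],
})

treated = users[users["viewable_impressions"] > 0]   # saw at least one viewable ad
control = users[users["viewable_impressions"] == 0]  # served, but never a viewable ad

p_treated = treated["converted"].mean()   # 2/3 in this toy example
p_control = control["converted"].mean()   # 1/3 in this toy example
print(f"estimated lift from viewable exposure: {p_treated / p_control - 1:.0%}")
```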


  1. MLCONF SEATTLE — MAY 1, 2015. A large scale online natural experiment: Measuring causal impact of display ads. Robert Moakler — rmoakler@stern.nyu.edu | robert@integralads.com @ MLconf Seattle 2015
  2. The $100+ billion question! Does online advertising really work?
     [Chart: global digital ad spending, 2012–2018, in billions with year-over-year growth: $104.57 (20.4%), $120.05 (14.8%), $140.15 (16.7%), $160.18 (14.3%), $178.45 (11.4%), $196.05 (9.9%), $213.89 (9.1%). Source: www.emarketer.com, “Global Ad Spending Growth to Double This Year”]
  3. The $100+ billion question! Does online advertising really work?
  4. The $100+ billion question! Does online advertising really work? Do online ads cause you to take some action?
  5. Measuring causal impact! Option 1: Randomized A/B test
     • Pros
       – If set up correctly, gives unbiased causal estimates
     • Cons
       – Control ads cost as much as real ones
       – Must be planned before the campaign starts
       – Requires coordination of multiple media partners
       – Too many levers to test them all
     [Diagram: campaign ad vs. PSA control ad]
  6. Measuring causal impact! Option 2: Observational study
     • Pros
       – Cheap
       – Flexible
     • Cons
       – Enormous amount of selection bias
  7. Confounding in digital advertising campaigns!
     • Why is there selection bias in observational techniques?
       – Online ads are targeted to specific segments of the population based on particular demographics, user interests and behaviors, etc.
       – Because ads are targeted to specific populations, comparing users that received an ad to those that did not is very problematic; estimates of causal impact will be biased upward.
  8. Confounding in digital advertising campaigns!
     • Why is there selection bias in observational techniques?
     [Causal diagram: W (user features), A (served ads), Y (convert)]
  9. Confounding in digital advertising campaigns!
     • Why is there selection bias in observational techniques?
     [Causal diagram: W (user features), A (served ads), Y (convert)]
     Unless we know what information targeters are using, we will never be able to fully adjust for selection bias.
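
The diagram above is the standard confounding structure: user features W drive both ad serving A and conversion Y. A small simulation (my own illustration, not from the talk) shows how this inflates a naive served-vs-not-served comparison even when ads have no effect at all:

```python
# Hypothetical simulation of the selection bias in the W -> A -> Y picture:
# targeters serve ads (A) to users whose features (W) already make them likely
# to convert (Y), so the naive served-vs-not-served comparison reports a large
# lift even though, by construction, the ads have zero effect here.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

purchase_intent = rng.random(n)               # W: unobserved user feature
served = rng.random(n) < purchase_intent      # A: targeting follows intent
converted = rng.random(n) < 0.02 * (1 + 4 * purchase_intent)  # Y: no ad effect at all

naive_lift = converted[served].mean() / converted[~served].mean() - 1
print(f"naive lift with zero true effect: {naive_lift:.0%}")  # large positive number
```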
  10. Viewability!
     • Web page layout, ad placement details, and user browsing behavior and setup can all impact the way in which ads are seen online.
       – Some ads are served far down on the page (below the fold)
       – Ads can be loaded in hidden tabs or windows
       – Users may not stay on a page long enough for it to finish loading
  11. Viewability!
  12. Viewability!
  13. Viewability!
  14. Viewability as a natural experiment! Introduce a mediating variable — viewability
     [Causal diagram: W (user features), A (served ads), V (viewable ad), Y (convert)]
  15. Methodology!
     [Timeline diagram: treated user — viewable ad (V=1) at time T0; untreated user — unviewable ad (V=0); web page visits and conversions (Y=1) tracked within an effect window starting at T0]
  16. Methodology!
     [Same timeline diagram, with the parameter of interest highlighted]
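
A hedged sketch of the timeline logic on the methodology slides, assuming the effect window opens at a treated user's first viewable impression (T0) and that untreated users are anchored at their first (unviewable) impression; the window length and field names are illustrative assumptions, not values from the talk.

```python
# Sketch of the effect-window bookkeeping: a treated user's clock starts at the
# first viewable impression (T0); a conversion counts only if it falls inside
# the effect window after T0. Anchoring untreated users at their first impression
# is an assumption for illustration.
from datetime import datetime, timedelta

EFFECT_WINDOW = timedelta(days=7)  # assumed length, not stated on the slide

def treatment_and_conversion(impressions, conversions):
    """impressions: list of (timestamp, viewable: bool); conversions: list of timestamps."""
    viewable_times = sorted(t for t, viewable in impressions if viewable)
    treated = bool(viewable_times)
    t0 = viewable_times[0] if treated else min(t for t, _ in impressions)
    converted = any(t0 <= t <= t0 + EFFECT_WINDOW for t in conversions)
    return treated, converted

# Example: one viewable impression, conversion two days later -> treated and converted.
imps = [(datetime(2014, 10, 1), False), (datetime(2014, 10, 2), True)]
convs = [datetime(2014, 10, 4)]
print(treatment_and_conversion(imps, convs))  # (True, True)
```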
  17. Data!
     • Seven display advertising campaigns run during the 4th quarter of 2014
       – Diverse industries such as auto insurance, beauty products, finance, and online marketing
       – 3 million – 29 million impressions
       – 2,000 – 2 million conversions
  18. Using viewability as a natural experiment! Compared to the naïve analysis that compares users who were served ads with users who were not, we find a drastic decrease in estimated lift when utilizing viewability.
  19. Validation!
     • How do we know a reduction in lift means our new estimates are correct?
     • Use negative control tests
       – Use the impressions of one campaign to predict an unrelated conversion
  20. Validation!
     • How do we know a reduction in lift means our new estimates are correct?
     • Use negative control tests
       – Use the impressions of one campaign to predict an unrelated conversion
     [Causal diagram: W (user features), A (served ads), V (viewable ad), Y (convert), Y- (unrelated outcome)]
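
As a rough illustration of the negative control test described above (the data below is simulated, not campaign data): rerun the same viewability-based lift estimator, but score users on an unrelated outcome Y-; an estimate close to zero supports the validity of the natural experiment.

```python
# Hedged sketch of a negative control check: apply the viewability-based lift
# estimator to an UNRELATED outcome (Y-) instead of the campaign's own conversion.
# If the design removes confounding, the estimated lift on Y- should be near zero.
import numpy as np

def lift(treated_outcomes, control_outcomes):
    return np.mean(treated_outcomes) / np.mean(control_outcomes) - 1

rng = np.random.default_rng(1)
treated_unrelated = rng.random(50_000) < 0.01   # Y- for users with a viewable ad
control_unrelated = rng.random(50_000) < 0.01   # Y- for served-but-never-viewable users

print(f"negative-control lift: {lift(treated_unrelated, control_unrelated):+.1%}")
# A value far from 0% would signal residual bias in the natural experiment.
```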
  21. Validation! Focusing on Campaign B from the previous example, we measure the ads’ impact on unrelated outcomes.
  22. Bias in the natural experiment!
     • We don’t see zero effect on many of our negative controls
       – There can be other factors that affect viewability and conversion that we don’t account for
  23. Bias in the natural experiment!
     • We don’t see zero effect on many of our negative controls
       – There can be other factors that affect viewability and conversion that we don’t account for
     [Causal diagram: W (user features), A (served ads), V (viewable ad), Y (convert), W’ (additional user features)]
  24. Bias in the natural experiment!
     • We don’t see zero effect on many of our negative controls
       – There can be other factors that affect viewability and conversion that we don’t account for
     [Same causal diagram, with the parameter of interest highlighted]
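
The summary slide later notes that adjusting for viewability features is easier than adjusting for targeting features. One simple form such an adjustment could take (a stratified comparison on a hypothetical viewability-related feature W’; the talk's actual estimator may differ) is sketched below.

```python
# Illustrative stratified adjustment for a viewability-related feature W'
# (here a hypothetical "ad position" bucket): compute the treated-vs-control
# difference in conversion rates within each stratum, then average across strata.
import pandas as pd

df = pd.DataFrame({
    "ad_position": ["above_fold", "above_fold", "below_fold", "below_fold"] * 3,
    "treated":     [1, 0, 1, 0] * 3,
    "converted":   [1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0],
})

def stratum_effect(g):
    rates = g.groupby("treated")["converted"].mean()
    return rates.get(1, 0.0) - rates.get(0, 0.0)   # absolute lift within the stratum

adjusted = df.groupby("ad_position").apply(stratum_effect).mean()
print(f"W'-adjusted effect (difference in conversion rates): {adjusted:+.3f}")
```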
  25. Validation! Returning to Campaign B from the previous example, we measure the ads’ impact on irrelevant outcomes.
  26. Summary!
     • Viewability enables a natural experiment
       – Combines the benefits of A/B tests and observational analysis
       – Adjustment for viewability features is easier than adjusting for targeting features
       – Results in a large reduction in bias
     • Negative controls allow for validation of models when the true value being estimated is unknown
       – As the true effect of a natural experiment is usually unknown, negative controls provide a method for validation
  27. Versatility!
     • Features that can be used in a natural experiment can be found in data sets from a wide array of industries
       – Viewability of stories in a user’s news feed
       – Listening to songs on shuffle
       – Winning bids in online advertising real-time bidding systems
     • Valid negative controls naturally exist in many industries
       – Purchasing unrelated products
       – Clicking unrelated links
  28. Acknowledgments!
     • Integral Ad Science: Daniel Hill, Ekaterina Eliseeva, Gijs Joost Brouwer, Kiril Tsemekhman
     • NYU Stern: Foster Provost
     • UC Berkeley: Alan Hubbard
  29. Thanks! Robert Moakler — rmoakler@stern.nyu.edu | robert@integralads.com
