
Debunking Ad Testing

This presentation was originally presented at Hero Conf London by Martin Röttgerding.



  1. Debunking Ad Testing. Martin Röttgerding, Head of SEM
  2. Learner Outcomes • Ad testing best practices don't work • What actually happens when you follow best practices • What you should be doing instead
  3. Who's Talking? Martin Röttgerding • Head of SEM @ Bloofusion Germany • Blogger @ PPC-Epiphany.com • Judge @ SEMY (German Search Awards) • Dad @ Home • @bloomarty
  4. First off: No Worries • I'm not a mathematician. • This is not a math lecture. • I brought some data, though. (Image: iStock.com/digitalgenetics)
  5. In the Beginning: Optimize for Clicks • Ad 1: 96% served, 12,323 impressions, 594 clicks, 4.82% CTR • Ad 2: 4% served, 536 impressions, 37 clicks, 6.90% CTR
  6. Alternative Approaches • Gut feeling • Rules ("wait 100 clicks, then decide") • Statistical significance (usually: 95%)
  7. Statistical Significance in a Nutshell • Ad 1: 200 impressions, 10 clicks • Ad 2: 200 impressions, 20 clicks
  8. Statistical Significance in a Nutshell [chart: CTR probability distributions for Ad 1 (200 impressions, 10 clicks) and Ad 2 (200 impressions, 20 clicks) on a 0%-16% CTR axis]
  9. Statistical Significance in a Nutshell [same chart, animation step]
  10. Statistical Significance in a Nutshell [same chart, animation step]
  11. Statistical Significance in a Nutshell [chart: the same distributions with ten times the data (Ad 1: 2,000 impressions, 100 clicks; Ad 2: 2,000 impressions, 200 clicks), which narrows and separates them]
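To make the nutshell concrete: here is a minimal sketch (mine, not the deck's; it assumes a standard two-proportion z-test, one common way to compare two CTRs) applied to the numbers from slides 7 and 11:

```python
# Minimal sketch: two-proportion z-test for comparing two ads' CTRs.
# Standard statistics, not code from the presentation.
from math import sqrt, erfc

def ctr_p_value(imp1, clicks1, imp2, clicks2):
    """Two-sided p-value for the hypothesis CTR(ad1) == CTR(ad2)."""
    p1, p2 = clicks1 / imp1, clicks2 / imp2
    pooled = (clicks1 + clicks2) / (imp1 + imp2)  # pooled CTR under H0
    se = sqrt(pooled * (1 - pooled) * (1 / imp1 + 1 / imp2))
    z = abs(p1 - p2) / se
    return erfc(z / sqrt(2))  # P(|Z| >= z) for a standard normal Z

# Slide 7: Ad 1 = 200 impressions / 10 clicks, Ad 2 = 200 / 20.
print(ctr_p_value(200, 10, 200, 20))      # ~0.058: just short of 95% confidence
# Slide 11: ten times the data separates the distributions clearly.
print(ctr_p_value(2000, 100, 2000, 200))  # ~2e-9: overwhelmingly "significant"
```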
  12. Pros and Cons • You have the power. • You know what to do. • You look good. • It doesn't work.
  13. Problem #1: Waiting for Significance
  14. Is it significant yet? no no no no no no no no no yes "We are 95% certain that this ad is better than the other one."
  15. An Experiment • 576 ad pairs • Rotated evenly • Left untouched for 12 months
  16. Analyzing the Data • Script to evaluate the data (a sketch follows below) • Calculate the level of significance for each day • Visualization: – Ad 1 reaches statistical significance (95%) – Ad 2 reaches statistical significance (95%)
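The deck doesn't show the script itself; a hypothetical sketch of the daily evaluation it describes could look like this, recomputing the test on each day's cumulative totals:

```python
# Hypothetical sketch of the per-day significance evaluation.
# daily_stats: one (impressions_ad1, clicks_ad1, impressions_ad2, clicks_ad2)
# tuple per day; assumes both ads get impressions every day.
from math import sqrt, erfc

def significance_trajectory(daily_stats, alpha=0.05):
    i1 = c1 = i2 = c2 = 0
    trajectory = []
    for imp1, clk1, imp2, clk2 in daily_stats:
        i1, c1, i2, c2 = i1 + imp1, c1 + clk1, i2 + imp2, c2 + clk2
        pooled = (c1 + c2) / (i1 + i2)
        if 0 < pooled < 1:
            se = sqrt(pooled * (1 - pooled) * (1 / i1 + 1 / i2))
            p = erfc(abs(c1 / i1 - c2 / i2) / se / sqrt(2))
        else:
            p = 1.0  # no clicks yet (or nothing but clicks): nothing to test
        trajectory.append(p < alpha)  # True = "significant" as of this day
    return trajectory
```

The visualization in the following slides marks the days on which this flag flips on and off.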
  17. The Result (a small part) [annotated chart: statistically significant; no longer significant; still significant…; waiting for significance]
  18. The Result (zoomed out) [chart]
  19. The Result (zoomed out) [chart, continued]
  20. Results • Most tests reached a significance level of 95% at some point:
      Minimum total impressions | Tests reaching significance | Still significant in the end
      1,000                     | 55%                         | 13%
      10,000                    | 62%                         | 12%
      100,000                   | 81%                         | 11%
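This table is exactly what repeated peeking predicts. A quick way to convince yourself (my illustration with made-up parameters, not the study's data): simulate ad pairs whose true CTRs are identical and check significance after every day. Far more than 5% of these A/A pairs look "significant" at some point:

```python
# Simulated A/A tests: both "ads" share the same true CTR, yet daily
# significance checks cross the 95% threshold far more often than 5%.
# All parameters are illustrative assumptions.
import random
from math import sqrt, erfc

def ever_significant(true_ctr=0.05, imps_per_day=100, days=365):
    imps = c1 = c2 = 0  # cumulative per-ad impressions and clicks
    for _ in range(days):
        imps += imps_per_day
        c1 += sum(random.random() < true_ctr for _ in range(imps_per_day))
        c2 += sum(random.random() < true_ctr for _ in range(imps_per_day))
        pooled = (c1 + c2) / (2 * imps)
        if 0 < pooled < 1:
            se = sqrt(pooled * (1 - pooled) * 2 / imps)
            z = abs(c1 - c2) / imps / se
            if erfc(z / sqrt(2)) < 0.05:
                return True  # declared "significant" at least once
    return False

random.seed(1)
trials = 200
hits = sum(ever_significant() for _ in range(trials))
print(f"{hits / trials:.0%} of identical ad pairs reached significance")
```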
  21. Problem #2: Not looking at all the pieces
  22. So this is what we see… • Ad 1: 2,000 impressions, 200 clicks, 10% CTR • Ad 2: 3,000 impressions, 240 clicks, 8% CTR
  23. How we tend to think of CTR [diagram: your ad → CTR]
  24. What drives CTR? [diagram: CTR is driven by the ad and by the network (Google vs. Search Partners)]
  25. Search Partners?
  26. Search Partners
  27. Segmented by Network [table: Ad 1: 2,000 impressions, 200 clicks, 10% CTR; Ad 2: 3,000 impressions, 240 clicks, 8% CTR; the Google and Search Partners rows are still blank]
  28. Segmented by Network
      Ad 1: 2,000 impressions, 200 clicks, 10% CTR (Google: 1,000 / 180 / 18%; Search Partners: 1,000 / 20 / 2%)
      Ad 2: 3,000 impressions, 240 clicks, 8% CTR (Google: 1,000 / 220 / 22%; Search Partners: 2,000 / 20 / 1%)
  29. Also possible… [the same table as slide 28]
  30. Also possible…
      Ad 1: 2,000 impressions, 200 clicks, 10% CTR (Google: 1,000 / 180 / 18%; Search Partners: 1,000 / 20 / 2%)
      Ad 2: 3,000 impressions, 270 clicks, 9% CTR (Google: 1,000 / 220 / 22%; Search Partners: 2,000 / 50 / 2.5%)
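These tables are an instance of Simpson's paradox: the overall winner can lose in every segment because the two ads' impressions are split differently across networks. A few lines of Python (mine, using the slide-30 numbers) make the reversal explicit:

```python
# Slide 30's numbers: Ad 1 wins overall yet loses in BOTH segments,
# because Ad 2 has twice as many low-CTR Search Partners impressions.
ads = {
    "Ad 1": {"Google": (1000, 180), "Search Partners": (1000, 20)},
    "Ad 2": {"Google": (1000, 220), "Search Partners": (2000, 50)},
}

for name, segments in ads.items():
    imps = sum(i for i, _ in segments.values())
    clicks = sum(c for _, c in segments.values())
    per_segment = ", ".join(f"{seg} {c / i:.1%}" for seg, (i, c) in segments.items())
    print(f"{name}: overall {clicks / imps:.1%} ({per_segment})")

# Ad 1: overall 10.0% (Google 18.0%, Search Partners 2.0%)
# Ad 2: overall 9.0% (Google 22.0%, Search Partners 2.5%)
```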
  31. How common is this? Based on a study of 6,500 ad pairs, compared with an AdWords Script: • The overall winner loses on Google: 32.74% • The overall winner loses on both Google & Search Partners: 12.23%
  32. Quick Win: Ignore Search Partners • Well…
  33. What drives CTR? [diagram: CTR is driven by the ad, the network (Google vs. Search Partners), and the slot (top vs. other)]
  34. Same thing with slots. Based on a study of 6,500 ad pairs, compared with an AdWords Script: • The overall winner loses in the top slot: 18.46% • The overall winner loses on both top & other: 6.30%
  35. What drives CTR? [diagram: CTR is driven by the ad, the device, the ad position, the network (Google vs. Search Partners), and the slot (top vs. other)]
  36. CTR by avg. Position (Google top) [chart: CTR against average position, 1.0 to 4.0]
  37. Exact Positions' CTRs (Google top) [chart: the same curve with the exact positions' CTRs overlaid: 18.05%, 13.64%, 12.05%, 10.09%]
  38. Interpolated CTRs [chart: CTRs linearly interpolated between the exact positions]
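Slide 38's interpolation can be read like this (my sketch of the idea, assuming the four exact-position CTRs from slide 37 belong to positions 1 through 4): the expected CTR at a fractional average position lies on the line between the neighboring exact positions:

```python
# Sketch: linear interpolation between exact-position CTRs.
# Assumes slide 37's values belong to positions 1-4.
EXACT_CTR = {1: 0.1805, 2: 0.1364, 3: 0.1205, 4: 0.1009}

def interpolated_ctr(avg_pos):
    lo = int(avg_pos)                 # nearest exact position at or below
    hi = min(lo + 1, max(EXACT_CTR))  # nearest exact position above
    frac = avg_pos - lo
    return EXACT_CTR[lo] * (1 - frac) + EXACT_CTR[hi] * frac

print(f"{interpolated_ctr(1.4):.2%}")  # ~16.29% at an average position of 1.4
```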
  39. What happens at the impression level… • This is just a glimpse into impression-level data • How would you assess the following scenarios?
  40. Scenario 1 [illustration: your ad is on the page; the user clicks on a different ad … and never scrolls down]
  41. Scenario 2 [illustration: the user clicks on a different ad … after 0.2 seconds]
  42. Scenario 3 [illustration: the user clicks on your ad … and on all the other ads as well]
  43. Scenario 4 [illustration: the user clicks on your ad … after spending 35 seconds reading all the search results]
  44. How would you evaluate these? • 4 impressions • 2 clicks • CTR: 50%
  45. Problem #3: Ignoring Feedback
  46. What drives CTR? [diagram: CTR is driven by the ad, slot (top vs. other), device, ad position, and network (Google vs. Search Partners); CTR in turn feeds the ad auction and ad ranking]
  47. The position feedback • Positive feedback: higher CTR → higher Quality Score → better position → higher CTR • No loop: position effects do not affect QS
  48. The impressions feedback
  49. The impressions feedback • Negative feedback: higher CTR → higher Quality Score → more (but less relevant) impressions → lower CTR • No loop: low-relevance impressions are evaluated separately
  50. CTR Feedback [diagram: the full picture of CTR drivers and the auction's feedback, mostly invisible to the advertiser]
  51. Problem #4: Thinking you are better motivated than Google
  52. "Google doesn't understand my business" • "Google doesn't care" • "Ad testing isn't important enough for Google to get it right"
  53. The AdWords business model [diagram] • Selling ad clicks means asking: "How much would you give us if we gave you the click?" Advertisers want clicks, but Google has no control over clicks. • Selling ad impressions means asking: "How much would you give us if we showed your ad?" • So Google converts the bids.
  54. The ad auction
      Ad Rank = CPC bid × Quality Score
              = CPC bid × CTR
              = (Bid / Clicks) × (Clicks / Impressions)
              = Bid / Impressions
              = "How much would you give us if we showed your ad?"
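In numbers (my illustration, not from the deck): converting per-click bids into per-impression values means an ad with a lower bid but a higher CTR can outrank a higher bidder:

```python
# Slide 54's conversion: bid per click x CTR = bid per impression.
# The bids and CTRs are hypothetical.
def bid_per_impression(cpc_bid, ctr):
    return cpc_bid * ctr

print(bid_per_impression(1.00, 0.08))  # 1.00 bid at 8% CTR -> 0.08 per impression
print(bid_per_impression(2.00, 0.03))  # 2.00 bid at 3% CTR -> 0.06 per impression
# The lower bidder ranks higher: its ad is worth more per impression shown.
```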
  55. Getting CTR wrong…
  56. Problem #5: The concept itself
  57. Conflicting Mindsets • Have a dedicated ad for everything. • Find the best ad and use only that. [diagram: [keywords] branching into question marks]
  58. Example: Location Context • Are they out or at home? • Are they moving or standing still? • Are they at a familiar place?
  59. Example: Search History • Have they searched for this before? • Did they interact with ads? • Did they interact with organic results? • Have they seen our ad before?
  60. Example: Personality • Do they take their time to read the entire ad? • How do they respond to – discounts – reassurances – testimonials
  61. Testing for Things We Can't See? • Manually: impossible • Otherwise:
  62. So… what should we do?
  63. New Mindset • You don't have control over ad testing. Let it go. • There can be multiple winners. • Use Google's optimized ad rotation by default.
  64. Let the Machines Do Their Job • Google is well motivated • Google is really good with data and algorithms[citation needed] • Let Google decide which ads to show
  65. Provide Meaningful Ad Variations • Create ads for client personas • Try out different USPs • Big stuff first
  66. Know Google's Limits • CTR & conversion rate • Historical data • Semantics
  67. Keep an Eye on the Machines • If necessary, force data collection • Rotate at the ad group level • Consider the cost of even rotation • Alternative: add the ad again (Image: iStock.com/RichVintage)
  68. To Sum Up… • No more micromanaging ad tests • Focus on messaging and on supervising the machines
  69. Thank You! Martin Röttgerding, Head of SEA, Bloofusion Germany GmbH • Agency blog: Die Internetkapitäne • Advanced AdWords blog: PPC-Epiphany.com • martin@bloofusion.de • @bloomarty
