Assessing Online Ads Beyond Only Clicks
Ruth Garcia @ruthygarcia
Skyscanner – April 25, 2017
Online advertising: growth rate* of 17% from 2006 to 2015
Online Advertising Revenue (billion $)
Source: IAB/PwC Internet Ad Revenue Report, HY 2016
* CAGR: Compound Annual Growth Rate
Year    First 6 months    Last 6 months
2006    $7.9              $9.0
2007    $10.0             $11.2
2008    $11.5             $11.9
2009    $10.9             $11.8
2010    $12.1             $13.9
2011    $14.9             $16.8
2012    $17.0             $19.5
2013    $20.1             $22.7
2014    $23.1             $26.4
2015    $27.5             $32.1
2016    $32.7             ?
Over the past ten years, second-half revenues averaged 53% of the annual total.
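As a quick check on that 53% figure, here is a minimal Python sketch that recomputes the second-half share from the table above (values in billions of dollars, 2006-2015):

```python
# Half-year online ad revenue (billion $), 2006-2015, from the table above.
h1 = [7.9, 10.0, 11.5, 10.9, 12.1, 14.9, 17.0, 20.1, 23.1, 27.5]
h2 = [9.0, 11.2, 11.9, 11.8, 13.9, 16.8, 19.5, 22.7, 26.4, 32.1]

# Share of each year's total revenue that arrived in the second half.
shares = [b / (a + b) for a, b in zip(h1, h2)]
print(f"average H2 share: {sum(shares) / len(shares):.1%}")  # ~52.9%, the ~53% on the slide
```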
Find balance: increase our revenue and engage users
D.G. Goldstein, R.P. McAfee, and S. Suri. The cost of annoying ads. WWW 2013.
A. Goldfarb and C. Tucker. Online display advertising: Targeting and obtrusiveness. Marketing Science, 2011.
Goal of Data Science on Advertising
– Assess the quality of our ads beyond click-through rate (CTR) to better understand user experience.
– Take action to improve ad targeting and delivery towards a better user experience with ads.
– Provide feedback to our partners about their ads.
Where are we? CTR
Focusing on native ads
Native ads: impact on user conversion in one month (sample)

User type                          Clicked itineraries   Clicked native ads   Booked
Users who always see native ads    13.8%                 2.0%                 12.3%
Users who never see native ads     15.3%                 n/a                  11.4%
Most users will not click on an ad again
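A minimal sketch of how such a cohort comparison could be computed from a per-user event log; the DataFrame and its columns (saw_native_ad, clicked_itinerary, clicked_native_ad, booked) are hypothetical placeholders, not the actual schema:

```python
import pandas as pd

# Hypothetical one-month event log, one row per user.
events = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "saw_native_ad": [True, True, True, False, False, False],
    "clicked_itinerary": [True, False, True, True, False, True],
    "clicked_native_ad": [True, False, False, False, False, False],
    "booked": [True, False, False, True, False, True],
})

# Share of users in each cohort who clicked itineraries, clicked ads, or booked.
cohorts = events.groupby("saw_native_ad")[
    ["clicked_itinerary", "clicked_native_ad", "booked"]
].mean()
print(cohorts)  # rows: never see native ads (False) vs. always see them (True)
```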
Native ads with and without prices: what is a better ad?
When a price is available, we show it in the native inline ad.
Higher CTR for ads without price
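One way to check whether such a CTR gap is statistically meaningful is a two-proportion z-test; a sketch with made-up click and impression counts (the real numbers are not in the deck), using statsmodels:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: clicks and impressions for ads shown with vs. without a price.
clicks = [1_200, 1_550]          # [with price, without price]
impressions = [100_000, 100_000]

stat, p_value = proportions_ztest(count=clicks, nobs=impressions)
print(f"CTR with price: {clicks[0] / impressions[0]:.2%}, "
      f"without price: {clicks[1] / impressions[1]:.2%}, p={p_value:.4f}")
```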
Beyond clicks: what new features tell you
We need new metrics to evaluate the quality of ads beyond CTR:

Pre-click: when is an ad bad?
• Metric: feedback from users via a hide button, with a survey question after the hide button (the new pre-click metric)
Click: when is a click good?
• Metric: dwell time
Post-click: when does an ad provide a good experience?
• Metric: booking/sales rate from ads
• Metric: time on partner's site, based on dwell time
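A sketch of how these per-stage metrics could be rolled up per ad from an impression log; the column names are illustrative assumptions, not a real schema:

```python
import pandas as pd

# Hypothetical per-impression log.
log = pd.DataFrame({
    "ad_id":      ["a1", "a1", "a2", "a2", "a2"],
    "hidden":     [False, True, False, False, False],  # user pressed "hide"
    "clicked":    [True, False, True, True, False],
    "dwell_secs": [42.0, None, 3.5, 60.0, None],       # time on landing page
    "booked":     [True, False, False, True, False],
})

per_ad = log.groupby("ad_id").agg(
    hide_rate=("hidden", "mean"),       # pre-click: when is an ad bad?
    ctr=("clicked", "mean"),            # the classic click metric
    avg_dwell=("dwell_secs", "mean"),   # click quality: dwell time
    booking_rate=("booked", "mean"),    # post-click: good experience?
)
print(per_ad)
```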
Machine learning opportunities to rank ads

Ke Zhou, Miriam Redi, Andy Haines, Mounia Lalmas. Predicting Pre-click Quality for Native Advertisements. WWW 2016.
Nicola Barbieri, Fabrizio Silvestri, Mounia Lalmas. Improving Post-Click User Engagement on Native Ads via Survival Analysis. WWW 2016.

Stage        Machine learning model
Pre-click    Logistic regression
Click        Logistic regressions
Post-click   Random survival forest
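In that spirit, a minimal sketch of a logistic regression predicting pre-click quality from ad features; the feature set and synthetic labels are invented for illustration, not taken from the cited papers:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical ad features: [has_price, image_brightness, text_length, brand_known]
X = rng.random((1_000, 4))
# Synthetic label: 1 if users tended to hide the ad (low pre-click quality).
y = (X[:, 0] * 0.5 + X[:, 2] * 0.8 + rng.normal(0, 0.3, 1_000) > 0.9).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# P(quality | ad): probability the ad is NOT hidden, usable for ranking.
p_quality = model.predict_proba(X_test)[:, 0]
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```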
Fake it until you make it: validate your results

Baseline: Score(ad) = bid × pCTR
Quality-aware: Score(ad) = bid × P(quality | ad), serving the ad only when P(quality | ad) > δ
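A small sketch contrasting the two scoring rules; the quality threshold δ (delta) and the example bids and probabilities are illustrative:

```python
def score_baseline(bid, p_ctr):
    """Classic ranking: rank purely by expected revenue."""
    return bid * p_ctr

def score_quality(bid, p_quality, delta=0.4):
    """Quality-aware ranking: drop ads whose predicted quality is below delta."""
    return bid * p_quality if p_quality > delta else None

ads = [("ad_a", 2.0, 0.9), ("ad_b", 5.0, 0.3)]  # (name, bid, P(quality|ad))
eligible = [(name, score_quality(bid, p)) for name, bid, p in ads
            if score_quality(bid, p) is not None]
ranked = sorted(eligible, key=lambda pair: pair[1], reverse=True)
print(ranked)  # ad_b is filtered out despite its higher bid
```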
Take-away messages
– Metrics: pre-click, click, and post-click quality, not only CTR; e.g., dwell time, hide-button feedback, ad bookings.
– Features: collect features to evaluate ads: text, images, brand, numbers, partner site.
– Model: use machine learning models both to predict and to provide feedback to partners (e.g., logistic regression, random forest), and use continuous learning.
– A/B testing: fake it until you make it; take successes into production and keep improving.
Thank you
@ruthygarcia
ruth.garcia@skyscanner.net
