Talking Bayes to Business: Marketing Spend Usecase (EARL2019)
Slides from my talk at EARL 2019: some background on Bayesian thinking and how to communicate it with your stakeholders, plus a marketing-spend example showing how we estimated an effect without an A/B test.
EARL London | 10-12 September, 2019
“Hard Talks”: Talking Bayes to Business
A marketing spend use-case
Yizhar (Izzy) Toren
About me
• Adopted R in 2003 & introduced it to my university
• Bayesian by belief, frequentist by practice
• I call myself a "Data Scientist" because I know math, stats & just enough programming to be dangerous
• Currently focused on forecasting & causality (for elasticity, optimisation, etc.) and NLP for recommendations & search
Find me: @BigEndianB, LinkedIn, github.com/ytoren
Agenda
• Motivation: Is my ad campaign working?
• Why Bayes?
• Use Case: Measuring impact without a test
  ○ Toolkits: CausalImpact
  ○ A workflow example
Meet Nadia
Nadia is a marketing director.
Nadia is smart.
She wants to know if a new video ad campaign will be effective.
She talks to you about impact, tracking & KPIs before planning the campaign.
BE LIKE NADIA 🙋
Meet Nadia (the realistic version)
Nadia is a marketing director.
Nadia is smart… make that responsible.
She wants to know if the new video ad campaign was effective.
She talks to you about impact, tracking & KPIs, not before planning, but after releasing the campaign.
BE LIKE NADIA, but be better next time 🙎
In a perfect world (read: not the real one), we have:
• A model of population & causality (e.g. more ads ➡ more signups)
• Well-defined KPIs (daily signups) and an understanding of effect size
• Sufficient volume for significance & power
• Sufficient velocity for a timely answer
• Good randomisation & user-tracking infrastructure for A/B tests
💁 In the real world, all of this is harder than you'd think
Is it working?
Good news! We pass the IOTT (Intra-Ocular Trauma Test).
[Plot: test-group daily visits, before vs. after the campaign. 95% CI: [102.2, 130.9]; p-value < 2e-15]
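For context, one way such a CI and p-value can be produced is a plain two-sample comparison. A toy version with simulated numbers (not the talk's data):

```r
# Toy before/after comparison on simulated daily visits (invented numbers):
set.seed(42)
before <- rnorm(60, mean = 100, sd = 15)  # daily visits before the campaign
after  <- rnorm(60, mean = 215, sd = 20)  # daily visits after the campaign
t.test(after, before)  # gives a 95% CI for the difference and a p-value
```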
Is it still working?
Life is noisy and complicated ➡ let's test!
- Nadia: "Can we say the ad campaign worked?"
- You: "Well… we saw an X% increase in daily visits, with p < 0.005"
- Nadia: "So… 99.5% it's working?"
- You: "No… and also, not necessarily"
And without a test? 😳
So Why Bayes?
• Get the answers you actually want (p-values = hard talk)
• A healthy conversation with stakeholders (priors)
• Problems first, not solutions backwards
• Sometimes you just can't test
• Because you have nifty tools in R*
* And some of them are R-only, so you now have a great excuse to introduce R into your org's toolset!
Use Case
• On 09/12 we implemented a new ad-spend policy on a content website ("ThyPipe"), where we can't run tests
• Nadia asks: "Is the new policy better than the old one?"
• Translation: what would have happened if we had not changed anything? (a.k.a. "the counterfactual")
Data
• User signups from different sources (ours is source1)
• Queries from a search engine (rhymes with "Doodle")
• Calendar: holidays, seasonality, feature releases, etc.
Possible Approaches
• A/B testing
• Compare "before" / "after" ❌
• Difference in differences 🔎🦄
• Multivariate regression + time series + GLM + … 🔎📦
• Bayesian time series ✔
We "got lucky": CausalImpact 🎯
CausalImpact
Step 1: Fit the best model you can on the "before" data (out-of-the-box, or customized via BSTS).
Step 2: Compare the actual "after" data to the model's simulation of the same period: this is the effect!
The read-out Nadia gets: "We see a 4% increase but it's probably noise: there's a 45% chance we're below the minimum 3% uplift required."
[Plot: observed vs. modelled series, before and after the intervention]
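A minimal sketch of this two-step workflow with the CausalImpact package. The data here is simulated, and the dates, column roles, and effect size are illustrative (with the post period aligned to the 09/12 policy date from the use case):

```r
# CausalImpact sketch on simulated data (all names/dates illustrative).
library(CausalImpact)
library(zoo)

set.seed(1)
n  <- 120
x1 <- as.numeric(100 + arima.sim(model = list(ar = 0.7), n = n))  # control series
y  <- 1.2 * x1 + rnorm(n)   # response that tracks the control
y[101:n] <- y[101:n] + 10   # injected lift after the policy change

dates  <- seq(as.Date("2019-06-04"), by = "day", length.out = n)
series <- zoo(cbind(y, x1), dates)  # response first, covariates after

pre.period  <- as.Date(c("2019-06-04", "2019-09-11"))  # Step 1: fit here
post.period <- as.Date(c("2019-09-12", "2019-10-01"))  # Step 2: compare here

impact <- CausalImpact(series, pre.period, post.period)
summary(impact)            # posterior effect estimate & credible interval
summary(impact, "report")  # plain-language summary for stakeholders
plot(impact)               # actual vs. counterfactual, pointwise & cumulative
```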
Summary
• A/B testing is great, when testing is feasible and the answers are meaningful
• If you can't test, simulate!
• Think problem first, not solution backwards
• Priors are an opportunity to engage with stakeholders
• Use powerful tools, but with care 🕸
• More details at github.com/ytoren/presentation-bsts
The answers you want
P("it works" | data) = P(data | "it works") × P("it works") / P(data)

• P("it works" | data) — the posterior: the answer Nadia wants
• P("it works") — the prior
• P(data | "it works") — the likelihood
• P(data) — might be hard to compute

Compare: p-value = P(data | "it's not working")
You are already having a "hard talk"...
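To make the gap concrete, a toy calculation (all numbers invented) showing that the p-value and the posterior answer different questions:

```r
# Toy numbers: 560 signups out of 10,000 visits vs. a 5% baseline rate.
x <- 560; n <- 10000; baseline <- 0.05

# Frequentist: p-value = P(data at least this extreme | "it's not working")
binom.test(x, n, p = baseline, alternative = "greater")$p.value

# Bayesian: P("it works" | data) = P(rate > baseline | data), under an
# assumed Beta(2, 38) prior centred on the 5% baseline.
a <- 2; b <- 38
1 - pbeta(baseline, a + x, b + n - x)  # the probability Nadia asked for
```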
Priors: The “Hard Talk”
• Choice of priors: subjective 🙀, but there are guidelines
• New discussions with stakeholders:
  ○ Internally: "if you had to guess", surveys, games, ...
  ○ Externally: industry benchmarks
• Some obvious defaults (mean = 0, "natural" limits, ...)
• Defaults from your tools (when in doubt - )
Your new job: translate business insights into a distribution
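A sketch of that translation job, with hypothetical numbers and an assumed Normal form (one convenient choice, not the only one): turn "if you had to guess" answers into a distribution, then read it back as a sanity check:

```r
# Translating stakeholder guesses into a prior (hypothetical numbers).
best_guess  <- 0.03  # "probably around a 3% uplift"
surprise_at <- 0.06  # "I'd be surprised if it beat 6%"

prior_mean <- best_guess
prior_sd   <- (surprise_at - best_guess) / qnorm(0.975)  # treat 6% as ~97.5th pctile

# Read the implied prior back to the stakeholder as a sanity check:
qnorm(c(0.025, 0.5, 0.975), prior_mean, prior_sd)  # roughly 0%, 3%, 6%
pnorm(0, prior_mean, prior_sd)                     # implied P(uplift <= 0)
```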
Frequentist: "Solution Backwards" vs. Bayesian: "Problem First"
[Diagram: two panels of "time to solve" vs. "problem scope". Frequentist panel: the tool's scope is smaller than the problem's, and solutions come straight from the tools. Bayesian panel: time goes into thinking & framing across the full problem scope before reaching solutions.]
• Frequentist: phrase the problem to fit the tools
• Bayes: find a model that fits the problem (but in finite time…)
Some More Toolkits
• Stan
  ○ Fully flexible & powerful
  ○ New syntax
  ○ Cross-platform
  ○ Multiple R wrappers (brms, rstanarm, …)
• Prophet
  ○ Stan wrapper
  ○ R & Python bindings
  ○ (Log-)additive Gaussian only
  ○ Time-series / trend oriented
• BSTS
  ○ R library (with R syntax)
  ○ (Log-)additive Gaussian / Poisson (sometimes)
  ○ Regression / causality oriented
  ○ CausalImpact is a wrapper around it (see the sketch below)
[Chart: the three toolkits arranged along "easy ↔ hard" and "flexible ↔ specific" axes, with BSTS marked]
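Since CausalImpact wraps BSTS, you can also build the state-space model yourself and hand it over, as the "customized via BSTS" step suggested. A minimal sketch of that route (simulated series as before; the weekly-seasonality component and 20-day post window are assumptions for illustration):

```r
# Custom-model route: a bsts state-space model fed into CausalImpact.
library(bsts)
library(CausalImpact)

set.seed(1)
n <- 120
x <- as.numeric(100 + arima.sim(model = list(ar = 0.7), n = n))  # control
y.full <- 1.2 * x + rnorm(n)
y.full[101:n] <- y.full[101:n] + 10   # injected post-period lift

post.idx <- 101:n
y <- y.full
y[post.idx] <- NA                     # mask the "after" period before fitting

ss  <- AddLocalLevel(list(), y)          # local-level trend component
ss  <- AddSeasonal(ss, y, nseasons = 7)  # assumed weekly seasonality
fit <- bsts(y ~ x, state.specification = ss, niter = 1000)

impact <- CausalImpact(bsts.model = fit,
                       post.period.response = y.full[post.idx])
summary(impact)
```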