Getting Started with Radio & TV

We recently brought in Mark Zamuner, founder of TwoNil, for Office Hours on “getting started with TV & Radio.” Mark was formerly VP of Customer Acquisition at eHarmony, and his agency now runs video and audio campaigns for companies like Uber, Zillow, and TripAdvisor.

  1. Offline Marketing Playbook
  2. Market Leading Consumer Internet Clients
  3. Discussion Points
     • How To Get Started – Mindset – Objective – Models – Sanity Check – Baseline – Investment – Creating a Test Matrix – Client Segmentation & Audience Identification – Spot-Level Attribution – Campaign Optimizations
     • Creative – Helpful Insights – Creative Brief – Potential Performance Ranges
     • Appendix: Other Odds & Ends
  4. Mindset: This is a test, this is only a test… Works just like any other marketing channel… Creative doesn’t need to be performance art… …all eyes will be on you; stay strong.
  5. What’s the Objective? The objective of the test is to determine if offline media can provide a viable platform to acquire customers. You won’t get everything right on the first pass… (worth repeating). What you definitely need to get from testing (see the sketch below):
     1. Response Rate: the number of people who visit or download divided by impressions
     2. Funnel Effects: downstream conversion rates & ARPU/AOV/LTV
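
A minimal Python sketch of these two deliverables, per the deck’s definitions; the field names and all example numbers are illustrative assumptions, not figures from the deck:

```python
# Response rate and funnel effects, per the deck's definitions. The funnel
# field names and all example numbers below are illustrative assumptions.

def response_rate(visits: int, impressions: int) -> float:
    """People who visit or download, divided by impressions."""
    return visits / impressions

def funnel_metrics(visits: int, signups: int, orders: int, revenue: float) -> dict:
    """Downstream conversion rates and average order value (AOV)."""
    return {
        "visit_to_signup": signups / visits,
        "signup_to_order": orders / signups,
        "aov": revenue / orders,
    }

print(f"response rate: {response_rate(45_000, 11_975_000):.3%}")   # ~0.376%
print(funnel_metrics(visits=45_000, signups=4_500, orders=900, revenue=40_500.0))
```
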
  6. 3 Models to Start
     1. Sanity Check: basic Excel
     2. Baseline: predicted forecast; dependent on depth of historical data (time-series, MMM)
     3. Investment Requirement: stats exercise
  7. Data Requirements
     • Historical data for all KPIs (as far back as possible)
     • Historical marketing spend and performance for all marketing channels (e.g. branded/non-branded SEM, email, social, podcast/audio)
     • Promotions and events for the historical data period (e.g. a Refer-a-Friend program)
     • Product changes
     • Significant website/funnel changes
  8. Sanity Check = Can This Work?
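
The deck calls this a “basic Excel” exercise; a hedged Python equivalent might look like the sketch below, where every input (CPM, response and conversion rates, target CPA) is an assumption to be replaced with your own numbers:

```python
# A hedged stand-in for the "basic Excel" sanity check: given planned spend
# and assumed rates, does the implied CPA clear the business's target?
# All inputs below are assumptions, not the deck's numbers.

def sanity_check(spend: float, cpm: float, response_rate: float,
                 visit_to_customer: float, target_cpa: float) -> dict:
    impressions = spend / cpm * 1_000
    visits = impressions * response_rate
    customers = visits * visit_to_customer
    cpa = spend / customers
    return {"impressions": impressions, "visits": visits,
            "customers": customers, "cpa": cpa,
            "viable": cpa <= target_cpa}

# Example: a $500K national TV test at an assumed $6 CPM, mid-range benchmarks.
print(sanity_check(spend=500_000, cpm=6.0, response_rate=0.004,
                   visit_to_customer=0.02, target_cpa=100.0))
```
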
  9. Create a Predictive Baseline
     1. You need to understand where the business will be trending before launching your campaign
     2. Enables you to assess the totality of impact in a pre/post view
     3. Make sure you validate the predictive fit of the model BEFORE you launch the campaign (holdout period)
     4. If there is no buy-in on the approach, consider a local scenario
     5. Use the baseline to determine the “lift required” scenario
     [Chart: 7-week holdout validation. Model fit is very close to actual values and consistently captures 2015 growth. A 7-week holdout period was used to test the model’s predictive power, resulting in a MAPE of 8%.]
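
A minimal sketch of the holdout-validation step on synthetic weekly data. Holt-Winters is a stand-in model chosen for brevity; the deck names only “time-series, MMM,” so the model form and the data here are assumptions:

```python
# Fit a time-series baseline on training data, hold out the last 7 weeks,
# and score predictive fit with MAPE, as the slide describes. Holt-Winters
# and the synthetic KPI series are assumptions for illustration.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
weeks = pd.date_range("2013-01-06", periods=156, freq="W")   # 3 years, weekly
kpi = pd.Series(10_000 + 40 * np.arange(156)                 # trend
                + 800 * np.sin(np.arange(156) * 2 * np.pi / 52)  # seasonality
                + rng.normal(0, 300, 156), index=weeks)          # noise

train, holdout = kpi[:-7], kpi[-7:]                          # 7-week holdout
model = ExponentialSmoothing(train, trend="add", seasonal="add",
                             seasonal_periods=52).fit()
forecast = model.forecast(7)

mape = (abs(holdout - forecast) / holdout).mean()
print(f"7-week holdout MAPE: {mape:.1%}")   # the deck's model achieved 8%
```
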
  10. Reverse Engineer Required Spend
     1. Assess the lift required against the key KPI(s) to create confidence
     2. Determine the level of confidence you need to make business decisions
     3. Ensure that all KPIs required to make the go-forward decision are represented
     4. Incorporate the volume needed back into the Sanity Model
     5. Determine if the spend is too great; if so, revisit the local scenario
     [Chart: rolling 7-day sums of daily UUs (millions), 1/28 through 9/9, showing the UU baseline, actual UUs, 50% and 80% confidence bands, and campaign edges.]
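
One way to reverse engineer the number, assuming the baseline model yields a forecast error band like the 80% band in the chart; the sigma, z-score, response rate, and CPM below are all assumptions:

```python
# How much lift must clear the baseline's 80% confidence band to be a
# readable signal, and what does that lift cost? All inputs are assumptions.

forecast_sigma = 200_000        # std dev of the baseline's weekly forecast error
z_80 = 1.28                     # one-sided z-score for 80% confidence

required_lift = z_80 * forecast_sigma          # weekly visits above baseline
print(f"required weekly lift: {required_lift:,.0f} visits")

# Feed the volume back into the sanity model via assumed response rate & CPM.
response_rate, cpm = 0.004, 6.0
required_impressions = required_lift / response_rate
required_spend = required_impressions / 1_000 * cpm
print(f"implied weekly spend: ${required_spend:,.0f}")   # too great? go local
```
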
  11. Key Test Design Question: Local or National
     National
     • Most cost-efficient
     • Accelerates the learning timeline (i.e. the initial investment can have positive business impact)
     • Higher investment threshold
     • Not suitable for multi-variable testing
     • Wide variety of networks and programming
     Local
     • Best for businesses with high variability
     • Provides a discrete A/B test-control environment for those wary of models
     • Requires bundling markets (3-market minimum for any test cell)
     • Allows for multi-variable testing (i.e. media mix, creative/offer, etc.)
     • Least cost-efficient, and media is less responsive
     • Need to incorporate national CPM into the performance model
     • Reliance on broadcast reduces response
  12. A Bit of Socialism… Nationalizing Local Results
     • Large price differential between local & national media
     • The factor can range between 3-8x depending on seasonality, markets, etc.
     • Broadcast is significantly less responsive on a short-term basis
     • Need to level-set local outputs (response rate, conversion, CPA)
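
The deck does not show its level-setting math; a rough sketch of the idea follows, using the 3-8x price factor from the slide and an invented response haircut. Treat both adjustments as placeholders for your own calibration:

```python
# Project a national CPA from a local test CPA. The deck says local media
# carries a 3-8x price premium vs. national and that local broadcast is less
# responsive; the response_discount below is an assumption, not TwoNil's model.

def nationalize_cpa(local_cpa: float, price_factor: float,
                    response_discount: float) -> float:
    """price_factor: how much cheaper national media is (deck: 3-8x).
    response_discount: assumed haircut because local broadcast under-responds
    relative to national (e.g. 0.8 = national CPA improves another 20%)."""
    return local_cpa / price_factor * response_discount

local_cpa = 1_331.0   # cost per subscription from the local test example (appendix)
for factor in (3, 5, 8):
    print(f"{factor}x price factor -> projected national CPA "
          f"${nationalize_cpa(local_cpa, factor, 0.8):,.0f}")
```
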
  13. Targeting: Using 1st and/or 3rd Party Data to Target
     [Diagram: 1st-party and 3rd-party data combine to define addressable markets and identify potential customers.]
  14. Media Planning: Building Effective Reach
     • Frequency of exposure builds as a campaign runs
     • This build of reach & frequency propels response rate
     • Recommend an absolute minimum of a 4-week test; 6-10 weeks preferred
     • Recommend increasing weight over time to get an initial read on elasticity

     | Week #             | 1          | 2          | 3          | 4          | 5          | Plan Delivery |
     |--------------------|------------|------------|------------|------------|------------|---------------|
     | GRPs               | 10         | 10         | 15         | 15         | 25         | 75            |
     | Impressions        | 11,975,000 | 11,975,000 | 17,962,500 | 17,962,500 | 29,937,500 | 89,812,500    |
     | Total Reach        | 8.0%       | 13.6%      | 19.8%      | 24.4%      | 30.1%      | 30.1%         |
     | Avg. Freq.         | 1.3x       | 1.5x       | 1.8x       | 2.1x       | 2.5x       | 2.5x          |
     | Effective 3+ Reach | 0.4%       | 1.4%       | 3.4%       | 5.6%       | 9.2%       | 9.2%          |
     | Wkly Spend         | $62,869    | $62,869    | $94,303    | $94,303    | $157,172   | $471,516      |
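
The GRP-to-impressions arithmetic in that table is mechanical; the sketch below reproduces it. Note that reach and frequency curves come from media-mix tools, so only the identities are shown, and the universe size is backed out of the table rather than stated in the deck:

```python
# 1 GRP = impressions equal to 1% of the target universe. The table implies a
# universe of 119,750,000 (11,975,000 impressions / 10 GRPs * 100); CPM is
# backed out of weekly spend. Both are derived from the table, not stated.

UNIVERSE = 119_750_000

def impressions_from_grps(grps: float, universe: int = UNIVERSE) -> float:
    return grps / 100 * universe

weekly_grps = [10, 10, 15, 15, 25]
weekly_spend = [62_869, 62_869, 94_303, 94_303, 157_172]

for wk, (grps, spend) in enumerate(zip(weekly_grps, weekly_spend), start=1):
    imps = impressions_from_grps(grps)
    print(f"week {wk}: {grps} GRPs -> {imps:,.0f} impressions, "
          f"CPM ${spend / imps * 1_000:.2f}")
print(f"plan total: {sum(weekly_grps)} GRPs, "
      f"{impressions_from_grps(sum(weekly_grps)):,.0f} impressions")
```
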
  15. Typical Response “S” Curve
     • The initial weeks will be the worst performing
     • Allow yourself time to see the response start to build
     • Funnel metrics will move on a staggered basis: visits initially, leads after a few weeks, sales/orders later
  16. Status Check: Where Are We
     • In control – this is just performance marketing at a large scale
     • We’ve got the team aligned to collect and share data & have built our models
     • Based on goals/objectives we’re testing either nationally or locally
     • Using 1st/3rd-party data we’ve selected the most targeted networks & dayparts
     • Our test has enough investment & time to allow response to build and create a clear signal
     • Up next: measurement, optimization, and creative development
  17. The New Marketing Reality
     • Direct-to-web/mobile response is the new reality and plays a significant role for all companies
     • Traditional data analysis is being thrown off by website traffic and digital marketing footprints
     • Attribution is more complex than ever, especially when traditional, offline media plays a significant role in your mix
     • Marketers need sophisticated tools to make sense of all the data and, more importantly, to make decisions about resource allocations that drive efficient customer acquisition and growth
  18. A Priori Impact
     • Spots are encoded with digital watermarks
     • Tracking services obtain daily feeds of specific spot airings
     • Pre/post spot-airing logic estimates the expected number of actions in a “five-minute window” after a commercial airs; expected actions are subtracted from observed actions to create the lift analysis
     • This basic spot attribution focuses on the immediate/initial response after the spot airing
     • Complexities exist outside the simple model
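
A minimal sketch of that pre/post window logic. The deck doesn’t define how “expected” actions are estimated, so the trailing mean of matched quiet windows below is an assumption:

```python
# Five-minute-window spot attribution: estimate expected visits in the window
# from the pre-airing baseline, then subtract expected from observed to get
# lift. The baseline definition (mean of matched windows) is an assumption.
from statistics import mean

def spot_lift(pre_window_visits: list[int], observed_window_visits: int) -> float:
    """pre_window_visits: visit counts for comparable 5-minute windows before
    the airing (same daypart, no other spot on air)."""
    expected = mean(pre_window_visits)
    return observed_window_visits - expected

# Hypothetical: ~120 visits per quiet 5-min window, 410 observed post-airing.
lift = spot_lift([118, 125, 117, 122], observed_window_visits=410)
print(f"estimated lift: {lift:.0f} visits")
# Response rate per the deck's definition: lift over impressions.
print(f"response rate: {lift / 150_000:.3%}")   # assumed 150K spot impressions
```
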
  19. Operationalizing Granular Attribution
  20. Optimization Levers
     • Network Selection
     • Utilizing Attribution
     • Creative
     • Normative Benchmarks
     • Call to Action/Offer
     • Unit Length
     • Seasonality
  21. Ad Hoc Analysis Example: Scaling
     • Volume has moved closely in line with spend – more closely than at any other time (highest correlation), with the steepest slope (most bang for each buck)
     • If this trend continued: $600K would yield ~34,970 DOS subs; $800K would yield ~39,770 DOS subs
  22. Cross-Client Creative Review
     OBJECTIVE: Understand the short-term, immediate response to different creative copy based on the spot attribution method.
     METHOD: Each account’s creatives were coded for criteria that may indicate the strength of a creative in terms of immediate, measurable response, i.e. a visit to the site.

     | Criteria        | Description                             |
     |-----------------|-----------------------------------------|
     | CTA             | Is there a CTA?                         |
     | Strong CTA*     | Is the CTA strong?                      |
     | Dollar Amount   | Is a dollar amount noted?               |
     | Discount        | Is a discount provided?                 |
     | Brand Frequency | How frequently the product is mentioned |

     METRICS: The key metric observed is the visit lift brought to the site in the five minutes after the advertisement airs; the main KPI is response rate, or visit lift over impressions.
     * Consistent messaging throughout the ad, the creation of a sense of urgency, and the presence of landing-page inserts were a few of the criteria, among others.
  23. Calls to Action Work!
     Station/dayparts must have run both ‘Strong’ and ‘Weak’ types and have aired in the same week relative to their respective launch dates.
     Key Takeaways
     • ‘Strong’ creatives possessed significantly higher mean response rates than ‘Weak’ creatives, in aggregate and across all clients observed
     • The difference in mean response rate from ‘Strong’ to ‘Weak’ ranged from 1.5-2.2 times; in aggregate, ‘Strong’ creatives possessed a response rate 1.8 ± 0.1 times greater than their ‘Weak’ counterparts
     Note: Each point represents a week of air. Error bars indicate the standard error of the mean at 95% confidence.
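
A sketch of the comparison behind that takeaway. The weekly numbers below are invented (the deck reports only the 1.8x ± 0.1 aggregate ratio), and the significance test is an assumed choice:

```python
# Compare weekly response rates for 'Strong' vs 'Weak' creatives with a ratio
# of means and Welch's t-test. Weekly data points are invented for illustration.
import numpy as np
from scipy import stats

strong = np.array([0.52, 0.61, 0.58, 0.49, 0.55]) / 100   # weekly response rates
weak   = np.array([0.30, 0.33, 0.28, 0.31, 0.34]) / 100

ratio = strong.mean() / weak.mean()
t, p = stats.ttest_ind(strong, weak, equal_var=False)     # Welch's t-test
print(f"strong/weak mean response-rate ratio: {ratio:.2f}x (t={t:.1f}, p={p:.4f})")
```
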
  24. Strong CTAs Burn Out
     Cumulative response rate plotted against total target rating points (TRPs) delivered.
     Key Takeaways
     • Creatives coded as ‘Strong’ had a significantly greater cumulative response rate than their ‘Weak’ counterparts, in aggregate and across all clients observed
     • The negative correlation between cumulative response rate and TRPs suggests the efficacy of a call to action deteriorates over time: an ad at a given point in its flight will likely not respond as strongly as at its initial launch
     Note: Linear fit applied. Shaded area indicates the 95% confidence interval of the slope.
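
The wear-out analysis reduces to a linear fit; a minimal sketch on invented data points:

```python
# Regress cumulative response rate on cumulative TRPs delivered; a negative
# slope indicates the CTA's efficacy decays with exposure. Data is invented.
import numpy as np

trps = np.array([75, 150, 225, 300, 375, 450])                  # cumulative TRPs
cum_rr = np.array([0.58, 0.55, 0.51, 0.49, 0.46, 0.44]) / 100   # cum. resp. rate

slope, intercept = np.polyfit(trps, cum_rr, 1)                  # linear fit
print(f"slope: {slope:.2e} response-rate points per TRP")       # negative => burn-out
print(f"projected RR at 600 TRPs: {slope * 600 + intercept:.3%}")
```
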
  25. Discounting
     Cumulative response rate plotted against total target rating points (TRPs) delivered.
     Key Takeaways
     • Observed a significant difference in mean response rate among 2 of the 3 clients; however, Client ‘F’ had only two weeks of observation
     • In aggregate, we did not observe a significant difference in mean response rate between ‘No Discount’ and ‘Discount’
     • This may suggest the prospect of a discount, or simply the size of the discount itself, is not a large enough incentive to prompt an immediate site visit
     Note: Each point represents a week of air. Error bars indicate the standard error of the mean at 95% confidence.
  26. Creative Development Best Practices
     • Keep It Simple – Benefits – Reasons to Believe
     • Include a Voice-Over
     • Utilize a Call to Action – Offer/End Card – Discounting Discouraged – Free Drives Response
     • Length – :30s to start unless very complex
     Some of Our Favorites
     • http://www.ispot.tv/ad/7fl2/wix-com-do-it-yourself
     • http://www.ispot.tv/ad/7a6f/honest-diapers-all-about-that-honest-song-by-meghan-trainor
     • http://www.ispot.tv/ad/7orb/hautelook-gym-coffee-hautelook
     • http://www.ispot.tv/ad/7nQa/zillow-returning-soldier
  27. Status Check
     • Confident, prepared, and we know the road ahead
     • Passed the sticker-shock phase
     • Drafting a creative brief that highlights our benefits and has a CTA/offer
     • Our data/engineering team is a partner, and we are sharing data to build models
     • Sober assessment, based on our risk & decision-making criteria, of national vs. local
     • Briefed everyone on what to expect… it’s not going to work on day one
     • Give yourself 12 weeks for the creative development process (could be cut to 6-10)
     • Re-check the product & marketing calendars!!! No big surprises planned : )
  28. Summary Facts
     TV: $500K - $1M; dev. timeline: 6-12 weeks; flight: 6 weeks
     • Local: ~$350K
     • National: ~$750K
     • Creative: ~$150K
     Audio: $150K - $500K; dev. timeline: 3-4 weeks; flight: 10 weeks
     Benchmark response range: 0.25% to 0.75%, and CPV of $1 to $3 (dependent on market size)
     • Dedicated engineering resources
     • Outside feedback on creative concept/script
     • Clear, controlled marketing & product calendar
     • Don’t try to learn everything at once
     • Mileage may vary
     • Commit to test & learn
  29. Appendix
  30. Local Test Final Results Example
     Overall lift metrics (includes carryover): 95% lift in Sessions; 128% lift in Subscriptions.
     Local cost-per metrics (includes carryover): $12.28 cost per Session; $1,331 cost per Subscription.
     A rapid decline in KPI volume was observed immediately following the end of the campaign on 10/19.
  31. Optimal Media Planning = Testing & Validating
     • It isn’t about Digital vs. TV vs. Print vs. X; it’s Digital + TV + Print + X
     • Integrated marketing has to start with a data-driven approach, requiring the capture of as much data about a prospective customer as early as possible. Brands need to focus on creating: portfolio-level segmentation; Customer Lifetime Value (CLV); Demand Side Platform (DSP)-level data aggregation through the funnel; and a Data Management Platform (DMP) that makes rich data available for acquisition and retention (in fact, the distinction disappears)
     • Customers need to be visible at a detailed level through actual or modeled behavior
  32. What Models Do and What They Cannot Do
     What models do:
     • Provide accountability, ROI
     • Objectively and fairly quantify what is important, by how much, and when it affects sales (short and long term), including factors both within and outside of your control
     • Provide a starting point for the strategic planning process, for investment allocation (communications-mix, flighting, and budgetary implications), and for scenario planning
     What models can not do:
     • Tell you the impact of something you haven’t done before
     • Tell you accurately the impact of doing something at levels far beyond those already experienced
     • Tell you the impact of an effort that does not vary
     • Overcome problems with the data
  33. MMM: Channel Performance
     • A simple, yet powerful, deliverable from the modeling exercise is the “decomp chart”
     • This graph, and the data behind it, shows across the time series the impact that all the activities in the model have on achieving the target KPI: the visual representation of “How well are my marketing dollars working?”
     Model results (actual results may vary):
     • As the largest-spending channel, TV is driving the KPI effectively, outperforming Facebook
     • Model results are directionally the same compared to prior US models: :30s TV is outperforming :15s TV, and radio has better CPA performance than TV in general
     • From a cost perspective, the 2015 Super Bowl investment is less efficient compared to regular TV advertising
     • YouTube is very cost-effective in driving premiums

     | Channel    | CPA  | Error |
     |------------|------|-------|
     | :30s TV    | $300 | ±14%  |
     | :15s TV    | $400 | ±12%  |
     | Super Bowl | $500 | ±13%  |
     | Radio      | $200 | ±20%  |
     | Facebook   | $500 | ±36%  |
     | YouTube    | $500 | ±41%  |
  34. Advanced Media Planning
     [Flighting calendar, Dec through Q4 2014, by channel group:
     • Harvest channels (paid and owned): always-on digital harvest (SEM, display, etc.), mobile app acquisition, email expansion, mobile web; ongoing digital harvest activities to capture incremental demand and create leverage
     • Print (magazines): e.g. Real Simple, Dwell, House Beautiful, Southern Living, Country Living, Better Homes & Gardens
     • Digital (display + video + mobile): site direct + Twitter test, national radio (with radio adds), YouTube masthead
     • National TV (cable + EM + synd dayparts): 100% :30s moving to 75% :30s / 25% :15s, with product integration (e.g. NBC, HGTV, IFC & others)
     Plan notes: start early, build early; 1-week hiatus periods (may revisit); build towards & a little ahead of Q2 & Q3 peaks; sustain through Q3, then step down with seasonality; open question: traditionally dark in Q4 – remain so?]
  35. Sophisticated Spot Overlap Algorithms
     When TV spots overlap in the spot attribution window, the lift in the window needs to be fairly attributed to the networks airing in the window:
     • Need to account for the historical response rate of the network
     • Need to account for the timing of the spot, namely how long the spot is on air before another spot airs
  36. Sophisticated Spot Overlap Algorithms
     • Decay rates for each network/daypart are calculated in matrices in Matlab
     • These matrices are applied to the overlapping spot response rates, resulting in a relative attribution of the overlapping spots
     • Decays are equal in length to the spot attribution window
     (For explanatory purposes only.)
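
The deck computes its decay matrices in Matlab and doesn’t specify their form; the Python sketch below illustrates the same idea under assumptions — an exponential decay over time-on-air, weighted by each network’s historical response rate:

```python
# Split window lift across overlapping spots, proportional to each network's
# historical response rate times a time-decay factor. The exponential decay
# and its half-life are assumptions standing in for the deck's Matlab matrices.
import numpy as np

def attribute_overlap(lift: float, hist_rr: np.ndarray,
                      secs_since_air: np.ndarray, half_life_s: float = 120.0):
    decay = 0.5 ** (secs_since_air / half_life_s)   # newer spots weigh more
    weights = hist_rr * decay
    return lift * weights / weights.sum()

# Two spots overlap in one 5-minute window: network A aired 30s ago,
# network B 240s ago; A historically responds 2x better.
shares = attribute_overlap(lift=300.0,
                           hist_rr=np.array([0.006, 0.003]),
                           secs_since_air=np.array([30.0, 240.0]))
print({"network A": round(shares[0]), "network B": round(shares[1])})
```
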
  37. Step 1: Build Customer Profile
     [Diagram: Uber driver/customer data is enriched with third-party data to produce target audience profiles; each audience segment is described by ~400 attributes across addressable markets.]
     Step 2: Predict TV Viewer Profiles
     Using viewer-profile data from set-top boxes and leading providers like Nielsen & Rentrak, AOL predicts the audience profile for upcoming programs by assessing 100+ different data points for each airing (e.g. air day, most recent airing, network, geography/DMA). AOL’s viewer profile for each TV airing is similar in format to the customer profile from Step 1.
  38. Step 3: tRatio Scoring
     AOL’s algorithms then compare the profile of your audience segments to the predicted viewer profiles for each upcoming TV/digital video airing. A score known as the “tRatio” is generated to express how closely the audiences align on a network/daypart/program basis: a tRatio of +1.0 is a perfect match, 0.0 is random, and -1.0 is the perfect opposite.
     [Diagram: Your Audience Segment (400 attributes) scores a tRatio of 0.44 against History, Wed. 9:00PM and -0.28 against ABC, Mon. 9:00AM; the segment’s profile “looks” more similar to the viewer profile for American Pickers than to Live with Kelly and Michael, as expressed in their respective tRatios.]
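
AOL’s actual scoring algorithm is proprietary; a sketch of a tRatio-style score follows, using Pearson correlation purely because it matches the stated -1/0/+1 behavior — an assumption, not AOL’s method:

```python
# Compare a customer-segment attribute vector to a predicted viewer-profile
# vector and report alignment on a -1..+1 scale. Pearson correlation is an
# assumed stand-in that matches the deck's -1/0/+1 description.
import numpy as np

def t_ratio(segment: np.ndarray, viewer_profile: np.ndarray) -> float:
    """+1.0 = perfect match, 0.0 = random, -1.0 = perfectly opposite."""
    return float(np.corrcoef(segment, viewer_profile)[0, 1])

rng = np.random.default_rng(1)
segment = rng.normal(size=400)                          # 400 attributes per the deck
similar = segment + rng.normal(scale=1.0, size=400)     # related viewer profile
opposite = -segment + rng.normal(scale=1.0, size=400)   # inverted viewer profile

print(f"similar airing tRatio:  {t_ratio(segment, similar):+.2f}")
print(f"opposite airing tRatio: {t_ratio(segment, opposite):+.2f}")
```
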
  39. National Measurement – High Growth Case Study
     1. Identify the outliers in KPI volume using smoothing techniques
     2. Select the most appropriate explanatory variables (e.g. YouTube) and dummy variables for holidays (e.g. Christmas)
     3. Split the historical data into training and test data sets and construct a model using the training data
     4. Evaluate model-fit statistics by scoring the model on the test data set
     5. Once the best model is identified, refit the model using all data points (including the test data)
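
A condensed sketch of that five-step workflow on synthetic data. The deck does not specify its model form or smoothing rule, so OLS, the rolling-median clip, and all variable names here are assumptions:

```python
# Smooth outliers, add a holiday dummy and an explanatory spend variable,
# fit on a training split, score on the test split, then refit on all data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 120
df = pd.DataFrame({
    "youtube_spend": rng.uniform(0, 50_000, n),
    "christmas": [0] * 50 + [1] + [0] * 69,     # holiday dummy variable
})
df["kpi"] = (5_000 + 0.2 * df.youtube_spend + 3_000 * df.christmas
             + rng.normal(0, 500, n))

# 1. Smooth outliers: clip KPI at rolling median +/- 3x rolling std (assumed rule).
roll = df.kpi.rolling(9, center=True, min_periods=1)
df["kpi_smooth"] = df.kpi.clip(roll.median() - 3 * roll.std(),
                               roll.median() + 3 * roll.std())

# 2-4. Train/test split, fit on train, evaluate fit statistics on test.
X = sm.add_constant(df[["youtube_spend", "christmas"]])
train, test = slice(0, 100), slice(100, n)
fit = sm.OLS(df.kpi_smooth[train], X[train]).fit()
pred = fit.predict(X[test])
mape = (abs(df.kpi_smooth[test] - pred) / df.kpi_smooth[test]).mean()
print(f"test MAPE: {mape:.1%}")

# 5. Refit the chosen model on all data points before using it.
final = sm.OLS(df.kpi_smooth, X).fit()
print(final.params.round(3))
```
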
