
10NTC - Data Superheroes - DiJulio

Learn how to transform from a mild-mannered online organizer into a true data-driven mastermind! What to track, how to test, and methods for creating a data-driven culture at your nonprofit.

  1. Superheroes of Online Fundraising: Become a Data-Driven Strategist. Presented by Sarah DiJulio
  2. Agenda
      - Introductions
      - How Data Saved the Day
      - Testing
      - Key Stats
      - Data-Driven Culture
      - Q&A
  4. HOW DATA SAVED THE DAY
  5. November 2008
  6. November 2008
      - Dear Kate,
      - The global financial crisis has created a strain on the US economy unlike any since the Great Depression.
      - Yet for poor people around the world, the burden is even greater…
      - Make a tax-deductible gift today.
  7. NARAL Pro-Choice America
      - Problem: Advocacy response rates below benchmarks
      - Solution: 10/90 plan
      - In action:
        - Initial send: on par with 2009, but no lift
        - Revised copy: 29% lift over the 12-month average response rate
  8. Original Version
  9. Revised Version
  10. $5 Ask: The Wilderness Society and AARP
      - Problem: Converting non-donors to donors
      - Test: $5 ask vs. $10 ask vs. control
  11. AARP
  12. The Wilderness Society
  13. $5 Ask: The Wilderness Society and AARP
      - Problem: Converting non-donors to donors
      - Test: $5 ask vs. $10 ask vs. control
      - Lesson learned: Test it for yourself!

      | $5 Test                                    | TWS  | AARP |
      |--------------------------------------------|------|------|
      | Change in Response Rate                    | -2%  | 144% |
      | Change in Average Gift                     | -27% | -41% |
      | Change in $ Raised Per Thousand Recipients | -28% | 31%  |
  14. TESTING
  15. What to test first?
  20. Ask yourself:
      - What goal will this help you meet?
      - How much of a lift can you expect? Is this likely to produce significant improvements?
      - How long will it take to get statistically significant results? (Will you ever get them?)
      - How much time will it take to implement?
      - Is the lesson you learn applicable to future efforts?
      - How will you evaluate the results?
  21. Accounting for Other Variables
  22. Before You Test
      - Create a data template for how you'll evaluate results
      - Ensure your sample sizes will give statistically significant results
  23. Calculating a Sample Size
      - Rule #1: Bigger is better.
      - Rule #2: 400 responses is usually valid.
      - Rule #3: The smaller the metric you are measuring, the bigger the sample you will need.
        - A list of 100,000 people x 4% response rate = 4,000 responses, so you could run an A/B test with two groups of 10,000 people each.
        - A list of 100,000 people x 0.1% response rate = only 100 responses.
      - Rule #4: If your response rates are VERY different, you can get away with a smaller sample size!
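As a rough sketch of the rules above (the ~400-response target comes from Rule #2; the function name and rates are illustrative, not from the deck), the recipients needed per test cell can be estimated like this:

```python
import math

# Rule #2 suggests roughly 400 responses per test cell as a validity
# threshold; Rule #3 means lower response rates need bigger cells.
TARGET_RESPONSES = 400

def recipients_per_cell(expected_response_rate, target=TARGET_RESPONSES):
    """Recipients needed in one test cell to expect `target` responses."""
    return math.ceil(target / expected_response_rate)

print(recipients_per_cell(0.04))    # 4% response rate -> 10000 per cell
print(recipients_per_cell(0.001))   # 0.1% response rate -> 400000 per cell
```

This matches the slide's arithmetic: at a 4% response rate, two cells of 10,000 each comfortably clear the 400-response bar, while at 0.1% even the whole 100,000-person list cannot.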
  24. Tools for Calculating a Sample Size
      - Online sample size calculator: http://www.surveysystem.com/sscalc.htm
      - Google Website Optimizer calculator: https://www.google.com/analytics/siteopt/siteopt/help/calculator.html
  25. Evaluating Results
      - Option 1: Gut
      - Option 2: A little math!
        - Chi-square test: A/B tests
          - Open rates / click-through rates / response rates
          - http://www.prconline.com/education/tools/statsignificance/index.asp
        - t-test: comparing data sets with multiple data points
          - Average gift size
          - http://www.graphpad.com/quickcalcs/ttest1.cfm?Format=C
        - ANOVA test: multiple variables
          - http://www.danielsoper.com/statcalc/calc43.aspx
          - http://faculty.vassar.edu/lowry/anova1u.html
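For the chi-square option, a 2x2 A/B comparison (responded vs. not, version A vs. B) can be run with nothing but the standard library. The helper name and the sample counts below are hypothetical:

```python
import math

def chi_square_ab(a_responses, a_recipients, b_responses, b_recipients):
    """Chi-square test on a 2x2 table: responded vs. not, version A vs. B.

    Returns (chi2, p_value). With 1 degree of freedom the p-value is
    P(X > chi2) = erfc(sqrt(chi2 / 2)).
    """
    a, b = a_responses, a_recipients - a_responses
    c, d = b_responses, b_recipients - b_responses
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# Hypothetical A/B email test: 10,000 recipients per cell.
chi2, p = chi_square_ab(120, 10_000, 155, 10_000)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Statistically significant at the 95% confidence level.")
```

Chi-square only handles counts (opened / clicked / responded); for continuous values like average gift size, the t-test calculators listed above are the right tool.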
  26. KEY STATS
  27. Email Messaging Benchmarks

      | Message Type       | Open Rate | Click-Through Rate | Response Rate | Unsubscribe Rate |
      |--------------------|-----------|--------------------|---------------|------------------|
      | All Message Types  | 14.09%    | 2.55%              | —             | 0.23%            |
      | Fundraising Emails | 12.82%    | 0.78%              | 0.13%         | 0.23%            |
      | Advocacy Emails    | 14.26%    | 4.65%              | 4.00%         | 0.19%            |
      | Email Newsletters  | 14.57%    | 2.96%              | —             | 0.25%            |
  28. Fundraising Email Performance

      | Response Tier        | Open Rate | Click-Through Rate | Page Completion Rate | Response Rate | Unsubscribe Rate |
      |----------------------|-----------|--------------------|----------------------|---------------|------------------|
      | High Response Rate   | 16.42%    | 1.48%              | 23.38%               | 0.28%         | 0.38%            |
      | Middle Response Rate | 12.07%    | 0.57%              | 22.31%               | 0.10%         | 0.18%            |
      | Low Response Rate    | 10.32%    | 0.29%              | 17.20%               | 0.04%         | 0.19%            |
  29. Advocacy Email Performance

      | Response Tier        | Open Rate | Click-Through Rate | Page Completion Rate | Response Rate | Unsubscribe Rate |
      |----------------------|-----------|--------------------|----------------------|---------------|------------------|
      | High Response Rate   | 15.93%    | 6.75%              | 92.94%               | 7.03%         | 0.11%            |
      | Middle Response Rate | 13.64%    | 4.55%              | 83.02%               | 3.58%         | 0.19%            |
      | Low Response Rate    | 13.73%    | 2.71%              | 62.54%               | 1.75%         | 0.27%            |
  30. Email List Churn
  31. Average Gift Size
  32. Cost Per Acquisition
  33. $ Raised Per $ Spent (FY05 Recruits)
  34. $ Raised Per $ Spent
  35. Giving Trends Over Time
  36. Cost Per New Donor
  37. $ Raised Per $ Spent
  38. Some Stats of Mixed Value
      - Emails sent (but not delivered)
      - Open rates
      - Clicks / opens
      - Total opens (not unique)
      - Total clicks (not unique individuals)
      - Total file size
      - Others?
  39. DATA-DRIVEN CULTURE
  40. “How often do we test? We test constantly. We test everything. It’s what we do.”
  41. “We don’t really test. I wish we did. We just go on instinct a lot of the time.”
  42. Roadblocks
  43. Data Overload
  44. Step 1: Have a Template
  45. Step 2: Set Benchmarks
      - Most important benchmark? Your own!
      - Industry benchmarks:
        - M+R & NTEN: www.e-benchmarksstudy.com
        - Convio
        - Target Analysis
  46. Step 3: Have a System
      - Keep track of possible tests
      - Evaluate results
      - Save results!
  47. Steps 4 & 5: Lather, Rinse, Repeat
  48. Steps 4 & 5: Lather, Rinse, Repeat
      - Set goals
      - Measure results
      - Set goals
      - Measure results
      - Set goals
      - Measure results
      - Etc.
  49. OTHER IDEAS?
  50. How Was this Session? Evaluation code: 152
      - Call 404.939.4909 and enter code 152
      - Text 152 to 69866
      - Visit nten.org/ntc-eval and enter code 152
      - Tell us and you could win a free 2011 NTC registration!
