Webtrends/Marketing Sherpa Webinar

Great takeaways on landing page optimization and driving higher conversions from our recent webinar.

Notes for slides:
  • Lissa to speak to this slide
  • Lissa to speak to this slide
  • Lissa to speak to this slide
  • Lissa to speak to this slide: Webtrends Optimization solutions increase online conversion and revenue. We provide testing, segmentation, and targeting solutions to enterprises in eCommerce, travel, financial services, and more. Our methodology is founded on principles shared by MarketingSherpa, and we ensure our customers’ success by providing not just technology but also expertise and guidance, in order to create meaningful conversion lift and learnings for our customers.
  • Sherpa to speak to this slide
  • Sherpa to speak to this slide
  • Sherpa to speak to this slide
  • Sherpa to speak to this slide
  • Sherpa to speak to this slide
  • Sherpa to speak to this slide
  • Sherpa to speak to this slide
  • Sherpa to speak to this slide. Taken from 2012 BMR: While 47% of marketers use optimization testing to inform customer theory, there is still room for improvement, with more than half of marketers failing to fully deploy optimization within their organizations. It is interesting that agencies reported they were roughly 66% more likely to use optimization lessons to transform their customer theory, with 60% of marketers inside marketing agencies using testing to influence customer theory, messaging, and segmentation strategies.
  • Sherpa to speak to this slide
  • Sherpa to speak to this slide. Summary of Boris Commentary: One sentiment many LPO marketers share is frustration. Decisions that affect the performance of a website are often made by the C-suite or organizational committees, not based on the results of test data.
  • Sherpa to speak to this slide. Summary of Boris Commentary: Over 2/3 of the survey respondents are involved in LPO testing in some form.
  • Sherpa to speak to this slide. Summary of Boris Commentary: Over 2/3 of the survey respondents are involved in LPO testing in some form.
  • Kirk to speak to this slide
  • Kirk to speak to this slide
  • Sherpa to speak to this slide.
  • Sherpa to speak to this slide.
  • Sherpa to speak to this slide. These are the three most important and often misunderstood elements of statistical significance.
  • Sherpa to speak to this slide.
  • Sherpa to speak to this slide.
  • Sherpa to speak to this slide.
  • Sherpa to speak to this slide.
  • Kirk to speak to this slide. Changed “You can’t blindly trust your tools” to “You can’t trust data in isolation.” We use stabilization as our indicator to show validity; this helps with challenges to validity.
  • We do not need this level of detail
  • We do not need this level of detail
  • We do not need this level of detail
  • We do not need this level of detail
  • JON: Kirk to weigh in and comment on waiting until the tests have stabilized.
  • Kirk to speak to this slide. Kirk’s notes on stabilization and why it is important to LPO:
    • To help ensure accurate testing results, a method we use here at Webtrends is to watch the cumulative conversion rate of the different experiments stabilize (or become consistent) while the test is running.
    • Early in the test, each experiment may have a conversion rate that changes dramatically.
    • As each experiment receives more data, the conversion rate will become more stable or consistent.
    • This helps us know when the test is trending toward completion and ensures we are making an accurate prediction about which experiment is the winner, in conjunction with statistical significance.
  • Plain English Definition: Selection effect occurs when we wrongly assume some portion of the traffic represents the totality of the traffic
  • Plain English Definition: Something happens in the outside world that causes flawed data in the test. This is an example of bad historical assumptions about correlation vs. causality from the book Freakonomics, in which scientists associated ice cream with the cause of polio because both polio cases and ice cream consumption spiked during the summer months.
  • Plain English Definition: Something happens in the outside world that causes flawed data in the test. This is an example of bad historical assumptions about correlation vs. causality from the book Freakonomics, in which scientists associated ice cream with the cause of polio because both polio cases and ice cream consumption spiked during the summer months.
  • Plain English Definition: when a test variable is affected by a change in the measurement instrument
  • Kirk to speak to this slide
  • Lissa to speak to this slide
Transcript:

    1. Landing Page Optimization: How to get better ROI on the traffic you’re already sending to your site (Nov. 29, 2012)
    2. Introductions • Daniel Burstein, Director of Editorial Content, MECLABS/MarketingSherpa (Twitter: @DanielBurstein) • Adam Lapp, Associate Director, Optimization and Strategy, MECLABS/MarketingSherpa (Twitter: @AdamLapp) • Kirk Ramble, Optimization Consultant, Webtrends Optimization Solutions
    3. Follow today’s conversation: #WTwebinar
    4. Webtrends provides optimization solutions to increase conversion and revenue • Landing page optimization • Site optimization • Mobile/social optimization • A/B and MVT testing • Visitor segmentation • Content targeting • Experience and expertise
    5. MarketingSherpa is a research and publishing organization serving the marketing community • MarketingSherpa’s annual research cycle provides knowledge for continuous marketing improvement
    6. Research Background • 2,677 qualified survey responses • In 10 major industry verticals • Key marketing insights on: website optimization, optimization ROI, optimization strategies, testing and analytics, optimization strategy integration • Key success stories
    7. Research Background • 2,673 qualified survey responses • Over 190 charts with analytical commentary • Key marketing insights on: optimization tactics, C-level ROI and budgeting perspectives, testing and analytics, optimization challenges • Key success stories
    8. Today, we will discuss… • The case for LPO: information you can use to help you secure budget approval • The challenges with statistical validity, to help you avoid making overconfident and perhaps erroneous assumptions based on misleading numbers • How you can use LPO to keep up with your ever-changing customers in an ever-changing marketplace
    9. Landing Page Optimization: 3 Keys to successful online testing. 1. The Case for LPO, 2. Validity Challenges, 3. Optimization in a Changing Marketplace
    10. “Business exists to supply goods and services to customers, rather than to supply jobs to workers and managers, or even dividends to stockholders.” – Peter Drucker
    11. The Internet as a Research Lab: The Internet has become the most efficient means of gathering business intelligence BEFORE a major online (or offline) campaign. [Pyramid diagram, “The Decision,” by resolution: Level 3, Behavioral Experimentation; Level 2, Opinion Research; Level 1, Marketing Intuition]
    12. The Case for LPO: Why Test? Q: Does your organization use website optimization and/or testing to draw conclusions about your customer base? • 47% of marketers use optimization testing to inform customer theory • More than half of marketers fail to fully deploy optimization within their organizations
    13. Marketer Insights: Overcoming challenges to LPO. “How to get the entire Web IT, managers and copy writers fired? They are 5 years behind the rest of the other retailers. What they do on the website is totally subpar. It really is sad. Many dollars lost every day!” – Benchmark Study Participants
    14. Who called the shots in 2010? “Only one-fifth of organizations surveyed made decisions based on validated test results.” – 2011 MarketingSherpa Landing Page Optimization Benchmark Report
        • Validated test result determines the decision: 21%
        • CMO or business unit head makes the decision: 23%
        • Marketing department decides collaboratively: 36%
        • Marketing department decides based on published best practices: 16%
        • Other: 4%
        Source: ©2011 MarketingSherpa Landing Page Optimization Benchmark Survey. Methodology: Fielded February 2011, N=2,673
    15. State of LPO in 2010
        • No LPO: 31%
        • LPO without testing (based only on best practices): 33%
        • LPO, including testing: 36%
        Source: ©2011 MarketingSherpa Landing Page Optimization Benchmark Survey. Methodology: Fielded February 2011, N=2,673
    16. Getting started in LPO doesn’t have to be difficult. Even with very small and simple changes you can gain great insight about your customer and receive dramatic results. Control: stock image of a customer service rep. Treatment: image of the well-known company founder. Result: 35% increase in conversion
    17. Huge wins are possible from very small changes. Moved the primary CTA to the center of the page (Control vs. Treatment). Result: 280+% conversion lift
    18. Small changes together make a big impact • Presented information as tools or modules • Adjusted price and free trial messages • Revised copy to be more benefit-based (Control vs. Treatment). Result: 12+% conversion lift
    19. Landing Page Optimization: 3 Keys to successful online testing. 1. The Case for LPO, 2. Validity Challenges, 3. Optimization in a Changing Marketplace
    20. Validity Challenges: How marketers validate test results
    21. Validity Challenges: Elements of statistical significance: Significant Difference, Sample Size, Level of Confidence
    22. Validity Challenges: Significant Difference. 11% is more than 10%* (*…except when it’s not)
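The caveat on slide 22 is worth making concrete. Below is a minimal sketch (not the presenters' tooling) of a standard two-sided two-proportion z-test: the same one-point gap between a 10% and an 11% conversion rate is inconclusive at 500 visitors per variation but significant at 10,000. The sample sizes are hypothetical, chosen only for illustration.

```python
# Minimal two-proportion z-test sketch (standard formula, stdlib only).
from math import sqrt, erfc

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the gap between two observed conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under "no difference"
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # standard error of the gap
    z = abs(p_a - p_b) / se
    return erfc(z / sqrt(2))                                 # two-sided normal tail area

for n in (500, 10_000):   # hypothetical visitors per variation
    p = two_proportion_p_value(int(0.10 * n), n, int(0.11 * n), n)
    verdict = "significant" if p < 0.05 else "NOT significant"
    print(f"n={n:6d}: 10% vs 11% -> p={p:.3f} ({verdict} at 95% confidence)")
```

At n=500 the p-value is about 0.61; at n=10,000 it drops to about 0.02. Same rates, opposite conclusions: that is the “except when it’s not.”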
    23. Validity Challenges: Sample Size. n=2: “Well, you’re alive today even though you didn’t have one of those fancy car seats.” – My Mom. n=7,813: “Compared with seat belts, child restraints…were associated with a 28% reduction in risk for death.” – Michael R. Elliott, PhD; Michael J. Kallan, MS; Dennis R. Durbin, MD, MSCE; Flaura K. Winston, MD, PhD
    24. Validity Challenges: Sample Sizes. Factors in determining sample size: • Test complexity (number of versions being tested) • Conversion rate • Performance difference between variations • Confidence level • But: too short a test may not be as valid as it looks, especially if distribution of time is a factor. Be realistic about what kind of test your site can support
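To see how these factors interact, here is a rough rule-of-thumb sketch using the standard two-proportion sample-size approximation. It is not Webtrends' or MarketingSherpa's calculator; the 95% confidence (z = 1.96) and 80% power (z = 0.84) defaults are assumptions chosen for illustration.

```python
# Rough sample-size sketch: visitors needed PER VARIATION to detect a given
# relative lift over a baseline conversion rate (two-proportion approximation).
def visitors_per_variation(baseline, relative_lift, z_alpha=1.96, z_beta=0.84):
    p1 = baseline                          # control conversion rate
    p2 = baseline * (1 + relative_lift)    # treatment rate you hope to detect
    p_bar = (p1 + p2) / 2                  # average rate used in the variance term
    delta = p2 - p1                        # absolute difference to detect
    return int(2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / delta ** 2)

# Example: 5% baseline conversion, hoping to detect a 10% relative lift
print(visitors_per_variation(0.05, 0.10))   # ~31,000 visitors per variation
```

Halving the detectable lift roughly quadruples the required sample, which is why “be realistic about what kind of test your site can support” matters.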
    25. Validity Challenges: Levels of Confidence
    26. Validity Challenges: Levels of Confidence. Imagine an experiment… • Take one FAIR coin (i.e., flipped enough times, it would come out heads 50% of the time) • Flip the coin ‘n’ (many) times and record the number of heads (e.g., say 60 times) • Then do it over and over again, with the same number of flips. [Histogram: bar height proportional to the number of experiments that come out with that many heads] The math: 5 times out of every 100 that I do the coin-flip experiment, I expect to get a difference between my two samples that’s AT LEAST as big as this one, even though there is NO ACTUAL difference
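Slide 26's thought experiment can be run directly. This simulation (a sketch, not from the webinar) draws two samples from the same fair coin and tests them against each other at a 95% level of confidence; about 5 experiments in 100 still look “significantly different” purely by chance, which is exactly the 5% the slide describes.

```python
# Simulate the slide's point: two samples of the SAME fair coin will look
# "significantly different" about 5% of the time at a 95% confidence level.
import random
from math import sqrt, erfc

def looks_significant(n=1000, alpha=0.05):
    heads_a = sum(random.random() < 0.5 for _ in range(n))  # sample A
    heads_b = sum(random.random() < 0.5 for _ in range(n))  # sample B
    pooled = (heads_a + heads_b) / (2 * n)
    se = sqrt(pooled * (1 - pooled) * 2 / n)
    z = abs(heads_a - heads_b) / n / se
    return erfc(z / sqrt(2)) < alpha        # two-sided p-value below alpha?

trials = 10_000
hits = sum(looks_significant() for _ in range(trials))
print(f"{hits / trials:.1%} of same-coin experiments looked 'significant'")
# Expect roughly 5%: that is what a 95% level of confidence means.
```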
    27. Validity Challenges: Levels of Confidence. How do I decide on the right level? • Most common is 95% (i.e., a 5% chance you’ll think they’re different when they’re really not) • There is no ‘magic’ to the 95% LoC; it is mainly a matter of convention or agreement • The onus for picking the ‘right’ level for your test is on YOU • Sometimes the tools limit you (confidence interval limits) • 95% is seldom a “bad” choice • Higher = a longer test and a bigger difference needed for validity • Decide based on the risk of being wrong vs. the cost of prolonging the test
    28. Validity Challenges: You can’t trust data in isolation • History Effect: something happens in the outside world that causes flawed data in the test • Instrumentation Effect: a test variable is affected by a change in the measurement instrument • Selection Effect: we wrongly assume some portion of the traffic represents the totality of the traffic
    29. Validity Challenges: Experiment. Experiment ID: (Protected). Location: MarketingExperiments Research Library. Research Notes: Background: Consumer company that offers online brokerage services. Goal: To increase the volume of accounts created online. Primary research question: Which page design will generate the highest rate of conversion? Test Design: A/B/C/D multi-factor split test
    30. Experiment: Control [rotating banner] • Heavily competing imagery and messages • Multiple calls-to-action
    31. Experiment: Treatment 1 [rotating banner] • Most of the elements on the page are unchanged; only one block of information has been optimized • Headline has been added • Bulleted copy highlights key value proposition points • “Chat With a Live Agent” CTA removed • Large, clear call-to-action has been added
    32. Experiment: Treatment 2 [rotating banner] • Left column remained the same, but we removed footer elements • Long copy, vertical flow • Added awards and testimonials in the right-hand column • Large, clear call-to-action similar to Treatment 1
    33. Experiment: Treatment 3 [rotating banner] • Similar to Treatment 2, except left-hand column width reduced even further • Left-hand column has a more navigational role • Still a long-copy, vertical-flow, single call-to-action design
    34. Experiment: Side-by-side. Control, Treatment 1, Treatment 2, Treatment 3
    35. Experiment: Results: No Significant Difference. None of the treatment designs performed with conclusive results.
        Test Design   Conversion Rate   Relative Diff %
        Control       5.95%             -
        Treatment 1   6.99%             17.42%
        Treatment 2   6.51%             9.38%
        Treatment 3   6.77%             13.70%
        What you need to understand: According to the testing platform we were using, the aggregate results came up inconclusive. None of the treatments outperformed the control with any significant difference.
    36. Experiment: Validity Threat • However, we noticed an interesting performance shift in the control and treatments towards the end of the test • We discovered that during the test, there was an email sent that skewed the sampling distribution. [Chart: conversion rate (3%–19%) over the test duration for Control vs. Treatment 3; the treatment consistently beats the control until late in the test, when the control beats the treatment]
    37. Experiment: Results: 31% increase in conversion. The best treatment outperformed the control by 31%.
        Test Design   Conversion Rate   Relative Diff %
        Control       5.35%             -
        Treatment 1   6.67%             25%
        Treatment 2   6.13%             15%
        Treatment 3   7.03%             31%
        What you need to understand: After excluding the data collected after the email had been sent out, each of the treatments substantially outperformed the control with conclusive validity.
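As a quick sanity check on the “Relative Diff %” column above: relative lift is simply (treatment rate - control rate) / control rate. The snippet below reproduces the table's figures from the post-exclusion conversion rates.

```python
# Reproduce the slide's relative-lift figures from the reported rates.
control = 0.0535
for name, rate in [("Treatment 1", 0.0667),
                   ("Treatment 2", 0.0613),
                   ("Treatment 3", 0.0703)]:
    print(f"{name}: {(rate - control) / control:.0%} lift over control")
# -> 25%, 15%, 31%, matching the table
```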
    38. Testing Until You Achieve Stabilization
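Kirk's speaker notes describe watching the cumulative conversion rate of each experiment become consistent before calling a test, in conjunction with statistical significance. The sketch below is one minimal way to operationalize that idea; the 1% day-over-day tolerance, the 5-day window, and the daily figures are all hypothetical illustrations, not Webtrends' actual rule.

```python
# Sketch of a stabilization check: flag an experiment as "stabilizing" once
# its cumulative conversion rate has stopped moving more than `tolerance`
# (relative) on each of the last `window` days.
def cumulative_rates(daily_conversions, daily_visitors):
    conv = vis = 0
    rates = []
    for c, v in zip(daily_conversions, daily_visitors):
        conv, vis = conv + c, vis + v
        rates.append(conv / vis)            # running (cumulative) rate
    return rates

def is_stabilized(rates, window=5, tolerance=0.01):
    if len(rates) <= window:
        return False
    recent = rates[-(window + 1):]
    return all(abs(b - a) / a < tolerance for a, b in zip(recent, recent[1:]))

# Hypothetical experiment: noisy early days, then the rate settles down
conversions = [12, 30, 18, 25, 22, 24, 23, 25, 24, 23, 24, 24]
visitors    = [200, 400, 350, 420, 400, 410, 405, 415, 400, 398, 402, 400]
rates = cumulative_rates(conversions, visitors)
print(f"day {len(rates)}: cumulative rate {rates[-1]:.2%}, "
      f"stabilized: {is_stabilized(rates)}")   # -> stabilized: True
```

A treatment that “stabilizes” early can still drift later (see the email incident on slide 36), so stabilization is a supporting signal, not a substitute for validity checks.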
    39. Selection Effect: The portion is not the whole of your traffic. Selection Effect: the effect on a dependent variable by an extraneous variable associated with different types of subjects not being evenly distributed between experimental treatments. Examples • Channel profile does not match customer profiles • Uneven distribution of traffic from sources among treatments • Self-selection (bias)
    40. History Effect: Outside world causes flaws. History Effect definition: the effect on a dependent variable by an extraneous variable associated with the passing of time. Experiment: We conducted a 7-day headline test for a site that provides search and mapping services to a nationwide database of registered sex offenders. Treatments: 1. Child Predator Registry (control), 2. Predators in Your Area, 3. Find Child Predators, 4. Is Your Child Safe? (52% less CTR than all other treatments)
    41. History Effect: Outside world causes flaws. BUT OUR DATA WAS FLAWED: During the test period, the nationally syndicated NBC television program Dateline aired a special called “To Catch a Predator.” This program was viewed by approximately 10 million individuals, many of them concerned parents. Throughout this program, sex offenders are referred to as “predators.”
    42. Instrumentation Effect: Test affected by tools. Instrumentation Effect: the effect on the dependent variable, caused by a variable external to an experiment, which is associated with a change in the measurement instrument. Examples • Short-duration response-time slowdowns (e.g., due to server load, page weight, or page-code problems) • Splitter malfunction • Inconsistent URLs • Server downtime
    43. Landing Page Optimization: 3 Keys to successful online testing. 1. The Case for LPO, 2. Validity Threats, 3. Optimization in a Changing Marketplace
    44. Experiment: Background. Experiment ID: (Protected). Location: MarketingExperiments Research Library. Test Protocol Number: #TP1092. Research Notes: Background: Company is a publisher of electronic marketing information and offers related services. Goal: Increase registrations for a free email newsletter. Primary research question: Which sign-up page will yield the highest conversion rate? Approach: A/B/C multivariate test involving changes in headline, credibility indicators, and images according to optimization best practices
    45. Experiment: Control. Original page • We radically redesigned this page based upon common best practices in landing page optimization
    46. Experiment: Treatment 1 • Clearer headline emphasizes the value proposition • “Featured Clients” list emphasizes value and reduces anxiety • Bolded key terms make body copy easier to read and scan • Body copy uses quantitative benefits • Customer testimonials reduce anxiety • Anti-spam seal reduces anxiety
    47. Experiment: Treatment 2 • Headline is quantitative to emphasize the value proposition • Added more testimonials • Customer logos directly in the eye path • Added a personal feel with images and a hand-written signature • “Tell me where to send…” language used
    48. Experiment: Side by Side. Control, Treatment 1, Treatment 2. Which of these marketing campaigns had the highest conversion rate?
    49. Experiment: Results. Original (conversion rate = 14.26%) vs. Treatment 1 (conversion rate = 6.74%): a 53% decrease
    50. Experiment: Results. Original (conversion rate = 14.26%) vs. Treatment 2 (conversion rate = 6.84%): a 52% decrease
    51. Was this test a failure? No, because we learned something important about our customers
    52. Experiment: Background. Experiment ID: (Protected). Location: MarketingExperiments Research Library. Test Protocol Number: #TP1115. Research Notes: Background: Company is a publisher of electronic marketing information and offers related services. Goal: Increase registrations for a free email newsletter. Primary research question: Which sign-up page will yield the highest conversion rate? Approach: A/B multi-factorial test with a minimalist strategy of reducing page elements
    53. Experiment: Control. Original • Common landing page best practices failed to improve conversion on this original page • If adding elements to increase the value proposition decreased conversion, maybe the traffic to this page was already highly motivated? • Maybe visitors didn’t need to see more value and more credibility indicators…
    54. Experiment: Treatment • Maybe visitors just needed a simple, easy process • Much of the copy on this page is removed, leaving simple form submission fields • No real selling points are included in this design
    55. Experiment: Results: 78% increase in conversion. The treatment increased conversion by 78%. Landing page CTR: Original 12.09%, Treatment 21.54%. Relative difference: 78%
    56. Experiment: Results. What you need to understand: We can no longer rely on speculation, marketer intuition, or even “best practices” for our marketing efforts. We must test, because what worked for your colleague might not work for you. And more importantly, we must design tests that provide insight about our customers.
    57. The Importance of a Good Hypothesis. Structure your hypothesis: “If I ___ (do this) ___, then ___ (this) ___ will happen.”
    58. The Webtrends Optimization Process
    59. Any Questions? • Daniel Burstein, Director of Editorial Content, MECLABS/MarketingSherpa (Twitter: @DanielBurstein) • Adam Lapp, Associate Director, Optimization and Strategy, MECLABS/MarketingSherpa (Twitter: @AdamLapp) • Kirk Ramble, Optimization Consultant, Webtrends Optimization Solutions
    60. Related Resources • Get a free copy of MarketingSherpa’s Landing Page Optimization Report • Get the Webtrends whitepaper on Landing Page Optimization • Contact Webtrends when you’re ready to chat: www.webtrends.com (North America: 1-877-932-8736; Europe, Middle East, Africa: +44 (0) 1784 415 700; Australasia (Australia, New Zealand, South Pacific): +61 (0) 3 9935 2939)
    61. Save 25% with special code ENG13WEBINAR
