A/B Testing 101

Learning the basic principles for effective A/B testing and design optimization. See examples and learn about common mistakes.

  • Everyone has a design opinion – only testing gives you customer-driven insights that produce real results. Until tested, even a professional designer in NY has only design opinions to offer.
  • 1. Clear Hypothesis: Every test starts with a hypothesis that you're trying to prove or refute. Your hypothesis defines the parameters of the test and focuses on the test variable and its effect on the result. Write it at the top of your testing document (e.g., "A quote on the landing page will increase conversion rate" or "Fewer fields will result in more contact form submissions").
    2. One Variable: By its nature, A/B testing tests only one variable, which means everything else must stay constant. If you're looking to test two or more elements, define the control creative and test two variables against the one control (a multivariate test). For example, if you test a headline on a contact form, all other variables must be the same – fields, layout, design, colors, etc. If anything else differs between the two versions, your results won't be conclusive, because you won't be able to attribute the success of one version to one specific variable. This is the most common mistake made in A/B testing.
    3. Clear and Defined Success Metric: Before you run the test, decide how you're going to measure success. Define one success metric that will determine the winner, based on the effect or result you are trying to get. The success metric and the variable should be as aligned as possible. Beware of using the first user click as the success metric; consider what your "conversion" metric is – form submission, sign-up, etc. A different "thumbnail" image may drive more clicks to a case study page, but does it drive more "contact us" form submissions (the conversion)? For more complex user paths, consider doing a "Funnel Analysis" to determine drop-off points.
    4. Volume and Statistical Significance: For a successful test you have to have enough traffic volume to make the data statistically significant. The volume needed isn't just in the test groups, but also in the results and the difference between them. For those who don't have a strong background in math, a Google search should provide various calculators and tutorials for this purpose (a minimal significance-check sketch also follows these notes).
    5. Randomization: Since you're only testing one variable, you want to eliminate variables in the audience selection process. Your control and test groups should be picked randomly.
    6. Documentation: If you're diligent about testing, you should be fanatical about documenting your tests and results, especially when testing is happening across multiple areas of the site or across multiple content owners. This will help you build on past lessons, avoid repeating tests, and educate your employees and successors. One person's test results may also spur other testing ideas.
  • Here is a simple example of what an A/B test might look like. BUT, how many variables were tested in this graphic? Was it putting the sign-up form below the body content that LIFTED sign-ups, or was it placing the nav bar to the left of the content? We don't know – always test one variable at a time for clear results. In this example, further testing would be needed. HT: With thanks to Smashing Magazine for the graphic. You can read more about A/B testing at http://www.smashingmagazine.com/2010/06/24/the-ultimate-guide-to-a-b-testing.
  • Does Removing a Line Rule Between Stories & Ads Increase Engagement? Actual test results: Version A, the variation with a thin, grey line rule between the top story and ads, increased (LIFTED) local audience page views by 9.9% and time on site by 15.8%, without negatively affecting ad clicks or return visits (at a 95% confidence rate – always make sure you hit a good confidence rate). McClatchy Interactive, the newspaper chain's Internet division, conducted this test in-house on its Idaho Statesman site using Adobe Test & Target. To reduce risk, this test was run on 7% of the total traffic from all sources. (Key observation: when testing against a champ, limit the test to the amount of traffic needed to reach statistical significance, and no more, in order to limit the negative impact of suppression should the hypothesis for the test prove wrong; a rough sample-size sketch follows these notes.) This test is a reminder of the importance of the 'little things' and of how testing can reveal incremental improvements in even the smallest of changes.
  • Does having a larger image drive more bidding? Result: v.B made 63% more visitors click to begin bidding, and 329% more completed the process. A larger image caused a LIFT in conversion, despite the increased scrolling. It is not clear what impact, if any, the increased scrolling had on conversion versus not scrolling. GOAL: Make more visitors bid and complete the bidding process on an auction site.
  • Does the order of pricing plans impact "upselling" for existing customers? Result: v.A was the winner – 49% more people purchased the middle payment plan. GOAL: Move more people from the start plan to the grow plan (upsell) on a virtual VoIP system company's site.
  • Does the verbiage in a footnote Twitter sign-up "call to action" impact the click-throughs to the Twitter account? In this test, the site publisher ran incremental tests on a specific social media engagement call to action in order to increase Twitter followers. The test shows the value of testing simple verbiage in a call to action, and the power of incremental improvements through sequential testing against a "champ" creative – or in this case, verbiage. As this example shows, an effective testing strategy can support the long-term impact of social media efforts by driving low-cost incremental engagement. You can read more at: http://dustincurtis.com/you_should_follow_me_on_twitter.html. "As the forcefulness and personal identifiability of the phrase increased, the number of clicks likewise increased. 'You' identifies the reader directly, 'should' implies an obligation, and 'follow me on twitter' is a direct command. Moving the link to a literal callout 'here' provides a clear location for clicking. I tried other permutations that dulled the command, used the word 'please' in place of 'should' and made the whole sentence a link. None of them performed as well as the final sentence. At the very least, the data show that users seem to have less control over their actions than they might think, and that web designers and developers have huge leeway for using language to nudge users through an experience." Open question: did the increase in clicks actually result in an increase in "Follows" as a percentage of click-throughs? We don't know, but it is possible to increase clicks on a call to action while suppressing the end goal of driving more conversion (in this case, Twitter followers). This is where knowing your key metric matters (see the click-versus-follow sketch after these notes).
  • A quick look at Conversion Funnel Analysis: this is a sample model of a "Conversion Funnel". A good conversion funnel analysis requires tracking the click-throughs and drop-offs or bounce rate (the whole user-experience path). Then, by doing A/B testing on specific areas of a user-experience funnel, you can lower the attrition rate (drop-off), thereby increasing your conversion rate. Or you can remove steps in the funnel altogether to increase the final conversion metric. This is a valuable analytics approach for multi-step user experiences (a small funnel sketch follows these notes).
  • Q&A:
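
The notes above mention randomization (point 5) and statistical significance (point 4) without showing the arithmetic. Below is a minimal Python sketch of both: hash-based assignment to control/variant and a two-proportion z-test on the resulting conversion counts. The visitor IDs, conversion counts, and 50/50 split are hypothetical illustrations, not figures from the deck.

    # Minimal sketch (hypothetical numbers) of random assignment and a
    # two-proportion z-test for statistical significance.
    import hashlib
    import math

    def assign_group(visitor_id: str) -> str:
        """Deterministically bucket a visitor into 'control' or 'variant' (50/50)
        by hashing the id, so the same visitor always sees the same version."""
        bucket = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16) % 100
        return "variant" if bucket < 50 else "control"

    def z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
        """Two-proportion z-test: how many standard errors apart the two rates are."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)                     # pooled rate
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
        return (p_b - p_a) / se

    # Hypothetical results: control converted 120 of 4,000 visitors, variant 150 of 4,000.
    z = z_score(120, 4_000, 150, 4_000)
    print(f"z = {z:.2f}")  # |z| >= 1.96 is roughly 95% confidence (two-sided);
                           # below that, keep collecting traffic before calling a winner.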
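
The newspaper note recommends limiting a test "to the amount of traffic needed to reach statistical significance, and no more." One rough way to estimate that amount up front is the standard two-proportion sample-size approximation, sketched below; the 3% baseline rate and 10% expected lift are assumptions for illustration, not numbers from the case study.

    # Rough sketch: visitors needed per group at ~95% confidence (z = 1.96)
    # and ~80% power (z = 0.84); inputs are hypothetical.
    import math

    def sample_size_per_group(baseline: float, relative_lift: float,
                              z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
        """Approximate visitors needed in each group to detect a relative lift."""
        p1 = baseline
        p2 = baseline * (1 + relative_lift)
        p_bar = (p1 + p2) / 2
        num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
               + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return math.ceil(num / (p2 - p1) ** 2)

    # Hypothetical: 3% baseline conversion, hoping to detect a 10% relative lift.
    print(sample_size_per_group(0.03, 0.10))  # on the order of 53,000 visitors per group

Once that number is known, you can cap the share of total traffic sent to the challenger, as the note suggests.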
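
The open question in the Twitter note – whether more clicks actually meant more follows – is easy to see with a small worked example. All of the impression, click, and follow counts below are hypothetical.

    # Hypothetical counts showing why the pre-chosen success metric matters:
    # version B wins on click-through rate but loses on the end goal (follows).
    def rates(impressions: int, clicks: int, follows: int) -> tuple[float, float]:
        """Return (click-through rate, follows per impression)."""
        return clicks / impressions, follows / impressions

    ctr_a, follows_a = rates(impressions=10_000, clicks=450, follows=90)
    ctr_b, follows_b = rates(impressions=10_000, clicks=700, follows=70)
    print(f"A: CTR {ctr_a:.1%}, follows/impression {follows_a:.2%}")  # 4.5%, 0.90%
    print(f"B: CTR {ctr_b:.1%}, follows/impression {follows_b:.2%}")  # 7.0%, 0.70%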
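
Finally, the funnel analysis in the last note can be sketched as a simple step-by-step drop-off table. The step names mirror the "Conversion Funnel Flow" slide in the transcript below; the visitor counts are hypothetical, chosen so the overall conversion lands at the 14% mentioned on that slide.

    # Minimal funnel sketch: step-to-step retention and overall conversion.
    funnel = [
        ("Landing Page", 10_000),
        ("Case Study", 3_400),
        ("Contact Us Form", 2_100),
        ("Form Submit", 1_400),
    ]

    top = funnel[0][1]
    previous = top
    for step, visitors in funnel:
        kept = visitors / previous   # retention from the previous step
        overall = visitors / top     # conversion from the top of the funnel
        print(f"{step:<16} {visitors:>6}  kept {kept:6.1%}  overall {overall:6.1%}")
        previous = visitors
    # The step with the lowest "kept" percentage is the biggest drop-off point and
    # usually the most valuable place to aim the next A/B test.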

A/B Testing 101 Presentation Transcript

  • A/B Testing 101: Design optimization through testing
  • 6 Elements of Effective A/B Testing: 1. Clear Hypothesis; 2. Test one variable at a time; 3. Clear & defined success metrics; 4. Volume & statistical significance; 5. Randomization; 6. Documentation
  • A/B Testing in a visual…
  • Newspaper Homepage Test: A B The importance of the ‘little things’.
  • Does image size & scrolling matter? A B
  • Order of price plans presented A B
  • Fine-tuning the call-to-action
  • Conversion Funnel Flow: Landing Page → Case Study → Contact Us Form → Submit. Conversion Funnel: Of the 100% of people who landed on the landing page, only 14% made it to the conversion step of submitting the contact us form (a sales lead).
  • Josue Sierra, Marketing Consultant, 302.442.6170, jsierra@trellist.com. Trellist® Marketing and Technology, 117 N. Market St., Suite 300, Wilmington, DE 19801. T 302.778.1300, F 302.778.1301, www.trellist.com