3. DEFINITION
A/B testing
allows you to test one variable at a time. It's used to pick the
better of two variants. For instance, if you are testing the
background color of your landing pages to determine which one
generates the most conversions, you would use an A/B
test. You can also test at the page level, which means that you
treat the entire page as the variable.
A/B testing can tell you whether where you stand now is better
or worse than where you were, even if you've redesigned everything.
6. A/B testing
Why do we need A/B testing?
- Testing and research have historically been difficult because they were:
* Expensive.
* Time consuming.
* Unreliable.
* Competitively complicated.
- A successful experimentation program needs speed and infrastructure.
- The old approach wasn't efficient:
* Mailing 20% offers.
* Assuming that's what users love.
9. A/B testing
This is not an A/B test:
- Too many variables changed at once, which means we can't be sure what affected
the metrics.
- Bike size changed.
- Font changed.
- Text changed.
- The order of features is different too.
11. A/B testing
What to experiment on? (Experimentation 101)
- We don't work directly on North Star metrics.
For example: we don't target MRR directly; we target conversion, which
affects MRR directly.
- Primary Metrics: When you create a primary metric, it has to be
conclusive in order to accept that there is statistical significance in your
assumption.
Primary Metric = Conclusive Positive or Conclusive Negative.
- Supporting Metrics: Not as conclusive as primary metrics, but they give
you an indication of the direction, whether positive or negative
(customer service tickets, product returns).
- Health Metrics: Related to the performance of your platform, to make
sure that changes don't impact unexpected metrics (backend
errors, queries).
12. A/B testing
Binomial Goals: Binomial goals are all the little breadcrumbs users
leave while navigating your platform.
Like: bounce rate from a page, category-page visits.
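A binomial goal is a yes/no event per user (bounced or not, visited the category page or not), so its rate can be estimated like any binomial proportion. A minimal sketch with hypothetical counts (the function name and the numbers are illustrative, not from the slides):

```python
from math import sqrt
from statistics import NormalDist

def binomial_rate_ci(successes, trials, confidence=0.95):
    """Point estimate and Wald confidence interval for a binomial goal,
    e.g. category-page visits out of total sessions."""
    p = successes / trials
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided critical value
    margin = z * sqrt(p * (1 - p) / trials)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical numbers: 120 category-page visits out of 1,000 sessions.
rate, low, high = binomial_rate_ci(120, 1000)
print(f"rate={rate:.3f}, 95% CI=({low:.3f}, {high:.3f})")
```

The interval width shrinks with more sessions, which is why binomial goals need enough traffic before they say anything reliable.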
21. A/B testing
- Design:
Hypothesis: If (…..) then (……).
Based on [evidence] we believe that if we change
[your change] for [customer segment] it will help them
[impact].
Success Metrics: Primary => Conversion;
Secondary => Bounce Rate.
We will know this is true if we see [your expected change]
in [primary metric]. This is good for our business because
an increase in [primary metric] is an increase in [business
KPI].
Launch Decision: Variant A vs. Variant B.
22. A/B testing
- Setup:
1- Correct UX.
2- Correct tracking.
3- Free of data-quality issues.
24. A/B testing
- Analyze:
1- Ship, iterate or abandon.
2- Statistical significance and power.
3- Confidence level.
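The significance and confidence checks above can be sketched with a standard two-proportion z-test on conversion counts. The counts below are hypothetical, and the 95% confidence threshold is the conventional default rather than anything the slides prescribe:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: base converts 200/5000 sessions, variant 260/5000.
z, p = two_proportion_z_test(200, 5000, 260, 5000)
decision = "ship" if p < 0.05 else "iterate or abandon"
print(f"z={z:.2f}, p={p:.4f} -> {decision}")
```

Power is the other half of the analysis: a non-significant result on an underpowered test is inconclusive, not a conclusive negative.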
25. A/B testing
- Learn:
1- Learn from both success and failure.
2- Uncover segment-specific insights and discoveries for future
iterations of ideas.
31. A/B testing
Hypothesis: If we added the sense of joining others in
achieving success, it would increase conversion.
Success Metric: Conversion from different touch points.
Result: The highest banner was converting 15% of its
clicks, while the banner converted 55% of its clicks.
Variant A (Base)
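The slide reports only the two click-to-conversion rates (15% vs. 55%), not the underlying click counts, so significance can't be checked, but the relative lift can. A minimal sketch using just those rates:

```python
def relative_lift(base_rate, variant_rate):
    """Relative lift of a variant's conversion rate over the base rate."""
    return (variant_rate - base_rate) / base_rate

# Rates from the slide: 15% vs. 55% click-to-conversion.
lift = relative_lift(0.15, 0.55)
print(f"relative lift = {lift:.0%}")
```

A lift this large would still need the per-banner click counts to confirm it isn't driven by a handful of clicks on one touch point.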