9. Decision making & product management
Product management is about making decisions
& tackling problems
• Improve the conversion rate
• Get customers to repeat more often
• Increase the engagement with the app
• Monetize your business
• Improve signups
Multiple ideas compete – Choose from options!
10. How data helps in
making decisions
The slide pairs four questions with four decision areas:
• What to Build: Will my customers like and understand this feature?
• Where to Invest: Where should I invest and what could it return?
• What's Working: What is working? What is not working?
• Customer Behavior: What consumer behaviors might change and what won't?
15. Collecting Observations
The active acquisition of information from a primary source. Triangulating multiple data points not only strengthens your case but also makes you more familiar with your users.
• Evaluate existing metrics
• Internal audits
• Competitor audits
• Dogfooding
• Voice of the Customer reports
• Usability & market research teams
• Customer journey maps
16. Observations
• Observations can be quantitative or qualitative
• Observations can be captured from several internal and external sources
• They must be factual, without opinions or subjective statements
18. Hypothesis
A supposition or proposed explanation, based on limited evidence, that serves as a starting point for further investigation.
A hypothesis is not fact, and should not be argued as right or wrong until it is tested and proven one way or the other.
19. The Purpose
Someone else reading your hypothesis should be able to take away from it:
• Where, for whom, and what you are testing
• What the expected outcome is and why you think this is the case
22. Hypotheses – Points to note
• Your hypothesis should cover a single change, be measurable, and be clear about what you expect to happen
• Avoid solutions in your hypothesis
• Ensure your expected outcome is based on your change
• This is an iterative process; your hypothesis may evolve as you work with UX, Engineering and Analytics. Use it to tell the story of the problem you think needs to be solved
23. Hypotheses – Example
Business Problem (observation based on data analysis)
The number of users adding a product to their cart, as a percentage of users landing on the product page, decreased during the COVID-19 pandemic.
User Problem
Users do not feel confident about the safety & hygiene of the product they are looking to buy.
Hypothesis
"By adding an indicator for the hygiene rating on the product details page, the user will have higher confidence and propensity to add the product to their cart, resulting in a higher click-through rate to the add-to-cart step and an increased conversion rate, based on the user survey done over July '20 on US traffic that indicates higher user consideration for hygiene and safety."
26. Testing Approaches
Once you have formed the hypothesis, you need to understand whether it really helps you achieve your target metrics.
Two possible ways to test the impact:
• Pre vs post analysis
• Split testing
27. Pre-Post vs Split Testing
The tradeoffs & considerations
• Time to launch
• Traffic on site
• Strategic calls
• Number of variables to be tested
• Impact of seasonality
• Software development approach
• Stage of the product
28. Split Testing
• Split test
A split test is a method of testing in which a control version is compared to a completely different variation, shown to the same sample, in order to improve response rates. "Split" refers to the fact that traffic is split equally between the variations.
• A/B test
An A/B test is a method of testing by which a control version is
compared to a variety of single-variable tests performed on the
same sample in order to improve response rates.
If you're making minor changes to elements, then you should consider running an A/B test. However, for a
complete redesign with a lot of changes, it may be faster to recode the page. In such cases, each page is hosted
on its own URL. When you have to compare multiple pages with distinct URLs, go with a Split URL test.
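In both test types, the "traffic is split" step is usually implemented with deterministic hashing, so a returning visitor always lands in the same bucket. A minimal sketch (function name and variant labels are illustrative, not from the deck):

```python
import hashlib

def assign_bucket(user_id: str, experiment: str,
                  variants=("control", "treatment")) -> str:
    """Deterministically assign a user to a variant.

    Hashing the user id together with the experiment name keeps the
    assignment sticky per user while staying independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Because the hash is deterministic, the same user always sees the same variant, and over many users the traffic divides roughly evenly between the buckets.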
29. Advanced variations
Multivariate testing
Multivariate tests test multiple versions of a page to
isolate which attributes cause the largest impact. In
other words, multivariate tests are like A/B/n tests in
that they test an original against variations, but each
variation contains different design elements
Types
• Full factorial testing, which tests every possible
combination of elements until there’s a clear
winner. This needs a lot of traffic, and traffic is
distributed evenly among the variations.
• Adaptive testing, which uses live data on visitor actions to decide on the winning combination, e.g. a multi-armed bandit test
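An epsilon-greedy policy is one simple way a multi-armed bandit allocates traffic adaptively: mostly exploit the best-observed variant, occasionally explore. A minimal sketch (function names, rates and parameters are illustrative, not from the deck):

```python
import random

def epsilon_greedy(successes, trials, epsilon, rng):
    """Explore a random arm with probability epsilon, otherwise exploit
    the arm with the best observed success rate so far."""
    if rng.random() < epsilon:
        return rng.randrange(len(trials))
    rates = [s / t if t else 0.0 for s, t in zip(successes, trials)]
    return max(range(len(rates)), key=rates.__getitem__)

def simulate(true_rates, rounds=5000, epsilon=0.1, seed=1):
    """Simulate the bandit against known (hypothetical) conversion rates."""
    rng = random.Random(seed)
    successes = [0] * len(true_rates)
    trials = [0] * len(true_rates)
    for _ in range(rounds):
        arm = epsilon_greedy(successes, trials, epsilon, rng)
        trials[arm] += 1
        if rng.random() < true_rates[arm]:
            successes[arm] += 1
    return trials
```

Unlike a fixed 50/50 split, the bandit shifts most traffic toward the better-converting arm as evidence accumulates, trading statistical cleanliness for faster optimization.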
30. A/B testing: process
• Define the goal/test metric clearly
• Clearly define the variants & differentiation
• Define the traffic bucketing logic
• Define the traffic split for control vs variant
• Ensure representative sample
• Predetermine the sample size based on the minimum detectable effect, significance level & power of the test
• Run the test for full weeks, usually at least two business
cycles
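The "predetermine a sample size" step can be approximated with the standard two-proportion power formula. A sketch using only the Python standard library (the function name is illustrative; dedicated calculators use similar math and may differ slightly in the exact formula):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.8):
    """Approximate sample size per group for a two-proportion z-test
    detecting a shift from baseline rate p1 to rate p2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_power = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)
```

For example, detecting a lift from a 10% to a 12% conversion rate at 5% significance and 80% power requires on the order of 3,800 visitors per variant; halving the detectable effect roughly quadruples the required sample.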
31. A/B test – Avoid pitfalls
• Randomization & Selection bias
Poor randomization results in unbalanced Treatment/Control. Certain
visitors are over-represented in treatment vs. control. This leads to more
influence in one group vs another. Also, do not allow visitors to
assign themselves to Treatment/Control groups. For example, visitors
with more risk appetite might select themselves into the test, so our
Treatment visitors might have both a treatment effect and a risk
appetite effect
• Eliminate confounding variables
Make sure that extraneous elements like special events, traffic sources
and referring ads are the same for both variants and that other variables
that could affect your test are eliminated
• Sample Size & effect of weekdays
Run the test until you hit the sample size identified in the pre-testing
calculations. Keep the test running for at least a week.
• Avoid test collision
If there is a high risk of interaction between multiple tests, make sure they do not run on the same traffic at the same time
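One concrete check for the randomization pitfall above is a sample-ratio-mismatch (SRM) test: if a 50/50 split produces visibly unbalanced bucket counts, the bucketing is suspect. A sketch assuming a two-bucket test, using a normal approximation to the binomial (equivalent to a one-degree-of-freedom chi-square test); the function name is illustrative:

```python
import math
from statistics import NormalDist

def srm_p_value(n_control, n_treatment, expected_ratio=0.5):
    """Two-sided p-value for H0: the observed split matches the
    intended traffic allocation. A tiny p-value signals an SRM."""
    total = n_control + n_treatment
    expected = total * expected_ratio
    sd = math.sqrt(total * expected_ratio * (1 - expected_ratio))
    z = (n_control - expected) / sd
    return 2 * (1 - NormalDist().cdf(abs(z)))
```

A p-value far below 0.05 means the imbalance is very unlikely under correct randomization, and the test's results should not be trusted until the cause is found.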
34. A/B testing: Math behind the test
When running A/B tests, we’re actually applying a process called null
hypothesis testing. We compare the conversion rates of the two landing
pages and test the null hypothesis that there is no difference between the
two conversion rates
• The margin of error is an estimate of how different your result is likely to be from the "true" parameter value of the population.
• Statistical power refers to the probability that the A/B test detects a specific effect when it exists; equivalently, it is the probability of avoiding a Type II error. Tests commonly follow an 80% power standard. To improve the test's statistical power, increase the sample size, target a larger effect size, or extend the test's duration.
• The significance level is the probability of concluding there is an effect when there is none, i.e. the probability of a false positive (Type I error). A "statistically significant" result in our A/B test indicates that the change we made to the landing page probably had an impact on the conversion rate: the p-value falls below the threshold (typically 5%). 1 − alpha (one minus the significance level) is referred to as the confidence level of the test.
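For conversion rates, the null hypothesis test described above is usually a pooled two-proportion z-test. A sketch of the p-value computation (function name is illustrative, not from the deck):

```python
import math
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for H0: the two conversion rates are equal.

    conv_* are conversion counts, n_* are visitor counts per variant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))
```

If the returned p-value falls below the chosen significance level (e.g. 5%), we reject the null hypothesis that the two landing pages convert equally well.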
35. After launching the A/B test..
Always drill down further:
• Check for implementation issues: verify the source traffic distribution,
bucketing logic & funnel metrics
• Analyze the segments: The key to learning in A/B testing is
segmenting. Even though B might lose to A in the overall results, B
might beat A in certain segments (organic/paid, mobile/desktop,
repeat/ new etc.). Make sure that you have enough sample size within
the segment.
• Archive the test results and plan a follow up approach to testing
systematically. A structured approach to optimization yields greater
growth and is less-often limited by local maxima
• Analyze the long-term effects post rollout on metrics such as retention
rate, call propensity etc.
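The segment point above can go further than "B wins somewhere": with an uneven segment mix, a variant can even win every segment yet lose overall (Simpson's paradox). A small sketch with made-up counts, purely illustrative:

```python
# Illustrative counts only: (conversions, visitors) per (variant, segment).
# B's traffic skews toward the low-converting mobile segment.
data = {
    ("A", "mobile"): (10, 100), ("A", "desktop"): (40, 100),
    ("B", "mobile"): (24, 160), ("B", "desktop"): (18, 40),
}

def segment_rate(variant, segment):
    conv, visitors = data[(variant, segment)]
    return conv / visitors

def overall_rate(variant):
    conv = sum(c for (v, _), (c, n) in data.items() if v == variant)
    visitors = sum(n for (v, _), (c, n) in data.items() if v == variant)
    return conv / visitors
```

Here B beats A in both mobile and desktop, but its overall rate is lower because more of its traffic landed in the weaker mobile segment; this is exactly why per-segment sample sizes and traffic mix must be checked before drawing conclusions.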
36. Resources & tools
• Some of the most popular A/B testing tools include:
• Optimizely
• VWO
• Adobe Target
• A/B testing calculators
• AB Test Calculator by CXL
• A/B Split Test Significance Calculator by VWO
• A/B Split and Multivariate Test Duration Calculator by VWO
• Evan Miller’s Sample Size Calculator
• Digital Analytics tools
• Google Analytics
• Adobe Analytics
38. THANK
YOU!
Akhil Sharma
Want to discuss anything on product? Feel free to
reach me at LinkedIn - akhilsharma85
https://app.sli.do/event/6u1zq1y8
Slido #93695