This presentation will explore the basics of the scientific method and examine how proper experimental design, multiple hypothesis testing, cohort analysis, and split testing can effectively reduce batch size and lead to validated insights. You'll leave the webinar with a new understanding of how to experiment in a way that generates real insights, not just noise.
Fail Well, Pivot Fast: Product Experimentation for Continuous Discovery
1. FAIL WELL, PIVOT FAST:
PRODUCT EXPERIMENTATION
FOR CONTINUOUS DISCOVERY
WITH WILLIAM HAAS EVANS - PRINCIPAL CONSULTANT,
HEAD OF PRODUCT STRATEGY & DESIGN PRACTICE,
KUROSHIO CONSULTING
WEBINAR EXCLUSIVE
MODERATOR:
RAYVONNE CARTER
WEBINAR PRODUCTION MANAGER
PRODUCT MANAGEMENT TODAY
MARCH 16, 2023
11:00 AM PT
2:00 PM ET
7:00 PM GMT
2. 03
AB Tasty is the best-in-class experience optimization solution
for enterprises looking to use controlled experimentation,
recommendation and intelligent search to build better digital
experiences.
A global leader in experimentation, personalization, and
feature management solutions, AB Tasty enables companies
to validate ideas, while maximizing impact, minimizing risk,
and accelerating time to market. Founded in 2013 in Paris,
AB Tasty has offices around the world in 8 countries and
more than 320 employees.
To learn more, visit www.ABTasty.com
Revolutionize
Brand and Product Experiences
Better Products. Better Software. Better Experiences
3. Have questions about today's presentation?
Click on the Questions panel to
interact with the presenters
Having issues with today's presentation? Try dialing in!
TO USE YOUR TELEPHONE:
You must select "Use Telephone" after joining
and call in using the numbers below.
United States: +1 (415) 655-0052
Access Code: 403-718-603
Audio PIN: Shown after joining the webinar
4. 4
Fail Well, Pivot Fast:
Product Experimentation for Continuous Discovery
WILLIAM HAAS EVANS
Principal, Product Strategy Practice Lead
6. 6
Today’s Agenda
§ I’m Really Busy, Why Does Experimentation Matter?
§ Why Products Fail
§ Patterns of Discovery: Converting Guesses into Knowledge
§ Methods of Customer and Product Discovery
§ What/Where/How/When to Experiment
§ Designing Your First Experiment
§ Mapping Assumptions and Uncertainty
§ Managing the Experiment Backlog
8. 8
“It Ain’t What You Don’t Know That Gets You Into Trouble.
It’s What You Know for Sure That Just Ain’t So.”
How Do We Place Better Product Bets?
9. 9
“We could not find a large enough
audience quickly enough to
convince us the business model was
sustainable in the long term."
― RUPERT MURDOCH, NEWSCORP
“Why Did Murdoch's 'The Daily' Fail?”, NPR
10. 10
When do you want to learn you’re wrong?
Launching a Product is a Risky Proposition.
13. 13
Traditional Product Development
“When is the best time to LEARN you are wrong?”
Some Learning / Very Little Learning / Most of the Learning
DEFINE → DESIGN → DEVELOPMENT → DEPLOYMENT (Epiphany)
14. 14
“Life’s too short to
build something
nobody wants.”
— ASH MAURYA, RUNNING LEAN
15. 15
If products fail from a lack of customers more often than
product development failure…
Then why do we have:
§ A well-defined process for product development,
§ No defined process for customer development, and
§ No process for ensuring that we build the right thing?
It *always* started with a question…
16. 16
“If a man will begin with certainties,
he shall end in doubts; but if he will
be content to begin with doubts, he
shall end in certainties.”
— Francis Bacon (1620)
17. 17
Modern Scientific Method
1. Question: develop a question you want answered
2. Research: do background research to become more familiar with your area of study
3. Hypothesis: make a prediction for what you think the outcome of the experiment will be
4. Experiment: make a procedure to test your question and carry out the experiment
5. Analyze Results: analyze your data and compare it with the hypothesis
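The five steps above can be sketched as a single pass through a loop; everything here (function names, the toy inputs) is illustrative, not from the talk:

```python
# A minimal sketch of the five-step cycle; all names and data are invented.
def scientific_method(question, research, predict, run_experiment):
    """One pass: question -> research -> hypothesis -> experiment -> analysis."""
    background = research(question)              # 2. background research
    hypothesis = predict(question, background)   # 3. predicted outcome
    observed = run_experiment(question)          # 4. carry out the experiment
    supported = observed == hypothesis           # 5. compare data with hypothesis
    return hypothesis, observed, supported

h, o, ok = scientific_method(
    question="Does opening an hour later change daily revenue?",
    research=lambda q: ["first-hour traffic is low"],
    predict=lambda q, notes: "no change",
    run_experiment=lambda q: "no change",
)
print(ok)
```

The point of the sketch is that analysis (step 5) only has meaning because the prediction (step 3) was committed to before the experiment ran.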
22. 23
* Lean Startup ≠ Lean
Process for Turning Mysteries into Algorithms
23. 24
Start with the Earlyvangelist, who:
1. HAS A PROBLEM/NEED/JOB TO BE DONE
2. IS AWARE OF HAVING A PROBLEM/NEED/JTBD
3. HAS BEEN ACTIVELY LOOKING FOR A SOLUTION
4. HAS KLUGED A SOLUTION TOGETHER
5. HAS OR CAN ACQUIRE A BUDGET
25. Pain × Occurrence Matrix (axes: LOW PAIN → HIGH PAIN, LOW OCCURRENCE → HIGH OCCURRENCE):
§ High Frequency / High Pain
§ High Frequency / Low Pain
§ Low Frequency / High Pain
§ Low Frequency / Low Pain
26. 27
Customer Discovery Process: to turn market uncertainty into validated product learnings
27. 28
Product Discovery Experiments: to turn market uncertainty into validated product learnings
Pivot Before Product/Market Fit, Optimize After
28. 29
Product Discovery Experiments
1. CUSTOMER/PROBLEM FIT: Is there an interesting opportunity or gap?
2. PROBLEM/SOLUTION FIT: Have we found an interesting problem worth solving?
3. PRODUCT/MARKET FIT
4. SCALE
Stages 1–2 run Learning Experiments; stages 3–4 run Growth Experiments.
29. 30
Product Discovery Experiments for Validation
1. CUSTOMER/PROBLEM FIT (VALIDATE DESIRABLE)
Focus: Validated Learning
Experiments: Assumption Testing: Contextual and Evaluative Research
2. PROBLEM/SOLUTION FIT (VALIDATE FEASIBLE)
Focus: Validated Learning: Satisficing JTBD
Experiments: Assumption Testing: Contextual and Evaluative Research
3. PRODUCT/MARKET FIT (VALIDATE VIABLE)
Focus: Fitness
Experiments: Cohort Analysis (A/B & Multivariate)
4. SCALE (VALIDATE SCALABLE)
Focus: Growth
Experiments: Optimization (Cohort A/B & Multivariate)
Stages 1–2 run Learning Experiments; stages 3–4 run Growth Experiments.
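The cohort analysis used in the later stages can be sketched minimally: for each signup cohort, compute the share of users still active in each subsequent week. The data shapes and numbers below are invented for illustration:

```python
def cohort_retention(signups_by_week, active_by_week):
    """Minimal cohort-retention sketch: for each signup cohort, the
    fraction still active in each later week. Input shapes are illustrative."""
    table = {}
    for cohort, n in signups_by_week.items():
        # Sort by week index so retention reads left to right.
        table[cohort] = [round(active / n, 2)
                         for _, active in sorted(active_by_week[cohort].items())]
    return table

signups = {"2023-W01": 200, "2023-W02": 240}
active = {
    "2023-W01": {1: 120, 2: 90, 3: 80},   # users active 1, 2, 3 weeks later
    "2023-W02": {1: 150, 2: 110},
}
print(cohort_retention(signups, active))
```

Reading retention per cohort (rather than one blended average) is what keeps outliers and seasonality from hiding inside aggregate numbers.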
30. 31
Context Matters: Where are you?
Optimize for Validated Learning, then OPTIMIZE FOR TRACTION BY REDUCING FRICTIONS to cross
the chasm and find PMF, then Optimize for Margin Growth.
Before Product/Market Fit:
Focus: Validated Learning
Experiments: Pivots
Metrics: Qualitative
After Product/Market Fit:
Focus: Growth
Experiments: Optimization
Metrics: Quantitative
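The post-PMF quantitative metrics usually come down to comparing conversion rates between the two arms of a split test. Here is a minimal two-proportion z-test sketch; the choice of test and the example numbers are mine, not from the slides:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B split test.
    conv_* = conversions, n_* = visitors in each arm."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(round(z, 2), round(p, 4))
```

The conventional p < 0.05 cutoff is a community norm, not something the talk prescribes; the deck's own point is that the success criterion should be fixed before looking at the data.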
32. 33
The Process of Designing Experiments to Fail Well
There are several steps involved in the process of designing, prioritizing,
conducting and implementing experiments and the results of
experimentation.
1. Generate questions & ideas (assumptions)
2. Create a testable hypothesis
3. Conduct your experiment
4. Communicate your results
5. Re-prioritize your next steps
33. 34
Experiments Start with Questions
Start by asking yourself, “What is my riskiest
assumption?”
Then ask yourself, “What insight do I need to
move forward?”
Then ask, “What’s the simplest test I can run to
get it?”
Finally, think about, “How do I design an
experiment to run this simple test?”
34. 35
Failing Well Produces Information which Reduces Uncertainty
§ State Your Assumptions and Outcomes Clearly, Visibly, Publicly
§ Challenge Your Assumptions and Outcomes (Wanna Bet?)
§ Seek Disconfirming Data, Interrogate Outliers (don’t hide them in averages)
§ Use “Counter-Factuals” to Reframe Hypotheses
§ Learn when to correct your Actions and when to correct your Beliefs (Mental Models)
An OUTCOME is the verifiable result, condition, or consequence of a plan, process, event, effort, action, or occurrence for which a testable causal relationship can be drawn.
36. 37
Map Your Assumptions & Ideas on the Confidence Matrix
[Axes: MARKET UNCERTAINTY (from higher certainty/lower risk to lower certainty/higher risk) against IMPACT OF BEING WRONG (minimal, moderate, catastrophic). Sources of confidence, from lowest to highest: “Let’s Try Something” (SWAG); “Expert” Intuition Based on Domain Knowledge; Inference from Related or Anecdotal Data; Extrapolation from Correlated Data; Hypothesis Based on Extensive Research; Validated by Experimental Test Results. The innovation-lab end of the product lifecycle calls for more experimenting and testing; the mature end, less experimenting and validation.]
39. 40
The Competency (Expertise) Trap
“We might think of ourselves as open-
minded and capable of updating our
beliefs based on new information, but
the research conclusively shows
otherwise. Instead of altering our beliefs
to fit new information, we do the
opposite, altering our interpretation of
that information to fit our beliefs.”
― ANNIE DUKE, THINKING IN BETS
40. 41
Expertise Trap + Loss-Aversion Bias = Weak “Bets”
[Prospect-theory value curve: gains and losses are valued relative to a reference point, and the value curve for losses is steeper than the curve for gains, leaving little trapped value.]
41. 42
Some Tips
1. During customer discovery, focus experiments on interesting ideas.
2. Before scaling, validate your “WE KNOW” assumptions to reduce the risk of building things nobody wants.
3. After customer/problem validation, run experiments to stimulate feedback where there may be inchoate value.
Experiments Focus First: Interesting Ideas (50/50 Bets)
42. 43
Pushing Experiments to the Middle
DECREASING COMFORT
INCREASING CONFIDENCE
Ambiguity can create
opportunity (and anxiety).
If a signal were clear,
everyone would see and act
on it, making it
competitively neutral.
Certainty (about facts
considered “known”) can
create risk. Challenge and
confirm known-knowns to
make sure we aren’t blind
to potential risks when the
market and situation
change.
43. 44
Map Your Assumptions & Ideas on the Confidence Matrix
[The same confidence matrix as before, repeated with one annotation added: A GOOD PLACE TO START HUNTING FOR VALUE.]
44. 45
Setting Up for Success
Testing different variables, followed by careful observation and analysis, yields insight into the
relationships between CAUSE and EFFECT, which ideally can be applied to and tested in other
settings.
To obtain that kind of knowledge—and ensure that business experimentation is worth the
expense and effort—companies need to ask themselves several crucial questions:
§ Does the experiment have a clear purpose?
§ Have stakeholders made a commitment to abide by the results?
§ Is the experiment doable?
§ How can we ensure reliable results?
§ Have we gotten the most value out of the experiment?
46. 47
Testing for Causality
§ A hypothesis is a testable assertion of fact which can be proved or
disproved.
§ It’s an educated GUESS or a PREDICTION about the relationship
between 2 variables, and you most likely want to get really
comfortable with two words:
IF and THEN
§ The IF precedes the variable we plan to test, and the THEN
precedes the measurable outcome of the experiment.
47. 48
FORMING A HYPOTHESIS
A hypothesis is a testable statement which can be disproved. It’s an educated
guess or a prediction about the relationship between 2 (or more) variables –
using two critical words: IF and THEN.
[IF] We believe that doing/building: [ ACTION / EXPERIMENT]
§ In order to solve [ the Problem ]
§ For [ THESE PEOPLE/THIS PROCESS ]
[THEN] We will achieve [ THIS MEASURABLE OUTCOME ] by [ TIMEFRAME ]
§ When it fails, we will [NEXT EXPERIMENT]
§ If it succeeds, we will [PLAN FOR ITERATING/SCALING]
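The fill-in template above can be captured as a small structure so that every hypothesis is forced to name its action, audience, outcome, timeframe, and next step. The field names and example values here are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    action: str       # the ACTION / EXPERIMENT we will run
    problem: str      # the problem it is meant to solve
    audience: str     # THESE PEOPLE / THIS PROCESS
    outcome: str      # THIS MEASURABLE OUTCOME
    timeframe: str
    on_failure: str   # NEXT EXPERIMENT
    on_success: str   # PLAN FOR ITERATING / SCALING

    def render(self):
        """Render the IF/THEN template as one reviewable statement."""
        return (
            f"IF we {self.action} to solve {self.problem} for {self.audience}, "
            f"THEN we will achieve {self.outcome} by {self.timeframe}. "
            f"On failure: {self.on_failure}. On success: {self.on_success}."
        )

h = Hypothesis(
    action="open stores one hour later",
    problem="low first-hour traffic",
    audience="urban store locations",
    outcome="no drop in daily revenue",
    timeframe="end of Q2",
    on_failure="test a later closing hour instead",
    on_success="roll out to all regions",
)
print(h.render())
```

Because every field is required, a hypothesis that is missing its measurable outcome or its on-failure plan simply will not construct.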
48. 49
A Strong Hypothesis Versus a Weak One
Source:
STRONG: Qualitative research, customer insights, problems, observations, data mining, competitors (example: “We observed fewer customers during the first store hour”)
WEAK: Guesses not rooted in observation or fact (example: “We think that wealthier buyers will like our products”)
Design:
STRONG: Identifies possible CAUSES and EFFECTS (example: “Opening our stores one hour later has no impact on daily sales revenue”)
WEAK: Does not identify possible causes and effects (example: “We can extend our brand upmarket”)
Measurement:
STRONG: Quantifiable metrics that establish whether the hypothesis should be accepted or rejected (example: time and revenue)
WEAK: Vague qualitative outcomes driven by several variables that are hard to isolate and measure (example: brand value)
Verification:
STRONG: The experiment and its results can be replicated by others
WEAK: The experiment and its results are difficult to replicate
Relevance to a meaningful business outcome:
STRONG: Will have a clear impact (example: “Opening an hour later will reduce store operating expenses by $XX/t”)
WEAK: Won’t necessarily have a measurable impact, or the link between the metric and business impact is fuzzy (example: “It’s unclear how extending the brand affects gross margin erosion.”)
50. 51
Adoption Stages & Progression (growth rate rises through the stages)
EMPATHY: I’ve found a real, poorly-met need that an addressable market faces.
STICKINESS: I’ve figured out how to solve the problem in a way they will adopt and pay for.
VIRALITY: I’ve built the right product/features/functionality that keeps users around.
REVENUE: The users and features fuel growth organically and artificially.
SCALE: I’ve found a sustainable, scalable business with the right margins in a healthy and growing market.
52. 53
WHAT TO MEASURE: MINIMUM SUCCESS CRITERIA
§ Show to X number of people? What is N?
§ What % of customers will validate?
§ Directional or Statistical?
§ What is the minimum “signal” to persevere?
§ Who will give you currency?
§ Who will give you time/feedback?
§ What is the reservation price? How sensitive?
§ Money trumps Time/Engagement for validation purposes!
Enrico Fermi told his students that an
experiment that successfully proves a
hypothesis is a measurement; one that
doesn’t is a discovery.
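The “What is N?” question has a standard back-of-envelope answer: the sample size per arm needed to detect a minimum lift at a given confidence and power. A sketch using the normal approximation; the default z-values (95% confidence, 80% power) and the example baseline/lift are conventional choices, not from the slides:

```python
from math import ceil

def sample_size_per_arm(p_base, mde, z_alpha=1.96, z_power=0.84):
    """Approximate N per arm to detect an absolute lift `mde` over a
    baseline conversion rate `p_base` (defaults: 95% confidence, 80% power)."""
    p_var = p_base + mde
    # Sum of Bernoulli variances for the two arms.
    var = p_base * (1 - p_base) + p_var * (1 - p_var)
    return ceil(((z_alpha + z_power) ** 2) * var / mde ** 2)

# e.g. baseline 5% conversion, detecting a 1-point absolute lift:
print(sample_size_per_arm(0.05, 0.01))
```

The formula makes the “Directional or Statistical?” trade-off concrete: halving the detectable lift roughly quadruples the required N, which is why early-stage tests often settle for directional signals.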
56. 57
Manage for Flow (of Experiments)
[Experiment kanban: columns flow from SIGNALS/QUESTIONS through QUESTIONS, ASSUMPTIONS, OUTCOME HYPOTHESES, and EXPERIMENT DESIGN to ACTIVE EXPERIMENTS and ACTUAL OUTCOMES. Rows are TIME HORIZONS 1, 2, and 3, with uncertainty growing from horizon 1 to horizon 3. Assumptions are phrased “We believe [this to be true]”; outcome hypotheses are phrased “We believe [this outcome] will be achieved if [these users] attain [a benefit] with [this solution/feature/idea].”]
57. 58
10 Pitfalls to Avoid When Reviewing Test Results
§ Assuming your data is clean
§ Not normalizing your data
§ Excluding outliers
§ Including outliers
§ Ignoring seasonality
§ Ignoring size when reporting growth
§ Data vomit
§ Metrics that cry wolf
§ The “not collected here” syndrome
§ Focusing on noise.
Monica Rogati, a data scientist at LinkedIn, gave us the
following 10 common pitfalls that product managers should
avoid when reviewing test results.
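A few of these pitfalls (assuming the data is clean, ignoring size when reporting growth) can be caught mechanically before any analysis runs. This sketch and its thresholds are illustrative, not taken from Rogati's list verbatim:

```python
def sanity_check(daily_counts, prior_period):
    """Minimal pre-analysis checks for two of the pitfalls above;
    the threshold of 100 is an invented example, not a standard."""
    issues = []
    # Pitfall: assuming your data is clean (nulls, impossible values).
    if any(c is None or c < 0 for c in daily_counts):
        issues.append("dirty data: nulls or negative counts")
    # Pitfall: ignoring size when reporting growth.
    total = sum(c or 0 for c in daily_counts)
    prior = sum(prior_period)
    if prior < 100:
        issues.append("base too small for a meaningful growth rate")
    growth = (total - prior) / prior if prior else float("inf")
    return growth, issues

growth, issues = sanity_check([10, 12, None, 9], [50, 40])
print(growth, issues)
```

The remaining pitfalls (seasonality, outliers, metrics that cry wolf) need judgment rather than a gate like this, which is the slide's larger point.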
60. MODERATOR:
RAYVONNE CARTER
WEBINAR PRODUCTION MANAGER
Q&A
Principal Consultant, Head of
Product Strategy & Design
Practice, Kuroshio Consulting
William Haas Evans
/in/semanticwill/
semanticfoundry.com
/in/rayvonnecarter/