@nataliewarnert
#dpm2016
"A Minimum Viable Product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort." – Eric Ries, Lean Startup
THE MVP
What is a Minimum Viable Product?
• Building just enough to learn and validate a hypothesis
• Learning, not optimizing
• Finding a plan that works before running out of resources
• Providing enough value to justify charging
LEAN USER EXPERIENCE
What is Lean UX?
• Interaction design
• Constant measurement and learning loops
• Unifying business, development, customers, and UX
• Less focus on deliverables, more on data
WHERE TO START
Problem/solution fit: Do I have a problem worth solving?
Product/market fit: Have I built something people want?
Scale: How do I accelerate growth and maximize learning?
Ideas are cheap! Acting on them is expensive $$
Learning over growth – don't scale
DEFINE THE SOLUTION
• Smallest possible solution to speed up learning
• Build only what is needed (MVP)
• Pick bold outcomes to validate learning
• Business outcomes over solution
More relevant product recommendation + in-cart experience = increased attach rate and conversion
SHOW ME THE MVP!
Declare assumptions and a hypothesis → Create an MVP → Run an experiment to prove or disprove the hypothesis → Customer feedback (qual and quant) and research alternatives → repeat
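A minimal sketch of this cycle in Python, assuming a simulated feedback signal and an illustrative attach-rate hypothesis; the hypothesis text, threshold, and numbers are not from the talk:

```python
import random

# A minimal sketch of the cycle on this slide: declare an assumption,
# build an MVP, run an experiment, learn from the feedback, repeat.
# The hypothesis, threshold, and simulated feedback are illustrative only.

def collect_feedback() -> float:
    """Stand-in for real qual/quant customer feedback (simulated here)."""
    return random.uniform(0.0, 0.10)  # e.g., observed lift in attach rate

hypothesis = "in-cart recommendations lift attach rate by 5%"
threshold = 0.05

for experiment in range(1, 6):       # cap how long we run before pivoting
    observed = collect_feedback()    # measure: analytics, interviews, tests
    if observed >= threshold:
        print(f"Experiment {experiment}: validated ({observed:.1%} lift)")
        break
    print(f"Experiment {experiment}: disproved ({observed:.1%}); revise the MVP")
```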
This is my definition.
What are the resources?
It doesn't mean half-baked or buggy.
What are table stakes?
Designing for the customer and with the customer – how they want to work, not how we want them to. It's a hard shift to make; think of testing a product you wrote yourself.
Deliverables – less thrown over the wall, more collaborative.
All of our traditional language is missing here.
This is where the investigation starts – don't just keep the idea in your head, SHARE it. Start with the problem.
THIS COULD BE ANY RETAILER – ultimately bought, required, also viewed.
BBY example here with CWBAB not working – it started with customers wanting add-on accessories.
How that ties into the business model and the problem of attach rates and conversion.
Business problem and customer problem – our one hypothesis fits both.
Halloween costume – a Snoopy book, and not even the Great Pumpkin.
We have so much data we don't know what to do with it – we were so focused on this data, but it was the wrong data.
Bold outcomes – change something big over something small. A page over a button; or in this case, embedded in the cart vs. just changing items.
BBY example – we have our problem and our market fit. So what did we do with it?
An interruption of the data they are used to seeing.
Besides CYP, what else do we see in carts? Buy It Again… Save for Later… Guests Also Bought. We have so much data we don't know what to do with it.
Bad example – Polaris, where what we were going to build was so defined that it couldn't pivot (large release one, op model/workstation). Total customization vs. packages; what the customer actually needed vs. what we thought they wanted. SOLUTION and BDUF (big design up front).
What is the unique value proposition? It's somewhere in the "solution," but we don't know it… YET.
Remember, our plan A here was Rich Relevance data…
A – the assumption that you know what the customer wants… and you know what happens when you assume…
Walk the walls at Polaris.
Qualitatively – WHAT are they doing once we get them into the cart? Conversion from there is higher than from browse; we know they are looking for more to buy. UX programs to see where hovers are, time spent, clicks; paper prototyping and contextual inquiry (modified).
When did we get out of the building? We went and did customer interviews and showed them what we were thinking (small margin). Is it statistically significant? No. Is it significant? Yes.
Also, by walking through with everyone, it's easier to determine what is too large in scope.
Validate quantitatively – analytics and A/B testing, one ring to rule them all.
You can tell any story with numbers, so which are the important ones? This is why we also use qualitative data; it tells more of the story.
How do we quantify what they are doing? Conversion rates, add rates, abandon rates… we wouldn't have gotten here if we hadn't built something, though (large releases don't work for this).
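To make the "statistically significant?" question concrete, here is a minimal two-proportion z-test sketch for comparing conversion rates between a control cart and a variant. All traffic and conversion counts are hypothetical, not from the talk.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: control cart vs. embedded-recommendation cart
p_a, p_b, z, p = two_proportion_ztest(conv_a=412, n_a=10_000,
                                      conv_b=468, n_b=10_000)
print(f"control {p_a:.2%} vs. variant {p_b:.2%}, z = {z:.2f}, p = {p:.4f}")
```

With counts like these, the lift can be directionally meaningful without clearing p < 0.05 – exactly the "significant but not statistically significant" distinction in the notes above.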
But we are addressing these risks up front by talking with customers, experimenting, and validating with data
Failing fast so the opportunity cost is lower – talk about massive requirements documents and traditional approaches.
So what did we learn?
So we have to shorten this cycle!
MVP != half-baked or buggy
Deliver enough value to justify charging (or running more experiments)
Prioritize hypotheses
Attach rate went up (a minimal calculation sketch follows these notes).
Interaction in the cart, when the user has already shown more intent to buy.
Language of CYP vs. required vs. CWBAB vs. suggested…
Try other products before we start to scale more – Rich Relevance data was expensive and we didn't want to "waste the money" – sunk cost.
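As referenced above, a minimal sketch of how attach rate might be computed from order data; the order records and counts below are hypothetical, not from the talk.

```python
# Attach rate sketch: share of primary-product orders that also include
# at least one add-on accessory. All order data below is hypothetical.
orders = [
    {"primary": True,  "accessories": 2},
    {"primary": True,  "accessories": 0},
    {"primary": True,  "accessories": 1},
    {"primary": False, "accessories": 1},  # accessory-only order, excluded
]

primary_orders = [o for o in orders if o["primary"]]
attached = sum(1 for o in primary_orders if o["accessories"] > 0)
attach_rate = attached / len(primary_orders)
print(f"attach rate: {attach_rate:.0%}")  # 67% in this toy data
```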