Learnings from Running 80+ Experiments with a Lean Team @TravelTriangle to Hack Growth
Hypothesis-Driven Development & How to Fail-Fast Hacking Growth
PRABHAT GUPTA
Head of Engineering & Data Science, OkCredit
Co-Founder & ex-CTO, TravelTriangle
My Introduction
 A seasoned, business-oriented product & technology leader with 12+ years of experience
 Growth hacker & avid believer in the fail-fast approach with frugal development costs
 Co-built TravelTriangle from scratch, making it the category leader in the online holiday industry, with:
 8M+ monthly traffic (60+ NPS),
 a network of 1000+ converting agents,
 900+ team members,
 1000+ Cr annualized GMV with a +ve contribution margin, and
 ~270 Cr raised in venture capital over a span of 8 years
 Built an experimentation culture & ecosystem enabling a high # of concurrent experiments without much tech bandwidth
 Built a high-performing team from scratch to 90+ members, with the right org structure/OKRs, a project portfolio management system & standard agile practices
Why Startups Need HDD
■ Finding PMF: validating the problem before solving it
■ Hypothesis-Driven Development (HDD)
 Yelp reviews - friends asking friends for reviews
 Instagram (Burbn) - too many features, but photo sharing was the most used
 Groupon - started as a fundraising site for causes and groups
 YouTube - started as a video dating site
■ Hacking growth around Acquisition, Activation, Retention, Revenue & Referrals
■ Finding the AHA moment (what is working the most, and for what kind of users)
 Facebook - 10 friends
 Twitter - 30 users to follow
 Slack - 2000 messages exchanged
 HubSpot - default plan with assisted training
 Airbnb - Craigslist
 Dropbox - free space for referring friends
 Amazon - Amazon Prime
An Initial Hypothesis Needs Groundwork
Based on data: [ Past Learnings | User Behavior | Global Standards ]
Based on intuition: [ Personal Learnings | User Understanding ]
HDD - Experiment Lifecycle!
1. Validation of weak/medium hypotheses (generating additional data points)
2. Iterate & optimize for the best configuration
3. Validation of success metrics before taking it up as a project

DON'T WASTE 4 SPRINTS OF TIME/EFFORT
Our Experimentation (Growth) PODs
Conversion & Retention (L2C): 1 PM, 1 HTML/CSS, 3 ReactJS, 1 Backend; 50+ experiments
Acquisition & Activation (Visitor, V2L): 1 PM, 1 HTML/CSS, 2 ReactJS, 2 Backend; 30+ experiments
Monetization (Take Rate, Affiliation): 1 PM, 1 HTML/CSS, 1 ReactJS, 1 Backend; 5+ experiments
Supported by the core tech-platform & experimentation-platform teams
A Few Impacts of the Growth POD (Fail-Fast)
• Jump in visitor-to-lead (V2L) by 100% through a combination of progressive forms, a chatbot and exit intent
• Jump in V2L by 50% through optimizing marketing landing pages
• Growth in lead-to-conversion (L2C) by 60% through tweaking the funnel management workflow
• Growth in revenue by 30% by experimenting with our revenue instruments
• and many more...
 All done with minimal tech members, reaching 100% confidence in the numbers within less than 1 month of picking up the problem/idea
Growth POD Culture, KPIs & Execution
■ Right POD members / mindset / culture
– The right mix of tech, product, marketing & business expertise to thrash out ideas quickly
– Working with fewer details, deploying quick & smart solutions, iterating quickly
■ KPIs of the POD
– % delta found in the respective metric, # of experiments closed (should be higher), and closure SLA (should be on the lower side)
– # of impactful ideas on the board pending execution, implementation SLA, and closure SLA once the experiment is live
■ Kanban execution:
– Ideas segregated as "to be picked", "WIP", "live & running" & lastly "closed"
– Control the # of ideas in each bucket, as well as the SLAs for moving idea(s) from one bucket to another
– Weekly/bi-weekly sync-ups to review the negatives and positives
Pre-requisites
■ Robust data infrastructure
■ One great generalist leader accountable for this POD
– There is going to be a lot of friction/conflict with other stakeholders
– A person able to take calls by mixing data with intuition/signals
– A jack of all trades; I recommend a product/tech co-founder leading this at the start
■ Real-time tracker to review experiment results & cross-impact (a minimal event-schema sketch follows below)
– Set up dashboards upfront so that data collection doesn't need bandwidth from day 0 of experimentation
– A lot of time, during and after experiments, gets wasted in collecting data
– Due to the high effort of collecting data, teams tend to miss lead metrics as well as the cross-impact of the variation on other metrics
– In the absence of real-time data, quick improvisation on a variation doesn't happen
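To make the real-time tracker concrete, here is a minimal sketch of a tracking event that carries experiment context; the field names, endpoint and helper are my own assumptions for illustration, not the schema TravelTriangle used:

```typescript
// Hypothetical shape of a tracking event that carries experiment context,
// so every metric can later be sliced by experiment and variant.
interface ExperimentEvent {
  userId: string;          // or an anonymous/device id
  experimentId: string;    // e.g. "exp-5-form-cta-color"
  variant: string;         // "control" | "variation-2" | ...
  event: string;           // "page_view", "form_submit", "lead_created", ...
  timestamp: number;       // epoch millis
  properties?: Record<string, string | number>; // channel, platform, device, intent ...
}

// Fire-and-forget send to whatever stream/collector is in place
// (Segment, an in-house collector, etc.); the endpoint is illustrative only.
function track(ev: ExperimentEvent): void {
  navigator.sendBeacon("/events", JSON.stringify(ev));
}

track({
  userId: "u-123",
  experimentId: "exp-5-form-cta-color",
  variant: "variation-3",
  event: "form_submit",
  timestamp: Date.now(),
  properties: { channel: "organic", device: "mobile" },
});
```

With experiment id and variant stamped on every event, dashboards can be built once and reused across experiments, which is what frees up bandwidth from day 0.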
How to Prioritize
■ Separate out products/ideas along the AARRR funnel
– Acquisition (customer acquisition channels): the team needs to be expert in the marketing & product domains
– Activation (engaging users, leading them to a lead/conversion)
– Retention (getting users back again and again)
– Referral (making people tell others about you)
These funnels need product experts to hack growth.
– Revenue (revenue streams from new or returning users): the team needs to be expert in the business and product domains
How to Prioritize
■ Prioritize ideas using the ICE (Impact, Confidence, Effort) framework (a tiny scoring sketch follows below)
– It's not about throwing ideas at the wall as fast as you can and seeing what sticks
– The more focused the approach at the start, the more intentional your experiment, and hence the greater the impact
– Don't be afraid of what-ifs, like flipping whole funnels or changing the variables of the game
– Don't get busy only cracking local maxima; chase a few radical outcomes too
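As a worked illustration of ICE scoring: the 1-10 scale, the multiplicative formula, and the example ideas below are my own assumptions; teams pick their own conventions.

```typescript
// ICE prioritization: score = Impact * Confidence / Effort, each rated 1-10.
interface Idea {
  name: string;
  impact: number;      // expected metric lift if it works (1-10)
  confidence: number;  // how sure we are it will work (1-10)
  effort: number;      // implementation cost (1-10, higher = more work)
}

const iceScore = (i: Idea): number => (i.impact * i.confidence) / i.effort;

const backlog: Idea[] = [
  { name: "Exit-intent popup",      impact: 6, confidence: 7, effort: 2 },
  { name: "Rebuild search ranking", impact: 9, confidence: 4, effort: 8 },
  { name: "CTA color test",         impact: 3, confidence: 6, effort: 1 },
];

// Highest ICE score first => what to pick up next.
backlog
  .sort((a, b) => iceScore(b) - iceScore(a))
  .forEach((i) => console.log(i.name, iceScore(i).toFixed(1)));
```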
BLOCKERS TO FAIL-FAST
■ Idea scarcity and/or a lot of experiments with only small incremental impact
– Go back to the whiteboard, analyze the data deeply, or do more customer surveys. Listen to your customer support calls
■ Slow implementation / launch
– The MVP version is not actually an MVP, and the team is unable to cut scope to launch faster
– Backend (BE) & frontend (FE) changes taking too much time to implement
– Missing data at the time of analysis, forcing the experiment to be started again
– Data collection taking time and/or not being reliable
■ Slow discard
– Significance (p-value) not reached in time; plan for this beforehand
– The magic of sample size: a higher impact will produce a definitive result faster (see the worked example below)
– Improvisation not happening on time and/or the new variation not going live in time
– Team too attached to the experiment, trying to make it work instead of validating/invalidating it
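To plan sample size before launch, a common rule of thumb for a two-sided test at 95% confidence and 80% power is sketched below; the constant 16 is an approximation, and the numbers are an illustration, not TravelTriangle's actual figures:

```latex
% Approximate per-variant sample size for detecting an absolute lift \delta
% on a baseline conversion rate p (alpha = 0.05 two-sided, power = 0.80):
n \approx \frac{16\, p(1-p)}{\delta^2}
% Example: baseline 5%. Detecting 5% -> 6% (delta = 0.01):
%   n ~ 16 * 0.05 * 0.95 / 0.0001 ~ 7,600 visitors per variant.
% Detecting 5% -> 10% (delta = 0.05, using the pooled p ~ 0.075):
%   n ~ 16 * 0.075 * 0.925 / 0.0025 ~ 450 visitors per variant.
% Since n scales as 1/delta^2, larger expected impact reaches a
% definitive result with far less traffic -- the "magic of sample size".
```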
Quick Implementation on BE/FE
CASE 1: Simple UI changes (a minimal bucketing sketch follows below), e.g.
o Text and color changes
o CTA button text and/or placement changes
o Different placement of a UI component already existing on the same page
Tools: VWO, Optimizely
CASE 2: Integrating 3rd-party JS plugins - GetSiteControl, Inspectlet, Hotjar, etc.
Tool: Google Tag Manager
CASE 3: Static/landing page(s) for marketing
Tools: AWS S3 or GCP Cloud Storage, or a public folder of the hosted application integrated with existing backend APIs
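Tools like VWO and Optimizely handle variant assignment for you; for intuition, here is a simplified sketch of the deterministic, hash-based bucketing such tools perform (my own toy version, not their actual algorithm):

```typescript
// Deterministically assign a visitor to a variant so they always see
// the same one across visits. Simplified split-testing bucketing.
function hash(s: string): number {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) >>> 0; // unsigned 32-bit rolling hash
  }
  return h;
}

function assignVariant(
  userId: string,
  experimentId: string,
  variants: string[],   // e.g. ["control", "v2", "v3", "v4"]
  trafficPct = 100      // % of traffic enrolled in the experiment
): string | null {
  const bucket = hash(`${experimentId}:${userId}`) % 100;
  if (bucket >= trafficPct) return null; // visitor not in the experiment
  return variants[bucket % variants.length];
}

// Same user + experiment always yields the same variant:
console.log(
  assignVariant("u-123", "exp-5-form-cta-color", ["control", "v2", "v3", "v4"], 10)
); // 10% ramp, as discussed under "Launch fast(er)" later
```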
Experiment 5: Testing variations for form color & CTA color on the form [Kerala]
(These slides showed screenshots of the ORIGINAL and VARIATIONS 2, 3 & 4; preview links are in the Editor's Notes.)
Quick Implementation on BE/FE
CASE 4: More complex UI changes on existing product pages (tweaks needed post-React)
○ New inline UI component using an existing BE API
○ New overlay components, like a popup or banner, to be added
○ Surveys, exit data, etc.
Tools: VWO, GetSiteControl, WebEngage
CASE 5: New UI component whose API/data is not present on the backend (a mock-backed sketch follows below)
o Static / slightly dynamic content - FAQs, trust section
o Contextual content - blogs, rich agent testimonials
Tools: Mock.io/Mockable.io, Zoho/Airtable, GSheets+GScript, Retool, AWS Lambda / Google Cloud Functions
CASE 6: Heavy UI changes, like dynamic list order / dynamic inline sections / dynamic search results
Tools: Dynamic Component Rule Engine (built in-house), AWS Lambda / Google Cloud Functions
Examples:
o Slider form with different departure dates
o Sticky sort-by and filters
o Variation between '+' and chat icon (direct action)
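For CASE 5, the trick is to serve the new component's data from a mock or low-code endpoint so no backend work blocks the launch. A minimal sketch, where the endpoint URL and response shape are hypothetical stand-ins for a Mockable.io mock or a tiny cloud function:

```typescript
// Render a new "FAQs" section backed by a mock endpoint instead of
// waiting for the real backend API to exist.
interface Faq { question: string; answer: string; }

async function loadFaqs(): Promise<Faq[]> {
  try {
    // Hypothetical mock endpoint standing in for the future real API.
    const res = await fetch("https://demo.mockable.io/faqs?dest=kerala");
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return (await res.json()) as Faq[];
  } catch {
    return []; // fail closed: the experimental section simply doesn't render
  }
}

async function renderFaqSection(container: HTMLElement): Promise<void> {
  const faqs = await loadFaqs();
  if (faqs.length === 0) return; // nothing to show; page stays unchanged
  container.innerHTML = faqs
    .map((f) => `<details><summary>${f.question}</summary><p>${f.answer}</p></details>`)
    .join("");
}
```

If the experiment wins, the mock's response shape becomes the contract for the real backend API.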
Quick Implementation on BE/FE
CASE 9: Email, SMS, IVR A/B tests on messages
Tools: Dynamic Template Engine (in-house)
CASE 10: Changes to existing functionality/workflow on the BE side + A/B testing Data Science models
o Our own CERE - a configurable, event-driven, rule-based engine (a hypothetical rule sketch follows below)
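The deck doesn't detail CERE's format, so the following is purely illustrative of what a configurable event-driven rule might look like; every field name here is an assumption, none comes from the actual in-house system:

```typescript
// Illustrative shape of an event-driven rule for an engine like CERE.
interface Rule {
  id: string;
  onEvent: string;                       // event that triggers evaluation
  conditions: Record<string, unknown>;   // simple attribute matchers
  split: { variant: string; weight: number; action: string }[];
}

const welcomeSmsTest: Rule = {
  id: "sms-welcome-copy-ab",
  onEvent: "lead_created",
  conditions: { channel: "organic", destination: "kerala" },
  split: [
    { variant: "control", weight: 50, action: "send_template:welcome_v1" },
    { variant: "urgent",  weight: 50, action: "send_template:welcome_v2" },
  ],
};

// An engine would subscribe to "lead_created", check the conditions,
// pick a weighted variant per user, and execute the mapped action --
// letting BE-side A/B tests ship as config changes, not code changes.
```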
Tools to Collect a Variety of Data
Tools available for data streams / attribution:
❏ Segment
❏ Branch
❏ Apsalar
Tools available for reports / funnels:
❏ Google Analytics
❏ Mixpanel
❏ CleverTap
❏ Omniture
Tools available for metrics tracking:
❏ Click / form analytics - Inspectlet, CrazyEgg
❏ Mouse hover - HeatMap, CrazyEgg
❏ User engagement - Google Analytics, WebEngage, Kissmetrics
❏ Amplitude
 Launch fast(er): be your own QA
 Limit scope (e.g. browser/platform) so you can test a direct preview on prod yourself with the least effort/time
 Set it up at lower traffic, e.g. 10% (see the ramp-up sketch after this list), and ensure that all data/events are getting tracked correctly everywhere within the first 2 days
 Any bugs found can also be fixed at this stage
 Pre-define the sample size: take calculated risks to reach that size as quickly as possible
 Evolve with constraints: develop the analytical skills to extrapolate data (if missing) and decide the risk of increasing traffic incrementally. Failing slowly is also a cost to the company, and a missed opportunity
 Adapt a different execution style: a high % of weaker hypotheses needs very high-velocity experiment churn vs. a high % of strong/medium hypotheses
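A minimal sketch of such a staged ramp-up, assuming the deterministic bucketing from the earlier CASE 1 sketch; the schedule itself is illustrative, not a prescription:

```typescript
// Staged traffic ramp: start at 10% and widen enrollment only after
// tracking is verified and no bugs surface. Because bucketing is
// deterministic (bucket < trafficPct), users enrolled at 10% stay
// enrolled at 25% (buckets 0-9 are a subset of 0-24), so nobody flips variants.
const RAMP_SCHEDULE = [
  { afterDays: 0, trafficPct: 10 },  // verify events & data in the first 2 days
  { afterDays: 2, trafficPct: 25 },
  { afterDays: 4, trafficPct: 50 },
  { afterDays: 7, trafficPct: 100 },
];

function currentTrafficPct(startedAt: Date, now = new Date()): number {
  const days = (now.getTime() - startedAt.getTime()) / 86_400_000;
  let pct = 0;
  for (const step of RAMP_SCHEDULE) {
    if (days >= step.afterDays) pct = step.trafficPct;
  }
  return pct;
}
```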
Learnings & Pitfalls
 Data will tell you the "what" but not the "why", so include customer surveys and subjective inputs to connect things from first principles
 You need to track the lead metrics along with the lag metrics, to confirm that the impact is coming from the solution and not from some other variable
 User segments (channels, platform, devices, intent, etc.) can change the analysis/insight drastically
 If a test is neither positive nor negative, control always wins
 Don't bring personal attachment or bias to an experiment; otherwise, instead of validating the hypothesis, you start trying to make it work at all costs
 Experiments never fail; hypotheses are proven wrong. The actual failure would be failing to understand the user base better and/or failing to see the next set of initiatives to try
 Don't be tempted to scale the experiment to 100% as-is for the sake of immediate gains. Experiment solutions are often built for idea validation, not for scale
REFERENCES
● https://www.linkedin.com/pulse/startup-guide-growth-hacking-achieve-breakthrough-using-prabhat-gupta/
● https://medium.com/airbnb-engineering/4-principles-for-making-experimentation-count-7a5f1a5268a
● https://medium.com/booking-com-development/moving-fast-breaking-things-and-fixing-them-as-quickly-as-possible-a6c16c5a1185
● https://www.linkedin.com/pulse/startup-guide-empowering-product-marketing-teams-fail-fast-gupta/
● https://barryoreilly.com/2013/10/21/how-to-implement-hypothesis-driven-development/
● https://blog.pivotal.io/labs/labs/lean-hypotheses
● https://www.producttalk.org/2014/09/the-14-most-common-hypothesis-testing-mistakes-product-teams-make-and-how-to-avoid-them/
● https://www.linkedin.com/pulse/engineering-analytics-traveltriangle-building-complex-prabhat-gupta/
● https://medium.com/traveltriangle/how-traveltriangle-tt-infrastructure-empowers-faster-data-driven-decisions-old-vs-new-c36d7eda6eb
● https://insights.traveltriangle.com/technical/dynamic-programming-in-react-with-ab-testing/
● https://insights.traveltriangle.com/technical/rules-rules-everywhere-one-engine-to-rule-them-all/


Editor's Notes

• (Experiment 5 slides) You can view the experiment variants here:
http://traveltriangle.com/mkt/Kerala-tour-Packages?optimizely_x7706492615=0 [Original]
http://traveltriangle.com/mkt/Kerala-tour-Packages?optimizely_x7706492615=2
http://traveltriangle.com/mkt/Kerala-tour-Packages?optimizely_x7706492615=3
http://traveltriangle.com/mkt/Kerala-tour-Packages?optimizely_x7706492615=4
http://traveltriangle.com/mkt/Kerala-tour-Packages?optimizely_x7706492615=5
• (Learnings & Pitfalls) What do you think when we say we'd need to have a product for this in the company?