How to Focus on the Problem, Not the Solution by Spotify PM – Product School
Main takeaways:
- The Five Whys – a tried-and-true method to effectively uncover user needs
- Leveraging the JTBD framework – what are your customers trying to accomplish?
- How do you know you’ve solved the problem? Defining success metrics with your customers from the beginning
Product-Led Growth by Amazon Senior Product Manager – Product School
Main takeaways:
- How to build a product that sells itself
- Product-led growth puts your product at the center of the growth strategy, driving new customer acquisition, retention, and expansion
- Product-led means creating a champion product that serves as the primary driver of business growth
A workshop on Value Proposition Design by Sam Rye from Lifehack & Enspiral.
This workshop takes you through the Value Proposition Canvas, helps you pitch your vision, and lays out a short exercise to make a 2D or 3D prototype of your solution for feedback.
It draws heavily on the content, language, and concepts from this book, which we highly recommend buying if you're serious about (social) entrepreneurship or intrapreneurship: https://strategyzer.com/value-proposition-design
SAMPLE SIZE – The indispensable A/B test calculation that you’re not making – Zack Notes
If you’re a marketer, it’s very likely that you’ve run an A/B test. It’s also likely that you’ve never calculated the sample size for your tests and instead run tests until they reach statistical significance. If so, your strategy is statistically flawed. Calculating sample size up front requires marketers to wait longer for test results, but ignoring it will produce false positives and lead to bad decisions.
This deck was created for an email audience, but there are valuable lessons for anyone who runs A/B tests.
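The sample-size calculation the deck argues for can be sketched with the standard two-proportion normal-approximation formula. A minimal example using only the Python standard library; the baseline rate (10%), target rate (12%), significance level, and power below are hypothetical inputs, not figures from the deck:

```python
from math import ceil
from statistics import NormalDist  # Python 3.8+ standard library

def ab_sample_size(baseline_rate, min_detectable_rate, alpha=0.05, power=0.80):
    """Approximate per-variant sample size for a two-proportion A/B test.

    Standard normal-approximation formula:
        n = (z_{1-alpha/2} + z_{power})^2 * (p1*q1 + p2*q2) / (p1 - p2)^2
    """
    p1, p2 = baseline_rate, min_detectable_rate
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. detecting a lift from a 10% to a 12% conversion rate
n = ab_sample_size(0.10, 0.12)
print(n)  # roughly 3,800-3,900 visitors per variant
```

The point of computing `n` before launch is that you commit to a stopping rule: run until each variant has seen `n` visitors, then evaluate, rather than peeking until significance appears.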
Tips for Building a Compelling Product Vision by Amazon Sr PM – Product School
- The key elements of a compelling product vision, what’s important and what’s not
- How to come up with a compelling product vision without relying on luck or magic
- How to use a product vision as a mechanism to guide your team
Lean startup, customer development, and the business model canvas – gistinitiative
The document discusses key concepts in lean startup methodology, including building business models focused on customer development rather than business plans, developing minimum viable products to test hypotheses, and using an iterative build-measure-learn process. It provides examples of how startups should focus on building products that solve customer pains and create gains rather than features, and emphasizes conducting customer interviews to gather evidence and test hypotheses about the business model.
Building Better Tech: The Product Manager's Role in Infrastructure & Platform... – Product School
This document discusses the role of product managers in platform and infrastructure engineering. It outlines how product managers of platforms can align technical decisions to business metrics, mediate conflicts, and improve productivity by managing priorities and protecting engineers from distractions. Taking on product management for platforms can lead to more technically interesting work, broader impact, and higher pay compared to feature teams. The document encourages engineering teams without product managers to adopt the role to help communicate strategies and tie efforts to goals.
Product Leadership - from FAANG to Traditional Media by The New York Times SV... – Product School
This talk will explore the differences and similarities in product management practices between technology companies and legacy companies. Drawing on experiences gained from working at a major tech firm and a legacy company, attendees will gain valuable insights into how product management approaches differ across industries. Additionally, the talk will provide guidance on how to navigate the transition between the two, as more non-tech companies embrace product management. This is a must-attend session for product managers seeking to deepen their understanding of the nuances of product management practices in different industries.
Measuring What Matters in Your Product by Amazon Product Leader – Product School
The document discusses how to determine the right product metrics by focusing on outcomes rather than outputs. It recommends setting a North Star Metric to align the team and measure overall product growth. Feature metrics should support the North Star Metric. OKRs and KPIs can provide goals and feedback to track progress towards objectives. Proxy, counter, and leading/lagging indicators can also be used to balance metrics and point to future success or friction. The key is to not just measure but communicate the value of metrics and celebrate wins.
A/B Testing for New Product Launches by Booking.com Sr PM – Product School
This document discusses A/B testing strategies for new product launches. It begins by explaining what A/B testing is and why companies use it. For new products, qualitative data is more important than quantitative data in the early stages. A minimum viable product (MVP) should be launched to create a foundation for A/B testing. Iterative testing can introduce other features to determine the winning variant, and holdouts can measure long-term success. Other validation methods like focus groups and beta testing are also discussed. The key is to qualify feedback before extensive A/B testing and measure performance over the long run.
Product management and its principles – Ankush Goyal
The document discusses key principles of product management. It covers defining the core purpose or vision of a product, understanding customer problems from their perspective through empathy, and focusing on solving people's problems before defining product features. The document also discusses other principles like identifying the different stages in a product's lifecycle from introduction to maturity to decline, and the roles and responsibilities of a product manager in planning, coordinating, and developing a product.
The document discusses testing minimum viable products (MVPs) for startups. It begins by providing examples of products that failed despite large investments and outlines common reasons for failure. It then defines the context of a startup as experimentation to validate business models through frequent customer feedback. The document describes the three stages of a startup and emphasizes that learning is progress. It advocates for building an MVP, which is the fastest way to test business hypotheses with minimum effort. The rest of the document provides examples of how to test MVPs to validate problems, solutions, and product-market fit with low-cost experiments like landing pages, surveys, prototypes, and pre-orders.
How to Get Promoted and Stand Out from Your Peers by Match fmr VP of Product – Product School
Learn how to navigate the informal culture of performance as a product manager, and set yourself up for rapid promotions while avoiding layoffs. This segment is timely for PM community members looking to break through career barriers and thrive in their roles. Join us to gain actionable strategies for success.
Product Roadmaps - Tips on how to create and manage roadmaps – Marc Abraham
The document discusses best practices for creating and managing product roadmaps. It emphasizes starting with a clear product vision and goals focused on solving user problems rather than features. When creating a roadmap, it is important to consider dependencies, risks, and flexibility for changes. Managing stakeholders and updating the roadmap based on feedback and learning are also discussed as critical aspects of effective roadmapping.
The document provides advice for creating a successful startup. It discusses how Mike, an experienced executive, had a great idea for a product but made some key mistakes. It outlines 5 lessons: 1) No business plan survives customer contact. 2) Have a clear business model. 3) Consider alternative models. 4) Treat your model as hypotheses to test. 5) Verify your model before building your company to avoid wasting money. It emphasizes the importance of testing assumptions through customer development and pivoting the model until it is proven.
This document provides an overview of Dan Olsen's background and approach to lean product analytics. It summarizes Olsen's 20 years of experience in product management and consulting. The presentation defines key lean startup concepts like achieving product-market fit, testing hypotheses, and minimizing waste. It also provides models and frameworks for validating product-market fit using both qualitative and quantitative metrics and analyzing user behavior and satisfaction. The document emphasizes using analytics to optimize business results and user experience through iterative learning and improvement.
How to Build a Product Vision by Spotify Product Manager – Product School
In this episode, Matt Williams talks about building a product vision and getting stakeholder buy-in. He also covers 'managing up' and how to navigate within your organization, whilst fostering an understanding of vision and user empathy with engineers.
Feature Prioritization Frameworks by Spotify Sr PM – Product School
Main takeaways:
- Why feature prioritization is important
- Overview of popular prioritization frameworks: RICE, Value vs. Effort, the MoSCoW Method, Kano, Opportunities
- How to use the frameworks
- Tips and tricks
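Of the frameworks listed above, RICE is the easiest to sketch in code: each feature gets a score of (Reach × Impact × Confidence) / Effort, and the backlog is sorted by that score. A minimal example; the feature names and all parameter values are hypothetical, purely for illustration:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach: users affected per period; impact: e.g. 0.25-3 scale;
    confidence: 0.0-1.0; effort: person-months.
    """
    return reach * impact * confidence / effort

# Hypothetical backlog: (name, reach, impact, confidence, effort)
features = [
    ("dark mode",    2000, 0.5, 0.8, 2),
    ("offline sync",  800, 2.0, 0.5, 6),
    ("smart search", 5000, 1.0, 0.8, 4),
]

# Rank the backlog from highest to lowest RICE score
ranked = sorted(features, key=lambda f: rice_score(*f[1:]), reverse=True)
for name, *params in ranked:
    print(f"{name}: {rice_score(*params):.0f}")
```

The division by effort is what distinguishes RICE from a plain value score: a high-reach feature that costs six person-months can rank below a modest feature that ships in two.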
This document discusses creating a product vision board to define the vision for a new product. It recommends identifying the target group, their needs, the product features, the value provided to the target group, competitors and alternatives, and creating a concise vision statement. The product vision board brings clarity to the team's understanding of the product and helps focus development efforts. It is an iterative process that involves mapping features and value to customer needs and refining over time. The end goal is for the board to concisely explain the product vision with a single glance.
What Are the Basics of Product Manager Interviews by Google PM – Product School
Ankit walked through an intro to the Product Manager role, the skills needed, and how the role differs between small and large companies. He wrapped up with some advice that's helped him in his Product Manager interviews over the years.
He gave a structured approach to thinking about what a Product Manager actually does (structured, meaning no "top 10" lists) and what are the skills you need to do well as a Product Manager.
This document provides an overview of how to prepare for product manager interviews. It outlines the main question types as design, strategy, analytical, technical, and behavioral. It then dives deeper into examples for each question type. The document also discusses how to assess interviews, outlines preparation steps such as studying materials and doing mocks, and provides resources for additional learning. The overall goal is to help candidates understand the different question styles and feel prepared for the PM interview process.
The Future of Product Management by Product School Founder & CEO – Product School
This document provides information about product management training and certification from Product School. It advertises their Product Manager, Senior Product Manager, and Product Leader certification programs. It also promotes their corporate training and resources like events, courses, podcasts and reports to help individuals and teams improve their product management skills. A QR code is included to provide feedback on Product School events.
Revolutionizing the Customer Experience: Innovating and Scaling within Enterp... – Product School
During this session, learn insights and best practices from Margaret Ryan, Amex's VP of Emerging Payments Product Management, on delivering meaningful product experiences within a large organization, drawing on Amex’s journey to partner with Google to provide customers with a virtual card number (VCN) autofill experience, which completed its rollout in March of this year.
An introduction to the heart, mind, and soul of Product Management: Customer Obsession, Metrics, and Product Sense. Presented at Product School Bellevue.
How to Correctly Use Experimentation in PM by Google PM – Product School
Main takeaways:
- Common misconceptions and pitfalls in using experimentation
- Best practices on using the scientific method for experimentation
- Evaluating how other experimentation techniques such as Multi-Armed Bandit and Multivariate Testing can help you solve different types of problems
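The Multi-Armed Bandit technique mentioned in the last takeaway can be illustrated with the simplest variant, epsilon-greedy: with probability epsilon you explore a random arm, otherwise you exploit the arm with the best observed rate. A self-contained simulation sketch; the three conversion rates below are hypothetical, and real bandit deployments typically use more sophisticated policies such as Thompson sampling:

```python
import random

def epsilon_greedy(true_rates, epsilon=0.1, trials=10000, seed=42):
    """Epsilon-greedy multi-armed bandit simulation: explore a random arm
    with probability epsilon, otherwise exploit the best-observed arm."""
    rng = random.Random(seed)
    pulls = [0] * len(true_rates)  # times each arm was shown
    wins = [0] * len(true_rates)   # conversions per arm
    for _ in range(trials):
        if rng.random() < epsilon or sum(pulls) == 0:
            arm = rng.randrange(len(true_rates))  # explore
        else:
            # exploit: pick the arm with the best observed conversion rate
            arm = max(range(len(true_rates)),
                      key=lambda a: wins[a] / pulls[a] if pulls[a] else 0.0)
        pulls[arm] += 1
        if rng.random() < true_rates[arm]:  # simulate the user's response
            wins[arm] += 1
    return pulls, wins

# Hypothetical variants with conversion rates unknown to the algorithm
pulls, wins = epsilon_greedy([0.05, 0.11, 0.08])
print(pulls)  # the best arm should end up receiving the bulk of the traffic
```

Unlike a fixed 50/50 A/B split, the bandit shifts traffic toward the winner while the experiment is still running, which is why it suits problems where the cost of showing a losing variant matters.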
Creating a culture that provokes failure and boosts improvement – Ben Dressler
Everyone fails - but not everyone uses failed attempts as a source of learning and improvement. This talk outlines a framework to turn failure into gaining knowledge by understanding IF, HOW and WHY something fails.
Testing the unknown: the art and science of working with hypothesis – Ardita Karaj
Testing what we know, or have a clear understanding of, is relatively straightforward, as is making decisions based on the expected result. But today’s world presents us with the unknown and the ambiguous, which can only be approached by hypothesizing and experimenting – a lot! This requires intentional thinking and a different strategy for observing in context.
This session will uncover how testers are helping their teams and product owners, by basing their testing on the science behind creating hypotheses and running experiments. A testing mindset and probing the context around use cases are some of the most valuable competencies testers bring to the team in order to enable decisions based on data.
Organizing Your First Website Usability Test - Cornell Drupal Camp 2016 - part 4 – Anthony D. Paul
You’ve built a shiny, new Drupal site. You asked your grandma and your client if they like it and they both do. However, you’re lying awake at night wondering if you’re missing something—because you know you’re not the end user. You yearn for actionable feedback.
In this talk, I’ll distill my background in usability research into a how-to framework for taking your site and conducting your first unmoderated usability test. I’ll cover what to look for, best practices in facilitation, tools on the cheap, and how to glean the most from a brief window of time.
Better Living Through Analytics - Strategies for Data Decisions – Product School
Data is king! Get ready to understand how a successful analytics team can empower managers from product, marketing, and other areas to make effective, data-driven decisions.
Louis Cialdella, a data scientist at ZipRecruiter, shared some case studies and successful strategies that he has used at ZipRecruiter as well as previous experiences. The purpose of this data talk was to enlighten people on how to make sure that analysts can successfully partner with other departments and get them the information they need to do great things.
User Research to Validate Product Ideas Workshop – Product School
Learn how to leverage User Research techniques to validate customer demand for new products and features before writing a line of code.
See UX best practices, different user testing approaches (moderated and unmoderated), and how to analyze user flows.
Slides Nis Frome recently used in his discussion with mentees of The Product Mentor.
The Product Mentor is a program designed to pair product mentors and mentees from around the world, across all industries, from start-up to enterprise, guided by the fundamental goals: Better Decisions. Better Products. Better Product People.
Throughout the program, each mentor leads a conversation in an area of their expertise that is live-streamed and available to both mentees and the broader product community.
http://TheProductMentor.com
Organizing Your First Website Usability Test - WordCamp Toronto 2016 – Anthony D. Paul
You’ve built a shiny, new WordPress site. You asked your co-worker and your boss if they like it and they both do. However, you’re lying awake at night wondering if you’re missing something—because you know you’re not the end user. You yearn for actionable feedback. In this talk, I’ll distill my background in usability research into a how-to framework for taking your site and conducting your first unmoderated usability test. I’ll cover why and when you should be running usability tests; how to set research goals and draft a script for them; setting up your lab environment and capturing feedback; and best practices for facilitation, minimizing bias, keeping users on task and gleaning the most from each brief test.
UX STRAT Online 2020: Dr. Martin Tingley, Netflix – UX STRAT
Over the years, the Netflix UI has evolved from a sparse and static webpage into an immersive, video-centric experience tailored to a variety of platforms. In this talk, I’ll describe the simple but powerful framework that Netflix uses to evolve the product experience: we ask our members, through online A/B tests, which of several possible experiences resonate with them. I’ll also describe the steps we are taking to democratize access to experimentation across the company so that we can explore more ideas and identify those that deliver more value to our members.
Usability Testing - 10 Tips For Getting It Right – UsabilityTools
Do you wish to know how to:
- turn your visitors into happy clients?
- create a great product?
- deliver outstanding user experience?
- have customers who love your product, use it and spread the word?
Unlock the potential of usability testing with our tips!
This document discusses challenges in optimization and conversion rate optimization (CRO). It addresses common problems like lack of clear goals, lengthy testing processes, and skills gaps. It proposes potential solutions like establishing structured optimization processes, prioritizing issues, testing creative ideas, and leveraging academic research. The document also introduces the idea of launching an in-house lab and online resources to help others in optimization through tools like eye tracking studies, user experiments, and short summaries of academic articles.
Users are Losers! They’ll Like Whatever we Make! and Other Fallacies – Carol Smith
Presented at CodeMash 2013.
If this sounds familiar it is time to make big changes or look for a new job. Failing your users will only end badly. In this session we look at the assumptions that are all-too-often made about users, usability and the User Experience (UX). In response to each of these misguided statements Carol will provide a quick method you can conduct with little or no resources to debunk these myths.
Weapons of Math Instruction: Evolving from Data-Driven to Science-Driven – indeedeng
Donal McMahon, Director of Data Science at Indeed, presented how to transition from data-driven to science-driven product development. You’ll make better business decisions. It’s provable!
My key tips and tricks for using AI in the Discovery phase of the Double Diamond process.
This was presented in a casual after-work session for Turku Design community in May 2023.
Fail Well, Pivot Fast: Product Experimentation for Continuous Discovery, by Aggregage
This presentation will explore the basics of the scientific method and examine how proper experimental design, multiple hypothesis testing, cohort analysis, and split testing can effectively reduce batch size and lead to validated insights. You'll leave the webinar with a new understanding of how to experiment in a way that generates real insights, not just noise.
How to Use User Science to Your Product's Benefit by XO Group PM, Product School
Successful Product Managers help their organizations identify and build products that solve their users’ needs. The perfect user-product fit is rarely easy. Trained Product Managers can find the right fit consistently with User Science: the craft of understanding user needs and behaviors, identifying which problem to solve, and understanding how and why users react to products. It's this craft that arms Product Managers with the data to make informed decisions.
Research and Discovery Tools for Experimentation (17 Apr 2024), by VWO
You can utilize various forms of Generative Research to deepen your understanding of how people interact with your product or service.
Craig has amassed a vast toolkit of research methods, which he has employed to optimize websites and apps for over 500 companies. He'll share which methods yielded the highest return on investment, identified key customer pain points, and generated the best experiment ideas.
By sharing the top inspection methods essential for our work, Craig will provide advice for each technique. Anticipate insights on driving experiment hypotheses from research, a list of essential toolkit components for tomorrow, and additional resources for further reading.
This document discusses the importance of experimentation in moving ideas to business. It argues that as humans, we tend to overestimate success and inflate impacts, so experimentation helps overcome these biases. Validated learning through experimentation measures effects to inform decisions. Experiments should test falsifiable hypotheses and focus on speed to facilitate faster learning. Different testing methods are outlined from paper prototypes to A/B testing digital versions. Key principles for effective experimentation include making tests simple, close to reality, prioritized, and statistically significant to either validate hypotheses or identify needed pivots.
This document provides information about prototyping and experiments for product development. It discusses the importance of validating solutions with customers through prototyping to ensure problems are being solved. Low-fidelity prototyping techniques like paper prototyping are recommended for initial validation before advancing to higher fidelity digital prototypes. User testing of prototypes is highlighted as a way to get feedback on designs and learn about customers. The document also covers topics like minimum viable products, experimentation, and new technologies that could be explored.
Brent Summers, Director of Marketing at Digital Telepathy: Using Data and Design to Drive Your Business (June 25, 2015)
1. Data is All Around You
- Quantitative data: sales reports, application performance data, search engine optimization data, web analytics
- Qualitative data: customer surveys, customer interviews (more info: goo.gl/Jeol7v), personas (more info: goo.gl/UW8mgQ)
- Observation: heat mapping & scroll mapping, user behavior
2. Data Already Informs Design
- A/B testing: optimize for conversions
- Eye tracking: people read in F-shaped patterns; people look where people look
- Vertical rhythm: there's a reason paper is ruled
- Color psychology: what does your brand color say about your business?
- The golden ratio: 1.618
3. Consider the Entire User Journey
- Identify the friction: evaluate sentiment/friction at each stage of the user journey
4. Designing for Business Objectives
- Identify the friction: where can you make the biggest impact?
- User journey, Consideration stage:
  - Landing pages: incremental improvements can drive exponential results
  - Better social sharing: social sharing + content performance insights
  - Animations: scroll is the new click
  - Change language: try different value propositions, calls to action, etc.
  - Change layout: use behavior patterns to drive decisions
- User journey, Conversion stage (the act of purchasing a product or service through self-service or a sales process):
  - Content marketing: share knowledge to establish trust
  - Onboarding: step-by-step walkthroughs for new users
  - Get the first click: break through psychological barriers
- User journey, Retention stage (post-purchase: activities that drive further product engagement, adoption, and upgrades):
  - Reduce cognitive load: hide data until a user requests it
  - Simplify your user interface for experienced users
  - Testimonials: "Who doesn't love social proof?" - Brent Summers
5. Prioritizing Your Backlog
- Keep track of experiments; use a formula to assess which experiments to do first
- Sample experiments: which of these experiments should be implemented? (e.g., paid conversions)
- What does the data tell you? Identify where design can make the biggest impact
- Rounding out the process: your implementation method is unique; measure the results; repeat
6. Measuring Success
- Good design is great for business: design-led firms outperform the S&P 500 by 228%
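The "use a formula to assess which experiments to do first" advice in the backlog-prioritization section above usually means a simple scoring model. The deck does not name a specific formula, so ICE (Impact x Confidence x Ease) is an assumption here, and the backlog items below are invented for illustration:

```python
def ice_score(impact: float, confidence: float, ease: float) -> float:
    """ICE prioritization: each factor scored 1-10; higher totals run first."""
    return impact * confidence * ease

# Invented experiment backlog: (name, impact, confidence, ease)
backlog = [
    ("New landing-page hero", 8, 6, 7),
    ("Checkout testimonial block", 5, 8, 9),
    ("Full onboarding redesign", 9, 4, 2),
]

# Sort the backlog so the highest-scoring experiment comes first
ranked = sorted(backlog, key=lambda e: ice_score(*e[1:]), reverse=True)
```

Any multiplicative score like this rewards cheap, well-understood experiments over big risky bets, which is usually the point of formula-driven prioritization.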
Similar to The Scientific Method of Experimentation by Google PM:
Webinar: The Art of Prioritizing Your Product Roadmap by AWS Sr PM - Tech, Product School
The document discusses prioritizing a product roadmap by selecting parameters, scoring features, and mapping them on a value vs effort framework. It recommends clearly defining roadmap objectives, choosing a customizable framework like value vs effort, selecting parameters like revenue and customer needs for scoring features, and categorizing investments as strategic, easy wins or maintenance based on the scoring to effectively set the product direction.
Harnessing the Power of GenAI for Exceptional Product Outcomes by Booking.com...Product School
This document discusses harnessing the power of generative AI to improve product outcomes. It describes generative AI as a type of machine learning that allows computers to generate new and original ideas, like a creative chef using knowledge gained from recipes. The author discusses opportunities for generative AI across major business areas like demand generation, productivity, and products. Specific opportunities for Booking.com are explored, like better understanding customer intent and personalized recommendations. The author's vision is for systems that understand users in their natural language and help shape trip intent in a dynamic way that best serves customer needs.
Relationship Counselling: From Disjointed Features to Product-First Thinking ...Product School
The document discusses how Adyen improved its products by shifting from disjointed feature development to product-first thinking. Previously, Adyen had too many OKRs, complex metrics, and local success metrics that led to isolated components and fragmented experiences. It moved to fewer prioritized OKRs, global metrics, and end-to-end product management. This unified its offerings, improved the customer experience, and increased full funnel conversion rates by up to 300 basis points through its integrated risk, authentication, and optimization products working holistically.
Launching New Products In Companies Where It Matters Most by Product Director...Product School
This document discusses lessons learned from launching new products at large companies. It outlines three key lessons: 1) Figure out a clear strategic "why" for the new product that aligns with the company's overall strategy. 2) Really listen to stakeholders across the organization to understand their needs. 3) Assemble a cross-functional team that can get support and input from different parts of the organization, but isn't too large that it becomes unwieldy. The document emphasizes the importance of understanding strategic context, stakeholder needs, and effective team composition for successful new product launches at established companies.
Revolutionizing The Banking Industry: The Monzo Way by CPO, Monzo (Product School)
Monzo is revolutionizing the banking industry by taking a customer-first approach called "The Monzo Way." This involves starting from first principles, building products through constant dialogue with users, and piloting internally before growth. Monzo gathers extensive customer feedback and has conducted over 500 research interviews and reports. It strives for industry-leading customer service and uses this research to develop innovative new products for investments and home ownership tailored to customer needs. Monzo's community-focused approach has helped it become the UK's highest rated bank for overall service quality for four years running.
Synergy in Leadership and Product Excellence: A Blueprint for Growth by CPO, ...Product School
This document discusses synergy between leadership and product excellence. It provides a blueprint for growth with three pathways: 1) an agile, retrospective culture, 2) rapid learning and experimentation, and 3) transparency and feedback culture. Ultimately, career fulfillment comes from aligning skills and passions, whether as an individual contributor or manager, by embracing what brings joy and taking a holistic approach to growth.
Act Like an Owner, Challenge Like a VC by former CPO, Tripadvisor (Product School)
The document discusses how product teams can act like owners and investors to maximize returns. It recommends following three principles: 1) The investment principle - treat time as an investment that should generate ROI. 2) The capping principle - limit ambitions based on discovery. 3) The portfolio principle - allocate resources across a portfolio of high-risk/high-reward, medium-risk, and low-risk/low-hanging fruit initiatives based on their potential ROI. Managing product work like a VC portfolio can help product teams act like owners and challenge stakeholders to seek maximum returns.
The Future of Product, by Founder & CEO, Product School
Product teams will need to contribute directly to revenue growth, not just user value. They will sit at the intersection of technology and business. Artificial intelligence will allow product teams to do more with less people by automating tasks and providing insights. To succeed in this new era, companies must empower their product teams with the right skills and integrate them closely with other functions like marketing, sales, and customer success.
Webinar: How PMs Use AI to 10X Their Productivity, by Product School EiR
Explore AI tools hands-on and smoothly integrate them into your work routine. This practical experience is here to empower you, offering insights into the mindset of successful Product Managers. Learn the skills to become a more effective Product Manager.
Main Takeaways:
Hands-On AI Integration:
Learn practical strategies for integrating AI tools into your workflow effectively.
Mindset Insights for Success:
Gain valuable insights into the mindset of successful Product Managers, unlocking the secrets to their achievements.
Skill Empowerment for Growth:
Acquire essential skills that empower your evolution toward becoming a more effective and impactful Product Manager.
Webinar: Using GenAI for Increasing Productivity in PM by Amazon PM Leader, Product School
In this webinar, you will learn how AI can take work off your plate, allowing you to focus on deep thinking or critical work. Cut out the drudge work in Product Management and get more out of your day.
Learnings:
Improve workflows that are high frequency - "manual tasks"
Increase the quality of output that has high importance - "brainy tasks"
Put GenAI to work today
Unlocking High-Performance Product Teams by former Meta Global PMM, Product School
Main Takeaways:
- High-Performing Team Dynamics: You’ll gain insights into fostering high-performance teamwork.
- Unveiling Team Personas: You’ll learn about different personas in the team and how to foster these differences.
- Decoding the Team Needs x Productivity Equation: You’ll learn about different team needs and how they correlate with engagement and productivity.
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you..., by Zilliz
Join us to introduce Milvus Lite, a vector database that can run on notebooks and laptops, share the same API with Milvus, and integrate with every popular GenAI framework. This webinar is perfect for developers seeking easy-to-use, well-integrated vector databases for their GenAI apps.
Securing your Kubernetes cluster: a step-by-step guide to success! By KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Essentials of Automations: The Art of Triggers and Actions in FME, by Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Observability Concepts EVERY Developer Should Know (DeveloperWeek Europe), by Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
UiPath Test Automation using UiPath Test Suite series, part 5, by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD within UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
TrustArc Webinar - 2024 Global Privacy Survey, by TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
20 Comprehensive Checklist of Designing and Developing a Website, by Pixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
What do a Lego brick and the XZ backdoor have in common? By Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that they are both building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She was an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations, and training. She previously worked on LibreOffice migrations and training for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko, she cultivates her curiosity about astronomy (the origin of her nickname, deneb_alpha).
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best-practices guide outlines steps users can take to better protect their personal devices and information.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! By SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Full-RAG: A modern architecture for hyper-personalization, by Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Enhancing adoption of Open Source Libraries: a case study on Albumentations.AI, by Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ..., by James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
14. What is user research?
A systematic approach to discovering users' aspirations, goals, tasks, needs, pain points, and information and interaction requirements.
User research grounds, verifies, and validates what a team builds.
15. Where does user research fit into the product?
Throughout product development:
- Foundational research
- Iterative research
- Evaluative research
16. Dimensions of user research methods
1. Attitudinal (what people say) vs. behavioral (what people do)
2. Qualitative (answers why) vs. quantitative (answers how much/how many)
3. Context of use: natural or near-natural, scripted, not using the product, or a hybrid of the above
Source: https://www.nngroup.com/articles/which-ux-research-methods/
Experiments
18. What is an experiment?
An experiment is a way to test a hypothesis about the product. An experiment may also refer to the gradual launch of a new feature.
Note: Tests, while they are an important part of the software development journey, are not experiments, since you know in advance the result you expect.
20. "I’m a PM. I know what will happen."
Humans are terrible at making predictions:
1. Hindsight bias
2. Observational selection bias
3. Projection bias
4. Anchoring bias
...and hundreds of other cognitive biases.
21. "Doing a pre/post analysis is enough"
[Chart: Brazil search traffic, June 2014]
27. Fundamentals of experiment design
The scientific method is an empirical method of acquiring knowledge: the systematic observation, measurement, and experimentation of a hypothesis.
1. Observation → 2. Hypothesis → 3. Design → 4. Experiment → 5. Analysis → 6. Prove/Reject
28. PM flavor of the scientific method
0. Ask a question → 1. Observation → 2. Hypothesis → 3. Design → 4. Experiment → 5. Analysis → 6. Prove/Reject → 7. Communicate results
29. 0. Ask a question
- How can I increase usage of my product?
- How can I increase revenue attributed to my product?
- How can I increase user happiness?
- How can I simplify code without changing metrics?
- How can I affect click behavior?
30. 1. Observation: do background research
- What others have done before: Are you doing something different? Did something change since the previous attempt?
- Quantitative data: behavioral metrics, surveys, trends
- Qualitative data: perceptions, attitudes, assumptions, preferences
31. 2. Develop a hypothesis
A hypothesis is a (1) testable (2) explanation for a phenomenon. The goal of an experiment is to prove or disprove the hypothesis.
AVOID running experiments just to see what happens or to gather data with no hypothesis. Use other user research methods and have a POV.
32. 2. Develop a hypothesis: example
1. Ask a question
   a. How can I increase sales for Prime users on the mobile app?
2. Do background research
   a. Users had trouble finding filters on mobile
   b. Users get overwhelmed with too many results
   c. Decreasing options simplifies decision-making
   d. BUT, past experiments limiting results had negative results
33. 2. Develop a hypothesis
Hypothesis: Prime users will spend more $ if they can easily narrow their search results to Prime products.
Is it valid?
- Is it testable?
- Does it have an explanation?
- Do I have an educated guess?
34. 3. Design the experiment
Hypothesis: Prime users will spend more $ if they can easily narrow their search results to Prime products.
Experiment design:
1. Show a Prime toggle on the navigation bar for all US Prime users on the iOS app
2. Toggle off by default
3. No changes to:
   a. Backend algorithms
   b. Logic that decides when to enable the Prime filter
   c. Current Prime filter behind the filter button
35. 3. Design experiment
Triggering criteria
● Who: US prime users using iOS app
● When: If results include a prime product
● How: Session-based
Duration
● 2 weeks
Launch criteria (success metric)
● Statistically significant increase in revenue
● No increase in latency
37. 5. Analyze the data
Results
+2.5% Revenue [1.9%, 3.1%], p=0.05
38. 5. Analyze the data
1. Statistical significance is the likelihood that the numeric difference
between a control and treatment outcome is not due to random chance
2. Null hypothesis states there is no significant difference between control and
treatment, any observed difference is due to sampling or experimental error
3. P-value evaluates how well the sample data supports the argument that the
null hypothesis is true. A low p value suggests you can reject the null hypothesis
4. Confidence interval is a range of values (lower and upper bound) that is
likely to contain an unknown population parameter
40. 6. Draw conclusions
Hypothesis:
Prime users will spend more $ if they can easily narrow
their search results to prime products
1. Validate data
2. Craft a story
3. Evaluate results
a. Arguments in favor and against it
b. Key observations and durable learning
c. Next steps
44. Choose the right metrics
1. Think both short-term and long-term
2. Use metrics that matter
3. Align on the success metrics beyond your
own team
45. Be a good wannabe scientist
1. The scientific method is not a suggestion
2. Be suspicious if you didn’t predict a specific
result in advance
3. The more you slice and dice your data, the
more false positives you’ll get
4. Lean against rolling out flat experiments,
unless there are valid reasons
46. Create and follow templates and processes
1. Set up an intake process to get ideas from
everyone
2. Establish a pre and post-experiment design
template
3. Document all learnings and make them widely
available
Hello everyone, it’s a pleasure to be here.
My name is Ruben Lozano and I’m a Product Manager at Google Maps.
Before Maps, I was a PM at Google Cloud, Amazon, and Microsoft.
And today, I want to talk to you about using the scientific method when conducting experiments as product managers
From my experience, when people talk about conducting experiments in tech--they talk about A/B testing.
For the few of you who may not be familiar with A/B testing, at its most basic, it is a way to compare two versions of something to figure out which of the two performs better.
There are other more advanced methodologies of experimentation, like Multivariate Testing or Multi-armed Bandit, but I won’t be covering them during this presentation.
But in general, experiments are one of many methods within your product management toolkit for conducting user research when building products.
That is why I want to briefly talk to you about user research
So you understand when it is a good idea to use experiments compared to other research methodologies.
User research is a systematic approach to discovering users’ aspirations, goals, tasks, needs, pain points---you name it.
To me, it is that magical component that helps you ground, verify, and validate what you and your team build.
Research fits in every phase of product development
For example, foundational research usually starts before design and development, but I encourage you to use it even after your product has been launched. Examples of foundational research are diary or ethnographic studies; these help you build empathy with people, uncover opportunities, and inform your overall product strategy and direction.
Iterative research is commonly used when you have already identified the problem you want to solve, and you may want to conduct an in-lab usability study to gather user input to direct which path your solution should focus on.
Experimentation fits into evaluative research. In other words, you use it when your product is done or almost done, and you want to improve it.
Experiments will provide you rich data--but not in every dimension.
Experiments provide “behavioral” data, in other words--what people do. They will not provide attitudinal data--like how people feel, what they want, or their aspirations.
Experiments provide “quantitative” data, in other words--they answer “how much” or “how many”---but not exactly “why” users do what they do.
And finally, experiments provide data from a natural or near-natural context. In other words, you need to have a product already in the wild to collect this data accurately.
With that in mind, let’s define an experiment.
An experiment is a way to test a hypothesis about your product.
At Google or Amazon, experiments may also refer to the gradual launch of a new feature.
For this talk, I will focus only on the first kind, live experiments.
It is important to note that tests are not experiments: in a test, you know in advance the result you expect.
So why run live experiments?
Most of the time, you already built the feature. You did user research, you conducted usability studies.
You are the PM--you are smart. You know what will happen, right?
But let me tell you--humans are terrible at making predictions.
Too soon?
I know. The worst part is that our own mind tricks us with multiple cognitive biases. For example, hindsight bias.
I am confident you, or many people you know, say that they deeply knew the results of the 2016 election. So they feel they are good at making predictions, but they are not. We are not. The same happens with product. We don’t always know.
So what about a pre/post analysis? You already built the feature. Launch it and see what happens.
But the world is complicated. Let me give you an example.
This graph roughly shows Google Search traffic over time
The Google Search team released a feature right when you see a big drop in Searches.
Just by looking at pre/post, the team should have been concerned--but they were not. Why?
Let me give you a hint. This data comes from Brazil in June 2014. Any ideas?
Yes. The World Cup. People were not searching, they were watching soccer--it was not your feature. Thank you A/B experiments.
This is the beauty of A/B testing. It isolates the impact of just the product changes you deploy.
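The isolation comes from randomized assignment: hashing each user into control or treatment means the two groups differ only in the change you shipped, so the World Cup hits both equally. A minimal sketch of deterministic hash-based bucketing (the function and experiment names are illustrative, not any company's real infrastructure):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, treatment_pct: float = 0.5) -> str:
    """Deterministically bucket a user into control or treatment.

    Hashing (experiment, user_id) gives each user a stable, effectively
    random bucket, so external events affect both groups equally.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "treatment" if bucket < treatment_pct else "control"
```

Because the hash is deterministic, a user sees the same variant on every session, which keeps their experience consistent for the whole experiment.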
Experiments help you understand if something is a good idea.
For example, you decide to add images to your search results. It seems like a better UX, people like images.
But if you think deeply--will it be better? What if the site gets slower, what if you show fewer results in the same screen space, what if the most relevant result doesn’t have an image? Not that straightforward--but if you run an A/B test, you can measure its impact.
A/B tests are very useful; they can help you
Iterate on a good idea
Remove features from your product
Measure impact of changes.
At some point, you may even feel they are magical.
But it’s not true. They are not magical.
The A/B test concept is very easy to understand and there are tools that make it easy to implement.
Ergo, they are overused and used incorrectly.
And as Maslow wisely said: “if the only tool you have is a hammer, you will treat everything as if it were a nail.”
This is when we bring science.
Conducting experiments means doing science--and science follows a very strict methodology.
If you don’t, you are doing pseudoscience.
Not sure about you, but I don’t trust pseudoscience--not even “directionally” or as a “better than nothing” outcome.
To conduct a sound experiment, we should follow the scientific method. Yes, the one you learned a long time ago.
It follows 6 steps
Observe the world
Formulate a hypothesis
Design an experiment
Run an experiment
Analyze that experiment
And prove or reject your hypothesis
For product management, it is basically the same. I would just add two steps.
First, you may have a specific question you want to answer
And last, you should invest in communicating your results.
Let’s start by asking questions
And these questions could be like--how can I increase revenue or usage of my product
Or something more philosophical--like---how can I increase happiness or make people love my product?
Then, you move to the observation step, do background research.
First, look at what others have done before, when, and why--has something changed--should we try it again?
Then, look at quantitative and qualitative data from all those user research methods you conducted. What can you learn about your product?
And after that, you develop a hypothesis
A hypothesis is a testable explanation for a phenomenon. It has two parts
Testable: you should be able to measure it
Explanation: you should have a story that explains it
Before you run an experiment, you actually need to have an educated guess of what you think will happen.
This is required because the goal of an experiment is to prove or disprove a hypothesis.
As an example, let’s use an experiment I conducted at Amazon as the PM of the mobile app.
I asked the question: How can I increase sales of Prime users on the mobile app?
I did background research: through different data sources I found that users get overwhelmed with too many results, and that users had trouble finding filters on mobile. I also found that past experiments limiting the number of results had negative outcomes, along with psychological research on how decreasing options helps decision-making.
So… let’s try to develop a hypothesis
My hypothesis is that Prime users will spend more $ if they can easily narrow their search results to prime products
Let’s check the hypothesis
Is it testable? Yes, I can measure changes in revenue
Does it have an explanation? Yes, I am saying the change will happen because Prime users will be able to easily narrow their search results to prime products
Do I have an educated guess? Yes, I am saying revenue will increase
Based on that hypothesis, I designed the experiment in this way
Show a prime toggle on the navigation bar for all US prime users on the iOS app
No changes to algorithms
No changes to when the prime filter is enabled
No changes to the prime filter within the filter menu
Toggle off by default
Then, you define the triggering of your experiment
Who is going to see it: US Prime users using the iOS app
When: When results include a prime product
How: Session-based--it means each session is a data point
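The who/when triggering rules above can be sketched as a simple eligibility check (the class and field names are hypothetical, just to make the criteria concrete):

```python
from dataclasses import dataclass

@dataclass
class SearchContext:
    country: str
    platform: str
    is_prime_member: bool
    results_include_prime: bool

def is_triggered(ctx: SearchContext) -> bool:
    """True when the session qualifies for the experiment:
    US Prime users on iOS whose results include a Prime product."""
    return (ctx.country == "US"
            and ctx.platform == "iOS"
            and ctx.is_prime_member
            and ctx.results_include_prime)
```

Only triggered sessions are counted as data points; everyone else sees the unchanged product and stays out of the analysis.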
Duration is two weeks.
In most consumer products, you test in whole-week increments, as user behavior on a Tuesday and on a Sunday is drastically different.
Be careful which 2 weeks--avoid experimenting on holidays or on anything that could disrupt regular user behavior
Consider whether the first 2 weeks are actually the best. Sometimes you could have features with a novelty effect--in other words, their impact can wear off over time; or with a learnability effect--users require time to adapt
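One way to sanity-check a two-week duration is to estimate the required sample size first. Here is a rough sketch using the standard two-proportion sample-size formula; the 5% base rate, 2.5% lift, and 50k eligible sessions per day are assumed numbers for illustration, not figures from the talk:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(base_rate: float, rel_lift: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate sessions needed per arm to detect a relative lift
    in a conversion rate with a two-sided two-proportion z-test."""
    p1 = base_rate
    p2 = base_rate * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a 2.5% relative lift on a 5% conversion rate (assumed numbers)
n = sample_size_per_arm(0.05, 0.025)
days = math.ceil(n / 50_000)  # with ~50k eligible sessions/day per arm (assumed)
```

If the math says you need three weeks of traffic, running for two and hoping is exactly the kind of statistically flawed shortcut to avoid.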
Launch criteria
This is when you define your success metric
I won’t go over details here because it could be its own session
But whatever you decide on your experiment duration or launch criteria, don’t change it after the experiment starts.
Why? To prevent data manipulation or the perception of data manipulation.
Many experiment owners will be tempted to stop or keep running an experiment, or change the narrative of success, to fit their own agenda.
So you run the experiment. Here you see the only difference.
Two weeks pass. You get the results, and they look something like that
Increase of +2.5% in revenue with a confidence interval from 1.9% to 3.2% and a p-value of 0.02
So--how should you read this?
There are four concepts that are important to understand.
Statistical significance. That is the likelihood that the numeric difference between a control and treatment outcome is not due to random chance.
In other words, most of the time you want your results to be statistically significant.
Null hypothesis. The null hypothesis says that there is no significant difference between control and treatment, and that any observed difference is due to sampling or experimental error.
In other words, most of the time you want to reject the null hypothesis--as you expect a difference between control and treatment.
P-value evaluates how well the sample data supports the argument that the null hypothesis is true. A low p value suggests you can reject the null hypothesis.
In other words, most of the time you want a low p-value
Confidence interval is a range of values (lower and upper bound) that is likely to contain an unknown population parameter
If we look at our data, we will see that our result is significantly positive, as the confidence interval is entirely on the positive side. So you can assume the metric will increase
If the full confidence interval were on the negative side, you could assume the metric will decrease
If the confidence interval crosses zero, you don’t have enough data to know whether your metric will increase or decrease.
And finally, there is an inconclusive result known as “flat”, where the confidence interval crosses zero but its lower bound stays above a threshold called “practical significance”.
Let’s say you put that threshold at -0.5%. This means that you are ok losing up to 0.5% of revenue when launching your feature. Put another way: “do no harm” experiments do not exist--you just need to define how much harm you are ok with.
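These four concepts, plus the practical-significance floor, can be wired into one small calculation. This is a sketch using a standard two-proportion z-test on conversion counts (the input numbers in the usage example are made up; real revenue metrics would need a different test):

```python
import math
from statistics import NormalDist

def ab_result(conv_c: int, n_c: int, conv_t: int, n_t: int,
              alpha: float = 0.05, practical: float = -0.005):
    """Two-proportion z-test: p-value, confidence interval on the
    absolute difference, and a verdict using a practical-significance
    floor (e.g. -0.5% means 'harm up to 0.5% is acceptable')."""
    p_c, p_t = conv_c / n_c, conv_t / n_t
    diff = p_t - p_c
    se = math.sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
    z = diff / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    lo, hi = diff - z_crit * se, diff + z_crit * se
    if lo > 0:
        verdict = "significant increase"
    elif hi < 0:
        verdict = "significant decrease"
    elif lo > practical:
        verdict = "flat (within acceptable harm)"
    else:
        verdict = "inconclusive"
    return p_value, (lo, hi), verdict

# e.g. 100k sessions per arm, 5.0% vs 5.3% conversion (made-up numbers)
p, ci, verdict = ab_result(5000, 100_000, 5300, 100_000)
```

The verdict branches mirror the four cases above: interval fully positive, fully negative, crossing zero but above the harm floor, or genuinely inconclusive.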
Also, “leaning positive” or “leaning negative” outcomes do not exist. If you hear someone using them, make sure they take some statistics courses.
And finally, it’s time to draw conclusions
First, validate the data.
Do the numbers seem off, or are they too good to be true?
Then, craft a story.
Use the experiment data but also all your previous data. Does it make sense? Does it prove or reject your hypothesis?
And write it down. I recommend you:
Write arguments for or against launching the feature based on your pre-defined launch criteria and other metrics you were tracking.
Record any observations and learnings.
Write down next steps: will you do another iteration? Will you expand to other markets?
And after you capture everything--share it. Share successful and failed experiments.
Not only because sharing is caring--but because these insights are very helpful. Even to people who were not involved at all in the experiment.
And as one of my heroes would say, Isaac Asimov, “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka‘ but ‘That’s funny…’”