Moneyball and the Science of
Building Great Testing Teams
Peter Varhol
© 2011 Seapine Software, Inc. All rights reserved.
Agenda
What is Moneyball?
What does this have to do with testing?
Bias and its variations
Applying Moneyball to testing
Building great testing teams
Summary and questions
Moneyball is About Baseball
Fielding
Pitching
And Especially Offense
But It’s Also About Building Great
Teams
Oakland Had a Problem
• There are rich teams and there are poor teams, then
there's fifty feet of crap, and then there's us.
How Do You Build Great Teams?
• Baseball experts didn’t know a number of things
• Getting on base is highly correlated with winning
games
• Pitching is important but not a game-changer
• Fielding is overrated
• In general, data wins out over expert judgment
• Bias clouds judgment
What Does This Have to Do With
Testing?
• We maintain certain beliefs in testing practice
• Which may or may not be factually true
• That bias can affect our testing results
• How do bias and error work?
• We may be predisposed to believe something
• That affects our work and our conclusions
What Does This Have to Do With
Testing?
• We may test the wrong things
• Not find errors, or find false errors
• We may do too many testing activities by rote
• This means we aren’t really thinking about what we
are testing
• We may work too hard and become fatigued
Let’s Consider Human Error
• Thinking, Fast and Slow – Daniel Kahneman
• System 1 thinking – fast, intuitive, and sometimes
wrong
• System 2 thinking – slower, more deliberate, more
accurate
Consider This Problem
• A bat and ball cost $1.10
• The bat costs one dollar more than the ball
• How much does the ball cost?
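This is Kahneman's classic problem: the fast System 1 answer is ten cents, but a line of algebra shows the ball actually costs five cents. A minimal check (not part of the original slides):

```python
# bat + ball = 1.10 and bat = ball + 1.00
# Substituting: (ball + 1.00) + ball = 1.10  =>  2 * ball = 0.10
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
```

If the ball cost ten cents, the bat would cost $1.10 and the pair $1.20 — the intuitive answer fails the constraint it was meant to satisfy.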
Where Does Error Come In?
• System 1 thinking keeps us functioning
• Fast decisions, usually right enough
• Gullible and biased
• System 2 makes deliberate, thoughtful decisions
• It is in charge of doubting and unbelieving
• But is often lazy
• Difficult to engage
Where Does Error Come In?
• What are the kinds of errors that we can make, either
by not engaging System 2, or by overusing it?
• Priming
• Cognitive strain
• Halo effect
• Heuristics
• Regression to the mean
The Role of Priming
• System 1 can be influenced by prior events
• How would you describe your financial situation?
• Are you happy?
• People tend to be primed by the first question
• And answer the second based on financial concerns
The Role of Priming
• If we are given preconceived notions, our instinct is to
support them
• We address this by limiting advance advice and opinions
• “They were primed to find flaws, and that is exactly
what they found.”
The Role of Cognitive Strain
• Ease can introduce errors
• When dealing with the familiar, we accept System 1
• Strain engages System 2
• We have to think about the situation
• And are more likely to draw the correct conclusion
But . . .
• We tire of cognitive strain
• System 2 makes mistakes when in continual use
• System 1 responds
• Often in error
The Halo Effect of Thinking
• Favorable first impressions influence later judgments
• We want our first impressions to be correct
• Provides a simple explanation for results
• Cause and effect get reversed
• A leader who succeeds is decisive;
the same leader who fails is rigid
The Halo Effect of Thinking
• Controlling for halo effects
• Facts and standards predominate
• Resist the desire to try to explain
The Role of Heuristics
• We unconsciously form rules of thumb
• That enable us to quickly evaluate a situation and
make a decision
• No thinking necessary
• System 1 in action
The Role of Heuristics
• Sometimes heuristics can be incomplete or even wrong
• And we make mistakes
The Anchoring Effect
• Expectations play a big role in results
• Suggesting a value ahead of time significantly
influences our prediction
• It doesn’t matter what the value is
Regression to the Mean
• We seek causal reasons for exceptional performances
• But most of the time they are due to normal
variation
Regression to the Mean
• “When I praise a good performance, the next time it’s
not as good.”
• “When I criticize a poor performance, it always
improves the next time.”
• But achievement = skill + luck
• Praise or criticism for exceptional
performances won’t help
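The "achievement = skill + luck" point can be demonstrated with a small simulation (illustrative, not from the talk): model each outing as constant skill plus random luck, then look at the outing that immediately follows an exceptional one. It drifts back toward mean skill whether or not anyone praised or criticized in between.

```python
import random

random.seed(1)

# Performance model from the slide: achievement = skill + luck.
skill = 100.0
performances = [skill + random.gauss(0, 10) for _ in range(10_000)]

# Threshold for the top 5% of outings.
threshold = sorted(performances)[int(0.95 * len(performances))]

# For every exceptional outing, record the very next one.
exceptional = [p for p in performances[:-1] if p >= threshold]
followups = [performances[i + 1] for i in range(len(performances) - 1)
             if performances[i] >= threshold]

avg_exceptional = sum(exceptional) / len(exceptional)
avg_followup = sum(followups) / len(followups)

# The follow-up regresses toward mean skill (about 100), because the
# luck component is independent from one outing to the next.
print(f"exceptional: {avg_exceptional:.1f}, next outing: {avg_followup:.1f}")
```

The flight-instructor anecdote on this slide falls out directly: the outing after praise looks worse and the outing after criticism looks better, with no causal role for the feedback at all.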
Regression to the Mean
• We are terrible at forecasting the future
• Picking team members
• Estimating testing times
• But are great at explaining the past
• She was a great hire
Regression to the Mean
• But we believe in the value of experts
• Even when those experts are often wrong
• And we discount algorithmic answers
• Even when they have a better record than experts
• We take credit for the positive outcome
• And discount negative ones
Regression to the Mean
• Are expertise and expert intuition real?
• Yes, under certain circumstances
• A domain with largely unchanging rules and
circumstances
• Years of work to develop expertise
• But even experts often don’t realize their
limits
And About Those Statistics
• We don’t believe they apply to our unique
circumstances!
• We can extrapolate from the particular to the general
• But not from the general to the particular
Thinking About Testing
Thinking About Testing
• We don’t know the quality of an application
• So we substitute other questions
• Number and type of defects
• Performance or load characteristics
Thinking About Testing
• Those we can answer
• But do they relate to the question on quality?
• It depends on our definition of quality
• We could be speaking different languages
• Quality to users may be different
Thinking About Testing
• Back to System 1 and System 2 thinking
• System 1 lets us form immediate impressions
• Helps us identify defects
• But often fails at the bigger picture
• System 2 is more accurate
• But requires greater effort
Thinking About Testing
• With simple rote tests, System 1 is adequate
• The process is well-defined
• Exploratory testing engages System 2
• Exploratory testing is a good change of pace
• Too much exploratory testing will wear you out
Lessons for Testing
• Decorrelate errors
• Don’t let biases become common
• Identify the possible sources of bias
• Anchor
• Halo
• Priming
Lessons for Testing
• Recognize and reduce bias
• Preconceived expectations of quality will influence
testing
• Even random information may affect results
Lessons for Testing
• Automation is more than simply an ROI calculation
• It reduces bias and team errors
• Workflow, test case execution, and defect tracking
can especially benefit
Lessons for Testing
• Formulas versus judgments
• Formulas usually win
• Except . . .
Lessons for Testing
• If the expert has the same data as the formula
• The expert can add value
• That value sometimes produces a better result
Lessons for Testing
• We estimate badly
• We assume the best possible outcomes on a series
of tasks
• Past experience is the best predictor of future
performance
• Use your data
• But add value after the fact
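One way to read "use your data" is reference-class estimation: estimate from the distribution of similar past tasks instead of summing best-case guesses. A sketch with hypothetical numbers (not from the talk):

```python
# Hypothetical history of how long similar test passes actually took (hours).
past_durations = [6, 8, 7, 12, 9, 15, 8, 10]

# Optimistic plan: assume the best observed case for each of three tasks.
optimistic = 3 * min(past_durations)

# Data-driven plan: budget the historical median per task.
median = sorted(past_durations)[len(past_durations) // 2]
data_driven = 3 * median

print(optimistic, data_driven)  # 18 vs 27 hours
```

The gap between the two numbers is the planning-fallacy correction; expert judgment then adds value by adjusting the data-driven figure for what is genuinely different this time.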
Lessons for Testing
• Overlooking regression to the mean is a central fallacy
• There will be variation in team members and
performances
• These variations should not be assigned cause
How Can We Build a Great Team?
How Do You Build Great Teams?
• You minimize error in judgment
• Recognize and reduce bias
• You keep people sharp by not continually stressing
them
• Overwork can make thinking lazy
How Can We Build Great Teams?
• Choose great testers
• Choose team members who exhibit the
characteristics of great testers
• Not necessarily those whose resume matches the
job description
How Can We Build Great Teams?
• What are the characteristics of great testers?
• Mix of System 1 and System 2 thinking
• Creativity
• Curiosity
• Willingness to question, and question again
• Ability to see the big picture
• Focus and perseverance
• Team player, but able to work individually
How Can We Build Great Teams?
• Who exhibits those characteristics?
• The usual suspects
• But who else?
• Scientists
• Marathon Runners
• Fashion Designers
How Can We Build Great Teams?
• Where do we find them?
• Colleges
• Civic Organizations
• Sports Leagues
How Can We Build Great Teams?
• Interview Questions:
• Tell me about a problem and how you solved it.
• Have you ever found a “bug” in anything you have
bought or used, not necessarily software?
• How did you find it?
• What did you do about it?
• Tell me about something you bought or used that
you feel is poorly designed and why.
How Can We Build Great Teams?
• Preconceived expectations of quality will bias us
• Keep expectations to a minimum
• Avoid groupthink
• Once the team has agreed, ask them how the plan
can fail
• Re-evaluate the plan in this light
Building a Great Team
• Vary rote and exploratory testing
• Explore a portion of the application
• Then run test cases
• System 1 and System 2 are exercised in succession
Building a Great Team
• Expertise is good up to a point
• But experts need to be certain of the limits of their
expertise
• Experts shouldn’t make the decisions
• But they can provide input
• Must be willing to work from data
Building a Great Team
• Becoming skillful at testing
• But learn broadly rather than deeply
• Expertise may be more of a hindrance
• Seek jacks-of-all-trades
Summary
• Testing is influenced by a variety of thinking errors
• Understanding how people think can make us attuned
to the errors we make
• We can adapt our approach to testing and team
management to account for errors
• Expertise matters, but only to a point
For More Information
• Moneyball, a movie starring Brad Pitt
• Moneyball, a book by Michael Lewis
• The King of Human Error, Vanity Fair
• Thinking, Fast and Slow, a book by Daniel Kahneman
• The Halo Effect, a book by
Philip Rosenzweig
Questions?

Moneyball peter varhol_starwest2012

  • 1. Moneyball and the Science of Building Great Testing Teams Peter Varhol
  • 2. © 2011 Seapine Software, Inc. All rights reserved. Agenda What is Moneyball? What does this have to do with testing Bias and its variations Applying Moneyball to testing Building great testing teams Summary and questions
  • 3. © 2011 Seapine Software, Inc. All rights reserved. Moneyball is About Baseball
  • 4. © 2011 Seapine Software, Inc. All rights reserved. Fielding
  • 5. © 2011 Seapine Software, Inc. All rights reserved. Pitching
  • 6. © 2011 Seapine Software, Inc. All rights reserved. And Especially Offense
  • 7. © 2011 Seapine Software, Inc. All rights reserved. But It’s Also About Building Great Teams
  • 8. © 2011 Seapine Software, Inc. All rights reserved. Oakland Had a Problem • There are rich teams and there are poor teams, then there's fifty feet of crap, and then there's us.
  • 9. © 2011 Seapine Software, Inc. All rights reserved. How Do You Build Great Teams? • Baseball experts didn’t know a number of things • Getting on base is highly correlated with winning games • Pitching is important but not a game-changer • Fielding is over-rated • In general, data wins out over expert judgment • Bias clouds judgment
  • 10. © 2011 Seapine Software, Inc. All rights reserved. What Does This Have to Do With Testing? • We maintain certain beliefs in testing practice • Which may or may not be factually true • That bias can affect our testing results • How do bias and error work? • We may be predisposed to believe something • That affects our work and our conclusions
  • 11. © 2011 Seapine Software, Inc. All rights reserved. What Does This Have to Do With Testing? • We may test the wrong things • Not find errors, or find false errors • We may do too many testing activities by rote • This means we aren’t really thinking about what we are testing • We may work too hard and become fatigued
  • 12. © 2011 Seapine Software, Inc. All rights reserved. Let’s Consider Human Error • Thinking, Fast and Slow – Daniel Kahneman • System 1 thinking – fast, intuitive, and sometimes wrong • System 2 thinking – slower, more deliberate, more accurate
  • 13. © 2011 Seapine Software, Inc. All rights reserved. Consider This Problem • A bat and ball cost $1.10 • The bat cost one dollar more than the ball • How much does the ball cost?
Where Does Error Come In?
• System 1 thinking keeps us functioning
• Fast decisions, usually right enough
• Gullible and biased
• System 2 makes deliberate, thoughtful decisions
• It is in charge of doubt and unbelieving
• But is often lazy
• Difficult to engage

Where Does Error Come In?
• What are the kinds of errors that we can make, either by not engaging System 2, or by overusing it?
• Priming
• Cognitive strain
• Halo effect
• Heuristics
• Regression to the mean

The Role of Priming
• System 1 can be influenced by prior events
• How would you describe your financial situation?
• Are you happy?
• People tend to be primed by the first question
• And answer the second based on financial concerns

The Role of Priming
• If we are given preconceived notions, our instinct is to support them
• We address this by limiting advance advice and opinions
• “They were primed to find flaws, and that is exactly what they found.”

The Role of Cognitive Strain
• Ease can introduce errors
• When dealing with the familiar, we accept System 1
• Strain engages System 2
• We have to think about the situation
• And are more likely to draw the correct conclusion

But . . .
• We tire of cognitive strain
• System 2 makes mistakes when in continual use
• System 1 responds
• Often in error

The Halo Effect of Thinking
• Favorable first impressions influence later judgments
• We want our first impressions to be correct
• Provides a simple explanation for results
• Cause and effect get reversed
• A leader who succeeds is decisive; the same leader who fails is rigid

The Halo Effect of Thinking
• Controlling for halo effects
• Facts and standards predominate
• Resist the desire to try to explain

The Role of Heuristics
• We unconsciously form rules of thumb
• That enable us to quickly evaluate a situation and make a decision
• No thinking necessary
• System 1 in action

The Role of Heuristics
• Sometimes heuristics can be incomplete or even wrong
• And we make mistakes

The Anchoring Effect
• Expectations play a big role in results
• Suggesting a value ahead of time significantly influences our prediction
• It doesn’t matter what the value is

Regression to the Mean
• We seek causal reasons for exceptional performances
• But most of the time they are due to normal variation

Regression to the Mean
• “When I praise a good performance, the next time it’s not as good.”
• “When I criticize a poor performance, it always improves the next time.”
• But achievement = skill + luck
• Praise or criticism for exceptional performances won’t help
Regression to the Mean
• We are terrible at forecasting the future
• Picking team members
• Estimating testing times
• But are great at explaining the past
• “She was a great hire”

Regression to the Mean
• But we believe in the value of experts
• Even when those experts are often wrong
• And we discount algorithmic answers
• Even when they have a better record than experts
• We take credit for positive outcomes
• And discount negative ones

Regression to the Mean
• Are expertise and expert intuition real?
• Yes, under certain circumstances
• A domain with largely unchanging rules and circumstances
• Years of work to develop expertise
• But even experts often don’t realize their limits

And About Those Statistics
• We don’t believe they apply to our unique circumstances!
• We can extrapolate from the particular to the general
• But not from the general to the particular

Thinking About Testing

Thinking About Testing
• We don’t know the quality of an application
• So we substitute other questions
• Number and type of defects
• Performance or load characteristics

Thinking About Testing
• Those we can answer
• But do they relate to the question of quality?
• It depends on our definition of quality
• We could be speaking different languages
• Quality to users may be different

Thinking About Testing
• Back to System 1 and System 2 thinking
• System 1 lets us form immediate impressions
• Helps us identify defects
• But often fails at the bigger picture
• System 2 is more accurate
• But requires greater effort

Thinking About Testing
• With simple rote tests, System 1 is adequate
• The process is well-defined
• Exploratory testing engages System 2
• Exploratory testing is a good change of pace
• Too much exploratory testing will wear you out

Lessons to Testing
• Decorrelate errors
• Don’t let biases become common
• Identify the possible sources of bias
• Anchoring
• Halo effect
• Priming
Lessons to Testing
• Recognize and reduce bias
• Preconceived expectations of quality will influence testing
• Even random information may affect results

Lessons to Testing
• Automation is more than simply an ROI calculation
• It reduces bias and team errors
• Workflow, test case execution, and defect tracking can especially benefit

Lessons to Testing
• Formulas versus judgments
• Formulas usually win
• Except . . .

Lessons to Testing
• If the expert has the same data as the formula
• The expert can add value
• That value sometimes produces a better result

Lessons to Testing
• We estimate badly
• We assume the best possible outcomes on a series of tasks
• Past experience is the best predictor of future performance
• Use your data
• But add value after the fact
Lessons to Testing
• Misreading regression to the mean is a central fallacy
• There will be variation in team members and performances
• These variations should not be assigned a cause
How Can We Build a Great Team?

How Do You Build Great Teams?
• You minimize error in judgment
• Recognize and reduce bias
• You keep people sharp by not continually stressing them
• Overwork can make thinking lazy

How Can We Build Great Teams?
• Choose great testers
• Choose team members who exhibit the characteristics of great testers
• Not necessarily those whose resume matches the job description

How Can We Build Great Teams?
• What are the characteristics of great testers?
• Mix of System 1 and System 2 thinking
• Creativity
• Curiosity
• Willingness to question, and to keep questioning
• Ability to see the big picture
• Focus and perseverance
• Team player, but able to work individually
How Can We Build Great Teams?
• Who exhibits those characteristics?
• The usual suspects
• But who else?
• Scientists
• Marathon Runners
• Fashion Designers

How Can We Build Great Teams?
• Where do we find them?
• Colleges
• Civic Organizations
• Sports Leagues

How Can We Build Great Teams?
• Interview Questions:
• Tell me about a problem and how you solved it.
• Have you ever found a “bug” in anything you have bought or used, not necessarily software?
• How did you find it?
• What did you do about it?
• Tell me about something you bought or used that you feel is poorly designed and why.

How Can We Build Great Teams?
• Preconceived expectations of quality will bias us
• Keep expectations to a minimum
• Avoid groupthink
• Once the team has agreed, ask them how the plan can fail
• Re-evaluate the plan in this light

Building a Great Team
• Vary rote and exploratory testing
• Explore a portion of the application
• Then run test cases
• System 1 and System 2 are exercised in succession

Building a Great Team
• Expertise is good up to a point
• But experts need to be certain of the limits of their expertise
• Experts shouldn’t make the decisions
• But they can provide input
• Must be willing to work from data

Building a Great Team
• Becoming skillful at testing
• But learn broad rather than deep
• Expertise may be more of a hindrance
• Seek jacks-of-all-trades

Summary
• Testing is influenced by a variety of thinking errors
• Understanding how people think can make us attuned to the errors we make
• We can adapt our approach to testing and team management to account for errors
• Expertise matters, but only to a point

For More Information
• Moneyball, a movie starring Brad Pitt
• Moneyball, a book by Michael Lewis
• “The King of Human Error,” Vanity Fair
• Thinking, Fast and Slow, a book by Daniel Kahneman
• The Halo Effect, a book by Philip Rosenzweig

Questions?

Editor's Notes

  1. The Oakland Athletics major league baseball team had a big problem. It had a payroll that was a tenth the size of the very best teams in the league, and it had to field a winning team in order to remain profitable in its small market. Its general manager, Billy Beane, started looking deeply into the characteristics that produced winning teams.
  2. This and the following two slides will be flipped through in a few seconds each. The idea is to set the stage for illustrating Moneyball.
  3. What Billy Beane did was take what he had available, and make a winning team, year after year. With some research, he understood the errors others were making in evaluating players and teams, and figured out how not to make the same errors.
  4. You likely have a similar problem to what Billy Beane defined here. What do you do when you seemingly lack the resources to build a team and perform testing in ways that seem ideal?
  5. In Moneyball, Billy Beane discovered that the “experts” selected players who looked like their ideal of a baseball player, whether or not that person could actually improve their team. There was also a lack of understanding of what characteristics really won games over the course of a season. Beane wanted to build a competitive team, but didn’t have the budget to do it the way large market teams did. So instead he began paying attention to why teams chose the players they did. After examining a lot of baseball data that was just becoming available, he found that there were some clear truths that weren’t acknowledged by any of the experts. These experts had many years in judging talent, and believed they were the best at doing so. But that was their bias talking.
  6. Our beliefs influence our practice and our results. Beane looked at what others believed, and discovered that they were wrong. Their expertise was really a source of bias that led to incorrect conclusions. We do the same thing in testing. We typically come into a project with a set of biases that we may not even be aware of. The next 15 slides describe some of those biases, and how they affect our thinking.
  7. The result of these errors can potentially mean a number of things. We might test the wrong thing, secure in our belief that errors are more likely to occur in certain places. We might use prepared test cases and automation as a crutch to avoid thinking too much about what we are testing, and what the results are. Or we may perform too much thought work, setting up mental fatigue and increasing the chances of errors of all kinds.
  8. Daniel Kahneman, in his book Thinking, Fast and Slow, defines two types of thinking. He calls them System 1 and System 2 thinking. These, of course, are models, and don’t have any physical representation in the brain or elsewhere.
  9. 50-80 percent of college students get it wrong. Why? Because we think we know the answer without any further thought. It’s an error of System 1 thinking.
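A minimal Python sketch of the algebra shows why the reflexive answer of 10 cents is wrong: if the ball cost $0.10, the bat would cost $1.10 and the pair $1.20.

```python
def solve_bat_and_ball(total=1.10, difference=1.00):
    """Solve ball + (ball + difference) = total for the two prices."""
    ball = (total - difference) / 2
    bat = ball + difference
    return round(ball, 2), round(bat, 2)

ball, bat = solve_bat_and_ball()
assert abs((ball + bat) - 1.10) < 1e-9   # the pair costs $1.10
assert abs((bat - ball) - 1.00) < 1e-9   # the bat costs $1.00 more
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
```

System 1 pattern-matches "$1.10 minus $1" and answers 10 cents; writing the constraint down, even this trivially, is the System 2 step.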
  10. Daniel Kahneman has developed a model that divides thinking into two components, which he calls System 1 and System 2. System 1 is immediate, reflexive thinking that is for the most part unconscious in nature. We do this sort of thinking many times a day. It keeps us functioning in an environment where we have numerous external stimuli, some important, and many not. System 2 is more deliberate thought. It is engaged for more complex problems, those that require mental effort to evaluate and solve. System 2 makes more accurate evaluations and decisions, but it can’t respond instantly, as is required for many types of day-to-day decisions. And it takes effort, which means it can tire out team members.
  11. When we answer the second question, we intuitively think of our financial situation in framing the answer. That often means that our answer is based on our thoughts surrounding the prior question. We have been “primed” to think in a certain way. While plastic is convenient, it's also a threat to thrift: a 2011 study found that people paying with cash think more about a purchase's costs; those using credit dwell more on the benefits -- and are primed to pay more.
  12. If we are already thinking about something, those thoughts will influence our subsequent image of the situation, and any decisions that arise out of that situation. If we believe our software has defects because of complexity, confusion, or poor practices, we will likely find more defects than if we believed it was of high quality.
  13. Strain (physical or mental) helps us focus, often invoking System 2 thinking. That helps us break out of a pattern of rapid thought, and forces us to consider the problem in a more thoughtful way. In doing so, we are more likely to make a correct and accurate decision.
  14. System 2 thinking is hard work. We tire, and eventually make mistakes when we’re concentrating at this level. Specifically, when System 2 tires, System 1 takes over for responses. Because System 2 is deferring on more complex decisions, System 1 often makes mistakes when we are mentally tired.
  15. This error is familiar to most of us, because we often let first impressions dictate our subsequent beliefs and actions. We do this because we want to see ourselves as good judges of people and situations. But the halo effect is more than this. It also enables us to judge, or even change our judgments, based on subsequent events. We may believe that our manager is decisive because he/she has a ready decision whenever we ask. But in times of crisis, that same decisiveness may be perceived as rigidity, because they aren’t able to easily process new information and adjust decisions accordingly.
  16. How do we deal with our instinct for early judgment?
  17. How many of you have ever had pilot training? Years ago, in a PA-28 Cherokee 140 like this one, my instructor put me “under the hood” in practicing recovery from unusual attitudes. With the hood down, he put the plane into unusual flying positions from which I had to recover as quickly as possible. When he brought the hood up, I could see only the instrument panel. I rapidly developed a heuristic that enabled me to quickly identify and correct an unusual attitude. In short, I focused on the turn indicator and artificial horizon, and worked to center both of them. My instructor figured out what I was doing, and I did the same thing the next time. My turn indicator and artificial horizon were centered, but I was still losing over 1000 feet a minute! I was stumped. My instructor had “crossed” the controls, leaving me in a slip that my heuristic couldn’t account for. I was worse than wrong; I couldn’t follow through at all once my heuristic failed. I never forgot that experience.
  18. Kahneman had subjects spin a wheel of fortune rigged to stop at either 35 or 65, then asked a question on how many African nations were in the UN. The result of the wheel of fortune demonstrably swayed people’s subsequent answer up or down, even though it had no relationship to the question. They were anchored to a particular number, and that number influenced their subsequent guess.
  19. Exceptionally good or bad performances are probably due in large part to very good luck. Skill plays a role, but not as much as we believe. People can still be good or poor at a particular task, but luck is a variable on top of that skill.
  20. The praise or criticism has nothing to do with it. An exceptional performance is almost certainly in part due to a large measure of luck (good or bad).
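The achievement = skill + luck model is easy to check with a short simulation. The numbers below are illustrative (the skill spread is smaller than the luck spread), not from the talk: round-one stars score noticeably closer to average in round two, with no praise or criticism applied in between.

```python
import random

random.seed(7)

# achievement = skill + luck: each tester has a fixed skill,
# and each observed score adds independent noise ("luck").
skills = [random.gauss(50, 5) for _ in range(1000)]

def score(skill):
    return skill + random.gauss(0, 10)  # luck swamps skill differences here

round1 = [score(s) for s in skills]
round2 = [score(s) for s in skills]

# The top 10% in round 1 were mostly lucky; watch the same people in round 2.
top = sorted(range(len(skills)), key=lambda i: round1[i], reverse=True)[:100]
avg_top_r1 = sum(round1[i] for i in top) / len(top)
avg_top_r2 = sum(round2[i] for i in top) / len(top)

print(f"top performers, round 1: {avg_top_r1:.1f}")
print(f"same people, round 2:   {avg_top_r2:.1f}")  # much closer to the mean of 50
```

Nothing happened between the rounds; the drop is pure regression to the mean, which is exactly why the praise-hurts/criticism-helps belief is an illusion.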
  21. We don’t deal well with statistics, because we don’t believe they apply in our individual situations. We understand them in the abstract, but not in application.
  22. That leads to the question of why we believe experts. Their analysis and forecasts may overall be better than chance, but they almost certainly won’t be exceptionally good. It seems we like experts who take strong stands or say controversial things, for the theatrical value. We also like it when experts are right, but both they and we tend to discount the times that they are wrong.
  23. Do you watch House, MD? (starring Hugh Laurie) House deals with complex medical cases, and he is typically wrong in his diagnoses 3-4 times before he gets it right. It’s TV, certainly, but he is in a noisy data environment, and can’t really be expected to develop an intuition about cases that are all one-of-a-kind.
  24. Kahneman was part of a group building a new national curriculum. At one point he asked the group how long they thought it would take for them to finish. In a secret ballot, their collective answer was about two years. He then asked a member of the group experienced in doing that task how other groups had fared in the past. This person paused and said, “About 40 percent didn’t finish at all, and the rest took between seven and ten years.” He also offered the opinion that their particular team was slightly below average in ability at the task. At that point, they should have abandoned the project as a bad bet. Yet they believed their result would turn out differently. Ultimately, it took 8 years to complete, and their curriculum was never put into use.
  25. How can we apply these often-surprising facts about thinking to testing activities and testing teams?
  26. While we use the word quality a lot, much of the time we really don’t know what we’re talking about. It’s an abstraction. So instead, we substitute operational definitions based on things we do know, and data we can collect and analyze.
  27. We have operational definitions of quality – definitions that we can measure. Sometimes we explicitly draw a connection between them and quality, and sometimes we just assume that some relationship exists. But we may not know precisely what we mean by quality. And our definition of quality may well be different than that of our users or customers.
  28. Much of testing involves System 1 thinking. We apply a defined test by rote, record the results, file one or more defects if appropriate, and move on to the next one. This is efficient, but susceptible to the errors noted above (heuristics, halo, regression to the mean) because we are not really thinking about the results. It would be worthwhile to occasionally engage System 2 thinking as a check and balance.
  29. How about exploratory testing as a System 2 stimulant? Exploratory testing seeks to find out how the software actually works, and to ask questions about how it will handle difficult and easy cases. In reality, testing almost always is a combination of exploratory and scripted testing, with a tendency towards one or the other depending on context. Going back and forth between exploratory testing and writing and running individual test cases makes it possible to shift between System 1 and System 2 thinking, and to more readily engage System 2 when System 1 is likely to fail. Consciously going back and forth between the two thinking models has the potential to make both better and more responsive.
  30. What do I mean by decorrelate errors? Errors are going to happen, one way or another. What we don’t want is for errors to occur because of a systematic bias, such as the ones I’ve described here. If we start getting errors (false defects, inconsistency in finding defects, poor test coverage), we need to make sure they aren’t the systematic result of one or more biases. Errors need to be identified and analyzed, not simply shrugged off (or blamed on someone).
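The statistics behind decorrelation can be sketched in a few lines. Averaging several independent judgments cancels random error, but a bias shared by the whole team survives any amount of averaging (the values below are illustrative):

```python
import random

random.seed(1)

def team_judgment(true_value, shared_bias, judges=10):
    """Average of several judges; each adds independent noise plus any common bias."""
    return sum(true_value + shared_bias + random.gauss(0, 5)
               for _ in range(judges)) / judges

TRUE, TRIALS = 100, 2000

# Independent errors only: the team average converges on the truth.
independent = sum(team_judgment(TRUE, shared_bias=0) for _ in range(TRIALS)) / TRIALS

# Add a systematic bias (e.g. the whole team primed the same way):
# averaging more judges or more trials never removes it.
correlated = sum(team_judgment(TRUE, shared_bias=8) for _ in range(TRIALS)) / TRIALS

print(f"independent errors: {independent:.1f}")  # near 100
print(f"shared bias of 8:   {correlated:.1f}")   # near 108
```

This is why a systematic bias is worse than ordinary sloppiness: independent mistakes wash out across the team, while a shared one shifts every result the same way.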
  31. Unintentional bias is a big cause of testing and assessment errors.
  32. Automation helps reduce bias by standardizing process, including tests, workflows, and defect tracking. However, it also produces a high level of System 1 thinking, so it shouldn’t be relied on exclusively. We see the results of the automated test, decide based on bias or heuristics, and move on. Automation needs to be supplemented with active manual activities that bring out System 2 decision-making, such as exploratory testing or reporting that enables testers to visualize data.
  33. Where does expertise fit in? For the most part, if we had to choose between an expert judgment or a formula for a solution, the formula is usually just as good, if not better. This once again argues for automation, as well as mathematical analysis of data. This goes against the grain of people generally believing that they can make better analyses and judgments than a software algorithm.
  34. However, if the expert has the same data as the algorithm, the expert will do just as well, and often better. Take advantage of expertise on the testing staff when you can provide them with data to make decisions. Don’t ask for expert analysis in a vacuum.
  35. This should come as no surprise to anyone. We estimate both time to test (and to develop) as though everything goes like clockwork. Life isn’t like that. We have meetings, time off, rework, more meetings, and other things that prevent us from devoting 100 percent of our time toward our direct work. People leave for other jobs, and hiring and training replacements is time-consuming. Instead of estimating based on your expertise, look carefully at past experiences. Those are your best guide to estimating future projects. Make use of data from those projects and use that data as a starting point for estimating. You may adjust those estimates based on greater expertise or better automation, but don’t do so without a good reason.
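"Use that data as a starting point" can be made concrete with reference-class forecasting: scale the fresh inside-view estimate by how past estimates actually turned out. The project history below is hypothetical:

```python
# (estimated days, actual days) from past, similar projects -- hypothetical numbers
past_projects = [
    (10, 16),
    (20, 27),
    (15, 24),
    (8, 13),
]

# Average overrun factor across the reference class.
overrun = sum(actual / estimate
              for estimate, actual in past_projects) / len(past_projects)

def outside_view(inside_estimate_days):
    """Adjust a fresh inside-view estimate by the historical overrun factor."""
    return inside_estimate_days * overrun

print(f"historical overrun factor: {overrun:.2f}")                   # 1.54
print(f"a 12-day test pass, adjusted: {outside_view(12):.1f} days")  # 18.5
```

Adjust the result afterwards for genuinely new information (better automation, more expertise), but start from the data, not from optimism.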
  36. Looking to the team, we can see individuals achieving or not. But if we seek out exceptional performances at either end of the spectrum, we are likely to be making a mistake on the abilities of those individuals. We tend to use examples of such performances in performance reviews and making decisions on hiring and promoting (or firing). That’s the wrong approach. Instead, we should do the more difficult work of setting objectives, observing performance on a regular basis, and getting feedback from coworkers.
  37. Bias is common in our judgments, even if it is fully unintended and unrecognized. I’ve described several types of bias. The best way to guard against them in building a team is to engage System 2 thinking, rather than working on autopilot (System 1). In managing people, we need to establish a balance between automatic work (driven in large part by automation) and thought work, driven by more difficult problems and decisions. Too much of either can result in errors in judgment and fatigue, which can cause us to make mistakes in testing and project work.
  38. We have opinions of the value and quality of software projects before even starting them. These expectations will invariably skew our approaches, and result in mistakes in judgment and decisions. We can temper these expectations with group efforts to consider the opposite expectation, and come up with reasons why our initial opinions may be in error. We often agree to not rock that boat. That means that the loudest (or most senior) ideas win out, without a good vetting. One strategy for avoiding groupthink is to tell the team that a year from now, the plan has clearly failed. Ask them to individually write down their “pre-mortem” of the failure. These pre-mortems should then be evaluated by the group to see if the plan is really as good as they thought it was.
  39. How can System 1 and System 2 thinking effectively interact? Consider varying the two in a tester’s work. System 1 thinking involves rote activities, such as manual testing (from existing test cases) and automated testing. System 2 thinking is invoked in exploratory testing, context-driven testing, and data analysis. Formal meetings often give rise to System 1 thinking, while in-depth one-on-one conversations engage System 2.
  40. It’s always good to have specific expertise on testing teams; however, you have to use that expertise effectively. You should know the limits of your experts, and call upon their expertise when you have enough information available for them to make a difference.
  41. I knew an expert programmer. In fact, he was an expert C programmer using Borland Turbo C 3.0. He knew that product like no one else. But his expertise quickly disappeared as other languages and tools gained popularity. For a brief period, he commanded a high salary, but wasn’t able to easily transition into newer technology trends. This argues for a broad rather than deep skill set. Testers and other team members should be able to fill multiple roles, and feel comfortable with knowing what they don’t know. It is easier to acquire knowledge just-in-time, and it has the potential to reduce bias. Expertise still has value, but only if that expertise directly applies to the problem domain.
  42. In summary, our projects are influenced in a number of ways by how we think about them. Some of those thought processes can be in error. I’ve discussed several common errors of thinking, based on Kahneman’s theories. And I hope I’ve provided a few ideas on how we can adapt and account for our errors in the realm of software projects and testing.