TEST CASE MANAGEMENT
PAULA HEENAN, LEAD TEST CONSULTANT, EXCEPTION

APPLYING PSYCHOLOGY TO THE ESTIMATION OF QUALITY ASSURANCE
Paula Heenan, lead test consultant at Exception, uses psychology to explain decision-making and the implications this has for risk-based testing…

When I tell people I’m studying psychology, they either assume I’m analysing them or they start telling me their problems! Either way, a comment usually follows on how different it must be from what I do in my day job as a test consultant. The reality is that there is far more cross-over than you might expect. A lot of what I have learned has been applicable to the softer side of my role, such as leading teams. Most recently, I’ve been gaining a deeper understanding of cognitive psychology, which looks at how the brain processes information and how this affects our behaviour, memory and decision-making. This article will discuss mental heuristics: rules of thumb that the brain uses to reduce the amount of mental work (cognitive processing) required for complex tasks.


Tversky and Kahneman’s 1974 paper, Judgment under Uncertainty: Heuristics and Biases, stated:
“Many decisions are based on beliefs concerning the likelihood of uncertain events such as the outcome of an election...”
Though the discipline of software testing was still in its infancy in 1974, it is easy to see how this statement still applies today. As testers, we are often asked to estimate testing effort or the likely location of defects, despite having little reliable information to base these estimates upon. In this article, we’ll look at how three of the heuristics identified by Tversky and Kahneman can affect testers’ judgement, and how awareness of these heuristics can enhance the estimation process.

ADJUST AND ANCHOR
The first heuristic, “anchor and adjust”, is particularly relevant to estimation. With this heuristic, estimates are made by starting from an initial value (the anchor) and then adjusting away from it. These initial values can be based upon memory of previous projects, or on a general rule such as assuming that testing will take a certain percentage of development time.
As we know, a large percentage of projects fail to start and complete on their planned dates, so we can deduce that something is wrong with how projects are estimated. The risk with using this heuristic is that the anchoring point may not be reliable: if the metrics we anchored on were wrong, it is improbable that they will fit next time.
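As a caricature of that rule of thumb, here is a minimal sketch in Python; the 40 per cent ratio and the gut-feel adjustment are invented for illustration, not recommendations:

```python
# Naive anchor-and-adjust estimate: anchor on development effort,
# apply a fixed ratio carried over from a previous project, then
# adjust by gut feel. If the anchor was wrong last time, the new
# estimate inherits the same error.
dev_effort_days = 60
testing_ratio = 0.40        # anchor: "testing is 40% of dev time"
gut_feel_adjustment = 1.10  # "this one feels a bit riskier"

test_estimate_days = dev_effort_days * testing_ratio * gut_feel_adjustment
print(f"Test estimate: {test_estimate_days:.1f} days")
```

The arithmetic looks precise, but every input is inherited; none of it reflects the system actually under test.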


To resolve issues with adjusting and anchoring, it is beneficial to start afresh with estimation and to use proven estimation techniques such as function point analysis. Naturally, unforeseen circumstances will influence test projects, but an awareness of how estimation can be affected by this heuristic improves the robustness of estimates.
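As a minimal sketch of function point analysis applied to test estimation: the weights below are the classic IFPUG average-complexity values, while the counts and the hours-per-function-point rate are placeholder assumptions that a real team would calibrate from its own historical data:

```python
# Unadjusted function point count, using IFPUG average-complexity
# weights, converted to a test-effort estimate via a calibrated rate.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def unadjusted_function_points(counts: dict) -> int:
    """Sum each function-type count multiplied by its weight."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

def test_effort_hours(fp: int, hours_per_fp: float = 1.5) -> float:
    """Convert function points to test effort using a team-specific rate."""
    return fp * hours_per_fp

counts = {
    "external_inputs": 12,
    "external_outputs": 8,
    "external_inquiries": 5,
    "internal_files": 4,
    "external_interfaces": 2,
}

fp = unadjusted_function_points(counts)
print(f"Unadjusted function points: {fp}")
print(f"Estimated test effort: {test_effort_hours(fp):.0f} hours")
```

Unlike an anchored guess, an estimate of this kind is driven by the measured size of the system under test, and the rate can be re-calibrated as real data accumulates.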


REPRESENTATIVENESS
The second heuristic, and to me the most complex of the three, is representativeness. It is relevant when planning what to test, and covers a range of fallacies applied when making a judgement or estimation, including insensitivity to prior probability and misconceptions of chance and randomness.
While these may not appear immediately or easily relatable to testing, some examples may help. When asked whether a tall, blonde, glamorous woman wearing designer clothes is a model or a nurse, respondents are likely to answer “model”, insensitive to the fact that there are far more nurses than models in the general population. An example of misconceptions of chance and randomness is that, in a game of heads and tails, after a run of heads people are likely to predict tails as the next outcome, believing that tails must be due. To me these fallacies appear slightly contradictory: one says that people ignore probability and so make errors of judgement, while the other says that an intuition about probability makes people ignore the 50/50 chance of each toss. Both are applicable to how we judge where to focus testing. We may think that something that has failed a lot in the past must go right this time, or we may be fooled by the simple appearance of a user interface.
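A quick simulation shows why the “tails must be due” intuition fails; after a run of three heads, tails still comes up next only about half the time:

```python
import random

# Simulate the gambler's fallacy: given three heads in a row, is the
# fourth toss any more likely to be tails? The coin has no memory.
random.seed(42)

runs_of_heads = 0
tails_after_run = 0

for _ in range(200_000):
    tosses = [random.choice("HT") for _ in range(4)]
    if tosses[:3] == ["H", "H", "H"]:
        runs_of_heads += 1
        if tosses[3] == "T":
            tails_after_run += 1

# Prints roughly 0.5, regardless of the preceding run of heads.
print(f"P(tails | HHH) = {tails_after_run / runs_of_heads:.3f}")
```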


So how do we avoid the traps of these heuristics and improve testing? I believe that checklists and traceability matrices support achieving sufficient coverage, and that open dialogue with the technical teams to understand risk will help focus testing and avoid the trap of representativeness.
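As a simple illustration of the kind of traceability check that supports this, a sketch in Python; the requirement and test case identifiers are hypothetical:

```python
# Hypothetical traceability matrix: map each requirement to the test
# cases that cover it, then flag any requirement with no coverage.
coverage = {
    "REQ-001 Login":          ["TC-01", "TC-02"],
    "REQ-002 Password reset": ["TC-03"],
    "REQ-003 Audit logging":  [],  # no tests yet: a coverage gap
}

for requirement, test_cases in coverage.items():
    status = ", ".join(test_cases) if test_cases else "NOT COVERED"
    print(f"{requirement:<26} -> {status}")
```

The point is not the tooling itself, but that coverage gaps are made visible by the matrix rather than judged by what looks representative.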

AVAILABILITY
The last heuristic to be discussed is availability. As with representativeness, more than one factor impacts the availability heuristic. One is imaginability, where the probability of an outcome is judged by the risks that can be imagined; another is the retrievability of information, which holds that people estimate likelihood by how easily they remember past occurrences.
Imaginability is very pertinent to risk-based testing, where assessing risk and imagining outcomes is a key part of test planning.
With regard to retrievability of information, when planning testing and identifying the critical areas to test, easily remembered functionality that has been problematic in the past may be focussed upon when, in reality, the defect metrics and issue logs tell a different story.

For example, on a recent project there were two areas with a lot of defects. Team A managed their defects effectively, and I had daily updates with the team lead. Team B had more trouble managing defects: not all of their developers had access to the defect system, and there were a few resource changes that impacted defect fixing. Team B’s defects are more easily remembered, yet they did not raise more defects than Team A. This could result in incorrect judgements about how testing is planned and which functionality is tested.
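Even a trivial tally taken from the defect log, rather than from memory, guards against this bias; the team names and figures below are invented to mirror the example above:

```python
from collections import Counter

# Defects exported from the tracking system, tagged by owning team.
# Counting from the log rather than from memory shows that Team A,
# whose defects were less memorable, actually logged more of them.
defect_log = ["Team A"] * 48 + ["Team B"] * 39

for team, n in Counter(defect_log).most_common():
    print(f"{team}: {n} defects")
```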
To mitigate availability biases when testing, use metrics from past projects and checklists, and work with other project teams to understand the complexity and risk involved in the systems under test. Again, checklists and traceability matrices are helpful to ensure coverage is sufficient.
These heuristics are only three examples of how studying psychology can benefit QA and testing. While there is no magic solution that eliminates them, being aware of them and putting tools in place to mitigate their effects will improve the reliability of future test estimation and planning.

