Information and Software Technology
Development of a human error taxonomy for software requirements: A
systematic literature review
Vaibhav Anu a,1, Wenhua Hu b,1, Jeffrey C. Carver c, Gursimran S. Walia a,⁎, Gary Bradshaw d

a Department of Computer Science, Montclair State University, Montclair, NJ, United States
b Department of Software Engineering and Game Development, Kennesaw State University, Marietta, GA, United States
c Department of Computer Science, The University of Alabama, Tuscaloosa, AL, United States
d Department of Psychology, Mississippi State University, Starkville, MS, United States

⁎ Corresponding author.
1 Anu and Hu are co-first authors and contributed equally to leading this paper.
E-mail addresses: vaibhavanu.x@gmail.com (V. Anu), whu4@kennesaw.edu (W. Hu), carver@cs.ua.edu (J.C. Carver), gursimran.walia@ndsu.edu (G.S. Walia), glb2@psychology.msstate.edu (G. Bradshaw).
ARTICLE INFO

Keywords: Systematic review; Requirements; Human errors; Taxonomy

Article history: Received 13 October 2016; Received in revised form 21 May 2018; Accepted 21 June 2018

https://doi.org/10.1016/j.infsof.2018.06.011

ABSTRACT
Background: Human-centric software engineering activities, such as requirements engineering, are prone to
error. These human errors manifest as faults. To improve software quality, developers need methods to prevent
and detect faults and their sources.
Aims: Human error research from the field of cognitive psychology focuses on understanding and categorizing
the fallibilities of human cognition. In this paper, we applied concepts from human error research to the problem
of software quality.
Method: We performed a systematic literature review of the software engineering and psychology literature to
identify and classify human errors that occur during requirements engineering.
Results: We developed the Human Error Taxonomy (HET) by adding detailed error classes to Reason's well-
known human error taxonomy of Slips, Lapses, and Mistakes.
Conclusion: The process of identifying and classifying human error identification provides a structured way to
understand and prevent the human errors (and resulting faults) that occur during human-centric software en-
gineering activities like requirements engineering. Software engineering can benefit from closer collaboration
with cognitive psychology researchers.
1. Introduction
Software engineering, especially during the early phases, is a
human-centric activity. Software engineers must gather customer
needs, translate those needs into requirements, and validate the cor-
rectness, completeness, and feasibility of those requirements. Because
of the involvement of various people in this process, there is the po-
tential for human errors to occur. Cognitive Psychology researchers
have studied how people make errors when performing different types
of tasks. This line of research is called human error research. In this
paper, we apply the findings from human error research to analyze and
classify the types of human errors people make during the requirements
engineering process.
Because the software engineering literature often has competing
definitions for the same terms, we must clearly define our terminology.
Based on IEEE Standard 24765 [13] (ISO/IEC/IEEE 24765:2010), we
use the following definitions in this paper (see the sketch after this list).
• Error (also referred to as Human Error) – A mental error, i.e. the
failing of human cognition in the process of problem solving,
planning, or execution.
• Fault [or Defect] – The manifestation of an error recorded in a
software artifact.
• Failure – The incorrect execution of a software system, e.g. resulting
in a crash, unexpected operation, or incorrect output.
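To make the relationships among these terms concrete, they can be sketched as a minimal data model (an illustrative sketch of our own; the standard defines terminology, not code, and the class and field names here are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HumanError:
    """A failing of human cognition during problem solving, planning, or execution."""
    description: str

@dataclass
class Fault:
    """The manifestation of a human error recorded in a software artifact."""
    artifact: str
    description: str
    caused_by: Optional[HumanError] = None  # the chain is not always traced

@dataclass
class Failure:
    """Incorrect execution of the system, e.g. a crash or incorrect output."""
    description: str
    caused_by: Optional[Fault] = None

# A memory failure becomes a fault in the specification, which may later
# surface as an observable failure.
error = HumanError("analyst forgot to record the report format")
fault = Fault("SRS", "report format left unspecified", caused_by=error)
failure = Failure("reports produced in an unusable format", caused_by=fault)
```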
The chain from human error to fault to failure is neither unbroken nor
inevitable. Generally, developers either detect and correct faults
without investigating the underlying errors or they use software testing
to reveal system failures that they then repair. Nevertheless, because
many system failures originate in a human error, researchers need to
investigate how an understanding of human errors can help in detecting
and fixing software faults and failures. By one estimate, software failures
(which often originate in human errors) cost $60 billion per year [27].
Practitioners in other domains, such as Aviation and Nuclear Power,
have faced similar problems that result from
human errors. An error that originates in the mind of a pilot or of a
reactor operator can produce a fault that results in a plane crash or a
reactor core meltdown. In both cases, human errors create problems
measured not only in billions of dollars but also in lives lost. The work
of human error specialists has dramatically improved the safety and
efficiency of these domains. The improvement process entailed (1)
analyzing incident reports to identify the human error(s) made; (2)
finding common error patterns; and (3) organizing those errors into a
hierarchical error taxonomy. By examining the most frequent and costly
errors in the hierarchy, investigators developed and implemented
mediation strategies to reduce the frequency and severity of the errors
[7,26,30]. Our premise in this paper is that a similar approach can be
beneficial in software engineering.
Requirements engineering is largely human-centric. Requirement
analysts must interact with customers or domain experts to identify,
understand, and document potential system requirements. Numerous
studies have shown that it is significantly cheaper to find and fix re-
quirements problems during the requirements engineering activities
than during later software development activities. Therefore, it is im-
portant to focus attention on approaches that can help developers im-
prove the quality of the requirements and find requirements problems
early.
Human error research can be particularly beneficial during re-
quirements engineering. The high degree of interaction and shared un-
derstanding required among multiple stakeholders at this stage of the
software lifecycle creates many opportunities for problems. Applying human
error research to categorize the types of errors that occur during re-
quirements engineering will have two benefits. First, it will provide a
framework to help requirements engineers understand the types of er-
rors that can occur during requirement development. This awareness
should lead requirement engineers to be more careful. Second, it will
help reviewers ensure that requirements are of sufficient quality for
development to proceed.
Recognizing the potential benefit that a formalized error taxonomy
could provide to the requirements engineering process, a subset of the
current authors conducted a prior Systematic Literature Review to
identify errors published in the literature from 1990 to 2006. Using a
grounded theory approach [9], they organized those requirement errors
into a Requirement Error Taxonomy to support the detection and pre-
vention of the errors and resulting faults [28]. They performed that
review without reference to contemporary psychological theories of
how human errors occur. In this current review, we extend the previous
review in two ways. First, rather than developing a requirement error
taxonomy strictly ‘bottom-up’ with no guiding theory, we engage di-
rectly with a human error researcher and use human error theory to
ensure that we only include true human errors and organize those errors
based on a standard human error classification system from cognitive
psychology. Second, this review includes papers published since the
first review.
To guide this literature review, we focus on two research questions.
• RQ1: What types of requirements engineering human errors does the
software engineering and psychology literature describe?
• RQ2: How can we organize the human errors identified in RQ1 into
a taxonomy?
The primary contributions of this work are:
• Adapting human error research to a new domain (software quality
improvement) through interaction between software engineering
researchers and a human error researcher;
• Development of a systematic human error taxonomy for require-
ments to help requirement engineers understand and avoid common
human errors; and
• An analysis of the literature describing human errors in require-
ments engineering to provide insight into whether a community is
forming around common ideas.
The remainder of this paper is organized as follows. Section 2 pro-
vides a background on software quality and on human errors (from a
psychology perspective). Section 3 describes the systematic review
process and its execution. Section 4 reports the results of the review and
presents the human error taxonomy. Section 5 provides a discussion of
the results and the usefulness of the human error taxonomy. Finally,
Section 6 concludes the paper and describes future work.
2. Background
Formal efforts to improve software quality span decades [6,19].
This section briefly reviews some of these efforts as a preface to a more
comprehensive discussion of the role human error research can play in
identifying, understanding, correcting, and avoiding errors in software
engineering. Given the importance of identifying and correcting errors
and faults early in the software development process, we emphasize the
requirements engineering process.
2.1. Historical perspectives on software quality improvement
Initially, software quality improvement research focused on faults.
Because a fault that occurs early in software development may lead to a
series of later faults, researchers developed Root Cause Analysis (RCA)
[19,21] to identify the first fault in a (potentially lengthy) chain. In
RCA, developers trace faults to their origin to identify the triggering
fault. Then the developer can modify procedures to reduce the occur-
rence of faults or eliminate the original fault. The goal is to prevent the
first fault, thereby eliminating the entire fault chain.
Because RCA is time-consuming, researchers developed the
Orthogonal Defect Classification (ODC) to provide structure to the fault
tracing process [6]. Developers use ODC to classify faults and identify
the trigger that causes the fault to surface (not necessarily the cause of
fault insertion). This approach accommodates a large number of fault
reports and identifies the actions that revealed the fault. By focusing on
the action that caused the fault, the ODC avoids the tedious work of
tracing faults back to their origins and reduces the amount of time and
effort required, compared with RCA.
RCA and ODC are retrospective techniques in which the investigative
procedure begins with fault reports. Because the ODC relies on statis-
tical analyses to identify the source fault, it requires the presence of a
large and robust set of fault reports. As retrospective techniques, RCA
and ODC occur late in the development process, after errors and faults
have accumulated and are more expensive to fix. In addition, neither
approach helps identify the human error underlying the fault. Thus,
these approaches focus more on treating the symptoms (manifestations
of the error) rather than the underlying cause (i.e., the error).
Noting that it can be difficult to define, classify, and quantify re-
quirements faults, Lanubile, et al. shifted the emphasis from faults to
errors. They defined the term error as a fault or flaw in the human
thought process that occurs while trying to understand given informa-
tion, while solving problems, or while using methods and tools [17].
Such errors are the cause of faults. This shift is important because it
helps developers understand why the fault occurred. Because this ap-
proach addresses the underlying cause (i.e. the error) rather than the
symptoms (i.e. the faults), it is an advance over the RCA and ODC ap-
proaches.
Errors and faults have a complex relationship. One error may cause
several faults. Similarly, one fault may spring from different errors.
Consequently, it is not trivial to identify the error behind the fault.
Lanubile, et al. asked software inspectors to analyze the faults detected
during an inspection to determine the underlying cause, i.e., the error.
The inspectors then used the resulting list of errors to guide a re-in-
spection to detect additional faults related to those errors. This re-in-
spection produced only a moderate improvement in fault detection
[17].
This error abstraction methodology is also a retrospective process
that depends upon a developer first making an error that produced a
fault, then detecting the fault to initiate a trace-back process to recover
the error. Lanubile et al. also observed that requirements errors are
generally specific to a particular system, rather than general for all
software [17]. The implication of this observation is that since errors
tend to be system-specific, the error abstraction must be performed
independently for each system. In addition, retrospective fault detec-
tion techniques leave room for faults to go undetected and propagate
into later phases of software development, increasing the time and cost
of fixing the system. Therefore, there is still room for techniques to
prevent faults and to support early fault identification.
2.2. A cognitive psychology perspective on software quality improvement
Contrary to the conclusions drawn by Lanubile et al., most psy-
chologists reject the view that errors are idiosyncratic and strongly
dependent on context. James Reason, a noted authority on human
error, rejected the popular notion that “errors occur ‘out of the blue’ and
are highly variable in their form”. Reason proposed that, rather than
being random, errors take recurrent and predictable forms [23]. This
predictability of errors is crucial to allow software engineers to apply
lessons learned from one project to the next. In such cases, developers
can use errors proactively to search for faults before they have been
identified through other means and preventatively to modify software
development methods to reduce or eliminate errors.
Walia and Carver's systematic literature review of the software en-
gineering and psychology literature identified 119 requirement errors.
Employing a bottom-up approach, Walia and Carver developed a
Requirement Error Taxonomy (RET) by grouping common errors into
fourteen error classes. They then grouped these error classes into three
high-level error types: People Errors (arising from fallibilities of the
people involved in the development process), Process Errors (arising
while selecting the appropriate processes for achieving the desired
goals and relating mostly to the inadequacy of the requirements en-
gineering process), and Documentation Errors (arising from mistakes in
organizing and specifying the requirements, regardless of whether the
developer properly understood the requirements) [28].
Walia and Carver used four experiments to investigate the utility of
the RET [29]. Their work incorporated an inspection-abstraction-re-
inspection paradigm, similar to the one used by Lanubile et al. Parti-
cipants first analyzed a requirements document to identify faults. Then
they analyzed those faults to abstract errors. Finally, they reinspected
the document guided by the abstracted errors. The inspectors used the
RET to guide the error abstraction process, as opposed to the ad hoc
approach in Lanubile et al.’s work. The reinspection step resulted in
identification of a significant number of new faults. Subjectively, par-
ticipants indicated that the structured error information in the RET
helped them understand their errors.
While the taxonomic structure employed by Walia and Carver drew
upon the published software engineering literature, it is strikingly dif-
ferent from psychological organizations of human errors. As described
above, the RET includes people, process, and documentation errors.
Clearly all of the observed errors were committed by people, most
happened while following or attempting to follow the software en-
gineering process, and resulted in a fault in the project documentation.
Hence, the next step is to divide the RET errors into distinct human
error categories. This observation motivated us to consider that a
stronger connection to the rich psychological literature on human er-
rors might reveal a more useful taxonomy of requirements engineering
errors.
2.3. Reason's classification of human errors
James Reason developed one of the most comprehensive psycho-
logical accounts of human errors [22]. Central to his account is the
construction of a plan. Loosely speaking, a plan is a series of actions
needed to achieve a goal. Reason associated errors with specific cog-
nitive failings, such as inattention, memory failures, and lack of
knowledge. In this approach, three types of plan-based errors can occur.
First, a slip is a failure of execution where the agent fails to carry out a
properly planned step as a result of inattention. Second, a lapse, is a
failure of execution where the agent fails to carry out a properly planned
step as a result of a memory failure. Finally, a mistake is a knowledge-
based error that occurs when the plan itself is inadequate to achieve the
goal. Identification of the cognitive failing (e.g., “I wasn't paying at-
tention when I wrote this down”) can provide a strong clue to the
specific error that resulted. Accordingly, error identification can be
undertaken even by those with little formal training in human psy-
chology.
For example, if our goal is to drive to the store, we might form a
plan consisting of steps such as starting the car, backing down the
driveway to the street, navigating the route, and parking in the store
lot. If we put the wrong key in the ignition, we have committed a slip. If
we forget to put the transmission into reverse before stepping on the
gas, the omitted step is a lapse. Failing to take into account the effect of
a bridge closing and getting caught in traffic is a mistake.
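One way to make this classification logic concrete is a small decision function (our illustrative sketch, not Reason's own formalism; the names are hypothetical):

```python
from enum import Enum

class ErrorType(Enum):
    SLIP = "slip"        # execution failure caused by inattention
    LAPSE = "lapse"      # execution failure caused by a memory failure
    MISTAKE = "mistake"  # the plan itself is inadequate for the goal

def classify(planning_error: bool, memory_related: bool = False) -> ErrorType:
    """Place a human error in Reason's plan-based taxonomy."""
    if planning_error:
        return ErrorType.MISTAKE
    return ErrorType.LAPSE if memory_related else ErrorType.SLIP

# The driving examples above:
assert classify(planning_error=False) is ErrorType.SLIP    # wrong key in ignition
assert classify(planning_error=False, memory_related=True) is ErrorType.LAPSE  # forgot reverse
assert classify(planning_error=True) is ErrorType.MISTAKE  # ignored the bridge closure
```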
Reason's taxonomy has found diverse application in medical errors
[31], problems with Apollo 11 [22], driver errors [24], and errors that
led to the Chernobyl nuclear power plant meltdown [23]. The next
section describes how we developed an error taxonomy for require-
ments based upon Reason's framework.
3. Research method
This review is a replication and extension of Walia and Carver's SLR
[28] that developed the RET (hereafter, referred to as SLR-2009). The
current SLR expands upon the results of SLR-2009 by leveraging the
cognitive theory of human errors to develop a formalized taxonomy of
human errors that occur during requirements engineering. We employ
a rigorous human error identification methodology, leading to a more
formalized taxonomy that appropriately captures the sources of
requirements faults, i.e. human errors, and we follow the traditional
systematic literature review (SLR) approach [14]. To increase the rigor
of the review, the first and second
authors (subsequently referred to as “researchers”) independently per-
formed each SLR step and met to resolve any disagreements.
3.1. Research questions
To guide the development of the human error taxonomy for re-
quirements errors, we defined the high-level research questions shown
in Table 1.
3.2. Source selection and search
To identify relevant papers, we used the same search strings as in
SLR-2009. To ensure coverage of topics in software engineering, em-
pirical studies, human cognition, and psychology, we executed the
search strings in IEEExplore, INSPEC, ACM Digital Library, SCIRUS
(Elsevier), Google Scholar, PsychINFO (EBSCO), and Science Citation
Index. Because SLR-2009 included papers published through 2006, we
only searched for papers published after 2006 (through October 2014).
Table 1. Research questions.

RQ1: What types of requirements engineering human errors does the software engineering and psychology literature describe?
RQ2: How can we organize the human errors identified in RQ1 into a taxonomy?
Appendix A provides the detailed search strings, as adapted for each
database. After the database search, we used snowballing to ensure a
complete and comprehensive primary study selection process.
3.3. Study selection: inclusion and exclusion criteria
During the title-elimination phase, the two researchers identified
144 and 136 studies, respectively (see Fig. 1). Then, each researcher
read the abstracts and keywords to exclude any studies that were not
clearly related to the research questions. When in doubt, they retained
the study. At the end of this stage, the researchers’ lists contained 97
and 88 studies, respectively.
Next, the researchers consolidated their lists. They included the 32
studies common to both lists. For the remaining 121 studies (65 and 56,
respectively), each researcher examined the studies on the
other researcher's list to determine whether they agreed on inclusion. In
cases where the other researcher agreed, they included the study. In the
remainder of the cases, the researchers discussed and reached an
agreement on whether to include the study. This step resulted in a total
of 96 agreed-upon studies.
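In outline, this consolidation step can be sketched as follows (a simplification we provide for illustration; the `agree` callable stands in for the other researcher's review and the follow-up discussion, which no predicate can truly capture):

```python
def consolidate(list_a: set, list_b: set, agree) -> set:
    """Merge two researchers' candidate-study lists.

    Studies appearing on both lists are included automatically; for each
    study on only one list, `agree(study)` stands in for the other
    researcher's review and, if needed, the resolving discussion.
    """
    included = list_a & list_b          # studies in common (32 in our case)
    for study in list_a ^ list_b:       # studies on exactly one list
        if agree(study):
            included.add(study)
    return included
```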
Next, the researchers divided the 96 studies between them for full-
text review. Each reviewer used the inclusion/exclusion criteria in
Table 2 to examine the full-text of 48 studies to place each on an in-
clude list or an exclude list. Any study on the include list stayed in the
review. Any study on the exclude list was reviewed by the other re-
searcher to make a final inclusion decision. The researchers discussed
and resolved any disagreements. This step resulted in 34 included
studies.

Table 2. Inclusion and exclusion criteria.
Finally, the researchers identified four additional studies through
snowballing. The researchers did not find these studies during the in-
itial database search because they included “risk components” and
“warning signs” in the title rather than the more standard terms in-
cluded in the search strings (Appendix A). With these additions, the
final set comprised 38 studies. Fig. 1 summarizes the search and
selection process.

Fig. 1. Primary study search and selection process.
The 38 studies were distributed across journals (15), conferences
(19), workshops (3), and book chapters (1). In addition, these studies
cover a wide spectrum of research topics, including: Computer Science,
Software Engineering, Medical Quality, Healthcare Information
Technology, Cognitive Psychology, Human Computer Interaction,
Safety/Reliability Sciences, and Cognitive Informatics. Interestingly,
the 38 studies appeared in 36 venues. That is, except for two cases
(Journal of Systems Software and Information Systems Journal), each
study came from a unique publication venue. This result suggests that
the research about human errors in software engineering is widely
scattered and there is not yet a standard venue for publishing such
research results. Section 5.4 discusses this observation in more detail.
Appendix B contains the full distribution of studies across publication
venues.
3.4. Data extraction and synthesis
Based on publication venue and the type of information contained
in the study, we classified 11 studies as Cognitive Psychology studies
and 27 as Software Engineering studies. The Cognitive Psychology
studies focused primarily on models of human errors and human error
taxonomies. The Software Engineering studies focused primarily on
errors committed during the software development process. Because
the focus of these types of studies differed, we extracted different types
of data.
To maintain compatibility with SLR-2009, we modeled our data
extraction form on the one used in SLR-2009. We added a small number
of items to ensure coverage of human cognitive failures and human
errors. We reanalyzed the SLR-2009 studies to extract this new in-
formation. Table 3 lists the data extracted from all primary studies.
Table 4 lists the data extracted based on each primary study's research
focus.
To ensure that both researchers had a consistent interpretation of
the data items, we conducted a pilot study. Each researcher in-
dependently extracted data from ten randomly selected studies. The
results revealed a few minor inconsistencies in the interpretation of
some data fields. The researchers discussed these issues to ensure they
had a common understanding of the data items. They then proceeded to
independently extract data from the remaining 28 studies. After ex-
tracting data, the researchers discussed and resolved any discrepancies
to arrive at a final data extraction form for each study.
4. Reporting the review
This section is organized around the two research questions (RQs)
defined in Table 1.
4.1. RQ1 - What types of requirements engineering human errors does the
software engineering and psychology literature describe?
We began by extracting individual errors, error categories, sub-ca-
tegories, error descriptions, and root causes from the 38 studies.
Similarly, we reviewed the published SLR-2009 paper and extracted the
errors and their descriptions (referring back to the original papers
where necessary). An examination of this information revealed dif-
ferent interpretations of the term error. These interpretations included:
program errors, software faults, requirements faults, end-user (or user)
errors, requirements problems, and human error.
Before building our taxonomy, we eliminated items that were not
truly human errors. Independently, the two researchers examined each
reported error to eliminate those that were not human errors. As before,
they compared their results and resolved the discrepancies they had on
less than 5% of the items. Human error expert Bradshaw helped resolve
these discrepancies. This process resulted in 31 human errors. We
eliminated two of those because they did not focus on requirements,
bringing the total to 29 human errors in the requirements phase, which are
listed in Table 5 along with their source.
At this point, we also updated the list of studies included in the SLR
to include only those that described a true requirements engineering
human error. Only seven of the twenty-seven software engineering
studies met this criterion. Therefore, we removed the other 20 studies.
None of the cognitive psychology studies met this criterion, as con-
firmed by human error expert Bradshaw. Some of these studies dis-
cussed decision-making errors or operator errors (which are not human
errors). We excluded these studies from our analysis. In addition, there
were eleven studies from SLR-2009 that identified human errors. In
total, we included 18 studies from the software engineering literature
that contain human errors (Appendix C).
4.2. RQ2 – how can we organize the human errors identified in RQ1 into a
taxonomy?
To create the human error taxonomy (HET), each researcher first
grouped the individual errors in Table 5 into classes based on similarity.
The two researchers met to agree upon a final list of error classes. Then,
each researcher individually categorized each error class as either a
Slip, a Lapse, or a Mistake based upon the error's description or the root
cause description. This organization entailed analyzing each error class
to (1) decide whether the error class was a planning error or an execution
error and (2) for execution errors, determine whether the error was
related to inattention failures (Slips) or memory related failures
(Lapses).
The two researchers met and resolved all discrepancies and created
an initial organization of the HET. Finally, human error expert
Bradshaw evaluated and approved the final organization. Table 6 shows
the final HET based on Reason's taxonomy.
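For a structural view, the shape of the HET can be rendered as a nested mapping (an illustrative sketch; the class names abbreviate those in Table 6):

```python
# Reason's high-level category -> error classes in the HET (cf. Table 6)
HET = {
    "Slips": [
        "Clerical errors",
        "Term substitution errors",
    ],
    "Lapses": [
        "Loss of information from stakeholders",
        "Accidentally overlooking requirements",
        "Multiple terms for the same concept",
    ],
    "Mistakes": [
        "Application errors",
        "Solution choice errors",
        "Syntax errors",
        "Wrong assumptions",
        "Environment errors",
        "Information management errors",
        "Inappropriate communication based on faulty understanding of roles",
        "Not having a clear distinction between clients and users",
        "Mistaken belief that non-functional requirements cannot be specified",
        "Inadequate requirements process errors",
        "Lack of awareness of requirement sources",
    ],
}

# Two slips, three lapses, and eleven mistakes (cf. Tables 7-9).
assert [len(v) for v in HET.values()] == [2, 3, 11]
```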
The following subsections provide a detailed description of each
error class, to make the information in Table 6 more understandable.
We also describe an example error and fault. For the examples, we use
the Loan Arranger (LA) system, which was developed by researchers at
the University of Maryland for use in software quality research. After a
brief overview of LA, we present the error classes organized by the high-
level groupings in Reason's taxonomy.
Loan Arranger (LA) overview: A loan consolidation organization
purchases loans from banks and bundles them for resale to other in-
vestors. The LA application selects an optimal bundle of loans based on
criteria provided by an investor. These criteria may include: (1) risk
level, (2) principal involved, and (3) expected rate of return. The loan
analyst can then modify the bundle as needed for the investor. LA au-
tomates information management activities, such as updating loan in-
formation monthly.
4.2.1. Slips
A slip occurs when someone performs the steps required to complete
a planned task incorrectly or in the wrong sequence. Slips generally
result from a lack of attention. For example, the steps for “preparing
breakfast cereal” include: (i) get a bowl, (ii) choose cereal, (iii) pour
cereal in the bowl, and (iv) pour milk on the cereal. If someone intends
to “prepare breakfast cereal”, but pours orange juice rather than milk,
he has committed a slip. Table 7 lists the two types of slips that occur
during requirements engineering.
4.2.2. Lapses
A lapse occurs when someone forgets the overall goal of the planned
task in the middle of a sequence of actions or omits a step that is part of a
routine sequence. Lapses generally result from memory-related failures.
For example, someone enters a room to accomplish some specific task.
If, upon entering the room, he forgets the planned task, he has
Table 3. Common data items for extracting information.

• Study Identifier: Unique identifier for the paper (same as the reference number)
• Bibliographic data: Author, year, title, source
• Type of article: Journal, conference, or technical report
• Focus of Area: The field in which the research was conducted, e.g., Software Engineering, Industrial Engineering, Psychology, Aviation, or Medicine
• Study aims: The aims or goals of the primary study
• Context: Relates to one or more search foci, i.e., the research area(s) the paper focuses upon
• Study type: Industrial experiment, controlled experiment, survey, or lessons learned
• Unit of analysis: Individual developers, department, or organization
• Control Group: Yes or no; if "Yes", the number of groups and size per group
• Data collection: How the data was collected, e.g., interviews, questionnaires, measurement forms, observations, and discussion
• Data analysis: How the data was analyzed: qualitative, quantitative, or mixed
• Concepts: The key concepts or major ideas in the primary studies
• Higher-order interpretations: The second- (and higher-) order interpretations arising from the key concepts of the primary studies; can include limitations, guidelines, or any additional information arising from application of the major ideas/concepts
• Study findings: Major findings and conclusions from the primary study
committed a lapse. Table 8 lists the three types of lapses that occur
during requirements engineering.
4.2.3. Mistakes
As opposed to slips and lapses, mistakes are errors in planning ra-
ther than execution. In this scenario, someone executes the plan prop-
erly, but the plan itself is inadequate to achieve the intended outcome.
For example, suppose that a taxi driver's dispatcher has informed him of
a road closure on his typical route to the airport. If the taxi driver
follows his typical route anyway and encounters a delay, the taxi driver
has made a mistake. The taxi driver devised an incorrect plan even
though he had prior knowledge of the road closure. Table 9 lists the 11
mistake errors that occur during the requirements phase.
5. Discussion
This section provides further insights into the detailed results.
5.1. Findings about human errors in requirements engineering
After analyzing the included papers, identifying the presence of
human errors, and organizing those errors into a taxonomy, we can
report some overall findings about human errors in requirements en-
gineering.
• The requirements engineering (RE) research community has not
consistently reported human errors. Requirements Engineering re-
searchers use the term ‘error’ inconsistently in their publications.
Even when they used the term to refer to human thoughts or actions,
that usage often did not match the psychological definition of
human error. We had to thoroughly examine each reported error to
identify those that were truly human errors. Through this process,
we removed a majority of the items RE researchers marked as ‘er-
rors’ in their publications (see Section 4.1), indicating inconsistent
use of the term. Only 18 studies reported a true human error. The
broader set of problems that RE researchers called errors, while
outside the scope of this review, are worthy of further attention.
• Software engineers and psychologists need to collaborate more
closely. Development of the HET required close collaboration be-
tween software engineering experts and a human error expert from
psychology. In reviewing the literature, we noted that many of the
‘errors’ reported in the RE literature did not make use of the frameworks
developed by human error researchers. As a result, we had to
carefully analyze each error description to determine whether it
truly described a human error. Our work shows that a close colla-
boration between software engineering researchers and psychology
researchers can provide a theoretically-based human error frame-
work for organizing requirements engineering errors.
• Errors vs. violations. The human error literature distinguishes be-
tween unintentional errors and deliberate violations. This distinc-
tion is important because the two problems need to be handled in
very different ways. Some error reports were, in fact, violations,
such as a situation where software engineers deliberately omitted
some tasks in the requirements engineering process. Preserving the
distinction between errors and violations is useful because the most
appropriate detection and remediation techniques differ. Violations,
although infrequent in the identified studies, fall outside the scope
of this paper.
Table 4. Data items relative to each search focus.

Quality Improvement Approach:
• Focus or process: Focus of the quality improvement approach and the process/method used to improve quality
• Benefits: Any benefits from applying the approach
• Limitations: Any limitations or problems identified in the approach
• Evidence: The empirical evidence indicating the benefits of using error information to improve software quality, and any specific errors found
• Error focus: Yes or no; if "Yes", classify the solution foundation (next data item)
• Mechanism or Solution Foundation: 1. Ad hoc: something the investigators thought up, though it could have been supported by empirical work showing its effectiveness; 2. Evidence-based: a notation of a systematic problem in the software engineering process that leads to a specific remediation method; or 3. Theory-based: draws support from research on human errors

Requirement Errors:
• Problems: Problems reported in the requirements stage
• Errors: Errors at the requirements stage
• Faults: Faults at the requirements stage
• Mechanism: Process used to analyze or abstract requirement errors (select one): 1. Ad hoc; 2. Evidence-based; or 3. Theory-based (as defined above)

Error-fault-defect taxonomies:
• Focus: The focus of the taxonomy (i.e., error, fault, or failure)
• Error Focus: Yes or no; if "Yes", what was the process used to classify errors into a taxonomy?
• Requirement phase: Yes or no (whether it was applied in the requirements phase)
• Benefits and Limitations: Benefits and/or limitations of the taxonomy
• Evidence: The empirical evidence regarding the benefits of an error/fault/defect taxonomy for software quality

Software inspections:
• Focus: The focus of the inspection method (i.e., error, fault, or failure)
• Error Focus: Yes or no; if "Yes", how did it focus reviewers' attention on detecting errors during the inspection?
• Requirement phase: Yes or no (did it inspect requirements documents?)
• Evidence: The empirical evidence regarding the benefits/limitations of the error-based inspection method

Human errors:
• Human errors and classifications: Description of errors made by humans and classes of their fallibilities during planning, decision making, and problem solving
• Evidence: Empirical evidence regarding errors made by humans in different situations (e.g., aircraft control) that are related to requirement errors
5.2. Structuring errors within specific requirements engineering tasks
Requirements engineering consists of five phases.
• Elicitation (E): Discovering and understanding the user's needs and
constraints for the system.
• Analysis (A): Refining the user's needs and constraints.
• Specification (S): Documenting the user's needs and constraints
clearly and precisely.
• Verification (V): Ensuring that the system requirements are com-
plete, correct, consistent, and clear.
• Management (M): Scheduling, coordinating, and documenting the
requirements engineering activities.
Each RE phase has different activities that provide opportunities for
potentially different human errors. Therefore, we developed a second
taxonomic organization of the errors by indicating the relevant RE
phases in which each error occurs.
While Slips, Lapses, and Mistakes can occur during each require-
ment phase, the specific error classes will differ. Two authors each
mapped the error classes onto the requirements phases. As before, the
two authors discussed and resolved any discrepancies. Human error
expert Bradshaw reviewed and approved the final mapping shown in
Table 10. Note that many of the error classes appear across multiple
requirements phases. For example, a clerical error can occur:
• During elicitation if the requirement engineer is careless while taking
notes during the interview, or
• During specification if the requirement engineer is careless while
transcribing the elicited requirements into specifications.
There are several reasons why we believe this two-dimensional
structure is superior to the one-dimensional structure introduced in
Table 6. First, the two-dimensional structure illustrates that the same
error class may appear in different phases. Second, we hypothesize that
as researchers identify new errors, those errors will populate additional
cells in the table. The paucity of existing error reports, coupled with the
prevalence of faults and failures in software, suggests that many errors
are occurring that have not been formally recognized. The two-di-
mensional table provides a framework for identifying those errors.
An analogy may help explain this concept. As Mendeleev was
working on the Periodic Table of Elements, scientists had only dis-
covered about 60 elements. His table had ‘holes’ where elements of a
particular atomic weight and valence should exist. Indeed, the absence
of these elements was considered problematic for the Periodic Table.
Over the next 15 years, chemists discovered three of the missing ele-
ments. Those elements had the properties predicted by the Periodic
Table [25]. Accordingly, we postulate that investigators can employ
this table to identify errors, across all three classes and all requirements
engineering activities, that the review did not surface, and thereby ‘fill
the holes’ in our table. For example, we believe that clerical
errors can be observed, not only in elicitation and specification, but also
in the other RE phases (e.g., recording incorrect values from notes
during the analysis phase).
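Continuing the analogy, the unfilled cells can be enumerated mechanically (an illustrative sketch; only the clerical-error cells stated explicitly in the text are filled in, and the remaining cells are left to be read from Table 10):

```python
PHASES = ["E", "A", "S", "V", "M"]  # elicitation, analysis, specification,
                                    # verification, management

# (error class, phase) cells with a reported error; shown only for
# clerical errors, whose E and S placement the text states explicitly.
OBSERVED = {
    ("Clerical errors", "E"),
    ("Clerical errors", "S"),
    # ... remaining cells as given in Table 10 ...
}

def holes(error_classes):
    """Unpopulated cells of the class-by-phase table: candidate places to
    look for human errors that have not yet been formally recognized."""
    return [(cls, phase) for cls in error_classes for phase in PHASES
            if (cls, phase) not in OBSERVED]

print(holes(["Clerical errors"]))
# [('Clerical errors', 'A'), ('Clerical errors', 'V'), ('Clerical errors', 'M')]
```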
5.3. Usefulness of the HET
Since development of the HET, we have empirically validated its
usefulness for (1) error detection, (2) fault detection, (3) error pre-
vention, and (4) training of software engineers on retrospective human
error analysis.
From a fault detection perspective, a deeper understanding of the
human errors that occurred during requirements engineering can lead
developers to detect faults that might otherwise be overlooked. We
conducted evaluation studies to determine whether use of the HET im-
proves fault detection during a requirements inspection. In these
Table 5. Human errors identified in the literature.

1. Problem representation error [12]
2. RE people do not understand the problem [18]
3. Assumptions in grey area [15]
4. Wrong assumptions about stakeholder opinions [20]
5. Lack of cohesion [20]
6. Loss of information from stakeholders [20]
7. Low understanding of each other's roles (Bjarnason et al. [5])
8. Not having a clear demarcation between client and users [16]
9. Mistaken belief that it is impossible to specify non-functional requirements in a verifiable form [8]
10. Accidentally overlooking requirements [8]
11. Inadequate Requirements Process [8]
12. Mistaken assumptions about the problem space (SLR-2009)
13. Environment errors (SLR-2009)
14. Information Management errors (SLR-2009)
15. Lack of awareness of sources of requirements (SLR-2009)
16. Application errors (SLR-2009)
17. Developer (requirements developer) did not understand some aspect of the product or process (SLR-2009)
18. User needs not well-understood or interpreted by different stakeholders (SLR-2009)
19. Lack of understanding of the system (SLR-2009)
20. Lack of system knowledge (SLR-2009)
21. Not understanding some parts of the problem domain (SLR-2009)
22. Misunderstandings caused by working simultaneously with several different software systems and domains (SLR-2009)
23. Misunderstanding of some aspect of the overall functionality of the system (SLR-2009)
24. Problem-Solution errors (SLR-2009)
25. Misunderstanding of problem solution processes (SLR-2009)
26. Semantic errors (SLR-2009)
27. Syntax errors (SLR-2009)
28. Clerical errors (SLR-2009)
29. Carelessness while documenting requirements (SLR-2009)
Table 6. Human error taxonomy (HET). Numbers refer to the human errors listed in Table 5.

Slips:
• Clerical Errors: 30, 31
• Term Substitution: 5

Lapses:
• Loss of information from stakeholders: 6
• Accidentally overlooking requirements: 11
• Multiple terms for the same concept errors: 5

Mistakes:
• Application Errors (knowledge-based plan is incomplete): 1, 2, 18–25
• Solution Choice Errors: 26, 27
• Syntax Errors: 28, 29
• Wrong Assumptions (knowledge-based plan is wrong): 3, 4, 14
• Environment Errors (knowledge-based plan is incomplete): 15
• Information Management Errors (knowledge-based plan is incomplete): 16
• Inappropriate communication based on incomplete/faulty understanding of roles (knowledge-based plan is wrong): 8
• Not having a clear distinction between clients and users: 9
• Mistaken belief that it is impossible to specify non-functional requirements (knowledge-based plan is incomplete): 10
• Inadequate requirements process errors: 13
• Lack of awareness of requirements sources (knowledge-based plan is incomplete): 17
studies, we incorporated the HET into a formal Error Abstraction and
Inspection (EAI) technique. The EAI technique augments the traditional
fault checklist (FC) inspection process by adding a human error dis-
covery step and an error-informed re-inspection step. In the human
error discovery step, the HET provides inspectors with a framework of
common requirements errors to help them abstract human errors from
the faults found during the FC inspection. In the error-informed re-in-
spection step, inspectors use the abstracted human error information to
re-inspect the requirements for additional faults. The results of these
studies showed that the use of human error information allowed participants to
identify a significant number of faults that they missed when using a
fault-checklist inspection approach [2,3,10].
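In outline, the EAI procedure can be sketched as follows (a schematic of the process described above, not an executable inspection tool; the three callables stand in for human activities):

```python
def eai_inspection(document, fc_inspect, abstract_error, reinspect):
    """Error Abstraction and Inspection (EAI), schematically.

    `fc_inspect` is a fault-checklist inspection returning a set of faults,
    `abstract_error` maps a fault to a human error using the HET as a
    guide, and `reinspect` is the error-informed second pass returning
    newly found faults.
    """
    faults = fc_inspect(document)                 # pass 1: find faults
    errors = {abstract_error(f) for f in faults}  # abstract faults to errors
    new_faults = reinspect(document, errors)      # pass 2: error-guided
    return faults | new_faults, errors
```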
We also anticipate that the HET can help requirements engineers
better understand the most common human errors. This under-
standing is likely to help them avoid these human errors and related
faults. Our empirical study on the effect of human error knowledge on
error and fault prevention confirmed that a better understanding of
human errors led to fewer errors and faults during requirements en-
gineering [11].
We have also used the HET to train inspectors to retrospectively
analyze requirements faults to determine the underlying human error.
This process is similar to that of incident/accident investigation done in
the aviation and medical domains. We developed an instrument to train
inspectors to use the Human Error Abstraction Assist (HEAA) decision-
tree framework to investigate requirements faults (available at
http://vaibhavanu.com/NDSU-CS-TP-2016-001.html) [1].
5.4. Citation analysis of the fragmentation in human error research
Our observation that the 38 requirements engineering studies ap-
peared in 36 publication venues triggered an exploratory citation
analysis to understand the coherency, or lack thereof, of human error
research in requirements engineering. In this analysis, we used only the
papers representing the 18 studies that contain true human errors. We
extracted the reference lists from these papers to identify cases where
one paper cited another. We also identified citations to any psychology
papers on human error. Because our search was thorough, we have high
confidence that this set of 18 papers includes all of the papers in the
software engineering literature that describe requirements engineering
human errors.
In the results shown in Fig. 2, an arrow from one paper to another
indicates that the first paper cited the second paper. The results show
that only seven papers cited human error papers (either in requirements
engineering or in psychology), implying that researchers may be una-
ware of related research. Of those papers, six made eight citations to
previous work from the requirements engineering literature and three
included six citations to the psychological literature on errors. Four
papers were cited by others but did not themselves cite any related
papers. Seven papers were ‘island papers,’ citing none of the previous
research and not receiving any citations from other papers. Two papers
[12,20] were responsible for the majority of the identified citations.
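The island-paper count can be reproduced from a citation map with a few lines (an illustrative sketch over a hypothetical adjacency map, not our actual dataset):

```python
def island_papers(cites: dict) -> set:
    """Papers that neither cite nor are cited by any other paper in the set.

    `cites` maps each paper ID to the set of paper IDs it cites
    (restricted to the paper set under study).
    """
    cited = set().union(*cites.values()) if cites else set()
    return {p for p, refs in cites.items() if not refs and p not in cited}

# Hypothetical example: P3 cites nothing and is cited by no one -> island.
print(island_papers({"P1": {"P2"}, "P2": set(), "P3": set()}))  # {'P3'}
```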
Overall, Fig. 2 illustrates that human error research in requirements
engineering is in a formative state. This lack of citations suggests that
many investigators appear to be unaware of related work and are not
building upon others’ contributions. Furthermore, researchers are not
building on the human error research published in the psychology lit-
erature. It is likely that the wide distribution of papers across different
research venues contributes to this problem. Researchers do not have a
small number of venues from which to draw sources. Finally, a subtle
but encouraging trend emerges among more recent papers: those
published from 2012 onwards have more
citations to earlier work. This trend suggests a growing awareness of
prior work in the area.
5.5. Threats to validity
Threats addressed: To gather evidence, we searched multiple elec-
tronic databases covering different software engineering and
Table 7. Slip errors.

Clerical errors
Description: Result from carelessness while performing mechanical transcriptions from one format or medium to another. Requirements examples include carelessness while documenting specifications from elicited user needs.
Example of error: In notes from the meeting with the client, the requirement author recorded a different set of parameters for jumbo loans and regular loans. When creating the requirements document, the author incorrectly copied the requirements for regular loans into the description of jumbo loans.
Example of fault: The requirements for the jumbo loans incorrectly specify the same behavior as for regular loans.

Term substitution errors
Description: After introducing a term correctly, the requirement author substitutes a different term that refers to a different concept.
Example of error: The requirement author brings to mind an inappropriate term to refer to a previously introduced concept.
Example of fault: The requirement author records the ambiguous term "organization" instead of "lending institution".
Table 8. Lapse errors.

Loss of information from stakeholders errors
Description: Result from a requirement author forgetting, discarding, or failing to store information or documents provided by stakeholders, e.g. some important user need.
Example of error: A loan analyst informs the requirement author about the format for loan reports (file, screen, or printout), who forgets to note it down.
Example of fault: Information about the report formats is omitted from the requirement specification.

Accidentally overlooking requirement errors
Description: Occur when the stakeholders who are the source of requirements assume that some requirements are obvious and fail to verbalize them.
Example of error: Because stakeholders assume that abnormal termination and system recovery is a commonplace occurrence and will be handled by the requirement analysts or the system design team, they do not provide system recovery requirements.
Example of fault: The requirement document does not describe the process of system recovery from abnormal termination.

Multiple terms for the same concept errors
Description: Occur when requirement authors fail to realize they have already defined a term for a concept and so introduce a new term at a later time.
Example of error: The same concept is referred to by different terms throughout the document.
Example of fault: Use of the terms "marked for inclusion" and "identified for inclusion" to convey the same information.
Table 9. Mistake errors.

Application errors
Description: Arise from a misunderstanding of the application or problem domain, or of some aspect of overall system functionality.
Example of error: The requirements author lacks domain knowledge about loans, investing, and borrowing. As a result, she incorrectly believes that the stakeholders have provided all information necessary to decide when to remove loans that are in a default status from the repository.
Example of fault: The requirements specification omits the requirement to retain information about borrowers who are in default status (even after the corresponding loans are deleted from the system).

Environment errors
Description: Result from lack of knowledge about the available infrastructure (e.g., tools, templates) that supports the elicitation, understanding, or documentation of software requirements.
Example of error: The requirement authors failed to employ a standard template for documenting requirements (for example, the IEEE standard template for SRS) because they were unaware of the presence of such a template.
Example of fault: Without the support of the template, requirements about system scope and performance were omitted.

Solution Choice errors
Description: Occur in the process of finding a solution for a stated and well-understood problem. If RE analysts do not understand the correct use of problem-solving methods and techniques, they might analyze the problem incorrectly and choose the wrong solution.
Example of error: Lack of knowledge of the requirements engineering process and terminology on the part of the analysts; the analyst cannot differentiate between performance and functional requirements.
Example of fault: A particular requirement listed under performance requirements should be a functional requirement.

Syntax errors
Description: Occur when a requirement author misunderstands the grammatical rules of natural language or the rules, symbols, or standards of a formal specification language like UML.
Example of error: The requirements engineer misunderstood the use of navigability arrows to illustrate that one use case extends another.
Example of fault: An association link between two classes on a UML diagram lacks a navigability arrow to indicate the directionality of the association, resulting in a diagram that is ambiguous and can be misunderstood.

Information Management errors
Description: Result from a lack of knowledge about standard requirements engineering or documentation practices and procedures within the organization.
Example of error: The organization has a policy that requirement specifications include error-handling information. Unfamiliar with this policy, the requirements engineer fails to specify error-handling information.
Example of fault: The specification does not indicate that an error message should be displayed on errors, rather than just returning to a previous screen with no indication or notification of the problem.

Wrong Assumption errors
Description: Occur when the requirements author has a mistaken assumption about system features or stakeholder opinions.
Example of error: The requirements author assumes that error handling is a task common to all software projects and will be handled by programmers. Therefore, s/he does not gather that information from stakeholders.
Example of fault: Information about what happens when a lender provides invalid data has been omitted.

Mistaken belief that it is impossible to specify non-functional requirements
Description: The requirements engineer(s) may believe that non-functional requirements cannot be captured and therefore omit this process from their elicitation and development plans.
Example of error: The requirements engineering team failed to consider non-functional requirements.
Example of fault: Omission of performance requirements, security requirements, and other non-functional requirements.

Not having a clear distinction between client and users
Description: If RE practitioners fail to distinguish between clients and end users, or do not realize that the clients are distinct from end users, they may fail to gather and analyze the end users' requirements.
Example of error: The requirement engineer failed to gather information from the actual end user of the LA system, the Loan Analyst.
Example of fault: No functional requirement to edit loan information has been specified, even though the 'Purpose' specifies loans can be edited.

Lack of awareness of requirement sources
Description: The requirements-gathering person is not aware of all stakeholders whom he/she should contact in order to gather the complete set of user needs (including end users, customers, clients, and decision-makers).
Example of error: The requirement engineer was not aware of all end users and clients and did not gather the needs of a bank lender (one of the end users of the LA system). This end user wanted the LA system to handle both fixed rate loans and adjustable rate loans.
Example of fault: Omitted functionality, as the requirements only consider fixed rate loans.

Inappropriate communication based on incomplete or faulty understanding of roles
Description: Without proper understanding of developer roles, communication gaps may arise, either by failing to communicate at all or by communicating ineffectively. The management structure of project team resources is lacking.
Example of error: Team members did not know who was supposed to elicit the requirements from a bank lender, which affected the participation of an important stakeholder during the requirements process.
Example of fault: Omitted functionality, as the requirements of a bank lender (i.e., that the LA application handle both fixed rate loans and adjustable rate loans) were not recorded in the specification.

Inadequate Requirements Process errors
Description: Occur when the requirement authors do not fully understand all of the RE steps necessary to ensure the software is complete and neglect to incorporate one or more essential steps into the plan.
Example of error: The requirements engineering plan did not incorporate requirement traceability measures to link requirements to user needs.
Example of fault: An extraneous requirement that allows loan analysts to change borrower information is included, which could result in unwanted functionality and unnecessary work for the developers.
psychology journals and conferences. To reduce researcher bias, two
researchers independently performed each step during the review ex-
ecution before consolidating the results. This process also ensured that
the researchers extracted consistent information from each primary
study. We performed both the identification of requirements en-
gineering human errors and the selection of a suitable human error
classification framework under the supervision of human error expert
Bradshaw. This supervision ensured that the HET is strongly grounded
in Cognitive Psychology research.
Threats unaddressed: We conducted this review based on the as-
sumption that SLR-2009 [28] included all relevant studies prior to
2006. We are aware that such an assumption may have resulted in
missing some studies and consequently missing some requirements
phase human errors.
6. Conclusion and future work
A careful application of human error research to requirements en-
gineering resulted in the development of a theoretically sound tax-
onomy of requirements engineering human errors. This taxonomy is
based on a standard human error taxonomy from cognitive psychology.
The primary contributions of this work are:
• Illustration of the ability and value of applying human error re-
search to a software engineering problem through close interaction
between software engineering experts and cognitive psychology
experts;
• Development of a human error taxonomy to organize the common
human errors made during requirements engineering into a sys-
tematic framework; and
• An analysis of the literature describing human errors to identify the
lack of cohesiveness in requirements engineering research as it re-
lates to the use of human error information.
The use of human errors has the potential to provide a more ef-
fective solution to the software quality problem than a focus only on
faults. A taxonomy like the one developed in this paper will help de-
velopers identify the most common types of errors so they can either
implement policies to prevent those errors (e.g. training) or focus the
review process on identification and removal of the faults caused by
those errors.
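To make this use concrete, here is a minimal sketch (in Python) of how a team might tally inspection results by HET error class to decide where to focus prevention or re-inspection. The error-class names come from Table 10, but the function, the fault-report data, and all variable names are hypothetical illustrations, not part of the review itself.

    from collections import Counter

    def tally_by_error_class(fault_reports):
        """Count faults per HET error class, where fault_reports is a list
        of (fault_id, error_class) pairs produced by error abstraction."""
        return Counter(error_class for _, error_class in fault_reports)

    # Hypothetical abstraction results from one requirements inspection.
    reports = [
        (1, "Wrong assumption errors"),
        (2, "Clerical errors"),
        (3, "Wrong assumption errors"),
        (4, "Lack of awareness of requirement sources errors"),
    ]

    # The most frequent classes are candidates for targeted training or
    # for a focused re-inspection of the requirements document.
    for error_class, count in tally_by_error_class(reports).most_common():
        print(f"{error_class}: {count} fault(s)")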
Although the primary goal of this review was to propose a taxonomy of
requirements engineering human errors, it may also help researchers
who want to create human error taxonomies for
other phases of the software lifecycle. While we have empirically va-
lidated the usefulness of the HET for detecting and preventing re-
quirement errors, we expect to gather additional evidence and ob-
servations about requirement errors (see the missing fields in Table 10)
through further empirical evaluations.
Table 10
Mapping human errors to requirements engineering activities (RE phases: E, A, S, V, M).

Slips
• Clerical errors ✓ ✓
• Term substitution errors ✓

Lapses
• Loss of information from stakeholders errors ✓
• Accidentally overlooking requirements errors ✓
• Multiple terms for the same concept errors ✓

Mistakes
• Application errors (knowledge-based plan is incomplete) ✓ ✓ ✓
• Environment errors (knowledge-based plan is incomplete) ✓ ✓ ✓ ✓
• Information management errors (knowledge-based plan is incomplete) ✓
• Wrong assumption errors (knowledge-based plan is wrong) ✓ ✓
• Inappropriate communication based on incomplete/faulty understanding of roles errors (knowledge-based plan is wrong) ✓ ✓ ✓
• Mistaken belief that it is impossible to specify non-functional requirements errors (knowledge-based plan is incomplete) ✓ ✓ ✓
• Not having a clear demarcation between client and users errors ✓
• Lack of awareness of requirement sources errors (knowledge-based plan is incomplete) ✓
• Solution choice errors ✓
• Inadequate requirements process errors ✓
• Syntax errors ✓
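The category placement above follows Reason's definitions: a failure in planning is a Mistake, while a failure in execution is a Slip when it stems from inattention and a Lapse when it stems from a memory failure. The following minimal sketch (in Python; the enum and function names are ours, introduced only for illustration) encodes that decision rule:

    from enum import Enum

    class ErrorType(Enum):
        SLIP = "Slip"        # execution failure caused by inattention
        LAPSE = "Lapse"      # execution failure caused by a memory failure
        MISTAKE = "Mistake"  # the plan itself was inadequate for the goal

    def classify(planning_failure: bool, memory_related: bool = False) -> ErrorType:
        """Apply Reason's decision rule used to organize the HET categories."""
        if planning_failure:
            return ErrorType.MISTAKE
        return ErrorType.LAPSE if memory_related else ErrorType.SLIP

    # Example: a requirement overlooked because it was forgotten is a Lapse.
    print(classify(planning_failure=False, memory_related=True))  # ErrorType.LAPSE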
Fig. 2. Citation analysis.
Acknowledgements
This research was supported by National Science Foundation awards
1423279 and 1421006. The authors would like to thank the participants
who attended the Workshop on Applications of Human Error Research to
Improve Software Engineering (WAHESE 2015).
Appendix A
Table 11 lists the search strings, along with the search focus of each, that we executed on the IEEExplore, INSPEC, ACM Digital Library, SCIRUS
(Elsevier), Google Scholar, PsychINFO (EBSCO), and Science Citation Index databases.
Appendix B
Table 12 lists the distribution of the primary studies across different journal and conference avenues.
Table 11
Search strings.

String 1 (Software quality improvement approach):
((software OR development OR application OR product OR project) AND (quality OR condition OR character OR property OR attribute OR aspect) AND (improvement OR enhancement OR advancement OR upgrading OR ameliorate OR betterment) AND (approach OR process OR system OR technique OR methodology OR procedure OR mechanism OR plan OR pattern))

String 2 (Software inspection methods):
((software OR development OR application OR product OR project) AND (inspection OR assessment OR evaluation OR examination OR review OR measurement) AND (approach OR process OR system OR technique OR methodology OR procedure OR mechanism OR plan OR pattern))

String 3 (Error abstraction OR root causes):
((error OR mistake OR problem OR reason OR fault OR defect OR imperfection OR flaw OR lapse OR slip OR err) AND (abstraction OR root cause OR cause))

String 4 (Requirement stage errors):
((requirement OR specification) AND (phase OR stage OR situation OR division OR period OR episode OR part OR state OR facet) AND (error OR mistake OR problem OR reason OR fault OR defect OR imperfection OR flaw OR lapse OR slip OR err))

String 5 (Software error/fault/defect taxonomy):
((software OR development OR application OR product OR project) AND (error OR mistake OR problem OR reason OR fault OR defect OR imperfection OR flaw OR lapse OR slip OR err) AND (taxonomy OR classification OR categorization OR grouping OR organization OR terminology OR systematization))

String 6 (Human error classification):
((human OR cognitive OR individual OR psychological) AND (error OR mistake OR problem OR reason OR fault OR defect OR imperfection OR flaw OR lapse OR slip OR err) AND (taxonomy OR classification OR categorization OR grouping OR organization OR terminology OR systematization))
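Each of these strings has the same shape: a conjunction (AND) of synonym groups, where each group is a disjunction (OR) of terms for one concept. The sketch below (in Python; the function and variable names are ours, shown only to make that structure explicit) reassembles string 6 from its groups:

    def build_search_string(*groups):
        """Join synonym groups into an AND-of-ORs boolean query, mirroring
        the structure of the Table 11 search strings."""
        return "(" + " AND ".join("(" + " OR ".join(g) + ")" for g in groups) + ")"

    # Synonym groups for string 6, "Human error classification".
    human = ["human", "cognitive", "individual", "psychological"]
    error = ["error", "mistake", "problem", "reason", "fault", "defect",
             "imperfection", "flaw", "lapse", "slip", "err"]
    taxonomy = ["taxonomy", "classification", "categorization", "grouping",
                "organization", "terminology", "systematization"]

    print(build_search_string(human, error, taxonomy))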
Table 12
Distribution of primary studies across venues.

Source - Study count
International Journal of Computer Applications - 1
IEEE International Conference on Engineering of Complex Computer Systems (ICECCS) - 1
Information Sciences Journal - 1
IEEE Frontiers in Education Conference - 1
International Conference on Software Engineering Advances - 1
ACM-IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM) - 1
Workshop em Engenharia de Requisitos (WER) - 1
IEEE/NASA Software Engineering Workshop (SEW) - 1
Minds and Machines Journal - 1
Malaysian Conference in Software Engineering (MySEC) - 1
Information Systems Journal - 2
Information and Software Technology Journal - 1
International Workshop on Rapid Continuous Software Engineering (RCoSE) - 1
International Conference on Recent Advances in Computing and Software Systems (RACSS) - 1
ISSAT International Conference on Reliability and Quality in Design - 1
IEEE International Requirements Engineering Conference (RE) - 1
IEEE Annual Computer Software and Applications Conference Workshops - 1
IEEE International Conference on Cognitive Informatics - 1
IFIP/IEEE International Symposium on Integrated Network Management (IM) - 1
IEEE International Symposium on Software Reliability Engineering (ISSRE) - 1
International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS) - 1
International Conference on Software Engineering and Data Mining (SEDM) - 1
Iberian Conference on Information Systems and Technologies (CISTI) - 1
International Conference on Software Engineering and Knowledge Engineering (SEKE) - 1
Journal of Object Technology - 1
Journal of Systems and Software - 2
The EDP Audit, Control, and Security Newsletter (Journal) - 1
International Management Review Journal - 1
American Journal of Medical Quality - 1
IEEE International Conference on Automation and Logistics (ICAL) - 1
Safety Science Journal - 1
IFIP Advances in Information and Communication Technology (Journal) - 1
International Conference on Computer Safety, Reliability, and Security (SAFECOMP) - 1
International Conference on Industrial Engineering and Engineering Management - 1
Studies in Health Technology and Informatics (Book Series) - 1
Professional Safety Journal - 1
Appendix C
This appendix lists the papers describing the 18 studies that provided input to the HET. Some of these papers are also cited in the main paper.
Those papers are indicated with an asterisk (*) and included in the reference list for the main paper.
• Basili VR, Perricone BT (1984) Software Errors and Complexity: An Empirical Investigation. Commun ACM 27:42–52. doi:10.1145/69605.2085
• Basili VR, Rombach HD (1987) Tailoring the Software Process to Project Goals and Environments. In: Ninth Int. Conf. Softw. Eng. IEEE Press,
California, USA, pp 345–357
• Bhandari I, Halliday MJ, Chaar J, Chillarege R, Jones K, Atkinson JS, Lepori-Costello C, Jasper PY, Tarver ED, Lewis CC, Yonezawa M (1994) In-
process improvement through defect data interpretation. IBM Syst J 33:182–214. doi:10.1147/sj.331.0182
• *Bjarnason E, Wnuk K, Regnell B [5] Requirements are slipping through the gaps - A case study on causes & effects of communication gaps in
large-scale software development. In: Proc. 2011 IEEE 19th Int. Requir. Eng. Conf. RE 2011. pp 37–46
• Coughlan J, Macredie RD (2002) Effective Communication in Requirements Elicitation: A Comparison of Methodologies. Requir Eng 7:47–60.
doi:10.1007/s007660200004
• *Firesmith D [8] Common requirements problems, their negative consequences, and the industry best practices to help solve them. J Object
Technol 6:17–33. doi:10.5381/jot.2007.6.1.c2
• Galliers J, Minocha S, Sutcliffe AG (1998) A causal model of human error for safety critical user interface design. ACM Trans Comput Interact
5:756–769.
• *Huang F, Liu B, Huang B [12] A Taxonomy System to Identify Human Error Causes for Software Defects. Proc. 18th Int. Conf. Reliab. Qual. Des.
ISSAT 2012
• *Kumaresh S, Baskaran R (2010) Defect Analysis and Prevention for Software Process Quality Improvement. Int J Comput Appl 8:42–47. doi:10.5120/1218-1759
• *Kushwaha DS, Misra AK [16] Cognitive software development process and associated metrics - A framework. In: Proc. 5th IEEE Int. Conf. Cogn.
Informatics, ICCI 2006. pp 255–260
• *Lehtinen TOA, Mantyla MV, Vanhanen J, Itkonen J, Lassenius C [18] Perceived causes of software project failures - An analysis of their relationships. Inf Softw Technol 56:623–643. doi:10.1016/j.infsof.2014.01.015
• *Leszak M, Perry DE, Stoll D [19] A case study in root cause defect analysis. Proc 22nd Int Conf Softw Eng - ICSE '00 428–437. doi:10.1145/337180.337232
• *Lopes M, Forster C [20] Application of human error theories for the process improvement of Requirements Engineering. Inf Sci (Ny)
250:142–161.
• Lutz RR (1993) Analyzing software requirements errors in safety-critical, embedded systems. Proc IEEE Int Symp Requir Eng 126–133. doi:10.
1109/ISRE.1993.324825
• Mays RG, Jones CL, Holloway GJ, Studinski DP [21] Experiences with Defect Prevention. IBM Syst J 29:4–32. doi:10.1147/sj.291.0004
• Nakashima T, Oyama M, Hisada H, Ishii N (1999) Analysis of software bug causes and its prevention. Inf Softw Technol 41:1059–1068. doi:10.1016/S0950-5849(99)00049-X
• De Oliveira KM, Zlot F, Rocha AR, Travassos GH, Galotta C, De Menezes CS (2004) Domain-oriented software development environment. J Syst Softw 72:145–161. doi:10.1016/S0164-1212(03)00233-4
• Zhang X, Pham H (2000) An analysis of factors affecting software reliability. J Syst Softw 50:43–56. doi:10.1016/S0164-1212(99)00075-8
References
[1] V. Anu, G.S. Walia, G. Bradshaw, W. Hu, J. Carver, Usefulness of a human error
identification tool for requirements inspection: an experience report, Proceedings
23rd International Working Conference on Requirements Engineering: Foundation
for Software Quality. REFSQ 2017, Essen, Germany, 2017.
[2] V. Anu, G.S. Walia, W. Hu, J.C. Carver, G. Bradshaw, Using a cognitive psychology
perspective on errors to improve requirements quality: an empirical investigation,
Proceedings 27th IEEE Int. Symp. on Soft. Rel. Eng. ISSRE 2016, Ottawa, CA, 2016,
pp. 65–76.
[3] V. Anu, G.S. Walia, W. Hu, J.C. Carver, G. Bradshaw, Effectiveness of human error
taxonomy during requirements inspection: an empirical investigation, Proceedings
28th Int. Conf. Softw. Eng. Knowl. Eng. SEKE 2016, San Francisco, CA, USA, 2016,
pp. 531–536.
[5] E. Bjarnason, K. Wnuk, B. Regnell, Requirements are slipping through the gaps - A
case study on causes & effects of communication gaps in large-scale software de-
velopment, Proceedings 2011 IEEE 19th Int. Requir. Eng. Conf. RE 2011, 2011, pp.
37–46.
[6] R. Chillarege, I.S. Bhandari, J.K. Chaar, M.J. Halliday, B.K. Ray, D.S. Moebus,
Orthogonal defect classification - a concept for in-process measurements, IEEE
Trans. Softw. Eng. 18 (1992) 943–956, http://dx.doi.org/10.1109/32.177364.
[7] T. Diller, G. Helmrich, S. Dunning, S. Cox, A. Buchanan, S. Shappell, The Human
Factors Analysis Classification System (HFACS) applied to health care, Am. J. Med.
Qual. 29 (2014) 181–190, http://dx.doi.org/10.1177/1062860613491623.
[8] D. Firesmith, Common requirements problems, their negative consequences, and
the industry best practices to help solve them, J. Object. Technol. 6 (2007) 17–33,
http://dx.doi.org/10.5381/jot.2007.6.1.c2.
[9] B.G. Glaser, A.L. Strauss, The discovery of grounded theory, Int. J. Qual. Methods 5
(1967) 1–10, http://dx.doi.org/10.2307/588533.
[10] W. Hu, J.C. Carver, V. Anu, G.S. Walia, G. Bradshaw, Detection of requirement
errors and faults via a human error taxonomy: a feasibility study, Proceedings 10th
ACM/IEEE Int. Symp. Empir. Softw. Eng. Meas. ESEM 2016, Ciudad Real, Spain,
2016.
[11] W. Hu, J.C. Carver, V. Anu, G.S. Walia, G. Bradshaw, Defect prevention in re-
quirements using human error information: an empirical study, 23rd International
Working Conference on Requirements Engineering: Foundation for Software
Quality, 2017.
[12] F. Huang, B. Liu, B. Huang, A taxonomy system to identify human error causes for
software defects, Proceedings 18th Int. Conf. Reliab. Qual. Des. ISSAT 2012, 2012.
[13] IEEE, Systems and software engineering – vocabulary, ISO/IEC/IEEE 24765:2010(E)
(2010) 1–418, http://dx.doi.org/10.1109/IEEESTD.2010.5733835.
[14] B. Kitchenham, Procedures for Performing Systematic Reviews, Keele University,
Keele, UK, 2004, p. 28, 10.1.1.122.3308.
[15] S. Kumaresh, R. Baskaran, Defect analysis and prevention for software process
quality improvement, Int. J. Comput. Appl. 8 (2010) 42–47, http://dx.doi.org/10.
5120/1218-1759.
[16] D.S. Kushwaha, A.K. Misra, Cognitive software development process and associated
metrics - A framework, Proceedings 5th IEEE Int. Conf. Cogn. Informatics, ICCI
2006, 2006, pp. 255–260.
[17] F. Lanubile, F. Shull, V.R. Basili, Experimenting with error abstraction in require-
ments documents, Proceedings 5th Int. Symp. Softw. metrics, 1998.
[18] T.O.A. Lehtinen, M.V. Mantyla, J. Vanhanen, J. Itkonen, C. Lassenius, Perceived
causes of software project failures - An analysis of their relationships, Inf. Softw.
Technol. 56 (2014) 623–643, http://dx.doi.org/10.1016/j.infsof.2014.01.015.
[19] M. Leszak, D.E. Perry, D. Stoll, A case study in root cause defect analysis,
Proceedings 22nd Int. Conf. Softw. Eng. ICSE 2000, Limerick, Ireland, 2000, pp.
428–437.
[20] M. Lopes, C. Forster, Application of human error theories for the process im-
provement of Requirements Engineering, Inf. Sci. (Ny) 250 (2013) 142–161.
[21] R. Mays, C. Jones, G. Holloway, D. Studinski, Experiences with defect prevention,
IBM Syst. J. 29 (1) (1990) 4–32.
[22] J. Reason, Human Error, Cambridge University Press, New York, NY, 1990.
[23] J. Reason, The Human Contribution: Unsafe Acts, Accidents and Heroic Recoveries,
Ashgate Publishing, Ltd., Surrey, UK, 2008.
[24] J. Reason, A. Manstead, S. Stradling, J. Baxter, K. Campbell, Errors and violations
on the roads: a real distinction? Ergonomics 33 (1990) 1315–1332, http://dx.doi.
org/10.1080/00140139008925335.
[25] E.R. Scerri, The Periodic Table: Its Story and Its Significance, Oxford University
Press, USA, 2006.
[26] D. Scherer, M.F.Q. Vieira, J.A.N. Neto, Human error categorization: an extension
to classical proposals applied to electrical systems operations, Hum.-Comp.
Inter. (2010) 234–245.
[27] G. Tassey, The Economic Impacts of Inadequate Infrastructure for Software Testing,
NIST, Gaithersburg, MD, 2002.
[28] G.S. Walia, J.C. Carver, A systematic literature review to identify and classify
software requirement errors, Inf. Softw. Technol. 51 (2009) 1087–1109, http://dx.
doi.org/10.1016/j.infsof.2009.01.004.
[29] G.S. Walia, J.C. Carver, Using error abstraction and classification to improve re-
quirement quality: conclusions from a family of four empirical studies, Empir.
Softw. Eng. 18 (2013) 625–658, http://dx.doi.org/10.1007/s10664-012-9202-3.
[30] D. Wiegmann, C. Detwiler, Human error and general aviation accidents: a
comprehensive, fine-grained analysis using HFACS, Security (2005) 1–5, http://dx.doi.org/10.1518/001872007X312469.
[31] J. Zhang, V.L. Patel, T.R. Johnson, E.H. Shortliffe, A cognitive taxonomy of medical
errors, J. Biomed. Inform. 37 (2004) 193–204, http://dx.doi.org/10.1016/j.jbi.
2004.04.004.

More Related Content

Similar to Anu2018

A NOVEL APPROACH TO ERROR DETECTION AND CORRECTION OF C PROGRAMS USING MACHIN...
A NOVEL APPROACH TO ERROR DETECTION AND CORRECTION OF C PROGRAMS USING MACHIN...A NOVEL APPROACH TO ERROR DETECTION AND CORRECTION OF C PROGRAMS USING MACHIN...
A NOVEL APPROACH TO ERROR DETECTION AND CORRECTION OF C PROGRAMS USING MACHIN...IJCI JOURNAL
 
Requirements elicitation frame work
Requirements elicitation frame workRequirements elicitation frame work
Requirements elicitation frame workijseajournal
 
A guide to deal with uncertainties in software project management
A guide to deal with uncertainties in software project managementA guide to deal with uncertainties in software project management
A guide to deal with uncertainties in software project managementijcsit
 
Practical Guidelines to Improve Defect Prediction Model – A Review
Practical Guidelines to Improve Defect Prediction Model – A ReviewPractical Guidelines to Improve Defect Prediction Model – A Review
Practical Guidelines to Improve Defect Prediction Model – A Reviewinventionjournals
 
Behavioural Modelling Outcomes prediction using Casual Factors
Behavioural Modelling Outcomes prediction using Casual  FactorsBehavioural Modelling Outcomes prediction using Casual  Factors
Behavioural Modelling Outcomes prediction using Casual FactorsIJMER
 
Need Nursing Expert to Complete HW.pdf
Need Nursing Expert to Complete HW.pdfNeed Nursing Expert to Complete HW.pdf
Need Nursing Expert to Complete HW.pdfbkbk37
 
Software Systems Requirements Engineering
Software Systems Requirements EngineeringSoftware Systems Requirements Engineering
Software Systems Requirements EngineeringKristen Wilson
 
Essay On Reliability Of Visualization Tools
Essay On Reliability Of Visualization ToolsEssay On Reliability Of Visualization Tools
Essay On Reliability Of Visualization ToolsNatasha Barnett
 
A systematic review of uncertainties in
A systematic review of uncertainties inA systematic review of uncertainties in
A systematic review of uncertainties inijseajournal
 
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...IOSR Journals
 
EVALUATION AND STUDY OF SOFTWARE DEGRADATION IN THE EVOLUTION OF SIX VERSIONS...
EVALUATION AND STUDY OF SOFTWARE DEGRADATION IN THE EVOLUTION OF SIX VERSIONS...EVALUATION AND STUDY OF SOFTWARE DEGRADATION IN THE EVOLUTION OF SIX VERSIONS...
EVALUATION AND STUDY OF SOFTWARE DEGRADATION IN THE EVOLUTION OF SIX VERSIONS...csandit
 
Software Defect Prediction Using Radial Basis and Probabilistic Neural Networks
Software Defect Prediction Using Radial Basis and Probabilistic Neural NetworksSoftware Defect Prediction Using Radial Basis and Probabilistic Neural Networks
Software Defect Prediction Using Radial Basis and Probabilistic Neural NetworksEditor IJCATR
 
please just write the bulk of the paper with in text citations and.docx
please just write the bulk of the paper with in text citations and.docxplease just write the bulk of the paper with in text citations and.docx
please just write the bulk of the paper with in text citations and.docxrandymartin91030
 
User Experience Evaluation for Automation Tools: An Industrial Experience
User Experience Evaluation for Automation Tools: An Industrial ExperienceUser Experience Evaluation for Automation Tools: An Industrial Experience
User Experience Evaluation for Automation Tools: An Industrial ExperienceIJCI JOURNAL
 
User-Interface Usability Evaluation
User-Interface Usability EvaluationUser-Interface Usability Evaluation
User-Interface Usability EvaluationCSCJournals
 
1. Emergence of Software EngineeringIn the software industry, we.docx
1. Emergence of Software EngineeringIn the software industry, we.docx1. Emergence of Software EngineeringIn the software industry, we.docx
1. Emergence of Software EngineeringIn the software industry, we.docxjackiewalcutt
 
Carol Harstad Research Proposal
Carol Harstad   Research ProposalCarol Harstad   Research Proposal
Carol Harstad Research ProposalCarol Harstad
 

Similar to Anu2018 (20)

A NOVEL APPROACH TO ERROR DETECTION AND CORRECTION OF C PROGRAMS USING MACHIN...
A NOVEL APPROACH TO ERROR DETECTION AND CORRECTION OF C PROGRAMS USING MACHIN...A NOVEL APPROACH TO ERROR DETECTION AND CORRECTION OF C PROGRAMS USING MACHIN...
A NOVEL APPROACH TO ERROR DETECTION AND CORRECTION OF C PROGRAMS USING MACHIN...
 
Requirements elicitation frame work
Requirements elicitation frame workRequirements elicitation frame work
Requirements elicitation frame work
 
A guide to deal with uncertainties in software project management
A guide to deal with uncertainties in software project managementA guide to deal with uncertainties in software project management
A guide to deal with uncertainties in software project management
 
Practical Guidelines to Improve Defect Prediction Model – A Review
Practical Guidelines to Improve Defect Prediction Model – A ReviewPractical Guidelines to Improve Defect Prediction Model – A Review
Practical Guidelines to Improve Defect Prediction Model – A Review
 
Behavioural Modelling Outcomes prediction using Casual Factors
Behavioural Modelling Outcomes prediction using Casual  FactorsBehavioural Modelling Outcomes prediction using Casual  Factors
Behavioural Modelling Outcomes prediction using Casual Factors
 
STUDY OF AGENT ASSISTED METHODOLOGIES FOR DEVELOPMENT OF A SYSTEM
STUDY OF AGENT ASSISTED METHODOLOGIES FOR DEVELOPMENT OF A SYSTEMSTUDY OF AGENT ASSISTED METHODOLOGIES FOR DEVELOPMENT OF A SYSTEM
STUDY OF AGENT ASSISTED METHODOLOGIES FOR DEVELOPMENT OF A SYSTEM
 
Analytical Survey on Bug Tracking System
Analytical Survey on Bug Tracking SystemAnalytical Survey on Bug Tracking System
Analytical Survey on Bug Tracking System
 
Need Nursing Expert to Complete HW.pdf
Need Nursing Expert to Complete HW.pdfNeed Nursing Expert to Complete HW.pdf
Need Nursing Expert to Complete HW.pdf
 
Software Systems Requirements Engineering
Software Systems Requirements EngineeringSoftware Systems Requirements Engineering
Software Systems Requirements Engineering
 
Essay On Reliability Of Visualization Tools
Essay On Reliability Of Visualization ToolsEssay On Reliability Of Visualization Tools
Essay On Reliability Of Visualization Tools
 
A systematic review of uncertainties in
A systematic review of uncertainties inA systematic review of uncertainties in
A systematic review of uncertainties in
 
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...
 
EVALUATION AND STUDY OF SOFTWARE DEGRADATION IN THE EVOLUTION OF SIX VERSIONS...
EVALUATION AND STUDY OF SOFTWARE DEGRADATION IN THE EVOLUTION OF SIX VERSIONS...EVALUATION AND STUDY OF SOFTWARE DEGRADATION IN THE EVOLUTION OF SIX VERSIONS...
EVALUATION AND STUDY OF SOFTWARE DEGRADATION IN THE EVOLUTION OF SIX VERSIONS...
 
IJET-V2I6P28
IJET-V2I6P28IJET-V2I6P28
IJET-V2I6P28
 
Software Defect Prediction Using Radial Basis and Probabilistic Neural Networks
Software Defect Prediction Using Radial Basis and Probabilistic Neural NetworksSoftware Defect Prediction Using Radial Basis and Probabilistic Neural Networks
Software Defect Prediction Using Radial Basis and Probabilistic Neural Networks
 
please just write the bulk of the paper with in text citations and.docx
please just write the bulk of the paper with in text citations and.docxplease just write the bulk of the paper with in text citations and.docx
please just write the bulk of the paper with in text citations and.docx
 
User Experience Evaluation for Automation Tools: An Industrial Experience
User Experience Evaluation for Automation Tools: An Industrial ExperienceUser Experience Evaluation for Automation Tools: An Industrial Experience
User Experience Evaluation for Automation Tools: An Industrial Experience
 
User-Interface Usability Evaluation
User-Interface Usability EvaluationUser-Interface Usability Evaluation
User-Interface Usability Evaluation
 
1. Emergence of Software EngineeringIn the software industry, we.docx
1. Emergence of Software EngineeringIn the software industry, we.docx1. Emergence of Software EngineeringIn the software industry, we.docx
1. Emergence of Software EngineeringIn the software industry, we.docx
 
Carol Harstad Research Proposal
Carol Harstad   Research ProposalCarol Harstad   Research Proposal
Carol Harstad Research Proposal
 

Recently uploaded

Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCRCall Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCRlizamodels9
 
Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?Patrick Diehl
 
Evidences of Evolution General Biology 2
Evidences of Evolution General Biology 2Evidences of Evolution General Biology 2
Evidences of Evolution General Biology 2John Carlo Rollon
 
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptxTHE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptxNandakishor Bhaurao Deshmukh
 
Forest laws, Indian forest laws, why they are important
Forest laws, Indian forest laws, why they are importantForest laws, Indian forest laws, why they are important
Forest laws, Indian forest laws, why they are importantadityabhardwaj282
 
Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.aasikanpl
 
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptx
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptxRESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptx
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptxFarihaAbdulRasheed
 
Harmful and Useful Microorganisms Presentation
Harmful and Useful Microorganisms PresentationHarmful and Useful Microorganisms Presentation
Harmful and Useful Microorganisms Presentationtahreemzahra82
 
Microphone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptxMicrophone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptxpriyankatabhane
 
Cytokinin, mechanism and its application.pptx
Cytokinin, mechanism and its application.pptxCytokinin, mechanism and its application.pptx
Cytokinin, mechanism and its application.pptxVarshiniMK
 
Transposable elements in prokaryotes.ppt
Transposable elements in prokaryotes.pptTransposable elements in prokaryotes.ppt
Transposable elements in prokaryotes.pptArshadWarsi13
 
insect anatomy and insect body wall and their physiology
insect anatomy and insect body wall and their  physiologyinsect anatomy and insect body wall and their  physiology
insect anatomy and insect body wall and their physiologyDrAnita Sharma
 
Spermiogenesis or Spermateleosis or metamorphosis of spermatid
Spermiogenesis or Spermateleosis or metamorphosis of spermatidSpermiogenesis or Spermateleosis or metamorphosis of spermatid
Spermiogenesis or Spermateleosis or metamorphosis of spermatidSarthak Sekhar Mondal
 
Scheme-of-Work-Science-Stage-4 cambridge science.docx
Scheme-of-Work-Science-Stage-4 cambridge science.docxScheme-of-Work-Science-Stage-4 cambridge science.docx
Scheme-of-Work-Science-Stage-4 cambridge science.docxyaramohamed343013
 
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxLIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxmalonesandreagweneth
 
Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.aasikanpl
 
Artificial Intelligence In Microbiology by Dr. Prince C P
Artificial Intelligence In Microbiology by Dr. Prince C PArtificial Intelligence In Microbiology by Dr. Prince C P
Artificial Intelligence In Microbiology by Dr. Prince C PPRINCE C P
 
Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024AyushiRastogi48
 
Module 4: Mendelian Genetics and Punnett Square
Module 4:  Mendelian Genetics and Punnett SquareModule 4:  Mendelian Genetics and Punnett Square
Module 4: Mendelian Genetics and Punnett SquareIsiahStephanRadaza
 

Recently uploaded (20)

Volatile Oils Pharmacognosy And Phytochemistry -I
Volatile Oils Pharmacognosy And Phytochemistry -IVolatile Oils Pharmacognosy And Phytochemistry -I
Volatile Oils Pharmacognosy And Phytochemistry -I
 
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCRCall Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
 
Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?
 
Evidences of Evolution General Biology 2
Evidences of Evolution General Biology 2Evidences of Evolution General Biology 2
Evidences of Evolution General Biology 2
 
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptxTHE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
 
Forest laws, Indian forest laws, why they are important
Forest laws, Indian forest laws, why they are importantForest laws, Indian forest laws, why they are important
Forest laws, Indian forest laws, why they are important
 
Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
 
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptx
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptxRESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptx
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptx
 
Harmful and Useful Microorganisms Presentation
Harmful and Useful Microorganisms PresentationHarmful and Useful Microorganisms Presentation
Harmful and Useful Microorganisms Presentation
 
Microphone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptxMicrophone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptx
 
Cytokinin, mechanism and its application.pptx
Cytokinin, mechanism and its application.pptxCytokinin, mechanism and its application.pptx
Cytokinin, mechanism and its application.pptx
 
Transposable elements in prokaryotes.ppt
Transposable elements in prokaryotes.pptTransposable elements in prokaryotes.ppt
Transposable elements in prokaryotes.ppt
 
insect anatomy and insect body wall and their physiology
insect anatomy and insect body wall and their  physiologyinsect anatomy and insect body wall and their  physiology
insect anatomy and insect body wall and their physiology
 
Spermiogenesis or Spermateleosis or metamorphosis of spermatid
Spermiogenesis or Spermateleosis or metamorphosis of spermatidSpermiogenesis or Spermateleosis or metamorphosis of spermatid
Spermiogenesis or Spermateleosis or metamorphosis of spermatid
 
Scheme-of-Work-Science-Stage-4 cambridge science.docx
Scheme-of-Work-Science-Stage-4 cambridge science.docxScheme-of-Work-Science-Stage-4 cambridge science.docx
Scheme-of-Work-Science-Stage-4 cambridge science.docx
 
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxLIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
 
Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
 
Artificial Intelligence In Microbiology by Dr. Prince C P
Artificial Intelligence In Microbiology by Dr. Prince C PArtificial Intelligence In Microbiology by Dr. Prince C P
Artificial Intelligence In Microbiology by Dr. Prince C P
 
Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024
 
Module 4: Mendelian Genetics and Punnett Square
Module 4:  Mendelian Genetics and Punnett SquareModule 4:  Mendelian Genetics and Punnett Square
Module 4: Mendelian Genetics and Punnett Square
 

Anu2018

  • 1. Contents lists available at ScienceDirect Information and Software Technology journal homepage: www.elsevier.com/locate/infsof Development of a human error taxonomy for software requirements: A systematic literature review Vaibhav Anua,1 , Wenhua Hub,1 , Jeffrey C Carverc , Gursimran S Waliaa,⁎ , Gary Bradshawd a Department of Computer Science, Montclair State University, Montclair, NJ, United States b Department of Software Engineering and Game Development, Kennesaw State University, Marietta, GA, United States c Department of Computer Science, The University of Alabama, Tuscaloosa, AL, United States d Department of Psychology, Starkville, MS, United States A R T I C L E I N F O Keywords: Systematic review Requirements Human errors Taxonomy A B S T R A C T Background: Human-centric software engineering activities, such as requirements engineering, are prone to error. These human errors manifest as faults. To improve software quality, developers need methods to prevent and detect faults and their sources. Aims: Human error research from the field of cognitive psychology focuses on understanding and categorizing the fallibilities of human cognition. In this paper, we applied concepts from human error research to the problem of software quality. Method: We performed a systematic literature review of the software engineering and psychology literature to identify and classify human errors that occur during requirements engineering. Results: We developed the Human Error Taxonomy (HET) by adding detailed error classes to Reason's well- known human error taxonomy of Slips, Lapses, and Mistakes. Conclusion: The process of identifying and classifying human error identification provides a structured way to understand and prevent the human errors (and resulting faults) that occur during human-centric software en- gineering activities like requirements engineering. Software engineering can benefit from closer collaboration with cognitive psychology researchers. 1. Introduction Software engineering, especially during the early phases, is a human-centric activity. Software engineers must gather customer needs, translate those needs into requirements, and validate the cor- rectness, completeness, and feasibility of those requirements. Because of the involvement of various people in this process, there is the po- tential for human errors to occur. Cognitive Psychology researchers have studied how people make errors when performing different types of tasks. This line of research is called human error research. In this paper, we apply the findings from human error research to analyze and classify the types of human errors people make during the requirements engineering process. Because the software engineering literature often has competing definitions for the same terms, we must clearly define our terminology. Based on IEEE Standard 24,765 [13] (ISO/IEC/IEEE 24,765:2010), we use the following definitions in this paper. • Error (also referred to as Human Error) – A mental error, i.e. the failing of human cognition in the process of problem solving, planning, or execution. • Fault [or Defect] – The manifestation of an error recorded in a software artifact. • Failure – The incorrect execution of a software system, e.g. resulting in a crash, unexpected operation, or incorrect output. The chain from human error to fault to failure is not unbroken and inevitable. 
Generally, developers either detect and correct faults without investigating the underlying errors or they use software testing to reveal system failures that they then repair. Nevertheless, because many system failures originate in a human error, researchers need to investigate how human errors can help in detecting and fixing software faults and failures. By one estimate, software failures (which often find their origination in human errors) cost $60 billion/year [27]. Practitioners in other domains, i.e. Aviation and Nuclear Power, https://doi.org/10.1016/j.infsof.2018.06.011 Received 13 October 2016; Received in revised form 21 May 2018; Accepted 21 June 2018 ⁎ Corresponding author. 1 Authors Anu and Hu are co-first authors and contributed equally to leading this paper. E-mail addresses: vaibhavanu.x@gmail.com (V. Anu), whu4@kennesaw.edu (W. Hu), carver@cs.ua.edu (J.C. Carver), gursimran.walia@ndsu.edu (G.S. Walia), glb2@psychology.msstate.edu (G. Bradshaw). Information and Software Technology xxx (xxxx) xxx–xxx 0950-5849/ © 2018 Published by Elsevier B.V. Please cite this article as: Anu, V., Information and Software Technology (2018), https://doi.org/10.1016/j.infsof.2018.06.011
  • 2. also faced similar problems in handling problems that result from human errors. An error that originates in the mind of a pilot or of a reactor operator can produce a fault that results in a plane crash or a reactor core meltdown. In both cases, human errors create problems measured not only in billions of dollars but also in lives lost. The work of human error specialists has dramatically improved the safety and efficiency of these domains. The improvement process entailed (1) analyzing incident reports to identify the human error(s) made; (2) finding common error patterns; and (3) organizing those errors into a hierarchical error taxonomy. By examining the most frequent and costly errors in the hierarchy, investigators developed and implemented mediation strategies to reduce the frequency and severity of the errors [7,26,30]. Our premise in this paper is that a similar approach can be beneficial in software engineering. Requirements engineering is largely human-centric. Requirement analysts must interact with customers or domain experts to identify, understand, and document potential system requirements. Numerous studies have shown that it is significantly cheaper to find and fix re- quirements problems during the requirements engineering activities than during later software development activities. Therefore, it is im- portant to focus attention on approaches that can help developers im- prove the quality of the requirements and find requirements problems early. Human error research can be particularly beneficial during re- quirements engineering. The high level of interaction and shared un- derstanding among multiple stakeholders required at this step of the software lifecycle can result in a number of problems. Applying human error research to categorize the types of errors that occur during re- quirements engineering will have two benefits. First, it will provide a framework to help requirements engineers understand the types of er- rors that can occur during requirement development. This awareness should lead requirement engineers to be more careful. Second, it will help reviewers ensure that requirements are of sufficient quality for development to proceed. Recognizing the potential benefit that a formalized error taxonomy could provide to the requirements engineering process, a subset of the current authors conducted a prior Systematic Literature Review to identify errors published in the literature from 1990–2006. Using a grounded theory approach [9], they organized those requirement errors into a Requirement Error Taxonomy to support the detection and pre- vention of the errors and resulting faults [28]. They performed that review without reference to contemporary psychological theories of how human errors occur. In this current review, we extend the previous review in two ways. First, rather than developing a requirement error taxonomy strictly ‘bottom-up’ with no guiding theory, we engage di- rectly with a human error researcher and use human error theory to ensure that we only include true human errors and organize those errors based on a standard human error classification system from cognitive psychology. Second, this review includes papers published since the first review. To guide this literature review, we focus on two research questions. • RQ1: What types of requirements engineering human errors does the software engineering and psychology literature describe? • RQ2: How can we organize the human errors identified in RQ1 into a taxonomy? 
• The primary contributions of this work are: • Adapting human error research to a new domain (software quality improvement) through interaction between software engineering researchers and a human error researcher; • Development of a systematic human error taxonomy for require- ments to help requirement engineers understand and avoid common human errors; and • An analysis of the literature describing human errors in require- ments engineering to provide insight into whether a community is forming around common ideas. The remainder of this paper is organized as follows. Section 2 pro- vides a background on software quality and on human errors (from a psychology perspective). Section 3 describes the systematic review process and its execution. Section 4 reports the results of the review and presents the human error taxonomy. Section 5 provides a discussion of the results and the usefulness of the human error taxonomy. Finally, Section 6 concludes the paper and describes future work. 2. Background Formal efforts to improve software quality span decades [6,19]. This section briefly reviews some of these efforts as a preface to a more comprehensive discussion of the role human error research can play in identifying, understanding, correcting, and avoiding errors in software engineering. Given the importance of identifying and correcting errors and faults early in the software development process, we emphasize the requirements engineering process. 2.1. Historical perspectives on software quality improvement Initially, software quality improvement research focused on faults. Because a fault that occurs early in software development may lead to a series of later faults, researchers developed Root Cause Analysis (RCA) [19,21] to identify the first fault in a (potentially lengthy) chain. In RCA, developers trace faults to their origin to identify the triggering fault. Then the developer can modify procedures to reduce the occur- rence of faults or eliminate the original fault. The goal is to prevent the first fault, thereby eliminating the entire fault chain. Because RCA is time-consuming, researchers developed the Orthogonal Defect Classification (ODC) to provide structure to the fault tracing process [6]. Developers use ODC to classify faults and identify the trigger that causes the fault to surface (not necessarily the cause of fault insertion). This approach accommodates a large number of fault reports and identifies the actions that revealed the fault. By focusing on the action that caused the fault, the ODC avoids the tedious work of tracing faults back to their origins and reduces the amount of time and effort required, compared with RCA. RCA and ODC are retrospective techniques in which the investigative procedure begins with fault reports. Because the ODC relies on statis- tical analyses to identify the source fault, it requires the presence of a large and robust set of fault reports. As retrospective techniques, RCA and ODC occur late in the development process, after errors and faults have accumulated and are more expensive to fix. In addition, neither approach helps identify the human error underlying the fault. Thus, these approaches focus more on treating the symptoms (manifestations of the error) rather than the underlying cause (i.e., the error). Noting that it can be difficult to define, classify, and quantify re- quirements faults, Lanubile, et al. shifted the emphasis from faults to errors. 
They defined the term error as a fault or flaw in the human thought process that occurs while trying to understand given informa- tion, while solving problems, or while using methods and tools [17]. Such errors are the cause of faults. This shift is important because it helps developers understand why the fault occurred. Because this ap- proach addresses the underlying cause (i.e. the error) rather than the symptoms (i.e. the faults), it is an advance over the RCA and ODC ap- proaches. Errors and faults have a complex relationship. One error may cause several faults. Similarly, one fault may spring from different errors. Consequently, it is not trivial to identify the error behind the fault. Lanubile, et al. asked software inspectors to analyze the faults detected during an inspection to determine the underlying cause, i.e., the error. The inspectors then used the resulting list of errors to guide a re-in- spection to detect additional faults related to those errors. This re-in- spection produced only a moderate improvement in fault detection [17]. This error abstraction methodology is also a retrospective process V. Anu et al. Information and Software Technology xxx (xxxx) xxx–xxx 2
  • 3. that depends upon a developer first making an error that produced a fault, then detecting the fault to initiate a trace-back process to recover the error. Lanubile et al. also observed that requirements errors are generally specific to a particular system, rather than general for all software [17]. The implication of this observation is that since errors tend to be system-specific, the error abstraction must be performed independently for each system. In addition, retrospective fault detec- tion techniques leave room for faults to go undetected and propagate into later phases of software development, increasing the time and cost of fixing the system. Therefore, there is still room for techniques to prevent faults and to support early fault identification. 2.2. A cognitive psychology perspective on software quality improvement Contrary to the conclusions drawn by Lanubile et al., most psy- chologists reject the view that errors are idiosyncratic and strongly dependent on context. James Reason, a noted authority on human error, rejected the popular notion that “errors occur ‘out of the blue’ and are highly variable in their form”. Reason proposed that, rather than being random, errors take recurrent and predictable forms [23]. This predictability of errors is crucial to allow software engineers to apply lessons learned from one project to the next. In such cases, developers can use errors proactively to search for faults before they have been identified through other means and preventatively to modify software development methods to reduce or eliminate errors. Walia and Carver's systematic literature review of the software en- gineering and psychology literature identified 119 requirement errors. Employing a bottom-up approach, Walia and Carver developed a Requirement Error Taxonomy (RET) by grouping common errors into fourteen error classes. They then grouped these error classes into three high-level error types: People Errors (arising from fallibilities of the people involved in the development process), Process Errors (arising while selecting the appropriate processes for achieving the desired goals and relating mostly to the inadequacy of the requirements en- gineering process), and Documentation Errors (arising from mistakes in organizing and specifying the requirements, regardless of whether the developer properly understood the requirements) [28]. Walia and Carver used four experiments to investigate the utility of the RET [29]. Their work incorporated an inspection-abstraction-re- inspection paradigm, similar to the one used by Lanubile et al. Parti- cipants first analyzed a requirement documents to identify faults. Then they analyzed those faults to abstract errors. Finally, they reinspected the document guided by the abstracted errors. The inspectors used the RET to guide the error abstraction process, as opposed to the ad hoc approach in Lanubile et al.’s work. The reinspection step resulted in identification of a significant number of new faults. Subjectively, par- ticipants indicated that the structured error information in the RET helped them understand their errors. While the taxonomic structure employed by Walia and Carver drew upon the published software engineering literature, it is strikingly dif- ferent from psychological organizations of human errors. As described above, the RET includes people, process, and documentation errors. 
Clearly all of the observed errors were committed by people, most happened while following or attempting to follow the software en- gineering process, and resulted in a fault in the project documentation. Hence, the next step is to divide the RET errors into distinct human error categories. This observation motivated us to consider that a stronger connection to the rich psychological literature on human er- rors might reveal a more useful taxonomy of requirements engineering errors. 2.3. Reason's classification of human errors James Reason developed one of the most comprehensive psycho- logical accounts of human errors [22]. Central to his account is the construction of a plan. Loosely speaking, a plan is a series of actions needed to achieve a goal. Reason associated errors with specific cog- nitive failings, such as inattention, memory failures, and lack of knowledge. In this approach, three types of plan-based errors can occur. First, a slip is a failure of execution where the agent fails to carry out a properly planned step as a result of inattention. Second, a lapse, is a failure of execution where the agent fails to carry out a properly planned step as a result of a memory failure. Finally, a mistake is a knowledge- based error that occurs when the plan itself is inadequate to achieve the goal. Identification of the cognitive failing (e.g., “I wasn't paying at- tention when I wrote this down”) can provide a strong clue to the specific error that resulted. Accordingly, error identification can be undertaken even by those with little formal training in human psy- chology. For example, if our goal is to drive to the store, we might form a plan consisting of steps such as starting the car, backing down the driveway to the street, navigating the route, and parking in the store lot. If we put the wrong key in the ignition, we have committed slip. If we forget to put the transmission into reverse before stepping on the gas, the omitted step is a lapse. Failing to take into account the effect of a bridge closing and getting caught in traffic is a mistake. Reason's taxonomy has found diverse application in medical errors [31], problems with Apollo 11 [22], driver errors [24], and errors that led to the Chernobyl nuclear power plant meltdown [23]. The next section describes how we developed an error taxonomy for require- ments based upon Reason's framework. 3. Research method This review is a replication and extension of Walia and Carver's SLR [28] that developed the RET (hereafter, referred to as SLR-2009). The current SLR expands upon the results of SLR-2009 by leveraging the cognitive theory of human errors to develop a formalized taxonomy of human errors that occur during requirements engineering. In this re- view, we employ a rigorous human error identification methodology, leading to a more formalized taxonomy that appropriately captures the sources of requirements faults, i.e. human errors. In this review, we followed the traditional systematic literature review (SLR) approach [14]. To increase the rigor of the current review, the first and second authors (subsequently referred to as “researchers”) independently per- formed each SLR step and met to resolve any disagreements. 3.1. Research questions To guide the development of the human error taxonomy for re- quirements errors, we defined the high-level research questions shown in Table 1. 3.2. Source selection and search To identify relevant papers, we used the same search strings as in SLR-2009. 
To ensure coverage of topics in software engineering, em- pirical studies, human cognition, and psychology, we executed the search strings in IEEExplore, INSPEC, ACM Digital Library, SCIRUS (Elsevier), Google Scholar, PsychINFO (EBSCO), and Science Citation Index. Because SLR-2009 included papers published through 2006, we only searched for papers published after 2006 (through October 2014). Table 1 Research questions. # Research question RQ1 What types of requirements engineering human errors does the software engineering and psychology literature describe? RQ2 How can we organize the human errors identified in RQ1 into a taxonomy? V. Anu et al. Information and Software Technology xxx (xxxx) xxx–xxx 3
  • 4. Appendix A provides the detailed search strings, as adapted for each database. After the database search, we used snowballing to ensure a complete and comprehensive primary study selection process. 3.3. Study selection: inclusion and exclusion criteria During the title-elimination phase, the two researchers identified 144 and 136 studies, respectively (see Fig. 1). Then, each researcher read the abstracts and keywords to exclude any studies that were not clearly related to the research questions. When in doubt, they retained the study. At the end of this stage, the researchers’ lists contained 97 and 88 studies, respectively. Next, the researchers consolidated their lists. They included the 32 studies in common between their lists. For the remaining 195 studies (97 and 88, respectively), each researcher examined the studies on the other researcher's list to determine whether they agreed on inclusion. In cases where the other researcher agreed, they included the study. In the remainder of the cases, the researchers discussed and reached an agreement on whether to include the study. This step resulted in a total of 96 agreed-upon studies. Next, the researchers divided the 96 studies between them for full- text review. Each reviewer used the inclusion/exclusion criteria in Table 2 to examine the full-text of 48 studies to place each on an in- clude list or an exclude list. Any study on the include list stayed in the review. Any study on the exclude list was reviewed by the other re- searcher to make a final inclusion decision. The researchers discussed and resolved any disagreements. This step resulted in 34 included studies. Finally, the researchers identified four additional studies through snowballing. The researchers did not find these studies during the in- itial database search because they included “risk components” and “warning signs” in the title rather than the more standard terms in- cluded in the search strings (Appendix A). Therefore, we finally in- cluded 38 included studies. Fig. 1 summarizes the search and selection process. The 38 studies were distributed across journals (15), conferences (19), workshops (3), and book chapters (1). In addition, these studies cover a wide spectrum of research topics, including: Computer Science, Software Engineering, Medical Quality, Healthcare Information Technology, Cognitive Psychology, Human Computer Interaction, Safety/Reliability Sciences, and Cognitive Informatics. Interestingly, the 38 studies appeared in 36 venues. That is, except for two cases (Journal of Systems Software and Information Systems Journal), each study came from a unique publication venue. This result suggests that the research about human errors in software engineering is widely scattered and there is not yet a standard venue for publishing such research results. Section 5.4 discusses this observation in more detail. Appendix B contains the full distribution of studies across publication venues. 3.4. Data extraction and synthesis Based on publication venue and the type of information contained in the study, we classified 11 studies as Cognitive Psychology studies and 27 as Software Engineering studies. The Cognitive Psychology studies focused primarily on models of human errors and human error taxonomies. The Software Engineering studies focused primarily on errors committed during the software development process. Because the focus of these types of studies differed, we extracted different types of data. 
To maintain compatibility with SLR-2009, we modeled our data extraction form on the one used in SLR-2009. We added a small number of items to ensure coverage of human cognitive failures and human errors. We reanalyzed the SLR-2009 studies to extract this new information. Table 3 lists the data extracted from all primary studies. Table 4 lists the data extracted based on each primary study's research focus.

Fig. 1. Primary study search and selection process.

Table 2. Inclusion and exclusion criteria.

To ensure that both researchers had a consistent interpretation of the data items, we conducted a pilot study. Each researcher independently extracted data from ten randomly selected studies. The results revealed a few minor inconsistencies in the interpretation of some data fields. The researchers discussed these issues to ensure they had a common understanding of the data items. They then proceeded to independently extract data from the remaining 28 studies. After extracting data, the researchers discussed and resolved any discrepancies to arrive at a final data extraction form for each study.
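The review resolved pilot disagreements by discussion rather than by a statistic. For readers who wish to quantify such agreement, the following minimal sketch (ours, not part of the review; the 'Study type' codes shown are hypothetical) computes percent agreement and Cohen's kappa for two raters:

from collections import Counter

def percent_agreement(r1, r2):
    # Fraction of items on which the two researchers assigned the same code.
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    # Chance-corrected agreement between two raters.
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Expected agreement under each rater's marginal label frequencies.
    p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by each researcher to the ten pilot studies.
r1 = ["experiment", "survey", "experiment", "lessons", "survey",
      "experiment", "survey", "lessons", "experiment", "survey"]
r2 = ["experiment", "survey", "experiment", "survey", "survey",
      "experiment", "survey", "lessons", "experiment", "lessons"]

print(f"agreement = {percent_agreement(r1, r2):.2f}")  # 0.80
print(f"kappa     = {cohens_kappa(r1, r2):.2f}")       # 0.69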
4. Reporting the review

This section is organized around the two research questions (RQs) defined in Table 1.

4.1. RQ1 - What types of requirements engineering human errors does the software engineering and psychology literature describe?

We began by extracting individual errors, error categories, sub-categories, error descriptions, and root causes from the 38 studies. Similarly, we reviewed the published SLR-2009 paper and extracted the errors and their descriptions (referring back to the original papers where necessary). An examination of this information revealed different interpretations of the term error. These interpretations included: program errors, software faults, requirements faults, end-user (or user) errors, requirements problems, and human errors.

Before building our taxonomy, we eliminated items that were not truly human errors. Independently, the two researchers examined each reported error to eliminate those that were not human errors. As before, they compared their results and resolved the discrepancies, which affected less than 5% of the items. Human error expert Bradshaw helped resolve these discrepancies. This process resulted in 31 human errors. We eliminated two of those because they did not focus on requirements, bringing the total to 29 human errors in the requirements phase, which are listed in Table 5 along with their sources.

At this point, we also updated the list of studies included in the SLR to include only those that described a true requirements engineering human error. Only seven of the twenty-seven software engineering studies met this criterion. Therefore, we removed the other 20 studies. None of the cognitive psychology studies met this criterion, as confirmed by human error expert Bradshaw. Some of these studies discussed decision-making errors or operator errors, which are not requirements engineering human errors. We excluded these studies from our analysis. In addition, there were eleven studies from SLR-2009 that identified human errors. In total, we included 18 studies from the software engineering literature that contain human errors (Appendix C).

4.2. RQ2 - How can we organize the human errors identified in RQ1 into a taxonomy?

To create the human error taxonomy (HET), each researcher first grouped the individual errors in Table 5 into classes based on similarity. The two researchers met to agree upon a final list of error classes. Then, each researcher individually categorized each error class as either a Slip, a Lapse, or a Mistake based upon the error's description or the root cause description. This organization entailed analyzing each error class to (1) decide whether the error class was a planning error or an execution error and (2) for execution errors, determine whether the error was related to inattention failures (Slips) or memory-related failures (Lapses). The two researchers met, resolved all discrepancies, and created an initial organization of the HET. Finally, human error expert Bradshaw evaluated and approved the final organization. Table 6 shows the final HET based on Reason's taxonomy.
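The two-step categorization procedure described above amounts to a small decision rule. The sketch below is our illustrative rendering of that logic; the boolean inputs are hypothetical attributes, not fields used in the study:

def classify_human_error(planning_error: bool,
                         attention_failure: bool,
                         memory_failure: bool) -> str:
    # Step 1: planning errors are Mistakes; step 2: execution errors
    # split into Slips (inattention) and Lapses (memory failure).
    if planning_error:
        return "Mistake"   # the plan itself was inadequate
    if attention_failure:
        return "Slip"      # right plan, inattentive execution
    if memory_failure:
        return "Lapse"     # right plan, step forgotten or omitted
    raise ValueError("not classifiable as Slip, Lapse, or Mistake")

# A clerical transcription error is an execution error caused by
# inattention, hence a Slip.
print(classify_human_error(planning_error=False,
                           attention_failure=True,
                           memory_failure=False))  # "Slip"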
The following subsections provide a detailed description of each error class to make the information in Table 6 more understandable. We also describe an example error and fault for each class. For the examples, we use the Loan Arranger (LA) system, which was developed by researchers at the University of Maryland for use in software quality research. After a brief overview of LA, we present the error classes organized by the high-level groupings in Reason's taxonomy.

Loan Arranger (LA) overview: A loan consolidation organization purchases loans from banks and bundles them for resale to other investors. The LA application selects an optimal bundle of loans based on criteria provided by an investor. These criteria may include: (1) risk level, (2) principal involved, and (3) expected rate of return. The loan analyst can then modify the bundle as needed for the investor. LA automates information management activities, such as updating loan information monthly.

4.2.1. Slips

A slip occurs when someone performs the steps required to complete a planned task incorrectly or in the wrong sequence. Slips generally result from a lack of attention. For example, the steps for "preparing breakfast cereal" include: (i) get a bowl, (ii) choose cereal, (iii) pour cereal in the bowl, and (iv) pour milk on the cereal. If someone intends to "prepare breakfast cereal" but pours orange juice rather than milk, he has committed a slip. Table 7 lists the two types of slips that occur during requirements engineering.

4.2.2. Lapses

A lapse occurs when someone forgets the overall goal of the planned task in the middle of a sequence of actions or omits a step that is part of a routine sequence. Lapses generally result from memory-related failures. For example, someone enters a room to accomplish some specific task. If, upon entering the room, he forgets the planned task, he has committed a lapse.

Table 3. Common data items for extracting information.
• Study Identifier: Unique identifier for the paper (same as the reference number).
• Bibliographic data: Author, year, title, source.
• Type of article: Journal/conference/technical report.
• Focus of Area: The field in which the research was conducted, e.g., Software Engineering, Industrial Engineering, Psychology, Aviation, or Medicine.
• Study aims: The aims or goals of the primary study.
• Context: Relates to one or more search foci, i.e., the research area(s) the paper focuses upon.
• Study type: Industrial experiment, controlled experiment, survey, lessons learned.
• Unit of analysis: Individual developers, department, or organization.
• Control Group: Yes or no; if "Yes": number of groups and size per group.
• Data collection: How the data was collected, e.g., interviews, questionnaires, measurement forms, observations, and discussion.
• Data analysis: How the data was analyzed: qualitative, quantitative, or mixed.
• Concepts: The key concepts or major ideas in the primary studies.
• Higher-order interpretations: The second- (and higher-) order interpretations arising from the key concepts of the primary studies. This can include limitations, guidelines, or any additional information arising from application of the major ideas/concepts.
• Study findings: Major findings and conclusions from the primary study.
Table 8 lists the three types of lapses that occur during requirements engineering.

4.2.3. Mistakes

As opposed to slips and lapses, mistakes are errors in planning rather than execution. In this scenario, someone executes the plan properly, but the plan itself is inadequate to achieve the intended outcome. For example, suppose that a taxi driver's dispatcher has informed him of a road closure on his typical route to the airport. If the taxi driver follows his typical route anyway and encounters a delay, the taxi driver has made a mistake. The taxi driver devised an incorrect plan even though he had prior knowledge of the road closure. Table 9 lists the 11 mistake errors that occur during the requirements phase.

5. Discussion

This section provides further insights into the detailed results.

5.1. Findings about human errors in requirements engineering

After analyzing the included papers, identifying the presence of human errors, and organizing those errors into a taxonomy, we can report some overall findings about human errors in requirements engineering.

• The requirements engineering (RE) research community has not consistently reported human errors. Requirements engineering researchers use the term 'error' inconsistently in their publications. Even when they used the term to refer to human thoughts or actions, that usage often did not match the psychological definition of human error. We had to thoroughly examine each reported error to identify those that were truly human errors. Through this process, we removed a majority of the items RE researchers marked as 'errors' in their publications (see Section 4.1), indicating inconsistent use of the term. Only 18 studies reported a true human error. The broader set of problems that RE researchers called errors, while outside the scope of this review, is worthy of further attention.

• Software engineers and psychologists need to collaborate more closely. Development of the HET required close collaboration between software engineering experts and a human error expert from psychology. In reviewing the literature, we noted that many of the 'errors' reported in the RE literature did not make use of the frameworks developed by human error researchers. As a result, we had to carefully analyze each error description to determine whether it truly described a human error. Our work shows that a close collaboration between software engineering researchers and psychology researchers can provide a theoretically based human error framework for organizing requirements engineering errors.

• Errors vs. violations. The human error literature distinguishes between unintentional errors and deliberate violations. This distinction is important because the two problems need to be handled in very different ways. Some error reports were, in fact, violations, such as a situation where software engineers deliberately omitted some tasks in the requirements engineering process. Preserving the distinction between errors and violations is useful because the most appropriate detection and remediation techniques differ. Violations, although infrequent in the identified studies, fall outside the scope of this paper.
Table 4. Data items relative to each search focus.

Quality Improvement Approach:
• Focus or process: Focus of the quality improvement approach and the process/method used to improve quality.
• Benefits: Any benefits from applying the approach.
• Limitations: Any limitations or problems identified in the approach.
• Evidence: The empirical evidence indicating the benefits of using error information to improve software quality and any specific errors found.
• Error focus: Yes or no; if "Yes", classify the solution foundation (next data item).
• Mechanism or Solution Foundation: (1) Ad hoc: something the investigators thought up, which could have been supported by empirical work showing its effectiveness; (2) Evidence-based: a notation of a systematic problem in the software engineering process that leads to a specific remediation method; or (3) Theory-based: draws support from research on human errors.

Requirement Errors:
• Problems: Problems reported in the requirements stage.
• Errors: Errors at the requirements stage.
• Faults: Faults at the requirements stage.
• Mechanism: Process used to analyze or abstract requirement errors; one of (1) Ad hoc, (2) Evidence-based, or (3) Theory-based, as defined above.

Error-fault-defect taxonomies:
• Focus: The focus of the taxonomy (i.e., error, fault, or failure).
• Error Focus: Yes or no; if "Yes", what was the process used to classify errors into a taxonomy?
• Requirement phase: Yes or no (whether it was applied in the requirements phase).
• Benefits and Limitations: Benefits and/or limitations of the taxonomy.
• Evidence: The empirical evidence regarding the benefits of the error/fault/defect taxonomy for software quality.

Software inspections:
• Focus: The focus of the inspection method (i.e., error, fault, or failure).
• Error Focus: Yes or no; if "Yes", how did it focus reviewers' attention to detect errors during the inspection?
• Requirement phase: Yes or no (did it inspect requirements documents?).
• Evidence: The empirical evidence regarding the benefits/limitations of the error-based inspection method.

Human errors:
• Human errors and classifications: Description of errors made by humans and classes of their fallibilities during planning, decision making, and problem solving.
• Evidence: Empirical evidence regarding errors made by humans in different situations (e.g., aircraft control) that are related to requirement errors.
5.2. Structuring errors within specific requirements engineering tasks

Requirements engineering consists of five phases:

• Elicitation (E): Discovering and understanding the user's needs and constraints for the system.
• Analysis (A): Refining the user's needs and constraints.
• Specification (S): Documenting the user's needs and constraints clearly and precisely.
• Verification (V): Ensuring that the system requirements are complete, correct, consistent, and clear.
• Management (M): Scheduling, coordinating, and documenting the requirements engineering activities.

Each RE phase has different activities that provide opportunities for potentially different human errors. Therefore, we developed a second taxonomic organization of the errors by indicating the relevant RE phases in which each error occurs. While Slips, Lapses, and Mistakes can occur during each requirements phase, the specific error classes will differ. Two authors each mapped the error classes onto the requirements phases. As before, the two authors discussed and resolved any discrepancies. Human error expert Bradshaw reviewed and approved the final mapping shown in Table 10. Note that many of the error classes appear across multiple requirements phases. For example, a clerical error can occur:

• During elicitation, if the requirements engineer is careless while taking notes during the interview, or
• During specification, if the requirements engineer is careless while transcribing the elicited requirements into specifications.

There are several reasons why we believe this two-dimensional structure is superior to the one-dimensional structure introduced in Table 6. First, the two-dimensional structure illustrates that the same error class may appear in different phases. Second, we hypothesize that as researchers identify new errors, those errors will populate additional cells in the table. The paucity of existing error reports, coupled with the prevalence of faults and failures in software, suggests that many errors are occurring that have not been formally recognized. The two-dimensional table provides a framework for identifying those errors.

An analogy may help explain this concept. As Mendeleev was working on the Periodic Table of Elements, scientists had only discovered about 60 elements. His table had 'holes' where elements of a particular atomic weight and valence should exist. Indeed, the absence of these elements was considered problematic for the Periodic Table. Over the next 15 years, chemists discovered three of the missing elements. Those elements had the properties predicted by the Periodic Table [25]. Accordingly, we postulate that investigators can employ this table to identify errors, from all three classes and across the different requirements engineering activities, that were not found during the review and thereby 'fill the holes' in our table. For example, we believe that clerical errors can be observed not only in elicitation and specification but also in the other RE phases (e.g., recording incorrect values from notes during the analysis phase).
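For tool builders, the two-dimensional organization can be encoded directly. The sketch below is a partial, illustrative encoding: the clerical-error cells follow the example above (elicitation and specification), the single lapse cell is our assumption, and the helper enumerates empty cells, the 'holes' of the Mendeleev analogy:

from enum import Flag, auto

class Phase(Flag):
    # Requirements engineering phases from Section 5.2.
    E = auto()  # Elicitation
    A = auto()  # Analysis
    S = auto()  # Specification
    V = auto()  # Verification
    M = auto()  # Management

# Partial encoding of the two-dimensional HET; a full encoding
# would follow Table 10 cell by cell.
HET_2D = {
    ("Slip", "Clerical errors"): Phase.E | Phase.S,
    ("Lapse", "Loss of information from stakeholders"): Phase.E,  # assumed phase
}

def holes(error_class):
    # Phases with no reported error of this class: candidate 'holes'.
    observed = HET_2D.get(error_class, Phase(0))
    return [p for p in Phase if p not in observed]

print(holes(("Slip", "Clerical errors")))  # -> Phase.A, Phase.V, Phase.M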
5.3. Usefulness of the HET

Since developing the HET, we have empirically validated its usefulness for (1) error detection, (2) fault detection, (3) error prevention, and (4) training software engineers in retrospective human error analysis. From a fault detection perspective, a deeper understanding of the human errors that occurred during requirements engineering can lead developers to detect faults that might otherwise be overlooked. We conducted evaluation studies to determine whether use of the HET improves fault detection during a requirements inspection.

Table 5. Human errors identified in the literature.
1. Problem representation error [12]
2. RE people do not understand the problem [18]
3. Assumptions in grey area [15]
4. Wrong assumptions about stakeholder opinions [20]
5. Lack of cohesion [20]
6. Loss of information from stakeholders [20]
7. Low understanding of each other's roles, Bjarnason et al. [5]
8. Not having a clear demarcation between client and users [16]
9. Mistaken belief that it is impossible to specify non-functional requirements in a verifiable form [8]
10. Accidentally overlooking requirements [8]
11. Inadequate Requirements Process [8]
12. Mistaken assumptions about the problem space (SLR-2009)
13. Environment errors (SLR-2009)
14. Information Management errors (SLR-2009)
15. Lack of awareness of sources of requirements (SLR-2009)
16. Application errors (SLR-2009)
17. Developer (requirements developer) did not understand some aspect of the product or process (SLR-2009)
18. User needs not well-understood or interpreted by different stakeholders (SLR-2009)
19. Lack of understanding of the system (SLR-2009)
20. Lack of system knowledge (SLR-2009)
21. Not understanding some parts of the problem domain (SLR-2009)
22. Misunderstandings caused by working simultaneously with several different software systems and domains (SLR-2009)
23. Misunderstanding of some aspect of the overall functionality of the system (SLR-2009)
24. Problem-Solution errors (SLR-2009)
25. Misunderstanding of problem solution processes (SLR-2009)
26. Semantic errors (SLR-2009)
27. Syntax errors (SLR-2009)
28. Clerical errors (SLR-2009)
29. Carelessness while documenting requirements (SLR-2009)

Table 6. Human error taxonomy (HET). Each error class is listed with the corresponding human error numbers from Table 5.

Slips:
• Clerical Errors (30, 31)
• Term Substitution (5)

Lapses:
• Loss of information from stakeholders (6)
• Accidentally overlooking requirements (11)
• Multiple terms for the same concept errors (5)

Mistakes:
• Application Errors: knowledge-based plan is incomplete (1, 2, 18-25)
• Solution Choice Errors (26, 27)
• Syntax Errors (28, 29)
• Wrong Assumptions: knowledge-based plan is wrong (3, 4, 14)
• Environment Errors: knowledge-based plan is incomplete (15)
• Information Management Errors: knowledge-based plan is incomplete (16)
• Inappropriate communication based on incomplete/faulty understanding of roles: knowledge-based plan is wrong (8)
• Not having a clear distinction between clients and users (9)
• Mistaken belief that it is impossible to specify non-functional requirements: knowledge-based plan is incomplete (10)
• Inadequate requirements process errors (13)
• Lack of awareness of requirements sources: knowledge-based plan is incomplete (17)
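Because Table 6 is the review's central artifact, a machine-readable rendering may be useful to tool builders. The following mapping simply transcribes the table's contents:

# The HET of Table 6 as a mapping from Reason's top-level
# categories to the error classes identified in the review.
HET = {
    "Slips": ["Clerical errors", "Term substitution"],
    "Lapses": ["Loss of information from stakeholders",
               "Accidentally overlooking requirements",
               "Multiple terms for the same concept"],
    "Mistakes": ["Application errors", "Solution choice errors",
                 "Syntax errors", "Wrong assumptions",
                 "Environment errors", "Information management errors",
                 "Inappropriate communication based on incomplete/faulty understanding of roles",
                 "Not having a clear distinction between clients and users",
                 "Mistaken belief that it is impossible to specify non-functional requirements",
                 "Inadequate requirements process errors",
                 "Lack of awareness of requirements sources"],
}

assert sum(map(len, HET.values())) == 16  # 2 slips + 3 lapses + 11 mistakes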
In these studies, we incorporated the HET into a formal Error Abstraction and Inspection (EAI) technique. The EAI technique augments the traditional fault checklist (FC) inspection process by adding a human error discovery step and an error-informed re-inspection step. In the human error discovery step, the HET provides inspectors with a framework of common requirements errors to help them abstract human errors from the faults found during the FC inspection. In the error-informed re-inspection step, inspectors use the abstracted human error information to re-inspect the requirements for additional faults. The results of these studies showed that the use of human error information allowed participants to identify a significant number of faults that they had missed when using a fault-checklist inspection approach [2,3,10].

We also anticipate that the HET can help requirements engineers better understand the most common human errors. This understanding is likely to help them avoid these human errors and the related faults. Our empirical study on the effect of human error knowledge on error and fault prevention confirmed that a better understanding of human errors led to fewer errors and faults during requirements engineering [11].

We have also used the HET to train inspectors to retrospectively analyze requirements faults to determine the underlying human error. This process is similar to the incident/accident investigations done in the aviation and medical domains. We developed an instrument to train inspectors to use the Human Error Abstraction Assist (HEAA) decision-tree framework to investigate requirements faults (available at http://vaibhavanu.com/NDSU-CS-TP-2016-001.html) [1].

5.4. Citation analysis of the fragmentation in human error research

Our observation that the 38 requirements engineering studies appeared in 36 publication venues triggered an exploratory citation analysis to understand the coherency, or lack thereof, of human error research in requirements engineering. In this analysis, we used only the papers representing the 18 studies that contain true human errors. We extracted the reference lists from these papers to identify cases where one paper cited another. We also identified citations to any psychology papers on human error. Because our search was thorough, we have high confidence that this set of 18 papers includes all of the papers in the software engineering literature that describe requirements engineering human errors.

In the result shown in Fig. 2, an arrow from one paper to another indicates that the first paper cited the second. The results show that only seven papers cited human error papers (either in requirements engineering or in psychology), implying that researchers may be unaware of related research. Of those papers, six made eight citations to previous work from the requirements engineering literature and three included six citations to the psychological literature on errors. Four papers were cited by others but did not themselves cite any related papers. Seven papers were 'island papers,' citing none of the previous research and not receiving any citations from other papers. Two papers [12,20] were responsible for the majority of the identified citations.

Overall, Fig. 2 illustrates that human error research in requirements engineering is in a formative state. The lack of citations suggests that many investigators are unaware of related work and are not building upon others' contributions. Furthermore, researchers are not building on the human error research published in the psychology literature. It is likely that the wide distribution of papers across different research venues contributes to this problem: researchers do not have a small number of venues from which to draw sources. Finally, a subtle but encouraging trend emerges among the more recent papers. Those papers, especially the ones published from 2012 onwards, include more citations to earlier work. This trend suggests a growing awareness of prior work in the area.
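The bookkeeping behind such a citation analysis is mechanical once the reference lists are extracted. The sketch below illustrates it on a toy edge list; the paper identifiers are hypothetical placeholders, not the 18 actual studies:

# Build a tiny directed citation graph and identify 'island papers'
# (papers that neither cite nor are cited by any other paper in the set).
papers = {"P01", "P02", "P03", "P04", "P05"}
cites = {("P01", "P02"), ("P03", "P02")}  # (citing, cited) edges

citing = {src for src, _ in cites}
cited = {dst for _, dst in cites}
islands = papers - citing - cited

print(f"cited-only papers: {sorted(cited - citing)}")  # ['P02']
print(f"island papers:     {sorted(islands)}")         # ['P04', 'P05']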
Table 7. Slip errors.

Clerical errors
• Description: Result from carelessness while performing mechanical transcriptions from one format or medium to another. Requirements examples include carelessness while documenting specifications from elicited user needs.
• Example of error: In notes from the meeting with the client, the requirements author recorded a different set of parameters for jumbo loans and regular loans. When creating the requirements document, the author incorrectly copied the requirements for regular loans into the description of jumbo loans.
• Example of fault: The requirements for the jumbo loans incorrectly specify the same behavior as for regular loans.

Term substitution errors
• Description: After introducing a term correctly, the requirements author substitutes a different term that refers to a different concept.
• Example of error: The requirements author brings to mind an inappropriate term to refer to a previously introduced concept.
• Example of fault: The requirements author records the ambiguous term "organization" instead of "lending institution".

Table 8. Lapse errors.

Loss of information from stakeholders errors
• Description: Result from a requirements author forgetting, discarding, or failing to store information or documents provided by stakeholders, e.g., some important user need.
• Example of error: A loan analyst informs the requirements author about the format for loan reports (file, screen, or printout); the author forgets to note it down.
• Example of fault: Information about the report formats is omitted from the requirements specification.

Accidentally overlooking requirement errors
• Description: Occur when the stakeholders who are the source of requirements assume that some requirements are obvious and fail to verbalize them.
• Example of error: Because stakeholders assume that abnormal termination and system recovery are commonplace concerns that will be handled by the requirements analysts or the system design team, they do not provide system recovery requirements.
• Example of fault: The requirements document does not describe the process of system recovery from abnormal termination.

Multiple terms for the same concept errors
• Description: Occur when requirements authors fail to realize they have already defined a term for a concept and so introduce a new term at a later time.
• Example of error: The same concept is referred to by different terms throughout the document.
• Example of fault: Use of the terms "marked for inclusion" and "identified for inclusion" to convey the same information.
Table 9. Mistake errors.

Application errors
• Description: Arise from a misunderstanding of the application or problem domain or a misunderstanding of some aspect of overall system functionality.
• Example of error: The requirements author lacks domain knowledge about loans, investing, and borrowing. As a result, she incorrectly believes that the stakeholders have provided all information necessary to decide when to remove loans that are in a default status from the repository.
• Example of fault: The requirements specification omits the requirement to retain information about borrowers who are in default status (even after the corresponding loans are deleted from the system).

Environment errors
• Description: Result from a lack of knowledge about the available infrastructure (e.g., tools, templates) that supports the elicitation, understanding, or documentation of software requirements.
• Example of error: The requirements authors failed to employ a standard template for documenting requirements (for example, the IEEE standard template for SRS) because they were unaware of the existence of such a template.
• Example of fault: Without the support of the template, requirements about system scope and performance were omitted.

Solution Choice errors
• Description: These errors occur in the process of finding a solution for a stated and well-understood problem. If RE analysts do not understand the correct use of problem-solving methods and techniques, they might analyze the problem incorrectly and choose the wrong solution.
• Example of error: Lack of knowledge of the requirements engineering process and requirements engineering terminology on the part of the analysts. The analyst cannot differentiate between performance and functional requirements.
• Example of fault: A particular requirement listed under performance requirements should be a functional requirement.

Syntax errors
• Description: Occur when a requirements author misunderstands the grammatical rules of natural language or the rules, symbols, or standards in a formal specification language like UML.
• Example of error: The requirements engineer misunderstood the use of navigability arrows to illustrate that one use case extends another.
• Example of fault: An association link between two classes on a UML diagram lacks a navigability arrow to indicate the directionality of the association, resulting in a diagram that is ambiguous and can be misunderstood.

Information Management errors
• Description: Result from a lack of knowledge about standard requirements engineering or documentation practices and procedures within the organization.
• Example of error: The organization has a policy that requirements specifications include error-handling information. Unfamiliar with this policy, the requirements engineer fails to specify error-handling information.
• Example of fault: The specification does not indicate that an error message should be displayed when a problem occurs, rather than just returning to a previous screen with no indication or notification of the problem.

Wrong Assumption errors
• Description: Occur when the requirements author has a mistaken assumption about system features or stakeholder opinions.
• Example of error: The requirements author assumes that error handling is a task common to all software projects and will be handled by programmers. Therefore, s/he does not gather that information from stakeholders.
• Example of fault: Information about what happens when a lender provides invalid data has been omitted.

Mistaken belief that it is impossible to specify non-functional requirements
• Description: The requirements engineer(s) may believe that non-functional requirements cannot be captured and therefore omit this process from their elicitation and development plans.
• Example of error: The requirements engineering team failed to consider non-functional requirements.
• Example of fault: Omission of performance requirements, security requirements, and other non-functional requirements.

Not having a clear distinction between client and users
• Description: If RE practitioners fail to distinguish between clients and end users, or do not realize that the clients are distinct from the end users, they may fail to gather and analyze the end users' requirements.
• Example of error: The requirements engineer failed to gather information from the actual end user of the LA system, the loan analyst.
• Example of fault: No functional requirement to edit loan information has been specified, even though the 'Purpose' section specifies that loans can be edited.

Lack of awareness of requirement sources
• Description: The person gathering requirements is not aware of all the stakeholders whom he/she should contact in order to gather the complete set of user needs (including end users, customers, clients, and decision-makers).
• Example of error: The requirements engineer was not aware of all end users and clients and did not gather the needs of a bank lender (one of the end users of the LA system). This end user wanted the LA system to handle both fixed rate loans and adjustable rate loans.
• Example of fault: Omitted functionality, as the requirements only consider fixed rate loans.

Inappropriate communication based on incomplete or faulty understanding of roles
• Description: Without a proper understanding of developer roles, communication gaps may arise, either by failing to communicate at all or by communicating ineffectively.
• Example of error: The management structure of project team resources was lacking. Team members did not know who was supposed to elicit the requirements from a bank lender, which affected the participation of an important stakeholder during the requirements process.
• Example of fault: Omitted functionality, as the requirements of a bank lender (i.e., that the LA application handle both fixed rate loans and adjustable rate loans) were not recorded in the specification.

Inadequate Requirements Process errors
• Description: Occur when the requirements authors do not fully understand all of the RE steps necessary to ensure the software is complete and neglect to incorporate one or more essential steps into the plan.
• Example of error: The requirements engineering plan did not incorporate requirements traceability measures to link requirements to user needs.
• Example of fault: An extraneous requirement that allows loan analysts to change borrower information is included, which could result in unwanted functionality and unnecessary work for the developers.
5.5. Threats to validity

Threats addressed: To gather evidence, we searched multiple electronic databases covering different software engineering and psychology journals and conferences. To reduce researcher bias, two researchers independently performed each step of the review execution before consolidating the results. This process also ensured that the researchers extracted consistent information from each primary study. We performed both the identification of requirements engineering human errors and the selection of a suitable human error classification framework under the supervision of human error expert Bradshaw. This supervision ensured that the HET is strongly grounded in Cognitive Psychology research.

Threats unaddressed: We conducted this review based on the assumption that SLR-2009 [28] included all relevant studies prior to 2006. We are aware that this assumption may have resulted in missing some studies and, consequently, some requirements phase human errors.

6. Conclusion and future work

A careful application of human error research to requirements engineering resulted in the development of a theoretically sound taxonomy of requirements engineering human errors. This taxonomy is based on a standard human error taxonomy from cognitive psychology. The primary contributions of this work are:

• Illustration of the ability and value of applying human error research to a software engineering problem through close interaction between software engineering experts and cognitive psychology experts;
• Development of a human error taxonomy to organize the common human errors made during requirements engineering into a systematic framework; and
• An analysis of the literature describing human errors to identify the lack of cohesiveness in requirements engineering research as it relates to the use of human error information.

The use of human error information has the potential to provide a more effective solution to the software quality problem than a focus only on faults. A taxonomy like the one developed in this paper will help developers identify the most common types of errors so they can either implement policies to prevent those errors (e.g., training) or focus the review process on identifying and removing the faults caused by those errors.

Although the primary goal of this review was to propose a taxonomy of requirements engineering human errors, this review may also be helpful for researchers who want to create human error taxonomies for other phases of the software lifecycle. While we have empirically validated the usefulness of the HET for detecting and preventing requirement errors, we expect to gather additional evidence and observations about requirement errors (see the empty cells in Table 10) through further empirical evaluations.

Table 10. Mapping human errors to requirements engineering activities. Phases: E = Elicitation, A = Analysis, S = Specification, V = Verification, M = Management; each ✓ marks a phase in which the error class has been observed.

Slips:
• Clerical Errors ✓ ✓ (elicitation and specification)
• Term Substitution errors ✓

Lapses:
• Loss of information from stakeholders errors ✓
• Accidentally overlooking requirements errors ✓
• Multiple terms for the same concept errors ✓

Mistakes:
• Application errors: knowledge-based plan is incomplete ✓ ✓ ✓
• Environment errors: knowledge-based plan is incomplete ✓ ✓ ✓ ✓
• Information Management errors: knowledge-based plan is incomplete ✓
• Wrong Assumption errors: knowledge-based plan is wrong ✓ ✓
• Inappropriate communication based on incomplete/faulty understanding of roles errors: knowledge-based plan is wrong ✓ ✓ ✓
• Mistaken belief that it is impossible to specify non-functional requirements errors: knowledge-based plan is incomplete ✓ ✓ ✓
• Not having a clear demarcation between client and users errors ✓
• Lack of awareness of requirement sources errors: knowledge-based plan is incomplete ✓
• Solution choice errors ✓
• Inadequate Requirements Process errors ✓
• Syntax errors ✓

Fig. 2. Citation analysis.
Acknowledgements

This research was supported by National Science Foundation awards 1423279 and 1421006. The authors would like to thank the participants who attended the Workshop on Applications of Human Error Research to Improve Software Engineering (WAHESE 2015).

Appendix A

Table 11 describes the search strings, along with their search focus, that we executed on the IEEExplore, INSPEC, ACM Digital Library, SCIRUS (Elsevier), Google Scholar, PsychINFO (EBSCO), and Science Citation Index databases.

Table 11. Search strings.

1. Software quality improvement approach: ((software OR development OR application OR product OR project) AND (quality OR condition OR character OR property OR attribute OR aspect) AND (improvement OR enhancement OR advancement OR upgrading OR ameliorate OR betterment) AND (approach OR process OR system OR technique OR methodology OR procedure OR mechanism OR plan OR pattern))

2. Software inspection methods: ((software OR development OR application OR product OR project) AND (inspection OR assessment OR evaluation OR examination OR review OR measurement) AND (approach OR process OR system OR technique OR methodology OR procedure OR mechanism OR plan OR pattern))

3. Error abstraction OR root causes: ((error OR mistake OR problem OR reason OR fault OR defect OR imperfection OR flaw OR lapse OR slip OR err) AND (abstraction OR root cause OR cause))

4. Requirement stage errors: ((requirement OR specification) AND (phase OR stage OR situation OR division OR period OR episode OR part OR state OR facet) AND (error OR mistake OR problem OR reason OR fault OR defect OR imperfection OR flaw OR lapse OR slip OR err))

5. Software error/fault/defect taxonomy: ((software OR development OR application OR product OR project) AND (error OR mistake OR problem OR reason OR fault OR defect OR imperfection OR flaw OR lapse OR slip OR err) AND (taxonomy OR classification OR categorization OR grouping OR organization OR terminology OR systematization))

6. Human error classification: ((human OR cognitive OR individual OR psychological) AND (error OR mistake OR problem OR reason OR fault OR defect OR imperfection OR flaw OR lapse OR slip OR err) AND (taxonomy OR classification OR categorization OR grouping OR organization OR terminology OR systematization))
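Each string in Table 11 follows the same pattern: a conjunction of OR'ed synonym groups. The following sketch (ours; the helper function is illustrative, with term lists copied from Table 11) shows how such strings can be assembled programmatically, here for search string #6:

def build_search_string(*synonym_groups):
    # AND together groups of OR'ed synonyms, as in Table 11.
    return "(" + " AND ".join(
        "(" + " OR ".join(group) + ")" for group in synonym_groups
    ) + ")"

# Search string #6: human error classification.
human = ["human", "cognitive", "individual", "psychological"]
error = ["error", "mistake", "problem", "reason", "fault", "defect",
         "imperfection", "flaw", "lapse", "slip", "err"]
taxonomy = ["taxonomy", "classification", "categorization", "grouping",
            "organization", "terminology", "systematization"]

print(build_search_string(human, error, taxonomy))
# ((human OR cognitive OR ...) AND (error OR ...) AND (taxonomy OR ...))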
Appendix B

Table 12 lists the distribution of the primary studies across different journal and conference venues.

Table 12. Distribution of primary studies across venues.
• International Journal of Computer Applications – 1
• IEEE International Conference on Engineering of Complex Computer Systems (ICECCS) – 1
• Information Sciences Journal – 1
• IEEE Frontiers in Education Conference – 1
• International Conference on Software Engineering Advances – 1
• ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM) – 1
• Workshop em Engenharia de Requisitos (WER) – 1
• IEEE/NASA Software Engineering Workshop (SEW) – 1
• Minds and Machines Journal – 1
• Malaysian Conference in Software Engineering (MySEC) – 1
• Information Systems Journal – 2
• Information and Software Technology Journal – 1
• International Workshop on Rapid Continuous Software Engineering (RCoSE) – 1
• International Conference on Recent Advances in Computing and Software Systems (RACSS) – 1
• ISSAT International Conference on Reliability and Quality in Design – 1
• IEEE International Requirements Engineering Conference (RE) – 1
• IEEE Annual Computer Software and Applications Conference Workshops – 1
• IEEE International Conference on Cognitive Informatics – 1
• IFIP/IEEE International Symposium on Integrated Network Management (IM) – 1
• IEEE International Symposium on Software Reliability Engineering (ISSRE) – 1
• International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS) – 1
• International Conference on Software Engineering and Data Mining (SEDM) – 1
• Iberian Conference on Information Systems and Technologies (CISTI) – 1
• International Conference on Software Engineering and Knowledge Engineering (SEKE) – 1
• Journal of Object Technology – 1
• Journal of Systems and Software – 2
• The EDP Audit, Control, and Security Newsletter (Journal) – 1
• International Management Review Journal – 1
• American Journal of Medical Quality – 1
• IEEE International Conference on Automation and Logistics (ICAL) – 1
• Safety Science Journal – 1
• IFIP Advances in Information and Communication Technology (Journal) – 1
• International Conference on Computer Safety, Reliability, and Security (SAFECOMP) – 1
• International Conference on Industrial Engineering and Engineering Management – 1
• Studies in Health Technology and Informatics (Book Series) – 1
• Professional Safety Journal – 1
Appendix C

This appendix lists the papers describing the 18 studies that provided input to the HET. Some of these papers are also cited in the main paper. Those papers are indicated with an asterisk (*) and included in the reference list for the main paper.

• Basili VR, Perricone BT (1984) Software errors and complexity: an empirical investigation. Commun ACM 27:42–52. doi:10.1145/69605.2085
• Basili VR, Rombach HD (1987) Tailoring the software process to project goals and environments. In: Ninth International Conference on Software Engineering, IEEE Press, California, USA, pp 345–357
• Bhandari I, Halliday MJ, Chaar J, Chillarege R, Jones K, Atkinson JS, Lepori-Costello C, Jasper PY, Tarver ED, Lewis CC, Yonezawa M (1994) In-process improvement through defect data interpretation. IBM Syst J 33:182–214. doi:10.1147/sj.331.0182
• *Bjarnason E, Wnuk K, Regnell B [5] Requirements are slipping through the gaps - a case study on causes & effects of communication gaps in large-scale software development. In: Proc 2011 IEEE 19th International Requirements Engineering Conference (RE 2011), pp 37–46
• Coughlan J, Macredie RD (2002) Effective communication in requirements elicitation: a comparison of methodologies. Requir Eng 7:47–60. doi:10.1007/s007660200004
• *Firesmith D [8] Common requirements problems, their negative consequences, and the industry best practices to help solve them. J Object Technol 6:17–33. doi:10.5381/jot.2007.6.1.c2
• Galliers J, Minocha S, Sutcliffe AG (1998) A causal model of human error for safety critical user interface design. ACM Trans Comput-Hum Interact 5:756–769
• *Huang F, Liu B, Huang B [12] A taxonomy system to identify human error causes for software defects. In: Proc 18th ISSAT International Conference on Reliability and Quality in Design, 2012
• *Kumaresh S, Baskaran R (2010) Defect analysis and prevention for software process quality improvement. Int J Comput Appl 8:42–47. doi:10.5120/1218-1759
• *Kushwaha DS, Misra AK [16] Cognitive software development process and associated metrics - a framework. In: Proc 5th IEEE International Conference on Cognitive Informatics (ICCI 2006), pp 255–260
• *Lehtinen TOA, Mantyla MV, Vanhanen J, Itkonen J, Lassenius C [18] Perceived causes of software project failures - an analysis of their relationships. Inf Softw Technol 56:623–643. doi:10.1016/j.infsof.2014.01.015
• *Leszak M, Perry DE, Stoll D [19] A case study in root cause defect analysis. In: Proc 22nd International Conference on Software Engineering (ICSE 2000), pp 428–437. doi:10.1145/337180.337232
• *Lopes M, Forster C [20] Application of human error theories for the process improvement of Requirements Engineering. Inf Sci 250:142–161
• Lutz RR (1993) Analyzing software requirements errors in safety-critical, embedded systems. In: Proc IEEE International Symposium on Requirements Engineering, pp 126–133. doi:10.1109/ISRE.1993.324825
• Mays RG, Jones CL, Holloway GJ, Studinski DP [21] Experiences with defect prevention. IBM Syst J 29:4–32. doi:10.1147/sj.291.0004
• Nakashima T, Oyama M, Hisada H, Ishii N (1999) Analysis of software bug causes and its prevention. Inf Softw Technol 41:1059–1068. doi:10.1016/S0950-5849(99)00049-X
• De Oliveira KM, Zlot F, Rocha AR, Travassos GH, Galotta C, De Menezes CS (2004) Domain-oriented software development environment. J Syst Softw 72:145–161. doi:10.1016/S0164-1212(03)00233-4
• Zhang X, Pham H (2000) An analysis of factors affecting software reliability. J Syst Softw 50:43–56. doi:10.1016/S0164-1212(99)00075-8
References

[1] V. Anu, G.S. Walia, G. Bradshaw, W. Hu, J.C. Carver, Usefulness of a human error identification tool for requirements inspection: an experience report, Proceedings of the 23rd International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ 2017), Essen, Germany, 2017.
[2] V. Anu, G.S. Walia, W. Hu, J.C. Carver, G. Bradshaw, Using a cognitive psychology perspective on errors to improve requirements quality: an empirical investigation, Proceedings of the 27th IEEE International Symposium on Software Reliability Engineering (ISSRE 2016), Ottawa, Canada, 2016, pp. 65–76.
[3] V. Anu, G.S. Walia, W. Hu, J.C. Carver, G. Bradshaw, Effectiveness of human error taxonomy during requirements inspection: an empirical investigation, Proceedings of the 28th International Conference on Software Engineering and Knowledge Engineering (SEKE 2016), San Francisco, CA, USA, 2016, pp. 531–536.
[5] E. Bjarnason, K. Wnuk, B. Regnell, Requirements are slipping through the gaps - a case study on causes & effects of communication gaps in large-scale software development, Proceedings of the 2011 IEEE 19th International Requirements Engineering Conference (RE 2011), 2011, pp. 37–46.
[6] R. Chillarege, I.S. Bhandari, J.K. Chaar, M.J. Halliday, B.K. Ray, D.S. Moebus, Orthogonal defect classification - a concept for in-process measurements, IEEE Trans. Softw. Eng. 18 (1992) 943–956, http://dx.doi.org/10.1109/32.177364.
[7] T. Diller, G. Helmrich, S. Dunning, S. Cox, A. Buchanan, S. Shappell, The Human Factors Analysis Classification System (HFACS) applied to health care, Am. J. Med. Qual. 29 (2014) 181–190, http://dx.doi.org/10.1177/1062860613491623.
[8] D. Firesmith, Common requirements problems, their negative consequences, and the industry best practices to help solve them, J. Object Technol. 6 (2007) 17–33, http://dx.doi.org/10.5381/jot.2007.6.1.c2.
[9] B.G. Glaser, A.L. Strauss, The discovery of grounded theory, Int. J. Qual. Methods 5 (1967) 1–10, http://dx.doi.org/10.2307/588533.
[10] W. Hu, J.C. Carver, V. Anu, G.S. Walia, G. Bradshaw, Detection of requirement errors and faults via a human error taxonomy: a feasibility study, Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM 2016), Ciudad Real, Spain, 2016.
[11] W. Hu, J.C. Carver, V. Anu, G.S. Walia, G. Bradshaw, Defect prevention in requirements using human error information: an empirical study, Proceedings of the 23rd International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ 2017), 2017.
[12] F. Huang, B. Liu, B. Huang, A taxonomy system to identify human error causes for software defects, Proceedings of the 18th ISSAT International Conference on Reliability and Quality in Design (ISSAT 2012), 2012.
[13] IEEE, Systems and software engineering - vocabulary, ISO/IEC/IEEE 24765:2010(E), 2010, pp. 1–418, http://dx.doi.org/10.1109/IEEESTD.2010.5733835.
[14] B. Kitchenham, Procedures for Performing Systematic Reviews, Technical Report, Keele University, Keele, UK, 2004.
[15] S. Kumaresh, R. Baskaran, Defect analysis and prevention for software process quality improvement, Int. J. Comput. Appl. 8 (2010) 42–47, http://dx.doi.org/10.5120/1218-1759.
[16] D.S. Kushwaha, A.K. Misra, Cognitive software development process and associated metrics - a framework, Proceedings of the 5th IEEE International Conference on Cognitive Informatics (ICCI 2006), 2006, pp. 255–260.
[17] F. Lanubile, F. Shull, V.R. Basili, Experimenting with error abstraction in requirements documents, Proceedings of the 5th International Symposium on Software Metrics, 1998.
[18] T.O.A. Lehtinen, M.V. Mantyla, J. Vanhanen, J. Itkonen, C. Lassenius, Perceived causes of software project failures - an analysis of their relationships, Inf. Softw. Technol. 56 (2014) 623–643, http://dx.doi.org/10.1016/j.infsof.2014.01.015.
[19] M. Leszak, D.E. Perry, D. Stoll, A case study in root cause defect analysis, Proceedings of the 22nd International Conference on Software Engineering (ICSE 2000), Limerick, Ireland, 2000, pp. 428–437.
[20] M. Lopes, C. Forster, Application of human error theories for the process improvement of Requirements Engineering, Inf. Sci. 250 (2013) 142–161.
[21] R. Mays, C. Jones, G. Holloway, D. Studinski, Experiences with defect prevention, IBM Syst. J. 29 (1) (1990) 4–32.
[22] J. Reason, Human Error, Cambridge University Press, New York, NY, 1990.
[23] J. Reason, The Human Contribution: Unsafe Acts, Accidents and Heroic Recoveries, Ashgate Publishing, Ltd., Surrey, UK, 2008.
[24] J. Reason, A. Manstead, S. Stradling, J. Baxter, K. Campbell, Errors and violations on the roads: a real distinction? Ergonomics 33 (1990) 1315–1332, http://dx.doi.org/10.1080/00140139008925335.
[25] E.R. Scerri, The Periodic Table: Its Story and Its Significance, Oxford University Press, USA, 2006.
[26] D. Scherer, M. de Fátima Q. Vieira, J.A.D.N. Neto, Human error categorization: an extension to classical proposals applied to electrical systems operations, Hum.-Comput. Interact. (2010) 234–245.
[27] G. Tassey, The Economic Impacts of Inadequate Infrastructure for Software Testing, National Institute of Standards and Technology, Gaithersburg, MD, 2002.
[28] G.S. Walia, J.C. Carver, A systematic literature review to identify and classify software requirement errors, Inf. Softw. Technol. 51 (2009) 1087–1109, http://dx.doi.org/10.1016/j.infsof.2009.01.004.
[29] G.S. Walia, J.C. Carver, Using error abstraction and classification to improve requirement quality: conclusions from a family of four empirical studies, Empir. Softw. Eng. 18 (2013) 625–658, http://dx.doi.org/10.1007/s10664-012-9202-3.
[30] D. Wiegmann, C. Detwiler, Human error and general aviation accidents: a comprehensive, fine-grained analysis using HFACS, Security (2005) 1–5, http://dx.doi.org/10.1518/001872007X312469.
[31] J. Zhang, V.L. Patel, T.R. Johnson, E.H. Shortliffe, A cognitive taxonomy of medical errors, J. Biomed. Inform. 37 (2004) 193–204, http://dx.doi.org/10.1016/j.jbi.2004.04.004.