Find a recently (post-2010) published study in your field which
applies one or more of the multivariate statistical methods discussed in class. Create a document that includes:
1) A discussion of the main results of the article
2) A full critique of the methodology used and the author's interpretation of the results
3) A discussion of whether the analysis answers the research question(s) posed
4) Your recommendations for revising the paper and why
JOURNAL OF INFORMATION SYSTEMS
Vol. 31, No. 3, Fall 2017
pp. 81–99
American Accounting Association
DOI: 10.2308/isys-51837
When Should Audit Firms Introduce
Analyses of Big Data Into the Audit Process?
Anna M. Rose
Jacob M. Rose
Oregon State University
Kerri-Ann Sanderson
Jay C. Thibodeau
Bentley University
ABSTRACT: This study investigates how the timing of the consideration of Big Data visualizations affects an auditor’s evaluation of evidence and professional judgments. In addition, we examine whether the use of an intuitive processing mode, as compared to a deliberative processing mode, influences an auditor’s use and evaluation of Big Data visualizations. We conduct an experiment with 127 senior auditors from two Big 4 firms and find that auditors have difficulty recognizing patterns in Big Data visualizations when viewed before more traditional audit evidence. Our findings also indicate that auditors who view Big Data visualizations containing patterns that are contrary to management assertions after they view traditional audit evidence have greater concerns about potential misstatements and increase budgeted hours more. Overall, our results suggest that Big Data visualizations used as evidential matter have fewer benefits when they are viewed before auditors examine more traditional audit evidence.
Keywords: Big Data; visualizations; pattern recognition; intuitive processing; deliberative processing.
I. INTRODUCTION
The financial statement audit process increasingly involves the use of greater amounts of data and more sophisticated analytical tools. In order to leverage the value of new data sources and ultimately reduce the risk of material misstatement, audit firms are now evaluating audit approaches that encompass multiple external and internal sources of data (Yoon, Hoogduin, and Zhang 2015). Many of the new approaches involve harnessing the richness of information contained in what is commonly referred to as Big Data. We examine auditors’ use of Big Data to identify relevant patterns that can be used to inform their audit judgments and decisions.

Big Data consists of large, unstructured datasets that are beyond the processing capabilities of traditional querying tools and that include data from financial and nonfinancial sources (Brown-Liburd, Issa, and Lombardi 2015). Big Data is generated on a continuous basis from a wide variety of sources with varying degrees of veracity (Zhang, Yang, and Appelbaum 2015).
Audit firms wish to use this potentially vast source of evidence to enhance audit effectiveness, but research in this area remains incomplete, in large part due to rapid technological advances (KPMG 2012; Yoon et al. 2015).
While the prospects of using Big Data offer much promise to financial statement auditors, there is currently limited understanding of the effects of Big Data on the judgment and decision-making processes of financial statement auditors (Brown-Liburd et al. 2015). Indeed, there are a number of critical issues for audit firms to address before they can successfully implement analyses of Big Data in practice, such as the types of data to analyze and the most appropriate presentation formats to utilize (Cao, Chychyla, and Stewart 2015). In addition, there is a broad issue that has received little or no research attention to date, but has the capacity to influence the costs and benefits of all forms of Big Data and all visualization formats. This issue involves when to provide audit teams with the results of analyses of Big Data. Failure to use appropriate data or presentation formats (Alles 2015; Information Systems Audit and Control Association [ISACA] 2013), or providing audit teams with data analyses at inappropriate times in the audit process, has the potential to create audit inefficiencies, increase the possibility of decision traps and biases, and even lead to audit failures.

The authors thank participants of the 2016 Journal of Information Systems Conference and workshop participants at Bentley University for their insightful comments. We also thank the professional participants who took the time to complete the experimental materials. Finally, we are very grateful for the guidance of the special issue editors, A. Faye Borthick and Robin R. Pennington.
Editor’s note: Accepted by A. Faye Borthick.
Submitted: April 2016
Accepted: May 2017
Published Online: June 2017
In an effort to shed light on the effects of Big Data on auditor judgment and decision processes, the purpose of this study is to examine how the timing of presenting Big Data visualizations influences an auditor’s evaluation of evidence and related professional judgments. Audit firms are investing significant resources into research designed to explore potential applications of Big Data visualizations in the audit process, and data-visualization groups are among the fastest growing practice areas at the larger offices of the Big 4 firms. Our interviews of audit partners at several Big 4 firms indicate that visualizations are currently being viewed at different points in the audit engagement, and auditors can examine these visualizations before or after traditional audit evidence is examined. As an example of the use of visualizations prior to the examination of traditional evidence, one firm distributes visualizations to the audit team for use in a fraud brainstorming session at the beginning of the audit, and auditors need to recognize patterns before they evaluate traditional sources of audit evidence. In addition, our interviews of Chief Audit Executives (CAEs) revealed that some Fortune 500 corporations are drastically changing their internal audit processes by using Big Data analyses to identify patterns that are then used to derive audit plans before other evidence is examined.
We asked audit partners and CAEs about the expected benefits of allowing auditors to examine visualizations of Big Data early in the audit process and before examining traditional audit evidence, and the underlying assumption was that it is beneficial for auditors to look for patterns in Big Data before they derive conclusions from other information. That is, there is an assumption that Big Data visualizations can reveal more useful patterns and more valuable information if auditors have a “clean slate” and have not already formed hypotheses and drawn conclusions based on traditional audit evidence. We propose that such an approach may create threats to effective auditing and may limit the benefits of Big Data analytics.
In addition, we posit that intuitive versus deliberative processing of evidence influences auditors’ use of Big Data visualizations. Prior research demonstrates that when decision makers employ intuitive processing, they are better able to recognize evidence that does not conform to expectations than are decision makers who employ deliberative processing (Wilson and Schooler 1991; Zhong 2011). Thus, intuitive processing should enhance auditors’ professional skepticism and improve their ability to recognize and identify threats relative to deliberative processing. Our experiment, therefore, addresses two key issues facing audit firms as they evaluate applications of Big Data in practice: (1) when to present the results of Big Data analyses to audit teams in order to maximize judgment benefits and minimize undesirable judgment traps and biases; and (2) which factors (e.g., processing modes) may affect auditors’ interpretation and incorporation of Big Data visualizations into the financial statement audit process.
We employ a 2 × 2 between-participants research design that involves 127 experienced audit seniors from two Big 4 firms to examine the implications of the timing of Big Data visualizations used during the audit. We manipulate when auditors examine visualizations of Big Data (before or after examining more traditional client information that creates a decision anchor) and investigate the effects of such timing on multiple auditor judgments. In addition, the experimental design includes a second manipulation that is known to prime a deliberative versus an intuitive processing mode, which allows us to evaluate whether the use of Big Data visualizations and their timing have different effects on auditor judgment depending upon the predominant processing mode that auditors use to evaluate the data.
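A balanced 2 × 2 between-participants design like the one described is conventionally analyzed with a two-way ANOVA, testing the two main effects (timing, processing mode) and their interaction. The paper does not reproduce its raw data, so the sketch below uses invented cell values purely to illustrate how the sums of squares and F statistics are computed; the factor labels are borrowed from the study’s manipulations but the numbers are hypothetical.

```python
# Sketch of a two-way (2 x 2) between-participants ANOVA. The cell data
# below are invented for illustration only; they are not the study's data.

def two_by_two_anova(cells):
    """cells maps (factor_a_level, factor_b_level) -> list of observations.
    Assumes a balanced design (equal n per cell); df = 1 per effect."""
    a_levels = sorted({a for a, _ in cells})
    b_levels = sorted({b for _, b in cells})
    n = len(next(iter(cells.values())))
    mean = lambda xs: sum(xs) / len(xs)

    grand = mean([x for xs in cells.values() for x in xs])
    a_means = {a: mean([x for b in b_levels for x in cells[(a, b)]]) for a in a_levels}
    b_means = {b: mean([x for a in a_levels for x in cells[(a, b)]]) for b in b_levels}
    cell_means = {k: mean(v) for k, v in cells.items()}

    # Between-group sums of squares for each main effect and the interaction.
    ss_a = n * len(b_levels) * sum((a_means[a] - grand) ** 2 for a in a_levels)
    ss_b = n * len(a_levels) * sum((b_means[b] - grand) ** 2 for b in b_levels)
    ss_ab = n * sum((cell_means[(a, b)] - a_means[a] - b_means[b] + grand) ** 2
                    for a in a_levels for b in b_levels)
    # Within-cell (error) sum of squares.
    ss_err = sum((x - cell_means[k]) ** 2 for k, xs in cells.items() for x in xs)

    ms_err = ss_err / (len(cells) * (n - 1))
    return {"F_A": ss_a / ms_err, "F_B": ss_b / ms_err, "F_AB": ss_ab / ms_err}

# Hypothetical cells: timing (before/after) x processing (intuitive/deliberative).
cells = {
    ("before", "intuitive"):    [4, 5, 6],
    ("before", "deliberative"): [3, 4, 5],
    ("after",  "intuitive"):    [7, 8, 9],
    ("after",  "deliberative"): [6, 7, 8],
}
result = two_by_two_anova(cells)
```

In this toy data the two factors shift the cell means additively, so the interaction F is zero while both main-effect Fs are positive; the study’s hypotheses, by contrast, concern cases where timing and processing mode do interact.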
Our results indicate that auditors have difficulty recognizing relevant patterns in Big Data visualizations when viewed before the evaluation of more traditional client information. In addition, and importantly, there are significant effects of the timing of Big Data use on auditor judgments that affect both audit planning and overall audit effectiveness. Auditors who view Big Data visualizations containing information that is contrary to management’s disclosures after they review preliminary analytical procedures (i.e., traditional audit evidence) express more concerns about misstatements, relative to auditors who receive these visualizations before reviewing preliminary analytical procedures. Further, auditors increase budgeted hours more when visualizations that are contrary to management assertions are presented after reviewing preliminary analytical procedures, rather than before reviewing the analytical procedures. Mediation analyses reveal that these effects are driven, at least in part, by the effects of presentation timing on beliefs about the truthfulness of management’s explanation for events and concerns about the need to collect additional data when visualizations suggest conflict with the results of traditional audit procedures.
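The mediation analyses referred to above follow the standard regression (product-of-coefficients) logic: the indirect effect of the manipulation X on the judgment Y through a mediator M is a·b, where a is the X→M slope and b is the M→Y slope controlling for X. The variable names and toy data below are hypothetical stand-ins for the study’s measures, chosen only to make the mechanics concrete.

```python
# Minimal regression-based mediation sketch (product-of-coefficients method).
# All data are invented for illustration; only the logic mirrors the standard
# approach: indirect effect = a * b, where a = effect of X on mediator M and
# b = effect of M on Y controlling for X.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ols(columns, y):
    """OLS coefficients [intercept, b1, b2, ...] via the normal equations."""
    cols = [[1.0] * len(y)] + columns
    k = len(cols)
    xtx = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(k)]
           for i in range(k)]
    xty = [sum(ci * yi for ci, yi in zip(cols[i], y)) for i in range(k)]
    return solve(xtx, xty)

# Hypothetical data: X = timing condition (0 = before, 1 = after),
# M = doubt about management's explanation, Y = increase in budgeted hours.
X = [0, 0, 0, 0, 1, 1, 1, 1]
M = [1, 2, 1, 2, 3, 4, 3, 4]
Y = [2, 4, 2, 4, 6, 8, 6, 8]

a = ols([X], M)[1]             # path a: X -> M
_, direct, b = ols([X, M], Y)  # Y ~ X + M: direct effect c' and path b
indirect = a * b               # mediated (indirect) effect
total = ols([X], Y)[1]         # total effect c = c' + a*b in OLS
```

In practice the significance of the indirect effect would be assessed with bootstrapped confidence intervals rather than read directly off the point estimate.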
The results of this study are important because auditors may ignore Big Data visualizations entirely in favor of internal data sources or fail to recognize relevant patterns in visualizations, such as trends that contradict information discovered during traditional analytical procedures or relationships that are contrary to management assertions. Thus, presentation timing is important to the recognition of meaningful patterns in Big Data and the integration of Big Data into decision processes. Overall, our results indicate that Big Data offers fewer benefits and potentially undesirable consequences when the visualizations are introduced into the decision processes of auditors before they have examined other audit evidence and formed initial hypotheses and expectations that can drive their search for patterns in Big Data.
II. BACKGROUND AND HYPOTHESIS DEVELOPMENT
Use of Big Data in the Financial Statement Audit
Professional standards require external auditors to perform preliminary analytical procedures during the planning stages of each financial statement audit. The objective of such procedures is to direct the auditor’s attention to those significant financial statement accounts that might contain a material misstatement (Louwers, Blay, Sinason, Strawser, and Thibodeau 2018). To complete the analytical procedures, auditors should consider all types of relevant data that would help to improve their understanding of the client’s business and industry, including “data that is preliminary or data that is aggregated at a high level” in their analyses (Public Company Accounting Oversight Board [PCAOB] 2010, ¶48). As a result, and given the extent of data that are currently available from a wide variety of sources, the increased use of Big Data holds much appeal as a way to improve the effectiveness of preliminary analytical review procedures.
This potential is acknowledged in a recent PricewaterhouseCoopers (PwC) publication that states that “data analytics are altering the way the audit process is done . . . auditors have new tools to extract and visualize data, allowing them to dig into larger, non-traditional data sets and perform more intricate analysis . . . the ability to analyze all of it leads to better insight” (PwC 2015, 6). However, in order to take advantage of Big Data when completing a preliminary analytical review, auditors must be able to recognize patterns in the Big Data being analyzed.
For some time, researchers have known about the importance of an auditor’s ability to recognize data patterns while assessing audit risks that may suggest the existence of errors or fraud in the financial statements (Libby 1985; Bedard and Biggs 1991; Coakley and Brown 1993; O’Donnell and Perkins 2011). New data sources offer exciting opportunities for the identification of decision-relevant patterns that have previously been hidden from auditors. Indeed, auditors are taking a more encompassing approach in their audit risk assessments by gathering and examining evidence from a variety of sources in an effort to decrease the likelihood of material misstatement and audit failure (Yoon et al. 2015). Evidential matter from Big Data has the potential to be instrumental in this process. Data analytic tools allow auditors to search for patterns in Big Data that would likely be undetectable in typical audit samples or smaller datasets (Alles and Gray 2014; Alles 2015). However, the large volumes of output from analyses of Big Data could be overwhelming for auditors to cognitively process and may exacerbate auditors’ inability to effectively recognize patterns (Brown-Liburd et al. 2015).
In fact, prior research finds that auditors often fail to adequately recognize patterns in financial and nonfinancial data (Libby 1985; Bedard and Biggs 1991; Bierstaker, Bedard, and Biggs 1999; Asare, Trompeter, and Wright 2000; O’Donnell and Perkins 2011). Further, studies find that auditors can fail to correctly extrapolate findings from time-series data (Biggs and Wild 1985), and may not adequately use prior information when analyzing subsequently identified patterns to inform their ultimate judgment (Bedard and Biggs 1991). Results from these prior studies suggest that there is a significant danger that auditors will not recognize important patterns in new and more complicated data sources. In addition to the possibility that auditors will fail to recognize patterns and use them as inputs to their judgments and decisions, visualizations of Big Data could also lead to the identification of a vast number of meaningless patterns that distract and lead the decision maker to pursue irrelevant and/or inefficient investigations (Brown-Liburd et al. 2015).
Although the prior literature suggests that auditors often fail to adequately identify relevant data patterns, there are factors related to the auditor and the auditing environment that can improve the pattern recognition process. Hammersley (2006) finds that industry-specialist auditors have better developed problem representations about their subject industry and are better able to identify seeded misstatement patterns. Likewise, Selby (2011) finds that auditors with a procedural understanding of automated controls better assimilate the meanings of risk patterns in automated controls. Studies also find that visualization strategies can help auditors recognize patterns. O’Donnell and Perkins (2011) find that employing a systems-thinking tool approach can help auditors better identify fluctuation patterns in accounts and appropriately assess misstatement risk. These studies provide evidence that auditors’ pattern recognition deficiencies can be overcome by enhancing their knowledge structures and also by altering the manner of information presentation. In this study, we focus on one critical aspect of information presentation (i.e., its timing). And, importantly, employing experienced auditors allows us to examine the effects of Big Data on a participant pool that has knowledge and experience in the subject matter addressed in our decision task, thereby reducing the likelihood that knowledge deficits or lack of training are causing the results.
Effects of Big Data Visualization Timing on Auditor Decisions
Auditing firms stress the importance of overcoming judgment biases during the audit process (KPMG 2011). The use of Big Data in audit practice has the potential to create a variety of decision biases and exacerbate existing biases (Brown-Liburd et al. 2015). For example, auditors may ignore relevant patterns in Big Data, fail to recognize patterns, or identify irrelevant patterns. Failing to recognize patterns or focusing on irrelevant patterns would severely limit the value of Big Data to the audit process and decrease audit efficiency and effectiveness. Focusing on patterns that are irrelevant to audit decisions is particularly worrisome because decision makers tend to overgeneralize from interesting examples and anecdotes, while placing too little emphasis on statistically supported conclusions (e.g., Borgida and Nisbett 1977; Hamill, Wilson, and Nisbett 1980; Fagerlin, Wang, and Ubel 2005; Kida 2006). The novelty of Big Data visualizations could make them more interesting than traditional audit procedures, and identification of irrelevant patterns would result in excessive auditing, focusing on irrelevant issues, and possibly incorrect audit conclusions.
Big Data analysis has the potential to change the way auditors collect evidence and make audit decisions (Brown-Liburd et al. 2015). Although there are a few prior studies that discuss the potential impacts of Big Data on auditors’ decisions (Brown-Liburd et al. 2015) and factors that influence the use of Big Data on the audit (e.g., Alles 2015; Cao et al. 2015), there is a dearth of research investigating how and in what ways the review of Big Data affects auditor judgment. This study aims to fill this gap in the literature by examining how the timing of Big Data review influences auditors’ judgments and evidence evaluation. As there is limited prior research in this area, we gather insights from practitioners about the potential effects of the timing of Big Data review on auditors’ subsequent judgments.

Interviews with partners at Big 4 firms suggest that there is no defined standard for the timing of Big Data examination during the audit engagement. Big Data may be reviewed before or after examining traditional audit evidence. In some large corporations, Internal Auditing departments require Big Data analysis and review during the audit planning phase before examination of more traditional audit evidence. Practitioners argue that examining Big Data visualizations before traditional evidence allows auditors to identify useful patterns that will enhance their interpretation and analyses of more traditional audit evidence. Other practitioners argue that Big Data visualizations should only be used to supplement other audit evidence and that Big Data should be reviewed after the examination of more traditional audit evidence. While practitioners hold differing perspectives regarding when Big Data evidence should be reviewed during the audit engagement, there is no research that we are aware of that investigates the potential effects of timing on auditor decision making.
In the context of analyzing Big Data, we expect that developing an initial expectation or hypothesis about an audit matter will provide auditors with a framework for understanding that will enhance an auditor’s pattern recognition. Consistent with Hammersley (2006), our study contends that auditors need a framework within which to identify and interpret patterns revealed by Big Data. Given the massive volumes of data and their widely varying veracity, Big Data is noisy and can easily contribute to information overload for the auditor (Brown-Liburd et al. 2015). In this complex and messy Big Data environment, auditors who understand the client’s financial issues and the results of traditional audit tests and who are able to form an initial expectation will develop a decision framework within which they can be more professionally skeptical and better able to identify relevant patterns presented in Big Data analyses. Having at least one perspective or expectation about the evidence under consideration allows the evaluator to envision factors that support or refute this initial expectation (Hammersley 2011). Auditors develop an expectation about the client’s financial statements by gathering initial audit evidence through preliminary management inquiries, analytical procedures, and other audit-related activities. Prior research contends that finding meaningful patterns in Big Data is challenging because the datasets are so voluminous and varied that many different patterns can emerge (e.g., Brown-Liburd et al. 2015; Carraway 2013), and having an initial perspective of the patterns that would be consistent or inconsistent with prior expectations can guide information search in a Big Data environment.
Given that having a decision framework and forming an initial expectation facilitate pattern recognition (Hammersley 2006), we expect that auditors who evaluate visualizations of Big Data after reviewing traditional audit evidence will better recognize patterns of evidence. Pattern recognition will be improved because the traditional evidence provides the decision framework and information needed to form expectations. This leads to our first hypothesis:

H1a: Auditors who examine Big Data visualizations after reviewing preliminary analytical procedures will be more likely to recognize important patterns of evidence than will auditors who examine Big Data visualizations before reviewing analytical procedures.
In addition to facilitating pattern recognition, we also expect that presentation timing will influence auditor judgments and decisions. Prior research in psychology indicates that the order in which different types of information are received changes the processing mode that decision makers employ to analyze the data and ultimately form their judgments. An important prior study in this area was conducted by De Martino, Kumaran, Seymour, and Dolan (2006). Their experiments employ functional magnetic resonance imaging (fMRI) scans to determine what portions of the brain are activated by framing biases and different decision processes. They find that individuals differ in their tendencies to use intuitive versus deliberative regions of the brain, and that combining intuitive and deliberative judgment processes produces superior judgments relative to intuition or deliberative processing alone. In addition, and important to our study, creating conflict between an intuitive judgment and a deliberative process resulted in the activation of multiple regions of the brain and the most effective judgment processes. These results suggest that Big Data analyses and visualizations offer more benefits when these data create conflicts with intuitions and initial expectations. That is, some of the benefits of Big Data in the audit will come from the capacity of the data to create conflict, activate skepticism, and produce judgments that combine intuition and deliberative reasoning.
To create this conflict, auditors need to form an initial hypothesis or expectation about audit evidence, which then allows them to envision factors that could refute their expectation (Hammersley 2011). We posit that examining Big Data visualizations after forming expectations based on traditional audit evidence will allow auditors to better integrate evidence that is contrary to management assertions into their decision processes and employ professional skepticism. Auditors who receive Big Data evidence that differs from management assertions will be more skeptical of the accuracy and reliability of accounting disclosures when the visualizations are presented after auditors have viewed traditional audit evidence and have formed initial expectations:

H1b: Auditors who examine Big Data visualizations (containing patterns of evidence that differ from management assertions) after reviewing preliminary analytical procedures will be more likely to believe that accounting figures are misstated than will auditors who examine visualizations before reviewing analytical procedures.
Further, auditors’ heightened evidence pattern recognition (i.e., resulting from reviewing traditional audit evidence before Big Data visualizations), coupled with their increased skepticism regarding management’s representations and accounting figures, will lead auditors to request and examine more audit evidence. This leads to our next hypothesis:

H1c: Auditors who examine Big Data visualizations (containing patterns of evidence that differ from management assertions) after reviewing preliminary analytical procedures will increase their assessment of budgeted audit hours more than will auditors who examine visualizations before reviewing analytical procedures.
Effects of Processing Mode on Auditor Judgments and Decisions
Psychology research indicates that individuals make decisions using two generic modes of information processing: an intuitive mode (“thinking fast”), in which decisions are made using automatic and heuristic processes; and a deliberative or analytical mode (“thinking slow”), which engages in more controlled and systematic reasoning (Kahneman 2011). Intuitive processing is fast and encompasses an emotive approach to evaluation that is performed subconsciously (Dane and Pratt 2007). Historically, researchers have suggested that intuitive and heuristic processing leads to poor decision making fraught with systematic errors (e.g., Tversky and Kahneman 1974), while engaging in more effortful and deliberative processing improves judgments and reduces individuals’ susceptibility to the processing pitfalls observed in heuristic processing (Kahneman 2011).
Recent psychology research has given cause to rethink the widely held belief that deliberative processing consistently results in superior decision making relative to intuitive processing. Research suggests that intuitive processing accesses subconscious cognitive structures that provide a multi-faceted evaluative approach to problem solving, whereas deliberative processing crowds out intuition and leads people to only focus on the salient factors in a given context (e.g., Zhong 2011; Wilson and Schooler 1991). For example, Ambady and Rosenthal (1993) find that students can make accurate evaluations of a person’s teaching performance in a very short period of time, even though they may not be able to articulate the specific reasons or factors leading to their conclusion. Similarly, Zhong (2011) finds that participants who exercise intuitive processing make better moral decisions than those who exercise deliberative processing. Zhong (2011) suggests that these findings are a result of the differential focus of deliberative versus intuitive processing. He reasons that compared to intuitive processing, deliberative processing focuses attention on the saliency of factors in the given context with little consideration of whether these factors could be legitimate causes of the effects observed.

Consequently, auditors are likely better able to identify deviations in patterns when they apply a more intuitive approach to pattern evaluation. That is, when auditors engage in intuitive processing (versus deliberative processing) before evaluating audit evidence, it is likely that they will be better able to recognize anomalies and instances where evidence factors do not conform to expectations. In this way, auditors will be more skeptical during evidence evaluation when they employ an intuitive approach to evidence evaluation. Research also finds that processing mode changes based on the decision context, and processing mode can, therefore, be effectively primed (see, e.g., Hsee and Rottenstreich 2004; Small, Loewenstein, and Slovic 2007; Zhong 2011). Thus, there is an opportunity to influence auditors’ processing modes in order to enhance their ability to recognize anomalies and important patterns in data.
Overall, research finds that intuitive processing engages evidence evaluation from a more comprehensive perspective (Dane and Pratt 2007), rather than focusing on salient factors that may not be diagnostic of the environment. In our study, auditors evaluate audit evidence related to a client’s gross margin derived from traditional analytical methods, as well as from Big Data sources. Given that intuitive processing is an emotive response (e.g., Tversky and Kahneman 1974) that operates by subconsciously matching patterns in evaluated evidence with a subconscious set of expectations (Lieberman 2000), we expect that auditors who engage in intuitive processing will be more sensitive to patterns that challenge expectations (Wolfe, Christensen, and Vandervelde 2016). As such, we expect that auditors engaged in intuitive processing will be more concerned about the potential for errors in the client’s gross margin presentation, and will be more alert to multiple patterns of audit evidence than will auditors who engage in deliberative processing. This leads to our next set of hypotheses:
H2a: Auditors exposed to an intervention that increases the level of intuitive processing will express more concerns about potential problems with the client's reported gross margin than will auditors exposed to an intervention that increases the level of deliberative processing.

H2b: Auditors exposed to an intervention that increases the level of intuitive processing will be more likely to recognize patterns of evidence in Big Data visualizations than will auditors exposed to an intervention that increases the level of deliberative processing.
Given that prior research finds that developing an initial perspective about audit evidence allows auditors to conceptualize factors that could support or counter their initial expectation (Hammersley 2011), we posit that developing an initial framework enhances auditors' pattern recognition. Therefore, we predict that auditors will be more likely to recognize patterns of evidence in visualizations when they review Big Data visualizations after reviewing analytical procedures. We also expect that employing an intuitive processing approach (versus a deliberative processing approach) will allow auditors to engage in more comprehensive evidence evaluation (Dane and Pratt 2007) that will amplify pattern recognition and cause auditors to be more sensitive to multiple patterns of audit evidence (Wolfe et al. 2016). Taken together, available theory suggests that auditors will be most likely to recognize patterns of evidence in visualizations when they engage in intuitive processing and review Big Data visualizations after reviewing analytical procedures. This leads to the following interaction hypothesis:

H3: Auditors exposed to an intervention that increases the level of intuitive processing (versus deliberative processing) and who examine Big Data visualizations after (versus before) reviewing preliminary analytical procedures will be most likely to recognize patterns in Big Data visualizations.
III. TASK DESCRIPTION AND EXPERIMENT DESIGN
Analyses of Big Data can provide insights from operational, financial, and other types of electronic data using sources that are internal or external to the company. The information garnered from these analyses may provide insights that are historical, real-time, or predictive in nature (KPMG 2012). Our study examines the effects of timing of the use of Big Data from external third-party sources to augment traditional auditing procedures. We examine best practices related to the process of analyzing nonfinancial Big Data while completing preliminary analytical procedures, and investigate two significant issues facing audit firms as they evaluate applications of Big Data in practice: (1) when to present the results of data analyses to audit teams in order to maximize judgment benefits and minimize undesirable biases, and (2) cognitive processing factors that may affect auditors' interpretation and incorporation of Big Data analyses.

We address our research questions by conducting an experiment using professional auditors as participants (institutional review board approval was received to conduct this experiment). Our experimental design manipulates auditors' processing modes (intuitive versus deliberative) and the order in which Big Data visualizations are presented (before or after reviewing preliminary analytical procedures). We examine auditors' evaluation of the client's reporting of gross margin and their intention to collect additional data.
Participants
One hundred twenty-seven auditors, recruited from two Big 4 firms, completed the study. The experimental task was administered by multiple authors. Participants had an average of 2.7 years of auditing experience and 2.3 years of experience conducting analytical reviews, and 55 percent were male.
Task Description
To complete the experimental task, participants first answered a set of five questions that required them to make mathematical calculations or provide emotional responses to a list of statements. This task primes an intuitive or deliberative processing mode. Participants then read a hypothetical case describing a technology company that develops, manufactures, and markets gaming and wearable technology. Participants were then presented with one of two sources of information: (1) results of traditional preliminary analytical procedures related to the company's gross margin, or (2) Big Data visualizations collected from corporate websites, product discussion forums, Twitter feeds, and social networking sites.
There were four Big Data visualizations. We employ visualization types with which our auditor participants are already familiar in order to reduce familiarity effects, and the auditor participants have previously received training on the use of graphical displays to identify patterns. Two of the visualizations did not contain informative patterns; one contained a pattern that provided evidence that the large increase in gross margin would be unexpected; and the other suggested that management's explanation for the increase in gross margin could be questionable.¹ The visualizations are presented in Appendix A.
The first uninformative visualization presented a hashtag analysis that compared the number of social media messages tagged with the audited firm's name versus messages tagged with the hashtags of four major competitors in the industry. There were no discernible patterns in this visualization that were relevant to the audit decisions in the case. A second uninformative visualization displayed the volume and sentiment of online discussions related to the firm being audited during the third quarter and, again, there were no relevant patterns. The first informative visualization displayed the relationship between tweeting activity and sales revenue, and the pattern for the firm being audited was contrary to the pattern for all other firms with increasing revenues (i.e., sales revenue increasing while tweeting activity decreased). The second informative visualization presented a word cloud of terms used in social media to describe the client's products. The product responsible for the increased gross margin (a wearable fitness band) was conspicuously absent from the word cloud, indicating that the client's high-growth product was not being mentioned on social media. Both of the informative visualizations contradict elements found in other audit evidence from the preliminary analytical procedures related to gross margin.
Included in the information provided for the preliminary analytical procedures were results of company performance and divisional performance for the current and the prior year. Metrics related to net sales, cost of goods sold, gross margin, and gross margin percentage were provided. Participants also read a summary of an interview with the CFO, who explained the underlying factors behind the increase in gross margin. After being exposed to either of the information sources, participants answered dependent variable questions.²

Participants next evaluated the alternate type of information set (i.e., the Big Data visualizations or traditional preliminary analytical procedures data). After examining the second information set, participants answered the dependent variable questions again. Finally, participants completed the post-experimental and demographic questionnaire.
Independent Variables
We employ a 2 × 2 between-subjects full-factorial design in which we manipulate auditors' processing mode and the order in which Big Data visualizations are presented. The first independent variable, Processing Mode, is manipulated on two levels (Intuitive versus Deliberative). To operationalize auditors' processing mode, we follow prior research in psychology (e.g., Zhong 2011) and ask participants to answer five questions in order to prime a processing mode. Previous studies have demonstrated that asking individuals to calculate math problems versus reflect on their feelings effectively stimulates a deliberative processing mode or an intuitive processing mode, respectively (e.g., Hsee and Rottenstreich 2004; Small et al. 2007). Before starting the study, participants in the Deliberative condition answered five questions requiring mathematical calculations (see all five questions in Appendix B). For example:

If an object travels at five feet per minute, then by your calculations how many feet will it travel in 360 seconds?

Participants in the Intuitive condition answered five questions requiring them to examine their feelings (see all five questions in Appendix B). For example:

When you hear the name "Barack Obama," what do you feel? Please use one word to describe your predominant feeling.
For the second independent variable, Big Data Order, we manipulate on two levels the order in which participants are presented with results of preliminary analytical procedures and visualizations of Big Data analyses (Big Data Before versus Big Data After). The Big Data visualizations examine: (1) the number of hashtag mentions in social media; (2) tweets; (3) a text analysis of words used to describe the company's fitness device on social media sites; and (4) a text sentiment analysis. The analytical procedures information consisted of company performance measures in current and prior periods (i.e., net sales, cost of goods sold, gross margin, and gross margin percentage). Also included in the depiction of the preliminary analytical procedures results is a narrative explanation from the company's CFO for the current period's gross margin results. Participants were also presented with the company's sales mix for each company division for both the current and prior years.
¹ We intentionally designed the experimental instrument to contain two informative and two uninformative visualizations related to the company's gross margin results. Discussion with practice professionals indicated that Big Data visualizations typically contain information that is relevant, as well as irrelevant, to audit tests. Consequently, we chose this design approach to maintain practice realism and reduce the likelihood of participants discovering the experiment's purpose. This design choice also reduces the likelihood of obtaining our hypothesized results.

² The final experimental instrument was developed following adjustments based on pilot testing conducted with current and former auditing practitioners and input from partners of one of the participating firms.
Dependent Variables
The dependent variable for H1a (Recognize Patterns) captures whether participants recognized the patterns in the two Big Data visualizations. To provide evidence that participants had recognized relevant patterns, we asked participants to indicate any questions or concerns that they had about the gross margin percentage. One author and one research assistant independently coded participant responses without reference to the treatment conditions.³ A second author reviewed and reconciled the coding without knowledge of the treatment condition for each coded response. Example comments related to the pattern in the sales/tweets visualization included: "social media seemed to lean more towards a competitor. What did the competitor do in the period?" and "social media trends going down but margin way up?" Comments related to the pattern in the word cloud included: "social media buzz words related more to gaming, so why such an inverse in future products?" and "fitness did not really appear in social media; why?" The dependent variable, Recognize Patterns, is calculated as the number of patterns recognized (0, 1, or 2).
To test H1b, a question asked participants whether gross margin is properly recorded or misstated (Misstatement). We measure this variable after participants have reviewed both the analytical procedures results and the Big Data visualizations, using an 11-point anchored scale where −5 represents a response that gross margin is "very understated," 0 represents that gross margin is correct, and +5 represents that gross margin is "very overstated." For our next dependent variable, Audit Hours, we ask auditors to indicate how many hours they would budget for substantive testing of the sales account given that 100 hours were budgeted in the prior year. The Audit Hours variable is used to test H1c. To measure Total Concerns for the test of H2a, we sum the number of unique concerns or questions that participants documented regarding the change in the client's gross margin percentage.⁴ Finally, we use the dependent variable Recognize Patterns to test H2b and H3.
IV. RESULTS AND ANALYSES
Attention, Manipulation, and Completion Checks
Participants could not fail to attend to the order of presentation of evidence, so no attention check is needed to verify the order manipulation. To determine whether participants attended to the processing mode manipulation, we examined the solutions to the math problems and the statements of emotional response to verify that participants had completed the task appropriately. All participants in the deliberative processing treatment condition completed the math questions. Five participants did not complete the emotional response questions or used non-emotional response terms to complete them; these participants are not included in the analyses.

Consistent with prior research using these math and emotional response items to prime processing mode, we do not ask participants how the task affected their processing mode, because the priming effects are primarily non-conscious and participants are not aware of the processing changes (e.g., Hsee and Rottenstreich 2004; Small et al. 2007; Zhong 2011). Because participants are not aware of the processing mode changes, they cannot be asked to indicate their processing mode. Thus, we confirm that participants appropriately completed the processing mode task, which is the accepted and validated approach for verifying this manipulation. In addition, we verify that priming influenced participants' processing by examining beliefs about gross margin expressed immediately after the presentation of the first evidence set (either the traditional evidence or the Big Data visualization). Participants in the intuitive processing mode were more likely to question the gross margin change (p = 0.02; untabled), which is consistent with the anticipated effects of a more intuitive processing mode. Finally, we excluded nine participants from the analyses because they failed to complete the experiment, and two participants who provided responses that were extreme outliers.
Hypothesis Testing
In H1a, we predict that auditors who examine Big Data visualizations after examining results of preliminary analytical procedures are more likely to recognize relevant patterns in the visualizations than are auditors who examine Big Data visualizations before reviewing results of preliminary analytical procedures. Consistent with this prediction, we find a main effect for the examination order of audit evidence on auditors' likelihood of recognizing patterns in data visualizations. Specifically, we find that when Big Data visualizations are presented after results of analytical procedures (mean = 0.27), relative to before (mean = 0.07), participants are better able to identify patterns in Big Data visualizations (F[1, 90] = 4.40; p = 0.02, one-tailed; Table 1, Panel B).
³ The Cohen's Kappa for this set of codes was 0.75, which indicates a high level of inter-coder agreement.
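The kappa statistic in footnote 3 is straightforward to reproduce. The sketch below computes Cohen's kappa from a two-coder agreement table; the counts are invented for illustration (the paper does not publish the raw codes), but the formula is the standard one.

```python
# Cohen's kappa for two coders' category assignments.
# counts[i][j] = number of responses coder 1 placed in category i
# and coder 2 placed in category j. The counts below are purely
# illustrative, not the study's data.

def cohens_kappa(counts):
    n = sum(sum(row) for row in counts)
    k = len(counts)
    po = sum(counts[i][i] for i in range(k)) / n               # observed agreement
    row = [sum(counts[i][j] for j in range(k)) / n for i in range(k)]
    col = [sum(counts[i][j] for i in range(k)) / n for j in range(k)]
    pe = sum(row[i] * col[i] for i in range(k))                # chance agreement
    return (po - pe) / (1 - pe)

example = [[40, 6],
           [6, 48]]
print(round(cohens_kappa(example), 2))  # 0.76
```

Values above roughly 0.6 are conventionally read as substantial agreement, so the reported 0.75 supports the coding procedure.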
⁴ We also asked participants to list specific audit procedures that they would recommend. We did not find any differences in these tests across treatment conditions.
While the ANOVA is robust to the use of ordinal data, we conduct a second test of pattern recognition to further support the findings. Each participant is classified as having recognized (1) or not having recognized (0) a pattern, and we employ binary logistic regression with Big Data Order and Processing Mode as independent variables. The effect of Big Data Order is statistically significant in this model (p = 0.02), consistent with the ANOVA results. Results from both the ANOVA and the logistic regression provide evidence that examining results of preliminary analytical procedures first provides a framework that helps auditors to identify data patterns.⁵
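The reported one-tailed p-values can be sanity-checked directly from the F statistics and degrees of freedom in the tables. A minimal sketch, assuming SciPy is available; because each effect here has a single numerator degree of freedom, the F-test is equivalent to a two-sided t-test, so halving the F tail probability gives the one-tailed value when the effect is in the predicted direction.

```python
# Recompute the one-tailed p for the Big Data Order effect
# (F[1, 90] = 4.40, Table 1, Panel B).
from scipy import stats

F, df1, df2 = 4.40, 1, 90
p_two_tailed = stats.f.sf(F, df1, df2)   # upper tail of the F distribution
p_one_tailed = p_two_tailed / 2          # valid only for df1 = 1, directional effect
print(round(p_one_tailed, 2))            # ~0.02, consistent with the table
```

The same check can be applied to any single-df effect reported in Tables 1 through 4.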
In H1b, we predict that auditors who examine Big Data visualizations after analytical procedures will be more likely to assess that accounting figures are misstated. Results are consistent with H1b. Controlling for participants' experience with conducting analytical reviews, the ANCOVA model shows a significant main effect for the order of audit evidence on auditors' belief that gross margin is misstated (F[1, 106] = 3.44; p = 0.03, one-tailed; Table 2, Panel B), and auditors are more likely to believe that gross margin is misstated when visualizations that contain patterns of evidence contrary to patterns in traditional sources of evidence are presented after (mean = 1.88), relative to before (mean = 1.41), results of analytical procedures.⁶ These
TABLE 1
Auditor Pattern Recognition
Descriptive Statistics and H1a Tests

Panel A: Mean (Standard Deviation) [Number of Participants]

                                   Big Data Before     Big Data After      Total
                                   Traditional         Traditional
Deliberative Processing            0.09 (0.43) [22]    0.21 (0.59) [24]    0.15 (0.51) [46]
Intuitive Processing               0.04 (0.20) [24]    0.33 (0.56) [24]    0.19 (0.45) [48]
Total                              0.07 (0.27) [46]    0.27 (0.57) [48]    0.17 (0.48) [94]

Panel B: ANOVA Results for Auditor Pattern Recognition

Factor                              df    Type III Sum of Squares    F-value    p-value*
Processing Mode                      1             0.03                0.15       0.35
Big Data Order                       1             0.98                4.40       0.02
Processing Mode × Big Data Order     1             0.18                0.79       0.19
Error                               90            20.07

* Reported p-values are one-tailed.

Variable Definitions:
Recognize Patterns = dependent variable: participants' comments and questions are coded as 0, 1, or 2 to indicate whether they recognized either of the two patterns of evidence in the Big Data visualizations;
Processing Mode = participants are primed to engage in intuitive processing or deliberative processing; and
Big Data Order = auditors examine Big Data visualizations after examining traditional audit evidence or before traditional analytic audit evidence.
⁵ The sample size for tests of H1a is less than the final sample of 111 because some participants chose not to list any concerns, and these participants are not included in this analysis.
⁶ We also conduct a test using the difference between the first and second measures of misstatement belief as the dependent variable. This captures the change in belief that results from the different evidence orders. There is a marginally significant effect of presentation order on change in belief (p = 0.09, one-tailed), such that auditors move more toward believing that a misstatement has occurred when visualizations are presented after the analytical procedures data.
findings suggest that auditors are less assured of management's representation when they review the visualizations after having formed an initial expectation from reviewing traditional audit data.⁷ The means in Table 2 also suggest that auditors in all conditions were somewhat concerned about overstatement of gross margin, indicating that the case was effective in creating a context where auditors would be concerned about the results of analytical procedures.
Similarly, we find results consistent with H1c. In H1c, we predict that auditors will budget more hours to conduct the current year's audit of the sales account when they examine the Big Data visualizations after reviewing results of preliminary analytical procedures. We find that auditors elect to budget more time to conduct the current year's audit of the sales account when Big Data visualizations with patterns that are contrary to other evidence are examined after analytical procedures (132 hours), relative to before (123 hours) (F[1, 107] = 3.63; p = 0.03, one-tailed; Table 3, Panel B). The increase in budgeted audit hours suggests that examining Big Data visualizations after forming an initial hypothesis may lead auditors to be more skeptical of management's representations. This finding has implications for improving the judgment framework auditors employ when making decisions. Specifically, this order of evidence evaluation may lead auditors to be more skeptical in their evaluation of overall audit evidence as they interpret the implications contained in Big Data. Examining visualizations prior to developing expectations is less effective for inducing professional skepticism.
TABLE 2
Auditor Perception of Gross Margin Misstatement
Descriptive Statistics and H1b Tests

Panel A: Mean (Standard Deviation) [Number of Participants]

                                   Big Data Before     Big Data After      Total
                                   Traditional         Traditional
Deliberative Processing            1.27 (1.47) [30]    1.82 (1.71) [27]    1.53 (1.59) [57]
Intuitive Processing               1.56 (1.32) [27]    1.93 (1.55) [27]    1.74 (1.44) [54]
Total                              1.41 (1.39) [57]    1.88 (1.62) [54]    1.64 (1.51) [111]

Panel B: ANCOVA Results for Auditor Perception of Gross Margin Misstatement

Factor                              df    Type III Sum of Squares    F-value    p-value*
Analytical Review Experience         1             6.92                3.07       0.04
Processing Mode                      1             0.59                0.26       0.31
Big Data Order                       1             7.76                3.44       0.03
Processing Mode × Big Data Order     1             0.08                0.03       0.43
Error                              106           239.31

* Reported p-values are one-tailed.

Variable Definitions:
Misstatement = dependent variable: auditor response to the question "In your opinion, is gross margin properly recorded?" Responses are based on an 11-point anchored scale where −5 represents a response that gross margin is "very understated," 0 represents that gross margin is correct, and +5 represents that gross margin is "very overstated";
Processing Mode = participants are primed to engage in intuitive processing or deliberative processing;
Big Data Order = auditors examine Big Data visualizations after examining traditional audit evidence or before traditional analytic audit evidence; and
Analytical Review Experience = auditors' report of their experience conducting analytical reviews (in years).
⁷ Stronger recall of visualizations when they are presented second cannot explain these results, because the visualizations alone lack context and do not speak to the overstatement of gross margin. Without recall of both the visualization and the relevant financial information, the visualizations in the experiment cannot lead to conclusions about misstatements of gross margin.
In H2a, we predict that employing an intuitive processing approach to evidence evaluation will lead auditors to express more unique concerns about management's representation of gross margin than will employing a deliberative processing approach. Consistent with our predictions, we find that auditors primed to engage in intuitive processing report more concerns with management's accounting numbers (mean = 2.02) than auditors engaged in deliberative processing (mean = 1.44) (see Table 4, Panel A).⁸ We control for auditors' cognitive reflection (Frederick 2005) because prior research indicates that the cognitive reflection scale reveals an individual's propensity to engage in intuitive versus deliberative processing.⁹ We also include a covariate for auditing experience because it was statistically significant in preliminary tests. The ANCOVA model reveals a significant main effect for processing mode on the total number of concerns that auditors indicate regarding the client's gross margin account (F[1, 105] = 4.90; p = 0.01, one-tailed; Table 4, Panel B).
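The ANCOVA used here can be viewed as a comparison of nested regression models: the F for a factor is the drop in residual sum of squares when the factor is added, scaled by the full model's error variance. The sketch below illustrates those mechanics on simulated data (the study's raw data are not available); the variable names mirror Table 4, but the simulated effect sizes are arbitrary.

```python
# ANCOVA F-test for one factor via nested OLS models:
# F = ((SSE_reduced - SSE_full) / df_num) / (SSE_full / df_den).
import numpy as np

rng = np.random.default_rng(1)
n = 111
processing_mode = rng.integers(0, 2, n).astype(float)   # 0 = deliberative, 1 = intuitive
big_data_order = rng.integers(0, 2, n).astype(float)
cognitive_reflection = rng.normal(0, 1, n)
audit_experience = rng.normal(3, 1, n)
# Simulated DV with an arbitrary processing-mode effect built in.
total_concerns = (1.4 + 0.6 * processing_mode
                  + 0.2 * audit_experience + rng.normal(0, 1, n))

def sse(y, cols):
    """Residual sum of squares and error df for an OLS fit with intercept."""
    X = np.column_stack([np.ones(n)] + cols)
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return float(resid @ resid), n - X.shape[1]

other_terms = [cognitive_reflection, audit_experience, big_data_order,
               processing_mode * big_data_order]
sse_full, df_full = sse(total_concerns, other_terms + [processing_mode])
sse_reduced, _ = sse(total_concerns, other_terms)
F = (sse_reduced - sse_full) / (sse_full / df_full)
print(F, df_full)  # df_full is 105, matching the paper's F[1, 105] denominator
```

With two covariates, two factors, and their interaction, the error df is 111 − 6 = 105, which is exactly the denominator df the paper reports.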
Next, we examine auditors' pattern recognition to test H2b and H3. In H2b, we predict that auditors exposed to an intuitive processing (versus a deliberative processing) intervention will be more likely to recognize patterns in visualizations. To examine this hypothesis, we conduct an ANOVA and examine the main effect of Processing Mode on Recognize Patterns. Inconsistent with our expectations, we do not find a significant main effect for Processing Mode (F[1, 90] = 0.15; p = 0.35, one-tailed; Table 1, Panel B). Given the relatively low rates of pattern recognition by the auditor participants (only 13 percent
TABLE 3
Budgeted Audit Hours
Descriptive Statistics and H1c Tests

Panel A: Mean (Standard Deviation) [Number of Participants]

                                   Big Data Before         Big Data After          Total
                                   Traditional             Traditional
Deliberative Processing            121.97 (24.96) [30]     134.28 (30.61) [28]     127.91 (28.27) [58]
Intuitive Processing               124.19 (15.98) [26]     129.70 (24.11) [27]     127.00 (20.54) [53]
Total                              123.00 (21.12) [56]     132.04 (27.46) [55]     127.48 (24.77) [111]

Panel B: ANOVA Results for Budgeted Audit Hours

Factor                              df    Type III Sum of Squares    F-value    p-value*
Processing Mode                      1            38.41                0.06       0.40
Big Data Order                       1          2199.47                3.63       0.03
Processing Mode × Big Data Order     1           320.62                0.53       0.24
Error                              107         64884.35

* Reported p-values are one-tailed.

Variable Definitions:
Audit Hours = dependent variable: auditor indication of the number of hours they would budget for the current year's audit of the sales account;
Processing Mode = participants are primed to engage in intuitive processing or deliberative processing; and
Big Data Order = auditors examine Big Data visualizations after examining traditional audit evidence or before traditional analytic audit evidence.
⁸ As we indicate in Section III, we obtain two measures of auditors' perception of gross margin misstatement (Misstatement). We find evidence that Processing Mode influenced auditors' perceptions of gross margin misstatement in the first collection of this measure. Auditors exposed to the intuitive processing intervention were more likely to believe that gross margin was overstated compared with auditors exposed to the deliberative processing intervention (p < 0.01, two-tailed). However, this effect dissipates by the second collection of Misstatement, which occurs after all forms of audit evidence have been examined (p = 0.25, two-tailed). This potentially suggests that the effect of the manipulation faded after participants examined a significant quantity of audit evidence, indicating that the manipulation may not be robust enough to persist throughout the task.

⁹ There are no significant differences in the cognitive reflection scale measure between the processing mode manipulations (p = 0.62).
of auditor participants recognized any pattern), we conduct post hoc analyses to examine the statistical power of this test (Tabachnick and Fidell 2013). Results indicate that our model likely has inadequate statistical power to detect the main effect (observed power = 0.07; alpha = 0.05; two-tailed test; not tabled) as a result of the low rates of pattern recognition. Similar to other research that has examined auditor pattern recognition, the auditors in our experiment often failed to recognize relevant patterns (e.g., Libby 1985; Bedard and Biggs 1991; Bierstaker et al. 1999; Asare et al. 2000; O'Donnell and Perkins 2011). Prior studies find that auditor fixation on "surface features" of a task (Bedard and Biggs 1991), as well as the format and organization of information (O'Donnell and Perkins 2011), can affect auditors' pattern recognition. The low rates of pattern recognition in our task limit effect and cell sizes and make it difficult to test for effects of processing mode on pattern recognition rates. Using the approach in Tabachnick and Fidell (2013), we investigate the size of the effect of Processing Mode on Recognize Patterns and find partial η² = 0.001 with 90 percent confidence limits from 0.000 to 0.036.
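Both the effect-size and observed-power figures can be approximately reconstructed from Table 1, Panel B alone: partial η² is SS_effect / (SS_effect + SS_error), and observed power is the tail probability of a noncentral F distribution. A sketch, assuming SciPy is available and the common convention that the noncentrality parameter is taken as λ = F · df1; small differences from the paper's 0.07 may arise from software-specific conventions.

```python
# Recompute partial eta-squared and post hoc ("observed") power for the
# Processing Mode effect from the reported sums of squares (Table 1, Panel B).
from scipy import stats

ss_effect, ss_error = 0.03, 20.07             # Processing Mode, Error
eta_p2 = ss_effect / (ss_effect + ss_error)   # partial eta-squared
print(round(eta_p2, 3))                       # 0.001, matching the reported value

F, df1, df2, alpha = 0.15, 1, 90, 0.05
crit = stats.f.isf(alpha, df1, df2)           # critical F at alpha = 0.05
power = stats.ncf.sf(crit, df1, df2, F * df1) # P(reject | noncentrality = F * df1)
print(power)                                  # barely above alpha; the paper reports 0.07
```

The tiny effect size makes clear why the power is essentially at the floor set by alpha, which supports the authors' caution about this null result.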
In H3, we predict that auditors primed to engage in intuitive processing who examine Big Data visualizations after they review preliminary analytical procedures will be most likely to recognize relevant patterns compared to auditors in other conditions. To test this interaction hypothesis, we examine the interactive effects of Processing Mode and Big Data Order on Recognize Patterns. We do not find support for H3. The overall ANOVA model indicates a non-significant interaction (F[1, 90] = 0.79; p = 0.19, one-tailed; Table 1, Panel B). Post hoc power analysis (Tabachnick and Fidell 2013) again indicates that our ANOVA model does not have the statistical power to detect an interactive effect (observed power = 0.14; alpha = 0.05; two-tailed test; not tabled). We also investigate the size of the interactive effect of Processing Mode and Big Data Order on
TABLE 4
Total Number of Concerns about Gross Margin
Descriptive Statistics and H2 Tests

Panel A: Mean (Standard Deviation) [Number of Participants]

                                   Big Data Before     Big Data After      Total
                                   Traditional         Traditional
Deliberative Processing            1.27 (1.05) [30]    1.63 (1.04) [27]    1.44 (1.05) [57]
Intuitive Processing               1.96 (1.34) [27]    2.07 (1.46) [27]    2.02 (1.39) [54]
Total                              1.60 (1.24) [57]    1.85 (1.28) [54]    1.72 (1.26) [111]

Panel B: ANCOVA Results for Total Number of Concerns about Gross Margin

Factor                              df    Type III Sum of Squares    F-value    p-value*
Cognitive Reflection                 1             3.91                2.73       0.05
Auditing Experience                  1             8.92                6.22      <0.01
Processing Mode                      1             7.04                4.90       0.01
Big Data Order                       1             1.57                1.09       0.15
Processing Mode × Big Data Order     1             0.39                0.27       0.30
Error                              105           150.70

* Reported p-values are one-tailed.

Variable Definitions:
Total Concerns = dependent variable: the total number of concerns auditors indicate when asked to describe the concerns or questions they have about the change in the client's gross margin percentage;
Processing Mode = participants are primed to engage in intuitive processing or deliberative processing;
Big Data Order = auditors examine Big Data visualizations after examining traditional audit evidence or before traditional analytic audit evidence;
Cognitive Reflection = score from the Cognitive Reflection Scale (Frederick 2005); and
Auditing Experience = the number of years of auditing experience.
Recognize Patterns. For the interactive effect, partial η² = 0.008 with 90 percent confidence limits from 0.000 to 0.065 (Tabachnick and Fidell 2013). As we found with the effect of Processing Mode on Recognize Patterns in H2b, the interactive effect of Processing Mode × Big Data Order on Recognize Patterns is small and likely constrained by the limited number of participants who recognized seeded patterns in the visualizations.^10
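Partial η² is computed directly from the sums of squares reported in the ANCOVA output, as SS_effect / (SS_effect + SS_error). As a worked example, the interaction and error terms from Table 4, Panel B give:

```python
# Partial eta squared: SS_effect / (SS_effect + SS_error).
# Values taken from Table 4, Panel B (interaction term and error term).
ss_effect = 0.39
ss_error = 150.70
partial_eta_sq = ss_effect / (ss_effect + ss_error)
print(f"partial eta^2 = {partial_eta_sq:.4f}")  # ≈ 0.0026, a very small effect
```

By conventional benchmarks, values this far below 0.01 indicate a trivially small effect, consistent with the low observed power reported above.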
Supplemental Analyses
We conduct additional tests to support the inferences drawn from the tests of hypotheses. In our study, we theorize that examining traditional audit data before Big Data visualizations allows auditors to better recognize patterns in visualizations. In H1b, we predict that this increase in pattern recognition will lead auditors to question management's representations (i.e., management's explanations) that are inconsistent with evidence patterns, resulting in perceptions that the client's accounting numbers are misstated (i.e., gross margin). In further support of this reasoning, we examine whether auditors' belief in the CFO's explanation (CFO Explanation) mediates the relationship between the order in which Big Data visualizations are presented and auditors' assessment that gross margin is misstated.^11
Using the approach from Muller, Judd, and Yzerbyt (2005), we test for mediation conditions with a series of regression models. Results indicate that Big Data Order significantly accounts for variations in CFO Explanation (p = 0.08); CFO Explanation accounts for variations in Misstatement (p < 0.01); and the significance of the relationship between Big Data Order and Misstatement is diminished when CFO Explanation is included in the model (significance is reduced from p = 0.07 to p = 0.21; untabled). Thus, our results do support a mediating effect of CFO Explanation.
In H1c, we argue that auditors' recognition of evidence patterns in Big Data visualizations will lead them to believe that additional audit evidence needs to be considered. Consequently, auditors will increase the number of hours they budget for the current year's audit. To examine the effects of pattern recognition on beliefs about evidence and budgeted hours, we use a mediation model to test whether auditors' perception that more evidence is needed mediates the relationship between Big Data Order and auditors' budgeted Audit Hours. We measure beliefs about the need for additional evidence (Additional Evidence) with a question where participants indicate whether they believe "additional evidence should be collected to explain the net sales or cost of goods sold figure" (0 = No Additional Evidence, to 100 = Significant Additional Evidence). Mediation analyses indicate that Big Data Order significantly accounts for variations in Additional Evidence (p = 0.05); Additional Evidence accounts for variations in Audit Hours (p = 0.03); and the significance of the relationship between Big Data Order and Audit Hours is diminished when Additional Evidence is included in the model (significance is reduced from p = 0.05 to p = 0.10; untabled). Results again support a mediating effect of Additional Evidence.
To examine participants' perceptions of the visualizations, we asked participants to rate the usefulness of the visualizations and the reliability of the data used to create the visualizations.^12 Participants generally did not find Big Data visualizations to be very useful and did not believe that the underlying data were reliable. For the tweets/sales visualizations, there were no differences in perceptions across treatment conditions; the mean usefulness rating was 38.21 (scale with anchors of 0 percent = Not Useful At All, and 100 percent = Very Useful), and the mean reliability rating was 36.54 (scale with anchors of 0 percent = Not Reliable At All, and 100 percent = Completely Reliable). The usefulness ratings for the word cloud visualization were similarly low, but there was a statistical difference (p = 0.05, two-tailed; untabled) between the visualization before (28.53) and visualization after (31.11) treatments. The mean reliability rating was 31.32. Overall, participants do not see high levels of value from the Big Data visualizations in the experiment, even though these visualizations provided evidence directly related to the management assertions being evaluated in the audit case.
^10 We conduct further tests to investigate interaction effects. We convert Recognize Patterns into a binary variable where each participant is indicated as recognizing (1) or not recognizing (0) a pattern, and we employ binary logistic regression with Big Data Order and Processing Mode as independent variables. Consistent with the ANOVA model, we do not find a significant interactive effect (p = 0.55). We also use ANOVA models to test for the interactive effects of Big Data Order and Processing Mode on perceptions of misstatement and budgeted audit hours. The interactions are not significant in any of these models.
^11 We measure CFO Explanation by asking participants to indicate their belief that "the explanation provided by the CFO adequately explains most (85 percent or more) of the increase in the gross margin percentage." Auditors indicate their response on a 100-point anchored scale where 0 = "Definitely does not explain most of the increase," and 100 = "Definitely explains most of the increase."
^12 As indicated previously, we included both informative and uninformative visualizations in the experiment. On average, auditors found the uninformative visualizations to be more useful (p = 0.03) than the informative visualizations, and there was no difference in auditors' perception of the reliability of informative and uninformative visualizations (p = 0.51). These results suggest that our participants did not recognize that we had included informative and uninformative visualizations in the experiment, and they were not likely aware of the purpose of the experiment. The results also indicate that many auditors did not recognize the value of informative visualizations, potentially because pattern recognition rates are low. Further research will be needed to better understand why auditors may not differentiate between informative and uninformative visualizations.
V. CONCLUSIONS
Auditors are seeking methods to expand the audit approach and improve risk assessments by examining new forms of evidence from a variety of sources (Yoon et al. 2015). Analytical tools that use Big Data sources that are internal or external to the client can provide useful insights to auditors by supplementing existing substantive procedures (KPMG 2012). In an experiment with experienced auditors, we investigate how the judgments and decisions of auditors are influenced by the timing of evaluation of Big Data visualizations. We also examine how intuitive and deliberative modes of processing affect auditors' judgments and decisions. Firms have already begun to adopt visualizations of Big Data; visualization departments represent one of the fastest growing practice areas for many large public accounting firms; and some believe that visualizations are most valuable when they are used to detect patterns prior to evaluating other audit evidence. We examine the effects of providing visualizations to auditors before or after the auditors have formed initial impressions from more traditional audit data sources and procedures. This is important because auditors could ignore or fail to recognize the patterns in Big Data visualizations, and the timing of presentation has the potential to significantly influence the effects of these visualizations on auditor judgment.
We find that auditors do not identify crucial patterns in Big Data visualizations when they examine visualizations before they have formed an initial expectation based on results of analytical procedures. This finding indicates that it is beneficial for auditors to have a decision framework within which they can develop expectations that facilitate the identification of evidence patterns in Big Data visualizations. We also find that the timing of Big Data evaluation has implications for several factors that contribute to audit planning and effectiveness. When auditors review Big Data visualizations containing patterns that are contrary to other evidence after examining results of traditional audit procedures, they express more concerns about potential misstatements and increase budgeted audit hours. The difference was relatively small on an absolute scale (a change from 1.41 to 1.88 on a scale of −5 to 5), but was statistically significant. Further, analyses reveal that these effects are mediated by the auditors' perceptions of the reliability of management's explanations and auditors' belief that additional audit evidence should be collected to investigate management's representations.
Our results have important implications because they further our theoretical understanding of the effects of Big Data on professional judgment and inform the practice debate about how to best leverage Big Data visualizations. Some senior practitioners propose that advances in Big Data will allow audit planning to begin with a fresh slate, and auditors could examine visualizations of Big Data to find patterns that will direct the development of an audit plan before plans are biased by traditional data, client explanations, or prior-year findings. This perspective assumes that visualizations of Big Data and other complex datasets are most beneficial to the audit when Big Data is considered prior to other audit evidence and prior to development of hypotheses about the firm and its assertions. Contrary to this assumption, we find that it is better to examine Big Data visualizations after initial hypotheses are formed and relevant patterns can be more readily detected to yield valuable insights. While our results are specific to visualizations of Big Data because auditors may rely more on visualizations of data sources that are more readily verifiable and reliable, we believe that our primary finding that pattern identification will improve when visualizations are presented after traditional audit evidence also applies to visualizations of other data types. Overall, the immense complexity and volumes of Big Data available to practitioners suggest that endless patterns could be detected through any rigorous evaluation of these data. Our results indicate that auditors may fail to identify the relevant patterns unless they first form expectations about what is relevant to the decision context, even when the auditors are given a very limited number of visualizations and patterns that clearly relate to specific audit objectives. We believe that the effects we have documented will be more relevant and substantial in practice, where far more visualizations and patterns are present. In such situations, the timing of visualization use appears to be critical to audit effectiveness.
REFERENCES
Alles, M. 2015. Drivers of the use and facilitators and obstacles of the evolution of Big Data by the audit profession. Accounting Horizons 29 (2): 439–449. doi:10.2308/acch-51067
Alles, M., and G. Gray. 2014. Developing a Framework for the Role of Big Data in Auditing: A Synthesis of the Literature. Working paper, Rutgers, The State University of New Jersey.
Ambady, N., and R. Rosenthal. 1993. Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. Journal of Personality and Social Psychology 64 (3): 431–441. doi:10.1037/0022-3514.64.3.431
Asare, S., G. Trompeter, and A. Wright. 2000. The effect of accountability and time budgets on auditors' testing strategies. Contemporary Accounting Research 17 (4): 539–560. doi:10.1506/F1EG-9EJG-DJ0B-JD32
Bedard, J. C., and S. F. Biggs. 1991. Pattern recognition, hypotheses generation, and auditor performance in an analytical task. The Accounting Review 66 (3): 622–642.
Bierstaker, J. L., J. C. Bedard, and S. F. Biggs. 1999. The role of problem representation shifts in auditor decision processes in analytical procedures. Auditing: A Journal of Practice & Theory 18 (1): 18–36. doi:10.2308/aud.1999.18.1.18
Biggs, S. F., and J. J. Wild. 1985. An investigation of auditor judgment in analytical review. The Accounting Review 60 (4): 607–633.
Borgida, E., and R. Nisbett. 1977. The differential impact of abstract vs. concrete information on decisions. Journal of Applied Social Psychology 7 (3): 258–271. doi:10.1111/j.1559-1816.1977.tb00750.x
Brown-Liburd, H., H. Issa, and D. Lombardi. 2015. Behavioral implications of Big Data's impact on audit judgment and decision making and future research directions. Accounting Horizons 29 (2): 451–468. doi:10.2308/acch-51023
Cao, M., R. Chychyla, and T. Stewart. 2015. Big Data analytics in financial statement audits. Accounting Horizons 29 (2): 423–429. doi:10.2308/acch-51068
Carraway, R. 2013. Meeting the Big Data challenge: Don't be objective. Forbes. Available at: http://www.forbes.com/sites/darden/2013/02/01/meeting-the-big-data-challenge-dont-be-objective/
Coakley, J. R., and C. E. Brown. 1993. Artificial neural networks applied to ratio analysis in the analytical review process. Intelligent Systems in Accounting, Finance and Management 2 (1): 19–39. doi:10.1002/j.1099-1174.1993.tb00032.x
Dane, E., and M. G. Pratt. 2007. Exploring intuition and its role in managerial decision making. Academy of Management Review 32 (1): 33–54. doi:10.5465/AMR.2007.23463682
De Martino, B., D. Kumaran, B. Seymour, and R. J. Dolan. 2006. Frames, biases, and rational decision-making in the human brain. Science 313 (5787): 684–687. doi:10.1126/science.1128356
Fagerlin, A., C. Wang, and P. Ubel. 2005. Reducing the influence of anecdotal reasoning on people's health care decisions: Is a picture worth a thousand statistics? Medical Decision Making 25 (4): 398–405. doi:10.1177/0272989X05278931
Frederick, S. 2005. Cognitive reflection and decision making. Journal of Economic Perspectives 19 (4): 25–42. doi:10.1257/089533005775196732
Hamill, R., T. Wilson, and R. Nisbett. 1980. Insensitivity to sample bias: Generalization from atypical cases. Journal of Personality and Social Psychology 39 (4): 578–589. doi:10.1037/0022-3514.39.4.578
Hammersley, J. S. 2006. Pattern identification and industry-specialist auditors. The Accounting Review 81 (2): 309–336. doi:10.2308/accr.2006.81.2.309
Hammersley, J. S. 2011. A review and model of auditor judgments in fraud-related planning tasks. Auditing: A Journal of Practice & Theory 30 (4): 101–128. doi:10.2308/ajpt-10145
Hsee, C. K., and Y. Rottenstreich. 2004. Music, pandas, and muggers: On the affective psychology of value. Journal of Experimental Psychology: General 133 (1): 23–30. doi:10.1037/0096-3445.133.1.23
Information Systems Audit and Control Association (ISACA). 2013. Big Data: Impacts and Benefits. Available at: http://www.isaca.org/knowledge-center/research/researchdeliverables/pages/big-data-impacts-and-benefits.aspx
Kahneman, D. 2011. Thinking, Fast and Slow. New York, NY: Farrar, Straus & Giroux.
Kida, T. 2006. Don't Believe Everything You Think: The 6 Basic Mistakes We Make in Thinking. Amherst, NY: Prometheus Books.
KPMG. 2011. Elevating Professional Judgment in Accounting and Auditing: The KPMG Professional Judgment Framework. Available at: https://www.researchgate.net/publication/258340692_Elevating_Professional_Judgment_in_Auditing_and_Accounting_The_KPMG_Professional_Judgment_Framework
KPMG. 2012. Leveraging Data Analytics and Continuous Auditing Processes for Improved Audit Planning, Effectiveness, and Efficiency. Available at: https://assets.kpmg.com/content/dam/kpmg/pdf/2016/05/Leveraging-Data-Analytics.pdf
Libby, R. 1985. Availability and the generation of hypotheses in analytical review. Journal of Accounting Research 23 (2): 648–667. doi:10.2307/2490831
Lieberman, M. D. 2000. Intuition: A social cognitive neuroscience approach. Psychological Bulletin 126 (1): 109–137. doi:10.1037/0033-2909.126.1.109
Louwers, T. J., A. D. Blay, D. H. Sinason, J. R. Strawser, and J. C. Thibodeau. 2018. Auditing & Assurance Services. 7th edition. New York, NY: McGraw-Hill/Irwin.
Muller, D., C. M. Judd, and V. Y. Yzerbyt. 2005. When moderation is mediated and mediation is moderated. Journal of Personality and Social Psychology 89 (6): 852–863. doi:10.1037/0022-3514.89.6.852
O'Donnell, E., and J. D. Perkins. 2011. Assessing risk with analytical procedures: Do systems thinking tools help auditors focus on diagnostic patterns? Auditing: A Journal of Practice & Theory 30 (4): 273–283. doi:10.2308/ajpt-10148
PricewaterhouseCoopers (PwC). 2015. Data Driven: What Students Need to Succeed in a Rapidly Changing Business World. Available at: http://www.pwc.com/us/en/faculty-resource/assets/pwc-data-driven-paper-feb2015.pdf
Public Company Accounting Oversight Board (PCAOB). 2010. Identifying and Assessing the Risks of Material Misstatement. Auditing Standard No. 12. Washington, DC: PCAOB. Available at: http://pcaobus.org/Rules/Rulemaking/Docket%20026/Release_2010-004_Risk_Assessment.pdf
Selby, D. 2011. Can financial statement auditors identify risk patterns in IT control evidence? International Journal of Business, Humanities and Technology 1 (3): 88–97.
Small, D. A., G. Loewenstein, and P. Slovic. 2007. Sympathy and callousness: The impact of deliberative thought on donations to identifiable and statistical victims. Organizational Behavior and Human Decision Processes 102 (2): 143–153. doi:10.1016/j.obhdp.2006.01.005
Tabachnick, B. G., and L. S. Fidell. 2013. Using Multivariate Statistics. Upper Saddle River, NJ: Pearson Education.
Yoon, K., L. Hoogduin, and L. Zhang. 2015. Big Data as complementary audit evidence. Accounting Horizons 29 (2): 431–438. doi:10.2308/acch-51076
Zhang, J., X. Yang, and D. Appelbaum. 2015. Toward effective Big Data analysis in continuous auditing. Accounting Horizons 29 (2): 469–476. doi:10.2308/acch-51070
Zhong, C. 2011. The ethical dangers of deliberative decision making. Administrative Science Quarterly 56 (1): 1–25. doi:10.2189/asqu.2011.56.1.001
APPENDIX A
Informative Visualization 1
The following visualization compares the number of tweets per day related to fitness devices during the two weeks after releasing the new fitness band to the number of tweets for competitor fitness devices during the same period. The graph also displays the sales volumes of fitness bands over the same two-week period. (Note: all visualizations were in color in the experimental instrument, but are reproduced in black/white in this appendix.)
Uninformative Visualization 2
The following visualization presents the volume and sentiment of online discussions related to Absolute Tech during the third quarter.

APPENDIX B
Questions Used to Prime Deliberative Mindset
1. If an object travels at five feet per minute, then by your calculations, how many feet will it travel in 360 seconds? Answer: ______________
2. Suppose a student bought a pen and a pencil for a total of $11, and that the pen cost $10 more than the pencil. How much was the pencil? Answer: ______________
3. If a consumer bought 30 books for $540, then, on average, how much did the consumer pay per book? Answer: ______________
4. If a baker bought nine pounds of flour at $1.50 per pound, then how much did the baker pay in total? Answer: _______________
5. If a company bought 15 computers for $1200 each, then how much did the company pay in total? Answer: _______________
Questions Used to Prime Intuitive Mindset
1. When you hear the name "Barack Obama," what do you feel? Please use one word to describe your predominant feeling. Answer: ___________
2. When you hear the name "George W. Bush," what do you feel? Please use one word to describe your predominant feeling. Answer: ___________
3. When you hear the name "Johnny Depp," what do you feel? Please use one word to describe your predominant feeling. Answer: ___________
4. When you hear the words "9/11," what do you feel? Please use one word to describe your predominant feeling. Answer: ___________
5. When you hear the word "baby," what do you feel? Please use one word to describe your predominant feeling. Answer: ___________
Source: Adapted from Hsee and Rottenstreich (2004).
Sample Paper Critique
Article Title: "The Association between Computer Literacy and
Training on Clinical Productivity and User Satisfaction in Using
the Electronic Medical Record"
Authors: May Alasmary, Ashraf El Metwally, and Mowafa Househ
Published: 24 June 2014; Journal of Medical Systems
Main Results of the Article
This study explores whether computer literacy and training affect clinical productivity and user satisfaction with Electronic Medical Records (EMRs). The setting is a single hospital in Saudi Arabia that had recently implemented an EMR system; the subjects were nurses and physicians who were current users of that system. Data were collected with a 40-question survey, and follow-up interviews were conducted to validate the survey results and to offer insights into the statistical findings.
According to the research paper, the following results were noted:
• The majority of participants were generally satisfied with the system
• Satisfaction scores were higher among physicians
• Physicians were more satisfied with the training
• Most nurses and physicians agreed that the system increased perceived clinical productivity
• A statistically significant weak positive correlation exists between age and satisfaction
• Years of experience could not predict system satisfaction
• The correlation between system productivity and system satisfaction was statistically significant
• A statistically significant medium positive correlation exists between computer literacy and satisfaction
• Statistically significant differences in mean system satisfaction were found across age (older staff were more satisfied), performance, comparison with paper systems, and computer literacy
The authors conclude that the EMR system appears to be effective and highly appreciated by its users, that increasing productivity and EMR user satisfaction could be an ultimate goal of any health care organization, and that high computer literacy has a positive impact on user satisfaction.
Critique of Methodology Used and the Author's Interpretation of Results
First, there is no formal hypothesis in this research paper. The researcher identifies the following research question: "Do computer literacy, training have an impact on clinical productivity and satisfaction of Electronic Medical Record?" This question does not predict directionality, and its terms are not clearly defined. From the survey, one might infer how the researcher measured computer literacy and/or "perceived clinical productivity," but neither is clearly defined in the research paper.
The research paper goes on to state the following study objectives:
• To investigate end users' (physicians and nurses) satisfaction levels with the newly implemented EMR
• To investigate the clinical productivity of the new EMR
• To investigate the association between computer literacy and EMR user satisfaction perceptions
• To investigate the association between training and user satisfaction
The researcher should have postulated the directionality of the variables in the form of hypotheses. In addition, the researcher draws conclusions about findings that were not indicated in the research question or objectives. As such, the research paper outlines a process that appears more exploratory in nature.
The sampling method was a convenience sample of all physicians and nurses at the hospital who use the system. A total of 123 completed the survey, although 12 responses were removed because those individuals had participated in a pilot survey. As such, there was no random assignment or randomization of any kind. The ANOVA used to analyze some of the data assumes that the dependent variable is approximately normally distributed and that variances are equal across all treatment groups, and there was no check of these assumptions. In this case, more nurses took the survey than physicians, which could skew the outcome. Additionally, since all participants came from the same hospital and were using the same system, the data may not generalize to other populations, even within the same region.
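Checking the ANOVA assumptions the paper skipped is straightforward. A hedged sketch in Python with simulated satisfaction scores (the group sizes, means, and spreads are illustrative assumptions, not the study's data): Shapiro-Wilk for within-group normality and Levene's test for homogeneity of variance, run before trusting the omnibus test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Illustrative satisfaction scores for the two (unbalanced) professional groups.
nurses = rng.normal(loc=3.2, scale=0.7, size=80)
physicians = rng.normal(loc=4.0, scale=0.6, size=31)

# Normality within each group (Shapiro-Wilk): H0 = data are normally distributed.
stat_n, p_norm_nurses = stats.shapiro(nurses)
stat_p, p_norm_phys = stats.shapiro(physicians)

# Homogeneity of variance across groups (Levene): H0 = equal variances.
stat_l, p_levene = stats.levene(nurses, physicians)

print(f"Shapiro p: nurses = {p_norm_nurses:.3f}, physicians = {p_norm_phys:.3f}")
print(f"Levene p = {p_levene:.3f}")

# Only if both assumptions are tenable should the one-way ANOVA be trusted.
f_stat, p_anova = stats.f_oneway(nurses, physicians)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
```

If either diagnostic fails, a nonparametric alternative such as `scipy.stats.kruskal` would be the safer choice.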
The researcher documents multiple statistical techniques: descriptive statistics, t-tests, regression analysis, and one-way ANOVA. The absence of a formal hypothesis appears to have led to a hodgepodge of analyses run in search of correlations and meaning in the data. The danger of this approach is that it invites spurious correlations, i.e., apparent relationships that reflect chance rather than any real effect, and there is no way to know whether other, unmeasured factors drove satisfaction. In addition, conducting many separate tests inflates the experimentwise error rate. The experiment and the statistical method should be designed in advance to control the experimentwise error rate, and in this case they were not.
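The inflation is easy to quantify: with m independent tests each at level α, the probability of at least one false positive is 1 − (1 − α)^m. A small illustrative calculation (the test counts are hypothetical, not taken from the paper):

```python
# Familywise (experimentwise) error rate for m independent tests at level alpha,
# and the Bonferroni-corrected per-test level that restores the overall rate.
alpha = 0.05
for m in (1, 5, 10, 20):          # hypothetical numbers of tests
    familywise = 1 - (1 - alpha) ** m
    bonferroni = alpha / m        # per-test threshold under Bonferroni correction
    print(f"m={m:2d}: familywise error = {familywise:.3f}, "
          f"Bonferroni per-test alpha = {bonferroni:.4f}")
```

At ten tests the chance of at least one false positive already exceeds 40 percent, which is why a single pre-planned omnibus test is preferable to a battery of exploratory ones.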
According to the article, "One-way ANOVA showed that there was statistically significant difference between means satisfaction of system with age (p=.011; older staff [showed] higher levels of satisfaction), performance satisfaction (p=0.26) and the comparison with paper systems (p=0.008) and computer literacy (p<0.01)."
The sentence above explains very little, but it appears the researcher ran an ANOVA in which the DV was system satisfaction and the IVs included age, performance satisfaction, comparison with paper systems, and computer literacy. Because the researcher did not define these terms, it is difficult to tell whether the ANOVA was conducted correctly. The DV should be quantitative, so the researcher could have converted satisfaction scores to some type of ratio scale. The IVs should be categorical: the Likert scales used in the survey are nonmetric and could be used directly as factors or collapsed into (high, medium, low) levels, and the IV "age" would likewise need to be converted into categories, such as (older, younger).
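Collapsing the nonmetric survey responses into factor levels, as suggested, is mechanical; a brief sketch (the cut points and level names are illustrative choices, not from the paper):

```python
# Collapse 5-point Likert responses and age into categorical factor levels
# suitable for use as ANOVA factors (cut points are illustrative).
def likert_to_level(score: int) -> str:
    if score <= 2:
        return "low"
    return "medium" if score == 3 else "high"

def age_to_group(age: int, cutoff: int = 40) -> str:
    return "older" if age >= cutoff else "younger"

responses = [1, 3, 5, 4, 2]
print([likert_to_level(s) for s in responses])   # ['low', 'medium', 'high', 'high', 'low']
print(age_to_group(52), age_to_group(29))        # older younger
```

The key point is that any such binning must be defined and justified before the analysis, not discovered after the fact.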
Assuming all of the above was handled appropriately, the quote above still has flaws. First, it identifies performance satisfaction as significant at p = 0.26; at an alpha level of 0.05, or even 0.10, this would not be considered significant. My best guess is that the researcher found some significant effect in the overall model and is simply listing the p-values of all the variables in the model. The researcher should instead have examined the differences in group means for satisfaction and determined which variable(s) most strongly drive satisfaction with the system.
Does the analysis answer the research question posed?
Research Question: Do computer literacy, training have an impact on clinical productivity and satisfaction of Electronic Medical Record?
1. None of the results compared computer literacy to clinical productivity.
2. None of the results compared training to clinical productivity.
3. One result compared productivity to system satisfaction (but that was not the question).
4. The researcher's conclusion on training's impact on satisfaction is discussed below.
5. The researcher's conclusion on computer literacy's impact on satisfaction is discussed below.
Regarding whether training had an impact on satisfaction, the researcher failed to find a statistically significant correlation between training and satisfaction. Training does not appear to have been included in the one-way ANOVA as an independent variable, but the lack of detail in the paper makes this unclear. Regarding whether computer literacy has an impact on satisfaction, the analysis may answer the question for that specific hospital but cannot be generalized to other populations. Even within this hospital system, searching for correlations rather than setting forth a formal hypothesis and testing it creates the risk that any correlation found is spurious. I would argue that the researcher has not answered the question.
Recommendation
In revising the research, the first thing I would do is state formal hypotheses. The research question as stated by the researcher was "Do computer literacy, training have an impact on clinical productivity and satisfaction of Electronic Medical Record?" An example might be:
H1: Computer literacy (IV) and training (IV) have a positive effect on user satisfaction with an EMR (DV).
H2: Computer literacy (IV) and training (IV) have a positive effect on perceived clinical productivity (DV).
With two DVs that are likely to be correlated, the researcher should conduct a MANOVA. Also, with two IVs, the researcher should investigate a possible interaction between literacy and training.
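In the simplest case, a single two-level factor with two correlated DVs, the MANOVA reduces to Hotelling's T². A sketch with simulated data (the group labels, means, and correlation are illustrative assumptions, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Two correlated DVs (satisfaction, perceived productivity) for
# low- vs. high-computer-literacy groups (illustrative values).
cov = [[1.0, 0.6], [0.6, 1.0]]
low = rng.multivariate_normal([3.0, 3.1], cov, size=55)
high = rng.multivariate_normal([3.8, 3.9], cov, size=56)

n1, n2, p = len(low), len(high), 2
diff = low.mean(axis=0) - high.mean(axis=0)
# Pooled within-group covariance matrix.
S = ((n1 - 1) * np.cov(low, rowvar=False)
     + (n2 - 1) * np.cov(high, rowvar=False)) / (n1 + n2 - 2)
t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(S, diff)  # Hotelling's T^2
# Convert T^2 to an F statistic with (p, n1 + n2 - p - 1) degrees of freedom.
f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
p_val = stats.f.sf(f_stat, p, n1 + n2 - p - 1)
print(f"Hotelling T^2 = {t2:.2f}, F = {f_stat:.2f}, p = {p_val:.4f}")
```

A single multivariate test like this, run once on correlated DVs, controls the error rate that a pile of separate univariate tests would inflate; the full two-IV design with an interaction would be a factorial MANOVA built on the same scatter matrices.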
Next, I would clearly define each of the variables and the manner in which they would be measured and evaluated in the statistical analysis. I would also select the most appropriate statistical technique (MANOVA), which would help control the experimentwise error rate, and I would refrain from exploratory searches for correlations, since those can turn up spurious relationships.
Finally, rather than a convenience sample, I would attempt to
construct a random sample and would use more than one
hospital location in the region or country (depending on the
population of interest).