MANAGEMENT IS MUCH MORE THAN A SCIENCE
THE LIMITS OF DATA-DRIVEN DECISION MAKING
BY ROGER L. MARTIN AND TONY GOLSBY-SMITH

IN BRIEF

THE PROBLEM
The big-data revolution has reinforced the belief that all business decisions should be reached through scientific analysis. But this approach has its limits, and it tends to narrow strategic options and hinder innovation.

WHY IT HAPPENS
The scientific method is designed to understand natural phenomena that cannot be changed—the sun will always rise tomorrow. It is not an effective way to evaluate things that do not yet exist.

THE SOLUTION
To make decisions about what could be, managers should devise narratives about possible futures, applying the tools of metaphor, logic, and emotion first described by Aristotle. Then they must hypothesize what would have to be true for those narratives to happen and validate their hypotheses through prototyping.
Underlying the practice and study of business is the belief that management is a science and that business decisions must be driven by rigorous analysis of data. The explosion of big data has reinforced this idea. In a recent EY survey, 81% of executives said they believed that “data should be at the heart of all decision-making,” leading EY to enthusiastically proclaim that “big data can eliminate reliance on ‘gut feel’ decision-making.”

Managers find this notion appealing. Many have a background in applied sciences. Even if they don’t, chances are, they have an MBA—a degree that originated in the early 20th century, when Frederick Winslow Taylor was introducing “scientific management.”

MBA programs now flood the business world with graduates—more than 150,000 a year in the United States alone. These programs have been trying to turn management into a hard science for most of the past six decades. In large measure this effort began in response to scathing reports on the state of business education in America issued by the Ford and Carnegie Foundations in 1959. In the view of the report writers—all economists—business programs were filled with underqualified students whose professors resisted the methodological rigor of the hard sciences, which other social sciences had embraced. In short, business education wasn’t scientific enough.
It was in part to remedy this shortcoming that the Ford Foundation supported the creation of academic journals and funded the establishment of doctoral programs at Harvard Business School, the Carnegie Institute of Technology (the predecessor of Carnegie Mellon), Columbia, and the University of Chicago.

But is it true that management is a science? And is it right to equate intellectual rigor with data analysis? If the answers to those questions are no and no—as we will suggest in the following pages—then how should managers arrive at their decisions? We’ll set out an alternative approach for strategy making and innovation—one that relies less on data analysis and more on imagination, experimentation, and communication.

But first let’s take a look back at where—or rather with whom—science started.

IS BUSINESS A SCIENCE?

What we think of as science began with Aristotle, who as a student of Plato was the first to write about cause and effect and the methodology for demonstrating it. This made “demonstration,” or proof, the goal of science and the final criterion for “truth.” As such, Aristotle was the originator of the approach to scientific exploration, which Galileo, Bacon, Descartes, and Newton would formalize as “the Scientific Method” 2,000 years later.

It’s hard to overestimate the impact of science on society. The scientific discoveries of the Enlightenment—deeply rooted in the Aristotelian methodology—led to the Industrial Revolution and the global economic progress that followed. Science solved problems and made the world a better place. Small wonder that we came to regard great scientists like Einstein as latter-day saints. And even smaller wonder that we came to view the scientific method as a template for other forms of inquiry and to speak of “social sciences” rather than “social studies.”

But Aristotle might question whether we’ve allowed our application of the scientific method to go too far. In defining his approach, he set clear boundaries around what it should be used for, which was understanding natural phenomena that “cannot be other than they are.” Why does the sun rise every day, why do lunar eclipses happen when they do, why do objects always fall to the ground? These things are beyond the control of any human, and science is the study of what makes them occur.

However, Aristotle never claimed that all events were inevitable. To the contrary, he believed in free will and the power of human agency to make choices that can radically change situations. In other words, if people choose, a great many things in the world can be other than they are. “Most of the things about which we make decisions, and into which we therefore inquire, present us with alternative possibilities.…All our actions have a contingent character; hardly any of them are determined by necessity,” he wrote. He believed that this realm of possibilities was driven not by scientific analysis but by human invention and persuasion.

We think this is particularly true when it comes to decisions about business strategy and innovation. You can’t chart a course for the future or bring about change merely by analyzing history. We would suggest, for instance, that the behavior of customers will never be transformed by a product whose design is based on an analysis of their past behavior.

Yet transforming customer habits and experiences is what great business innovations do. Steve Jobs, Steve Wozniak, and other computing pioneers created a brand-new device that revolutionized how people interacted and did business. The railroad, the motor car, and the telephone all introduced enormous behavioral and social shifts that an analysis of prior data could not have predicted.

To be sure, innovators often incorporate scientific discoveries in their creations, but their real genius lies in their ability to imagine products or processes that simply never existed before.

The real world is not merely an outcome determined by ineluctable laws of science, and acting as if it is denies the possibility of genuine innovation. A scientific approach to business decision making has limitations, and managers need to figure out where those limitations lie.

CAN OR CANNOT?

Most situations involve some elements you can change and some you cannot. The critical skill is spotting the difference. You need to ask, Is the situation dominated by possibility (that is, things we can alter for the better) or by necessity (elements we cannot change)?

Suppose you plan to build a bottling line for plastic bottles of springwater. The standard way to set one up is to take “forms” (miniature thick plastic tubes), heat them, use air pressure to mold them to full bottle size, cool them until they’re rigid, and finally fill them with water. Thousands of bottling lines around the world are configured this way.

Some of this cannot be other than it is: how hot the form has to be to stretch; the amount of air pressure required to mold the bottle; how fast the bottle can be cooled; how quickly the water can fill the bottle. These are determined by the laws of thermodynamics and gravity—which executives cannot do a thing to change.

Still, there’s an awful lot they can change. While the laws of science govern each step, the steps themselves don’t have to follow the sequence that has
dominated bottling for decades. A company called
LiquiForm demonstrated that after asking, Why can’t
we combine two steps into one by forming the bottle
with pressure from the liquid we’re putting into it,
rather than using air? And that idea turned out to be
utterly doable.
Executives need to deconstruct every decision-making situation into cannot and can parts and
then test their logic. If the initial hypothesis is that
an element can’t be changed, the executive needs to
ask what laws of nature suggest this. If the rationale
for cannot is compelling, then the best
approach is to apply a methodology that
will optimize the status quo. In that case let science be
the master and use its tool kits of data and analytics
to drive choices.
In a similar way, executives need to test the logic
behind classifying elements as cans. What suggests
that behaviors or outcomes can be different from what
they have been? If the supporting rationale is strong
enough, let design and imagination be the master and
use analytics in their service.
It’s important to realize that the presence
of data is not sufficient proof that outcomes
cannot be different. Data is not logic. In fact,
many of the most lucrative business moves
come from bucking the evidence. Lego chair-
man Jørgen Vig Knudstorp offers a case in
point. Back in 2008, when he was the com-
pany’s CEO, its data suggested that girls
were much less interested in its toy bricks
than boys were: 85% of Lego players were
boys, and every attempt to attract more
girls had failed. Many of the firm’s man-
agers, therefore, believed that girls were
inherently less likely to play with the
bricks—they saw it as a cannot situation.
But Knudstorp did not. The problem,
he thought, was that Lego had
not yet figured out how to get
girls to play with construc-
tion toys. His hunch was
borne out with the launch
o f t h e s u c c e s s f u l L e g o
Friends line, in 2012.
The Lego case illustrates
that data is no more than
ev idence, and it’s not always
obvious what it is evidence of.
Moreover, the absence of data
does not preclude possibil-
ity. If you are talking about
new outcomes and behav-
iors, then naturally there is
no prior evidence. A truly rigorous
thinker, therefore, considers
not only what the data sug-
gests but also what within
the bounds of possibility
could happen. And that
requires the exerc ise
of imagination—a very
different process from
analysis.
A l s o, t h e d iv i s i o n
between can and can-
not is more fluid than most
people think. Innovators
w ill push that boundar y
more than most, challenging
the cannot.
BREAKING THE FRAME
The imagination of new possibilities
first requires an act of unframing. The
status quo often appears to be the only
way things can be, a perception that’s
hard to shake.
We recently came across a good example of the sta-
tus quo trap while advising a consulting firm whose
clients are nonprofit organizations. The latter face
a “starvation cycle,” in which they get generously
funded for the direct costs of specific programs but
struggle to get support for their indirect costs. A large
private foundation, for instance, may fully fund the
expansion of a charity’s successful Latin American
girls’ education program to sub-Saharan Africa, yet
underwrite only a small fraction of the associated op-
erational overhead and of the cost of developing the
program in the first place. This is because donors typ-
ically set low and arbitrary levels for indirect costs—
usually allowing only 10% to 15% of grants to go to-
ward them, even though the true indirect costs make
up 40% to 60% of the total tab for most programs.
The consulting firm accepted this framing of the
problem and believed that the strategic challenge was
figuring out how to persuade donors to increase the
percentage allocated to indirect costs. It was consid-
ered a given that donors perceived indirect costs to
be a necessary evil that diverted resources away from
end beneficiaries.
We got the firm’s partners to test that belief by lis-
tening to what donors said about costs rather than
selling donors a story about the need to raise re-
imbursement rates. What the partners heard surprised
them. Far from being blind to the starvation cycle, do-
nors hated it and understood their own role in causing
it. The problem was that they didn’t trust their grant-
ees to manage indirect costs. Once the partners were
liberated from their false belief, they soon came up
with a wide range of process-oriented solutions that
could help nonprofits build their competence at cost
management and earn their donors’ confidence.
Although listening to and empathizing with stake-
holders might not seem as rigorous or systematic
as analyzing data from a formal survey, it is in fact a
tried-and-true method of gleaning insights, familiar
to anthropologists, ethnographers, sociologists, psy-
chologists, and other social scientists. Many business
leaders, particularly those who apply design thinking
and other user-centric approaches to innovation, rec-
ognize the importance of qualitative, observational
research in understanding human behavior. At Lego,
for example, Knudstorp’s initial questioning of gen-
der assumptions triggered four years of ethnographic
studies that led to the discovery that girls are more
interested in collaborative play than boys are, which
suggested that a collaborative construction toy could
appeal to them.
Powerful tool though it is, ethnographic research
is no more than the starting point for a new frame.
Ultimately, you have to chart out what could be and
get people on board with that vision. To do that, you
need to create a new narrative that displaces the old
frame that has confined people. And the story-mak-
ing process has principles that are entirely different
from the principles of natural science. Natural science
explains the world as it is, but a story can describe a
world that does not yet exist.
CONSTRUCTING PERSUASIVE NARRATIVES
It may seem unlikely, but Aristotle, the same philosopher who gave us the scientific method, also set out methods for creating compelling narratives. In The Art of Rhetoric he describes a system of persuasion that has three drivers:
• Ethos: the will and character to change the current
situation. To be effective, the author of the narrative
must possess credibility and authenticity.
• Logos: the logical structure of the argument. This
must provide a rigorous case for transforming prob-
lems into possibilities, possibilities into ideas, and
ideas into action.
• Pathos: the capacity to empathize. To be capable
of inspiring movement on a large scale, the author
must understand the audience.
A multibillion-dollar merger of two large insurance
companies offers an example of how to use ethos, lo-
gos, and pathos. The two firms were longtime compet-
itors. There were winners and losers in the deal, and
employees at all levels were nervous and unsettled. To
complicate matters, both firms had grown by acquisi-
tion, so in effect this was a merger of 20 or 30 different
cultures. These smaller legacy groups had been inde-
pendent and would resist efforts to integrate them to
capture synergies. On top of that, the global financial
crisis struck just after the merger, shrinking the indus-
try by 8%. So the merged enterprise’s leaders faced a
double challenge: a declining market and a skeptical
organizational culture.
The normal approach to postmerger integration
is rational and reductionist: Analyze the current cost
structures of the two organizations and combine them
into one smaller structure—with the attendant layoffs
of “redundant” employees. However, the leader of the
merged companies did not want to follow the usual
drill. Rather, he wanted to build a new organization
from the ground up. He supplied the ethos by articu-
lating the goal of accomplishing something bigger and
better than a standard merger integration.
However, he needed the logos—a powerful and
compelling case for a future that was different. He built
one around the metaphor of a thriving city. Like a city,
the new organization would be a diverse ecosystem
that would grow in both planned and unplanned
ways. Everybody would be part of that growth and
contribute to the city. The logic of a thriving city cap-
tured the imagination of employees enough for them
to lean into the task and imagine possibilities for
themselves and their part of the organization.
The effort also required pathos—forging an emo-
tional connection that would get employees to com-
mit to building this new future together. To enlist
them, the leadership group took a new approach
to communication. Typically, executives com-
municate postmerger integration plans with
town halls, presentations, and e-mails that
put employees on the receiving end of mes-
sages. Instead the leadership group set up a
series of collaborative sessions in which units
in the company held conversations about the
thriving-city metaphor and used it to explore
challenges and design the work in their sphere
of activity. How would the claims department look
different in the thriving city? What would finance
look like? In effect, employees were creating their
own mini-narratives within the larger narrative the
leaders had constructed. This approach required
courage because it was so unusual and playful for
such a large organization in a conservative industry.
The approach was a resounding success. Within
six months, employee engagement scores had risen
from a dismal 48% to a spectacular 90%. That trans-
lated into performance: While the industry shrank,
the company’s business grew by 8%, and its customer
satisfaction scores rose from an average of 6 to
9 (on a scale of 1 to 10).
This case illustrates the importance of
another rhetorical tool: a strong metaphor
that captures the arc of your narrative in a
sentence. A well-crafted metaphor reinforces
all three elements of persuasion. It makes lo-
gos, the logical argument, more compelling
and strengthens pathos by helping the audience
connect to that argument. And finally, a more com-
pelling and engaging argument enhances the moral
authority and credibility of the leader—the ethos.
WHY METAPHORS MATTER
We all know that good stories are anchored by powerful metaphors.
Aristotle himself observed, “Ordinary
words convey only what we know al-
ready; it is from metaphor that we can
best get hold of something fresh.” In fact, he believed
that mastery of metaphor was the key to rhetorical
success: “To be a master of metaphor is the greatest
thing by far. It is…a sign of genius,” he wrote.
It’s perhaps ironic that this proposition about
an unscientific construct has been scientifically
confirmed. Research in cognitive science has demon-
strated that the core engine of creative synthesis is
“associative fluency”—the mental ability to connect
two concepts that are not usually linked and to forge
them into a new idea. The more diverse the concepts,
the more powerful the creative association and the
more novel the new idea.
With a new metaphor, you compare two things
that aren’t usually connected. For instance, when
Hamlet says to Rosencrantz, “Denmark’s a prison,”
he is associating two elements in an unusual way.
Rosencrantz knows what “Denmark” means, and he
knows what “a prison” is. However, Hamlet presents
a new concept to him that is neither the Denmark he
knows nor the prisons he knows. This third element
is the novel idea or creative synthesis produced by
the unusual combination.
When people link unrelated concepts, product inno-
vations often result. Samuel Colt developed the revolv-
ing bullet chamber for his famous pistol after working
on a ship as a young man and becoming fascinated by
the vessel’s wheel and the way it could spin or be locked
by means of a clutch. A Swiss engineer was inspired
to create the hook-and-loop model of Velcro after walk-
ing in the mountains and noticing the extraordinary
adhesive qualities of burrs that stuck to his clothing.
Metaphor also aids the adoption of an innovation
by helping consumers understand and relate to it. The
automobile, for instance, was initially described as “a
horseless carriage,” the motorcycle as “a bicycle
with a motor.” The snowboard was simply “a
skateboard for the snow.” The very first step in
the evolution that has made the smartphone
a ubiquitous and essential device was the
launch in 1999 of Research in Motion’s
BlackBerry 850. It was sold as a pager
that could also receive and send
e-mails—a comforting meta-
phor for initial users.
One needs only to look
at the failure of the Segway
to see how much harder it is to
devise a compelling narrative
without a good metaphor. The
machine, developed by superstar
inventor Dean Kamen and hyped as
the next big thing, was financed by
hundreds of millions in venture
capital. Although it’s a brilliant
application of advanced tech-
nology, hardly anyone uses it.
Many rationalizations can be
made for its failure—the high
price point, the regulatory
restrictions—but we would
argue that a key reason is
that the Segway is analogous
with absolutely nothing at all.
It is a little wheeled platform on
which you stand upright and largely
motionless while moving forward.
People couldn’t relate to it. You don’t
sit, as you do in a car, or pedal, as you
do on a bicycle, or steer it with handles,
as you do a motorcycle. Think of the last
time you saw a Segway in use. You proba-
bly thought the rider looked laughably geeky
on the contraption. Our minds don’t take to the
Segway because there is no positive experience
to compare it to.
We’re not saying that an Aristotelian argument can’t
be made without a metaphor; it is just much harder. A
horseless carriage is easier to sell than the Segway.
CHOOSING THE RIGHT NARRATIVE
When you’re facing decisions in the realm of possibilities, it’s
useful to come up with three or four compelling narra-tives,
each with a strong metaphor, and then put them through a
testing process
that will help you reach consensus around which
one is best. What does that entail? In the cannot
world, careful analysis of data leads to the optimal
decision. But in the can world, where we are seeking
to bring something into existence, there is no data
to analyze. To evaluate your options, you need to
do the following:
Clarify the conditions. While we have no way of
proving that a proposed change will have the desired
effect, we can specify what we think would have to be
true about the world for it to work. By considering this
rather than debating what is true about the world as it
is, innovators can work their way toward a consensus.
The idea is to have the group agree on whether it can
make most of those conditions a reality—and will take
responsibility for doing so.
This was the approach pursued many years ago by
a leading office furniture company that had developed
a new chair. Although it was designed to be radically
superior to anything else on the market, the chair was
expensive to make and would need to be sold at twice
an office chair’s typical price. The quantitative market
research showed that customers reacted tepidly to
the new product. Rather than giving up, the company
asked what would have to be true to move customers
from indifference to passion. It concluded that if cus-
tomers actually tried the chair, they would experience
its breakthrough performance and become enthusi-
astic advocates. The company went to market with a
launch strategy based on a customer trial process, and
the chair has since become the world’s most profitable
and popular office chair.
Soon after, the company’s managers asked them-
selves the same question about a new office design
concept that eliminated the need to build walls and
install either flooring or ceilings to create office spaces.
This product could be installed into the raw space of a
new building, dramatically simplifying and lowering
the cost of building out office space. It was clear that
the company’s customers, building tenants, would be
interested. But for the new system to succeed, land-
lords would also have to embrace it. Unfortunately,
the new system would eliminate the revenues they
typically made on office build-outs, so it was unlikely
that they would cooperate in applying it, despite its
advantages to the tenants. The project was killed.
Create new data. The approach to experimenta-
tion in the can world is fundamentally different from
the one in the cannot world. In the cannot world,
the task is to access and compile the relevant data.
Sometimes that involves simply looking it up—from
a table in the Bureau of Labor Statistics database, for
example. Other times, it means engaging in an effort
to uncover it—such as through a survey. You may also
have to apply accepted statistical tests to determine
whether the data gathered demonstrates that the prop-
osition—say, that consumers prefer longer product life
to greater product functionality—is true or false.
In the can world, the relevant data doesn’t exist
because the future hasn’t happened yet. You have to
create the data by prototyping—giving users some-
thing they haven’t seen before and observing and re-
cording their reactions. If users don’t respond as you
expected, you plumb for insights into how the proto-
type could be improved. And then repeat the process
until you have generated data that demonstrates your
innovation will succeed.
Of course, some prototyped ideas are just plain
bad. That’s why it’s important to nurture multiple nar-
ratives. If you develop a clear view of what would have
to be true for each and conduct prototyping exercises
for all of them, consensus will emerge about which
narrative is most compelling in action. And involve-
ment in the process will help the team get ready to as-
sume responsibility for putting the chosen narrative
into effect.
THE FACT THAT scientific analysis of data has made the
world a better place does not mean that it should drive
every business decision. When we face a context in
which things cannot be other than they are, we can
and should use the scientific method to understand
that immutable world faster and more thoroughly
than any of our competitors. In this context the
development of more-sophisticated data analytics
and the enthusiasm for big data are unalloyed assets.
But when we use science in contexts in which things
can be other than they are, we inadvertently convince
ourselves that change isn’t possible. And that will leave
the field open to others who invent something better—
and we will watch in disbelief, assuming it’s an anom-
aly that will go away. Only when it is too late will we
realize that the insurgent has demonstrated to our for-
mer customers that things indeed can be different. That
is the price of applying analytics to the entire business
world rather than just to the appropriate part of it.
HBR Reprint R1705L
ROGER L. MARTIN is the director of the Martin Prosperity
Institute and a former dean of the Rotman School of
Management in Toronto, and a coauthor of Playing to Win:
How Strategy Really Works (Harvard Business Review Press,
2013). TONY GOLSBY-SMITH is the CEO and founder of
Second Road,
a consulting firm based in Sydney, Australia, that is now part
of Accenture Strategy.
Copyright 2017 Harvard Business Publishing. All Rights
Reserved. Additional restrictions
may apply including the use of this content as assigned course
material. Please consult your
institution's librarian about any restrictions that might apply
under the license with your
institution. For more information and teaching resources from
Harvard Business Publishing
including Harvard Business School Cases, eLearning products,
and business simulations
please visit hbsp.harvard.edu.
Risk Management Insight
FAIR
(FACTOR ANALYSIS OF INFORMATION RISK)
Basic Risk Assessment Guide
NOTE: Before using this assessment guide…
Using this guide effectively requires a solid understanding of
FAIR concepts
‣ As with any high-level analysis method, results can depend
upon variables that may not be accounted for at
this level of abstraction
‣ The loss magnitude scale described in this section is adjusted
for a specific organizational size and risk
capacity. Labels used in the scale (e.g., “Severe”, “Low”, etc.)
may need to be adjusted when analyzing
organizations of different sizes
‣ This process is a simplified, introductory version that may not
be appropriate for some analyses
Basic FAIR analysis is comprised of ten steps in four stages:
Stage 1 – Identify scenario components
1. Identify the asset at risk
2. Identify the threat community under consideration
Stage 2 – Evaluate Loss Event Frequency (LEF)
3. Estimate the probable Threat Event Frequency (TEF)
4. Estimate the Threat Capability (TCap)
5. Estimate Control strength (CS)
6. Derive Vulnerability (Vuln)
7. Derive Loss Event Frequency (LEF)
Stage 3 – Evaluate Probable Loss Magnitude (PLM)
8. Estimate worst-case loss
9. Estimate probable loss
Stage 4 – Derive and articulate Risk
10. Derive and articulate Risk
FAIR factor taxonomy:
Risk
  Loss Event Frequency
    Threat Event Frequency (Contact, Action)
    Vulnerability (Control Strength, Threat Capability)
  Probable Loss Magnitude
    Primary Loss Factors (Asset Loss Factors, Threat Loss Factors)
    Secondary Loss Factors (Organizational Loss Factors, External Loss Factors)
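The decomposition above can also be written down as a small data structure, which can be handy if you automate the bookkeeping for an assessment. The nesting below simply mirrors the taxonomy shown above; it is an illustrative sketch in Python, not an official FAIR artifact.

# Illustrative nesting of the FAIR factors from the taxonomy above.
FAIR_FACTORS = {
    "Risk": {
        "Loss Event Frequency": {
            "Threat Event Frequency": ["Contact", "Action"],
            "Vulnerability": ["Control Strength", "Threat Capability"],
        },
        "Probable Loss Magnitude": {
            "Primary Loss Factors": ["Asset Loss Factors", "Threat Loss Factors"],
            "Secondary Loss Factors": ["Organizational Loss Factors", "External Loss Factors"],
        },
    },
}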
Stage 1 – Identify Scenario Components
Step 1 – Identify the Asset(s) at risk
In order to estimate the control and value characteristics within
a risk analysis, the analyst must first identify the asset
(object) under evaluation. If a multilevel analysis is being
performed, the analyst will need to identify and evaluate the
primary asset (object) at risk and all meta-objects that exist
between the primary asset and the threat community. This
guide is intended for use in simple, single level risk analysis,
and does not describe the additional steps required for a
multilevel analysis.
Asset(s) at risk: ______________________________________________________
Step 2 – Identify the Threat Community
In order to estimate Threat Event Frequency (TEF) and Threat
Capability (TCap), a specific threat community must first be
identified. At minimum, when evaluating the risk associated
with malicious acts, the analyst has to decide whether the
threat community is human or malware, and internal or external.
In most circumstances, it’s appropriate to define the
threat community more specifically – e.g., network engineers,
cleaning crew, etc., and characterize the expected nature
of the community. This document does not include guidance in
how to perform broad-spectrum (i.e., multi-threat
community) analyses.
Threat community: ______________________________________________________
Characterization
Stage 2 – Evaluate Loss Event Frequency
Step 3 – Threat Event Frequency (TEF)
The probable frequency, within a given timeframe, that a threat
agent will act against an asset
Contributing factors: Contact Frequency, Probability of Action
Very High (VH)   > 100 times per year
High (H)         Between 10 and 100 times per year
Moderate (M)     Between 1 and 10 times per year
Low (L)          Between .1 and 1 times per year
Very Low (VL)    < .1 times per year (less than once every ten years)
Rationale
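If you have a numeric estimate of how often the threat agent is expected to act, the scale above can be applied mechanically. The helper below is a hypothetical convenience in Python, not part of the guide; how it treats the shared endpoints (exactly 1, 10, or 100 events per year) is a judgment call.

def tef_rating(events_per_year: float) -> str:
    """Map an estimated number of threat events per year to the TEF scale above."""
    if events_per_year > 100:
        return "VH"   # more than 100 times per year
    if events_per_year >= 10:
        return "H"    # between 10 and 100 times per year
    if events_per_year >= 1:
        return "M"    # between 1 and 10 times per year
    if events_per_year >= 0.1:
        return "L"    # between .1 and 1 times per year
    return "VL"       # less than once every ten years

# Example: a threat expected to act roughly twice a year rates as Moderate.
assert tef_rating(2) == "M"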
Step 4 – Threat Capability (Tcap)
The probable level of force that a threat agent is capable of
applying against an asset
Contributing factors: Skill, Resources
Very High (VH)   Top 2% when compared against the overall threat population
High (H)         Top 16% when compared against the overall threat population
Moderate (M)     Average skill and resources (between bottom 16% and top 16%)
Low (L)          Bottom 16% when compared against the overall threat population
Very Low (VL)    Bottom 2% when compared against the overall threat population
Rationale
Step 5 – Control strength (CS)
The expected effectiveness of controls, over a given timeframe,
as measured against a baseline
level of force
Contributing factors: Strength, Assurance
Very High (VH)   Protects against all but the top 2% of an avg. threat population
High (H)         Protects against all but the top 16% of an avg. threat population
Moderate (M)     Protects against the average threat agent
Low (L)          Only protects against bottom 16% of an avg. threat population
Very Low (VL)    Only protects against bottom 2% of an avg. threat population
Rationale
Step 6 – Vulnerability (Vuln)
The probability that an asset will be unable to resist the actions
of a threat agent
Tcap (from step 4):
CS (from step 5):
Vulnerability (rows = TCap, columns = Control Strength)

            CS:  VL    L     M     H     VH
  TCap VH        VH    VH    VH    H     M
  TCap H         VH    VH    H     M     L
  TCap M         VH    H     M     L     VL
  TCap L         H     M     L     VL    VL
  TCap VL        M     L     VL    VL    VL
Vuln (from matrix above):
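The matrix can be encoded directly as a lookup table. The sketch below is one possible Python encoding, not part of the guide itself; rows are TCap and columns are Control Strength, both ordered VL, L, M, H, VH to match the matrix above.

LEVELS = ["VL", "L", "M", "H", "VH"]

# Vulnerability as a function of Threat Capability (rows) and Control Strength (columns).
VULN_MATRIX = {
    #      CS:  VL     L     M     H     VH
    "VH": ["VH", "VH", "VH", "H",  "M"],
    "H":  ["VH", "VH", "H",  "M",  "L"],
    "M":  ["VH", "H",  "M",  "L",  "VL"],
    "L":  ["H",  "M",  "L",  "VL", "VL"],
    "VL": ["M",  "L",  "VL", "VL", "VL"],
}

def derive_vulnerability(tcap: str, cs: str) -> str:
    """Step 6: look up Vulnerability from TCap and Control Strength."""
    return VULN_MATRIX[tcap][LEVELS.index(cs)]

# Example: an average threat population (TCap = M) against average controls (CS = M).
assert derive_vulnerability("M", "M") == "M"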
Step 7 – Loss Event Frequency (LEF)
The probable frequency, within a given timeframe, that a threat
agent will inflict harm upon an
asset
TEF (from step 3):
Vuln (from step 6):
Loss Event Frequency (rows = TEF, columns = Vulnerability)

          Vuln:  VL    L     M     H     VH
  TEF VH         M     H     VH    VH    VH
  TEF H          L     M     H     H     H
  TEF M          VL    L     M     M     M
  TEF L          VL    VL    L     L     L
  TEF VL         VL    VL    VL    VL    VL
LEF (from matrix above):
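Loss Event Frequency can be derived the same way. Again, this Python encoding is only a sketch; rows are TEF and columns are Vulnerability, both ordered VL, L, M, H, VH.

LEVELS = ["VL", "L", "M", "H", "VH"]

# Loss Event Frequency as a function of TEF (rows) and Vulnerability (columns).
LEF_MATRIX = {
    #    Vuln:  VL     L     M     H     VH
    "VH": ["M",  "H",  "VH", "VH", "VH"],
    "H":  ["L",  "M",  "H",  "H",  "H"],
    "M":  ["VL", "L",  "M",  "M",  "M"],
    "L":  ["VL", "VL", "L",  "L",  "L"],
    "VL": ["VL", "VL", "VL", "VL", "VL"],
}

def derive_lef(tef: str, vuln: str) -> str:
    """Step 7: look up Loss Event Frequency from TEF and Vulnerability."""
    return LEF_MATRIX[tef][LEVELS.index(vuln)]

# Example: a Moderate TEF combined with High vulnerability yields a Moderate LEF.
assert derive_lef("M", "H") == "M"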
Stage 3 – Evaluate Probable Loss Magnitude
Step 8 – Estimate worst-case loss
Estimate worst-case magnitude using the following three steps:
‣ Determine the threat action that would most likely result in a
worst-case outcome
‣ Estimate the magnitude for each loss form associated with that
threat action
‣ “Sum” the loss form magnitudes
Loss Forms worksheet: for each threat action (Access, Misuse, Disclosure, Modification, Deny Access), estimate the magnitude of each loss form (Productivity, Response, Replacement, Fine/Judgments, Comp. Adv., Reputation).

Magnitude          Range Low End    Range High End
Severe (SV)        $10,000,000      --
High (H)           $1,000,000       $9,999,999
Significant (Sg)   $100,000         $999,999
Moderate (M)       $10,000          $99,999
Low (L)            $1,000           $9,999
Very Low (VL)      $0               $999
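A small helper can translate a summed dollar estimate into the magnitude labels above. The thresholds follow the table; the function itself is an illustrative Python convenience, not part of the guide.

def magnitude_rating(loss_dollars: float) -> str:
    """Map an estimated loss in US dollars to the magnitude scale above."""
    if loss_dollars >= 10_000_000:
        return "SV"   # Severe
    if loss_dollars >= 1_000_000:
        return "H"    # High
    if loss_dollars >= 100_000:
        return "Sg"   # Significant
    if loss_dollars >= 10_000:
        return "M"    # Moderate
    if loss_dollars >= 1_000:
        return "L"    # Low
    return "VL"       # Very Low

# Example: a $250,000 estimate for one loss form rates as Significant.
assert magnitude_rating(250_000) == "Sg"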
Step 9 – Estimate probable loss
Estimate probable loss magnitude using the following three
steps:
‣ Identify the most likely threat community action(s)
‣ Evaluate the probable loss magnitude for each loss form
‣ “Sum” the magnitudes
Loss Forms worksheet: for each threat action (Access, Misuse, Disclosure, Modification, Deny Access), estimate the probable magnitude of each loss form (Productivity, Response, Replacement, Fine/Judgments, Comp. Adv., Reputation).

Magnitude          Range Low End    Range High End
Severe (SV)        $10,000,000      --
High (H)           $1,000,000       $9,999,999
Significant (Sg)   $100,000         $999,999
Moderate (M)       $10,000          $99,999
Low (L)            $1,000           $9,999
Very Low (VL)      $0               $999
Stage 4 – Derive and Articulate Risk
Step 10 – Derive and Articulate Risk
The probable frequency and probable magnitude of future loss
Well-articulated risk analyses provide decision-makers with at
least two key pieces of information:
‣ The estimated loss event frequency (LEF), and
‣ The estimated probable loss magnitude (PLM)
This information can be conveyed through text, charts, or both.
In most circumstances, it’s advisable to also provide the
estimated high-end loss potential so that the decision-maker is
aware of what the worst-case scenario might look like.
Depending upon the scenario, additional specific information
may be warranted if, for example:
‣ Significant due diligence exposure exists
‣ Significant reputation, legal, or regulatory considerations exist
Risk (rows = Probable Loss Magnitude, columns = LEF)

                  LEF:  VL    L     M     H     VH
  PLM Severe            H     H     C     C     C
  PLM High              M     H     H     C     C
  PLM Significant       M     M     H     H     C
  PLM Moderate          L     M     M     H     H
  PLM Low               L     L     M     M     M
  PLM Very Low          L     L     M     M     M
LEF (from step 7):
PLM (from step 9):
WCLM (from step 8):
Key Risk Level
C Critical
H High
M Medium
L Low
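As with steps 6 and 7, the risk matrix can be encoded as a lookup table. This Python sketch is illustrative only; rows are PLM and columns are LEF, ordered VL, L, M, H, VH.

LEF_LEVELS = ["VL", "L", "M", "H", "VH"]

# Overall qualitative risk as a function of PLM (rows) and LEF (columns).
RISK_MATRIX = {
    #       LEF:   VL    L    M    H    VH
    "Severe":      ["H", "H", "C", "C", "C"],
    "High":        ["M", "H", "H", "C", "C"],
    "Significant": ["M", "M", "H", "H", "C"],
    "Moderate":    ["L", "M", "M", "H", "H"],
    "Low":         ["L", "L", "M", "M", "M"],
    "Very Low":    ["L", "L", "M", "M", "M"],
}

def derive_risk(plm: str, lef: str) -> str:
    """Step 10: look up overall risk (L, M, H, or C) from PLM and LEF."""
    return RISK_MATRIX[plm][LEF_LEVELS.index(lef)]

# Example: a Significant probable loss magnitude with a Moderate LEF is High risk.
assert derive_risk("Significant", "M") == "H"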
Code Galore Caselet: Using COBIT® 5 for Information Security
© 2013 ISACA. All rights reserved.

Agenda
Company Profile – Code Galore
Background Information
The Problems
Your Role
Your Tasks
Figures
Notes
Questions
Company Profile – Code Galore
Start-up company founded in 2005
One office in Sunnyvale, California, USA
10 remote salespeople and a few with space at resellers’ offices
Approximately 100 total staff; about one-third work in engineering
Background Information – What We Do
Building a comprehensive business function automation
software that performs many functions (decision making in
approaching new initiatives, goal setting and tracking, financial
accounting, a payment system, and much more).
The software is largely the joint brainchild of the Chief
Technology Officer (CTO) and a highly visionary Marketing
Manager who left the company a year ago
Background Information – Financials
Financed 100% by investors who are extremely anxious to make
a profit.
Investors have invested more than US $35 million since
inception and have not received any returns.
The organization expected a small profit in the last two
quarters. However, the weak economy led to the cancellation of
several large orders. As a result, the organization was in the red
each quarter by approximately US $250,000.
Code Galore is a privately held company with a budget of US
$15 million per year. Sales last year totaled US $13.5 million
(as mentioned earlier, the company came within US $250,000 of
being profitable each of the last two quarters).
The investors hold the preponderance of the company’s stock;
share options are given to employees in the form of stock
options that can be purchased for US $1 per share if the
company ever goes public.
Code Galore spends about five percent of its annual budget on
marketing. Its marketing efforts focus on portraying other
financial function automation applications as ‘point solutions’
in contrast to Code Galore’s product.
Background Information – Org. Structure
Figure 1—Code Galore Organisational Chart (roles shown: CEO; CSO; VP, Finance; VP, Business; CTO; VP, Human Resources; Security Administrator; Sales Mgr; Accounting Dir.; Sr. Financial Analyst; Infrastructure Mgr.; Sys. Dev. Mgr.; HR Manager)
The board of directors:
Consists of seasoned professionals with many years of
experience in the software industry
Is scattered all over the world and seldom meets, except by
teleconference
Is uneasy with Code Galore being stretched so thin financially,
and a few members have tendered their resignations within the
last few months
The CEO:
Is the former chief financial officer (CFO) of Code Galore, who replaced the original CEO when the latter resigned to pursue another opportunity two years ago
Has a good deal of business knowledge, a moderate amount of
experience as a C-level officer, but no prior experience as a
CEO
As a former CFO, tends to focus more on cost cutting than on
creating a vision for developing more business and getting
better at what Code Galore does best
Background Information – Operational
Engineers perform code installations. The time to get the
product completely installed and customized to the customer’s
environment can exceed one month with costs higher than US
$60,000 to the customer.
Labour and purchase costs are too high for small and medium-
sized businesses. So far, only large companies in the US and
Canada have bought the product.
C-level officers and board members know that they have
developed a highly functional, unique product for which there is
really no competition. They believe that, in time, more
companies will become interested in this product, but the
proverbial time bomb is ticking. Investors have stretched
themselves to invest US $35 million in the company, and are
unwilling to invest much more.
Background Information – Industry
Business function automation software is a profitable area for
many software vendors because it automates tasks that
previously had to be performed manually or that software did
not adequately support.
The business function automation software arena has many
products developed by many vendors. However, Code Galore is
a unique niche player that does not really compete (at least on
an individual basis) with other business automation software
companies.
Background Information – Products
The product is comprehensive—at least four other software
products would have to be purchased and implemented to cover
the range of functions that Code Galore’s product covers.
Additionally, the product integrates information and statistics
throughout all functions—each function is aware of what is
occurring in the other functions and can adjust what it does
accordingly, leading to better decision aiding.
Background Information – Sales
Sales have been slower than expected, mainly due to a
combination of the economic recession and the high price and
complexity of the product.
The price is not just due to the cost of software development; it
also is due to the configuration labour required to get the
product running suitably for its customers.
The Problems
Acquisition
Code Galore is in many ways fighting for its life, and the fact
that, four months ago, the board of directors made the decision
to acquire a small software start-up company, Skyhaven
Software, has not helped the cash situation.
Skyhaven consists of approximately 15 people, mostly
programmers who work at the company’s small office in
Phoenix, Arizona, USA. Originally, the only connection
between your network and Skyhaven’s was an archaic public
switched telephone network (PSTN).
Setting up a WAN
Two months ago, your company’s IT director was tasked with
setting up a dedicated wide area network (WAN) connection to
allow the former Skyhaven staff to remotely access Code
Galore’s internal network and vice versa.
You requested that this implementation be delayed until the
security implications of having this new access route into your
network were better understood, but the CEO denied your
request on the grounds that it would delay a critical business
initiative, namely getting Skyhaven’s code integrated into Code
Galore’s.
Information Security
More recently, you have discovered that the connection does not
require a password for access and that, once a connection to the
internal network is established from outside the network, it is
possible to connect to every server within the network,
including the server that holds Code Galore’s source code and
software library and the server that houses employee payroll,
benefits and medical insurance information.
Fortunately, access control lists (ACLs) limit the ability of
anyone to access these sensitive files, but a recent vulnerability
scan showed that both servers have vulnerabilities that could
allow an attacker to gain unauthorised remote privileged access.
You have told the IT director that these vulnerabilities need to
be patched, but because of the concern that patching them may
cause them to crash or behave unreliably and because Code
Galore must soon become profitable or else, you have granted
the IT director a delay of one month in patching the servers.
Bots
What now really worries you is that, earlier today, monitoring
by one of the security engineers who does some work for you
has shown that several hosts in Skyhaven’s network were found
to have bots installed in them.
Source Code
Furthermore, one of the Skyhaven programmers has told you
that Skyhaven source code (which is to be integrated into Code
Galore’s source code as soon as the Skyhaven programmers are
through with the release on which they are currently working) is
on just about every Skyhaven machine, regardless of whether it
is a workstation or server.
Code Galore vs. Skyhaven Employee knowledge
Code Galore employees are, in general, above average in their
knowledge and awareness of information security, due in large
part to an effective security awareness programme that you set
up two months after you started working at Code Galore and
have managed ever since.
You offer monthly brown bag lunch events in a large conference
room, display posters reminding employees not to engage in
actions such as opening attachments that they are not expecting,
and send a short monthly newsletter informing employees of the
direction in which the company is going in terms of security and
how they can help.
Very few incidents due to bad user security practices occurred
until Skyhaven Software was acquired. Skyhaven’s employees
appear to have almost no knowledge of information security.
You also have discovered that the Skyhaven employee who
informally provides technical assistance does not make backups
and has done little in terms of security configuration and patch
management.
Your Role
Hired two years ago as the only Chief Security Officer (CSO)
this company has ever had.
Report directly to the Chief Executive Officer (CEO).
Attend the weekly senior management meeting in which goals
are set, progress reports are given and issues to be resolved are
discussed.
The Information Security Department consists of just you; two
members of the security engineering team from software are
available eight hours each week.
10 years of experience as an information security manager, five
of which as a CSO, but you have no previous experience in the
software arena.
Four years of experience as a junior IT auditor.
Undergraduate degree in managing information systems and
have earned many continuing professional education credits in
information security, management and audit areas.
Five years ago, you earned your CISM certification.
Your Role and the Business Units
The focus here is not on a business unit, but rather on Code Galore as a whole, particularly on security risk that could cripple the business.
Due primarily to cost-cutting measures the CEO has put in place, your annual budget has been substantially less than you requested each year.
Frankly, you have been lucky that no serious incident has occurred so far. You know that in many ways your company has been tempting fate.
You do the best you can with what you have, but levels of unmitigated risk in some critical areas are fairly high.
Mr. Wingate’s focus on cost cutting is a major reason that you
have not been able to obtain more resources for security risk
mitigation measures.
He is calm and fairly personable, but only a fair communicator,
something that results in your having to devote extra effort in
trying to learn his expectations of your company’s information
security risk mitigation effort and keeping him advised of risk
vectors and major developments and successes of this effort.
21
Your Role and the CEO, Ernest Wingate
Code Galore’s IT director is Carmela Duarte. She has put a
system of change control into effect for all IT activities
involving hardware and software.
This system is almost perfect for Code Galore—it is neither
draconian nor too lax and very few employees have any
complaints against it.
You have an excellent working relationship with her, and
although she is under considerable pressure from her boss, the
CTO, and the rest of C-level management to take shortcuts, she
usually tries to do what is right from a security control
perspective.
She is working hard to integrate the Skyhaven Software network
into Code Galore’s, but currently, there are few resources
available to do a very thorough job. She would also do more for
the sake of security risk mitigation if she had the resources.
Carmela has worked with Code Galore since 2006, and she is
very much liked and respected by senior management and the
employees who work for her.
22
Your Role and the IT Director, Carmela Duarte
You believe that Code Galore’s (but not Skyhaven Software’s)
security risk is well within the risk appetite of the CEO and the
board of directors.
You have a good security policy (including acceptable use
provisions) and standards in place, and you keep both of them
up to date.
You have established a yearly risk management cycle that
includes asset valuation, threat and vulnerability assessment,
risk analysis, controls evaluation and selection, and controls
effectiveness assessment, and you are just about ready to start a
controls evaluation when you suddenly realise that something
more important needs to be done right away (outlined in The
Problem section).
Your Tasks – Qualitative Risk Analysis
Using the figure 4 template, you need to modify the qualitative
risk analysis that you performed six months ago to take into
account the risk related to Skyhaven Software. The major risk
events identified during this risk analysis are shown in figure 2.
You must not only head this effort, but for all practical
purposes, you will be the only person from Code Galore who
works on this effort.
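Because the figure 4 template is not reproduced in this extract, the sketch below shows one hypothetical way to lay out the revised qualitative risk register in code; the field names and example entries are placeholders drawn from the caselet’s facts, not ISACA’s template.

from dataclasses import dataclass

@dataclass
class RiskEvent:
    name: str
    likelihood: str      # e.g., "Low", "Medium", "High"
    impact: str          # e.g., "Low", "Medium", "High"
    rationale: str       # why the rating changed after the Skyhaven acquisition

# Placeholder entries illustrating risk introduced by the acquisition.
register = [
    RiskEvent("Unauthenticated WAN access from the Skyhaven network",
              "High", "High",
              "The connection requires no password and reaches every internal server."),
    RiskEvent("Compromise of the source code server via unpatched vulnerabilities",
              "Medium", "High",
              "Patching was deferred a month and bots were found on Skyhaven hosts."),
    RiskEvent("Loss of Skyhaven source code",
              "Medium", "High",
              "The informal Skyhaven admin performs no backups or patch management."),
]

for event in register:
    print(f"{event.name}: likelihood={event.likelihood}, impact={event.impact}")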
Your revision of the last risk analysis will not only bring Code
Galore up to date concerning its current risk landscape, but will
also provide the basis for your requesting additional resources
to mitigate new, serious risk and previously unmitigated or
unsuitably mitigated risk.
You may find that some risk events are lower in severity than
before, possibly to the point that allocating further resources to
mitigate them would not be appropriate. This may help optimise
your risk mitigation investments.
To the degree that you realistically and accurately identify new
and changed risk, you will modify the direction of your
information security practice in a manner that, ideally, lowers
the level of exposure of business processes to major risk and
facilitates growth of the business.
Failure to realistically and accurately identify new and changed
risk will result in blindness to relevant risk that will lead to
unacceptable levels of unmitigated risk.
You must revise the most recent risk analysis, not only by
reassessing all the currently identified major risk, but also by
adding at least three risk events that were not previously
identified.
COBIT 5 provides tools that might be helpful in determining the best approach to reassessing and prioritising the major risk events; see EDM03, Ensure risk optimisation.
You must also provide a clear and complete rationale for the risk events, their likelihood, and impacts (outlined in the Alternatives With Pros and Cons of Each section).
Your Tasks – Pros and Cons
The rationale for each security-related risk that you select must
include a discussion of the pros and cons associated with
identifying and classifying each as a medium-low risk or higher.
For example, suppose that you decide that a prolonged IT
outage is no longer a medium- to low-level risk, but instead is
now a low risk.
The pros (purely hypothetical in this case) may be that outage-
related risk events are now much lower than before due to, for
example, the implementation of a new backup and recovery
system that feeds data into an alternative data center (not true in
this caselet).
In this case allocating additional resources would therefore be a
waste of time and money.
On the con side, lowering the severity of a prolonged IT outage
risk may result in underestimation of this source of risk, which
could result in failing to allocate resources and in a much higher
amount of outage-related loss and disruption than Code Galore
could take, given its somewhat precarious state.
Exhibits
Figure 2—Major Risk
Figure 3—Network Diagram
Figure 4—Risk Analysis Template

Notes
Since Code Galore is in the business function automation software arena, it should consider using business process automation (BPA), a strategy a business uses to automate processes in order to contain costs. BPA consists of integrating applications, restructuring labor resources and using software applications throughout the organization.
Code Galore is in a very difficult situation. Its existence is
uncertain, and money is critical right now.
Yet, this company has opened itself up to significant levels of
security risk because of acquiring Skyhaven Software and the
need for former Skyhaven programmers to access resources
within the corporate network.
Worse yet, even if the chief security officer (CSO) in this
scenario correctly identifies and assesses the magnitude of
security risk from acquiring Skyhaven and opening the Code
Galore network to connections from the Skyhaven network and
prescribes appropriate controls, given Code Galore’s cash
crunch, not many resources (money and labour) are likely to be
available for these controls.
All the CSO may be able to do is document the risk and make
prioritised recommendations for controls, waiting for the right
point in time when the company’s financial situation gets better.
If an information security steering committee exists, the CSO
must keep this committee fully apprised of changes in risk and
solicit input concerning how to handle this difficult situation.
At the same time, the CSO should initiate an ongoing effort (if
no such effort has been initiated so far) to educate senior
management and key stockholders concerning the potential
business impact of the new risk profile. (Note: The kind of
situation described in this caselet is not uncommon in real-
world settings.)
Discussion Questions 1-5
What are the most important business issues and goals for Code
Galore?
What are the factors affecting the problem related to this case?
What are the managerial, organizational, and technological
issues and resources related to this case?
What role do different decision makers play in the overall
planning, implementing and managing of the information
technology/security applications?
What are some of the emerging IT security technologies that
should be considered in solving the problem related to the case?
Discussion Questions 6-10
In what major ways and areas can information security help the
business in reaching its goals?
Which of the confidentiality, integrity and availability (CIA)
triad is most critical to Code Galore’s business goals, and why?
Change leads to risk, and some significant changes have
occurred. Which of these changes lead to the greatest risk?
Imagine that three of the greatest risk events presented
themselves in worst-case scenarios. What would be some of
these worst-case scenarios?
How can the CSO in this scenario most effectively communicate to senior management the newly and previously identified risk events that have grown because of the changes?
The Best Approach to Decision Making Combines Data and Managers’ Expertise
by Paolo Gaudiano
JUNE 20, 2017
Data is now the critical tool for managing many corporate
functions, including marketing, pricing,
supply chain, operations, and more. This movement is being
further fueled by the promise of
artificial intelligence and machine learning, and by the ease of
collecting and storing data about every
facet of our daily lives.
But is the pendulum starting to swing too far? As a practitioner
and teacher of predictive analytics,
my greatest concern is what I call the “big data, little brain”
phenomenon: managers who rely
excessively on data to guide their decisions, abdicating their
knowledge and experience.
In a typical big data project, a manager engages an internal or
external team to collect and process
data, hoping to extract insights related to a particular business
problem. The big data team has the
expertise needed to wrangle raw data into usable form and to
select algorithms that can identify
statistically significant patterns. The results are then presented
to the manager through charts,
visualizations, and other types of reports. This scenario is
problematic because most managers are
not experts in data science, and most data scientists are not
business experts. Addressing this
dichotomy requires individuals who can “serve as liaisons”
between the two, as Todd Clark and Dan
Wiesenfeld suggested in a recent HBR article.
This, however, is simply a palliative that does not resolve the
underlying problem. As Tom Davenport
wrote in HBR in 2006, the year before publishing his seminal
book, Competing on Analytics, “For
analytics-minded leaders, then, the challenge boils down to
knowing when to run with the numbers
and when to run with their guts.” Rather than reducing reliance
on intuition, the advanced
methodologies of big data require managers to use even more
intuition to make sense of the growing
number of outputs and recommendations being generated by
data models.
Furthermore, the predictive models created by big data
methodologies do not incorporate the
manager’s unique knowledge of the business. This is tantamount
to someone collecting a lot of data
and then deciding to throw away half of it — except in this case
you are arguably throwing away the
more valuable half, because the manager has specific knowledge
of the business, while the data
science approaches are generic.
How can we effectively combine data science and business
expertise? In a 2002 HBR article titled
“Predicting the Unpredictable,” my business partner Eric
Bonabeau introduced the concept of agent-
based simulation (ABS), which at that time was a relatively
novel approach to solving complex
business problems through computer simulations. Fifteen years
later, Icosystem (Bonabeau’s
company, which I am still a core member of) and a number of
others have demonstrated the power of
ABS as a business management tool.
For example, Bonabeau’s article described a project with Eli
Lilly to develop a new way of managing
drug development pipelines. In 2008 Bonabeau and two
members of the Eli Lilly R&D leadership
published an HBR article in which they reported that the new
approach had been able to deliver
molecules to Phase II trials “at almost twice the speed and less
than a third of the cost of the standard
process.”
Although ABS was first created as a tool for social science
research about four decades ago, it is only
now starting to gain widespread adoption because of the
dramatic increase in available computing
power. For instance, Icosystem developed a simulation of the
daily behavior of more than 300,000
sailors in the U.S. Navy from recruitment to retirement. This
type of 20-year simulation can run on a
laptop in less than one minute, and it’s enabled the Navy to test
in one day more scenarios than they
would normally be able to test in one year.
But what about the “big data, little brain” problem? One of the
most appealing aspects of ABS is that
it combines domain expertise and data. The domain expertise is
used to define the structure of the
simulation, which captures the day-to-day behaviors and
interactions unique to each business
problem. The data is used partly to refine the details of the
simulation and partly to ensure that, as
the simulation runs, the resulting outcomes match real-world
results. With this approach, the
manager’s expertise regains the primary role, and the results of
the simulation can be analyzed by the
manager and data scientist together, as they both understand the
workings of the simulation.
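As a concrete illustration of that division of labor, here is a minimal agent-based simulation sketch in Python (a generic toy model of employee attrition, not Icosystem's tooling; the behavioral rule and every numeric parameter are assumptions for illustration). The structure of the agents' monthly behavior is where domain expertise enters; the parameters are what one would calibrate until simulated outcomes track real-world data:

# Toy agent-based simulation: each employee-agent decides monthly whether to
# stay, using a simple rule supplied by domain experts; numeric parameters
# (e.g. market_pull) are tuned until simulated attrition matches observed data.
import random

class Employee:
    def __init__(self, satisfaction: float) -> None:
        self.satisfaction = satisfaction   # 0.0 (miserable) to 1.0 (happy)
        self.active = True

    def step(self, market_pull: float, rng: random.Random) -> None:
        if not self.active:
            return
        # Expert rule: unhappy employees are more likely to leave, and a hot
        # job market (market_pull) amplifies that tendency.
        if rng.random() < (1.0 - self.satisfaction) * market_pull:
            self.active = False
        else:
            drift = rng.uniform(-0.05, 0.05)
            self.satisfaction = min(1.0, max(0.0, self.satisfaction + drift))

def run_simulation(n_employees: int = 1000, months: int = 24,
                   market_pull: float = 0.05, seed: int = 42) -> list:
    rng = random.Random(seed)
    staff = [Employee(rng.uniform(0.3, 1.0)) for _ in range(n_employees)]
    headcount = []
    for _ in range(months):
        for employee in staff:
            employee.step(market_pull, rng)
        headcount.append(sum(e.active for e in staff))
    return headcount

if __name__ == "__main__":
    # Calibration in practice: adjust market_pull (and richer rules) until the
    # simulated headcount curve tracks the firm's historical attrition data.
    print(run_simulation()[-1])   # headcount remaining after 24 simulated months

Real ABS models are far richer than this, but the pattern is the same: experts specify the behavioral rules, and data pins down the parameters and validates the outcomes.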
Besides increasing transparency, combining domain expertise
and data also increases predictive
accuracy. Back in 2014 a leading automaker worked with an
ABS marketing analytics platform to plan
the launch of a new model. The ABS recommended launching
the new model six months earlier than
the client had planned. In 2016 the automaker launched the new
model as recommended; a year later
it found that ABS had predicted monthly sales for the first year
with 93% accuracy.
By combining data and the manager’s expertise into a predictive
model, ABS solves complex
problems in a transparent way with a high degree of predictive
accuracy. The increased availability of
commercial ABS tools and didactic materials suggests that this
new approach is poised to
revolutionize business management.
Paolo Gaudiano is president and chief technology officer of
Icosystem Corporation, a leader in the theory and
application of complexity science, and he recently co-founded
Aleria, which uses the same methodology to help
organizations quantify the link between diversity and
performance. He also teaches a graduate course on Business
Complexity at the City College of New York. Follow him on
Twitter @icopaolo.
Police decision making:
an examination of
conflicting theories
Scott W. Phillips and James J. Sobol
Criminal Justice Department, Buffalo State College, Buffalo,
New York, USA
Abstract
Purpose – The purpose of this paper is to compare two
conflicting theoretical frameworks that
predict or explain police decision making. Klinger’s ecological
theory proposes that an increased level
of serious crimes in an area decreases the likelihood an officer
will deal with order-maintenance issues,
while Fagan and Davies suggest an increase in low-level
disorder will increase order maintenance
behavior of police officers.
Design/methodology/approach – Using a vignette research design, the authors examine factors that may contribute to police officers' decisions to make a traffic stop in four jurisdictions with varying levels of serious crime. Ordered logistic regression with robust standard errors was used in the analysis.
Findings – The analysis demonstrates that officers who work in higher-crime areas are less likely to stop a vehicle described in the vignettes. Additional predictors of the decision to stop include teenaged drivers and drivers who were speeding.
Research limitations/implications – The current research is limited by an adequate but fairly small sample size (n = 204) and a research design that examines hypothetical scenarios of police decision making. Further data collection across different agencies, with more officers and more variation in crime levels, is necessary to extend the current findings.
Originality/value – This paper adds to the literature in two
primary ways. First, it compares two
competing theoretical claims to examine a highly discretionary
form of police behavior and second,
it uniquely uses a vignette research design to tap into an area of
police behavior that is difficult to
study (e.g. the decision not to stop).
Keywords United States of America, Police, Policing, Decision
making, Workload, Traffic stops,
Vignettes
Paper type Research paper
Introduction
Police work requires officers to deal with a substantial amount
of non-criminal activity,
such as resolving disputes (Johnson and Rhodes, 2009) or
dealing with problems or
very low-level offenses that fall into the broad description
“order maintenance
activities” (Walker and Katz, 2005). There are potential benefits
when officers deal with
these low-level problems or offenses, such as reducing the
chance of further crime or
increasing the officer’s environmental knowledge to improve
problem solving (Walker
and Katz, 2005). One of the most common types of order
maintenance activities takes
place in the form of a traffic stop (Walker and Katz, 2005),
which could be seen as
“a form of order maintenance where the officer has taken action
against a suspected
individual in order to prevent crime” (Vito and Walsh, 2008, p.
93).
Many traffic stop studies were conducted in the past decade to
determine if police
officers were using race in their decision to stop a vehicle (e.g.
Gaines, 2006; Meehan
and Ponder, 2002a, b; Mosher et al., 2008; Petrocelli et al.,
2003; Schafer et al., 2004;
Smith and Petrocelli, 2001; Withrow, 2004a, b). Some of these
studies included
neighborhood characteristics in their examination of police
decision making, such as
an officer’s perception of the racial makeup of a patrol area
(Albert et al., 2005) and the
crime rate of a neighborhood (Petrocelli et al., 2003; Withrow,
2004a, b). Where many
studies of traffic stop decision making were unguided by theory
(Engel et al., 2002), the
work of Klinger (1997) and Fagan and Davies (2000) provides
theoretical guidance to
predict or explain how patrol area can impact an officer’s
decision to stop a person.
Both theoretical frameworks, however, offer different
predictions based on the type of
criminal activity in the patrol area. A postulate within Klinger’s
(1997) ecological
theory assumes that workload influences when formal legal
authority is applied.
That is, police officers reserve their attention for more serious
crimes in areas with
higher crime rates or more serious criminal behavior. Conversely, Fagan and Davies'
(2000) theory asserts that officers are more aggressive in
response to low-level order
maintenance problems.
This research sought to examine the relationship between the
work location of a
police officer and its impact on a police officer’s judgment to
stop a vehicle to determine
which theoretical framework is supported. There were at least
two justifications for
this study. First, because police department policy is often
based on theoretically
framed research (Engel et al., 2002), it is important to assess
components of these two
competing theoretical frameworks to determine which is
empirically sound when
explaining police behavior. Further, if theories are necessarily
incomplete (Bernard and
Ritti, 1990), the present study may uncover conditions unique to
each so they can be
appropriately adjusted.
Theory
Researchers argued that studies of traffic stop decision making
should not be viewed
as scientific research because they failed to explicitly state the
guiding theory of their
research (Engel et al., 2002). To address this concern, the
present inquiry attempted to
shed light on police officer behavior during their routine patrol
duties using two
conflicting theoretical perspectives. One suggests that low-level
offenses increase the
likelihood that police officers will stop a person in an effort to
deal proactively with
bigger problems. The second proposes that officers are less
likely to deal with low-level
offenses and reserve their limited time and resources to deal
with more serious crimes.
Order maintenance policing
Fagan and Davies (2000) provide a detailed discussion of police
decision making,
explaining why the 1982 Broken Windows theory of Wilson and
Kelling, which
focussed on police response in disorderly places, morphed into
a policing tactic that
focussed on people. It had been theorized that social disorganization, in the form of an increased poverty rate, a predominantly younger age distribution, and population turnover, leads to increased crime rates across neighborhoods. Fagan and Davies (2000) explained that while
social disorganization
predicted rates of disorder in an area (e.g. loitering, public
drinking), social
disorganization does not predict homicide rates and only weakly
predicted robbery
rates. Therefore, efforts to control serious crime via disorder
policing are unlikely to be
effective.
Based on the notion that proactive enforcement of minor crimes
and disorder would
reduce serious crime, the New York City Police Department
(NYPD) increased their use
of tactics dealing with order maintenance issues in
neighborhoods with the highest
levels of disorder crime. Order maintenance policing was
intended to focus on “quality
of life” issues, such as public drinking or panhandling, under
the assumption that
police enforcement of laws against these types of crime would
reduce more serious
criminal behavior. Fagan and Davies (2000) reported that the
NYPD order maintenance
policing policy was intended to address gun-crimes as one of
the more serious criminal
behaviors that could be deterred through aggressive
enforcement of disorder crimes[1].
A successful order maintenance policing approach would
require a “pro-active
interdiction” (Fagan and Davies, 2000, p. 475) of anyone
suspected of violating even
minor offenses. This tactic, however, led to an increased use of
“pretextual” stops,
where police officers would scrutinize a person for any type of
minor offense in order to
establish a minimal level of reasonable suspicion to stop and
frisk the person in the
hopes of discovering a more serious offense (Fagan and Davies,
2000).
Ecological workload
A neighborhood attribute that likely has a direct impact on
police officers’ decision
making is the actual workload of the officers who patrol a
neighborhood. Klinger (1997)
suggested that as the number of calls-for-service and deviance
levels of a work location
increase, officers have less time to deal with citizens’
complaints. When an officer has
less time available for dealing with the work in their patrol
area, officers must manage
their time by prioritizing the tasks they focus on. This formula pushes the officer
“towards leniency as deviance increases” (Klinger, 1997, p.
293). When a police officer
works in a location that has fewer calls-for-service or lower
levels of deviance, the
officer is free to use as much time as needed to deal with an
incident (Klinger, 1997).
Klinger (1997) relies on the work of Donald Black to build his
discussion of
“leniency,” which Klinger described as the amount of law a
police officer applies to an
incident. For example, a police officer could spend a substantial
amount of time
stopping vehicles but never issue a citation. Strictly speaking,
not issuing a ticket
would be considered a lenient response by the officer. Still, the
workload aspect of
Klinger’s (1997) ecological theory clearly implies that
“leniency” is the actual attention
or effort that a police officer devotes toward a problem. As a
result, when Klinger stated
police officers will be lenient “for increasingly serious crimes
as levels of district
deviance increases” (Klinger, 1997, p. 293), it is reasonable to
assume that police officers
who patrol areas with more serious crime would be less likely to
focus their attention
on traffic stops.
Klinger’s work has yet to receive consistent empirical support.
For example, Sobol
(2010) examined postulates of Klinger’s theory and
conceptualized workload as the
amount of time officers had “assigned” vs “unassigned” to
explain the vigor with
which the police used their formal authority. Surprisingly,
Sobol found that workload
and district crime were negatively correlated (r = −0.16) and
that workload did not
significantly affect the vigor with which the police used their
formal legal authority.
Other research shows that neighborhood characteristics
influence an officer’s decision
to “translate” a call-for-service into an official crime report;
however, “neighborhood
influences vary by crime type” (Varano et al., 2009, p. 560).
Looking at studies of traffic stops specifically, to date, no study has included a workload variable in its analysis,
but a few studies offer what might be considered reasonable
surrogates that help build
a foundation for this approach. Contrary to what might be
expected within Klinger’s
(1997) ecological theory, Roh and Robinson (2009) reported
that patrol beats with more
crime (i.e. hot spots) are related to an increased likelihood of a
traffic stop. In addition,
Phillips (2009a) found that sheriff’s deputies were less likely to
stop a vehicle than
officers in two small township police agencies. Although
Phillips does not speculate,
it may be that sheriff’s deputies have substantially more area to
cover and are more
likely to conserve their time resources by engaging in fewer
traffic stops.
Literature review
Background: traffic stop research
A number of different factors might influence the decision to
stop a vehicle:
neighborhood aspects, characteristics of the driver, organization
influences, and legal
factors. Each will be briefly discussed below in order to frame
an understanding
of the present research.
Scholars have advised that a greater understanding of traffic
stop behavior is
limited because many studies rely on data from one large police
department or
jurisdiction (Mosher et al., 2008; Novak, 2004; Parker et al.,
2004). Police behavior often
occurs in a beat or neighborhood context and the use of race in
the decision to stop
a vehicle “could possibly be more prevalent in racially
homogeneous communities”
(Novak, 2004, p. 73). Smith and Petrocelli (2001) found that the
Part-I crime rate of an
area was not related to the police decision to stop a vehicle.
Later, Petrocelli et al. (2003)
examined multiple neighborhood characteristics in the decision
to stop a vehicle,
including percent black population of the neighborhood, percent
of families below
poverty line, percent unemployed in neighborhood, mean family
income, and Part-I
crimes per 1,000 population. They found that police tended to
make more stops in
neighborhoods with higher-crime rates. Alpert et al. (2007)
examined the racial makeup
of neighborhoods where traffic stops occurred and found no
connection between racial
composition and police stops. Withrow (2004a) found that
drivers stopped during the
night and driving in higher-crime areas were more likely to be
black drivers. Similarly,
when using in-car computer queries as a measure of
surveillance, Meehan and Ponder
(2002b) found that officer scrutiny “significantly increases as
[African Americans]
travel farther from ‘black’ communities and into whiter
neighborhoods” (p. 422).
Some research has found that a driver’s race is related to the
police officer’s decision
to stop a vehicle. Several studies reported a relationship
between black drivers and the
decision to stop a vehicle (Miller, 2008; Warren et al., 2006),
while others have found
only a weak (Novak, 2004) or no relationship (Phillips, 2009a)
between black drivers
and the decision to stop a vehicle. Driver age, however, was
found to be related to the
decision to stop (Miller, 2008). Further, the driver’s gender (i.e.
male) was significantly
related to police decision to stop a driver (Miller, 2008; Warren
et al., 2006).
Early research into the influence of police organizations on an
officer’s decision
making suggested management style and agency size may
impact traffic stop
behavior. Wilson (1978) posited that officers who worked in
agencies with a legalistic
management style “will issue more traffic tickets at a higher
rate” (p. 172). Others
(Brown, 1981; Mastrofski et al., 1987) found that police
officers working in agencies
of differing size behave differently in traffic stop situations.
More recently, Mosher et al.
(2008) reported that most prior research of police decision
making in traffic stop
situations takes place in only one jurisdiction. This drawback
does not allow
researchers to determine if organizational characteristics
influence the decision making
of police officers. Phillips (2009a) analyzed the responses of
police officers in two small
agencies against sheriff’s deputies and found that sheriff’s
deputies were significantly
less likely to stop a vehicle. His study is limited because he
collected data in only three
law enforcement agencies and the number of officers in this
study was small.
When legal considerations were included in the research, Withrow (2004b) stated that most traffic stops occur for more serious traffic offenses (e.g. moving violations) rather than for less serious traffic offenses (e.g. equipment violations).
Others suggest that drivers who are speeding (Phillips, 2009a),
commit moving
violations (Warren et al., 2006), or equipment violations (Alpert
et al., 2007) are likely to
be stopped. Novak (2004) reported that white drivers are more
likely to be stopped for
moving violations, unsafe driving, and speeding.
It has been suggested that a measure of the characteristics or quality of the vehicle involved in a traffic stop should be studied because some cars may be customized in a manner that draws the attention of police officers (Batton and Kadleck, 2004; Ramirez et al., 2000). While vehicle quality has never been clearly operationalized in prior studies, it is suggested that the "car effect" (Batton and Kadleck, 2004) could include a poor-quality vehicle (Engel and Calnon, 2004) or an older vehicle (Miller, 2008; Warren et al., 2006). Alpert et al. (2007) found that vehicle age had no impact on the decision to stop a vehicle, while other research indicated older vehicles were related to the decision to stop (Miller, 2008). Phillips (2009a), however, found that a newer vehicle was related to the decision to stop the vehicle.
This study
As the literature review demonstrated, the decision making of
street-level police
officers in traffic stop incidents may be influenced by different
factors. The few studies
that incorporated a neighborhood crime-rate variable (Petrocelli
et al., 2003; Withrow,
2004a) found a positive relationship between this dimension and
the decision to stop a
vehicle. These results tended to support the framework provided
by Fagan and Davies
(2000). Such findings, however, may be difficult to generalize
since their data were
collected from one large urban police department (e.g. NYPD).
In addition, Klinger’s
(1997) discussion provides a general theoretical framework for
police behavior, but
does not consider the other variables that may mediate the
influence of area, such as
organizational size, the agency's management style, or the type
of law enforcement
agency (i.e. local, county, or state).
This study sought to examine assumptions from the two
competing theoretical
models to explain police decision making in traffic stop
situations. It offers an empirical
examination of the influence of “neighborhood,” as suggested
by both Fagan and
Davies and Klinger, on the judgment of police officers in traffic
stop situations
while controlling for various aspects of the incident, including
driver characteristics
and legal aspects. Two features of this study contribute to our
understanding of
police decision making. First, data were collected in multiple
police agencies of
varying sizes, which can help minimize the problem of
“aggregation bias” in most
other studies of only one large jurisdiction (Mosher et al., 2008,
p. 46). Like other
studies, however, the respondents do comprise a convenience
sample. Second,
a vignette research design is used (Rossi, 1979; Rossi and
Anderson, 1982), allowing
the inclusion of multiple variables into vignettes to examine the
decision making
of a police officer to stop a vehicle. An additional benefit of the vignette research design is that it may minimize a long-standing limitation: Withrow (2004b) stated that "because there is no record of the individuals not stopped," most designs cannot determine the influence of any variable on getting stopped (p. 229, emphasis in original). The
vignette research
design minimizes this problem because the design allows for the
inclusion of
multiple variables and can control for those cases where a
person is not stopped
by officers.
Data and methods
Study location
Data used in this study were collected from police officers in
four police agencies in
New York State. Table I provides general information on the
agencies and jurisdiction.
The study locations can be roughly divided into two groups.
One is the work district of
a large urban police agency with neighborhoods of concentrated
population and higher
levels of serious crime and disorder, while the other three study
locations consist of two
small township agencies and a county sheriff’s department with
very low crime levels.
Including officers from two small agencies and a county
sheriff’s department
distinguishes this research from other studies of the police and
traffic stop decision
making because the studies cited in this paper used data
primarily collected in large
agencies or suburban areas near larger cities.
The Lower Town Police Department and the Upper Town Police
Department (all
department names are pseudonyms) serve townships and employ
part-time and
full-time police officers. These townships border each other, as
well as a city of
approximately 50,000 people (not part of this study). Police
officers in the townships
furnish routine patrol services, are dispatched to calls by the
county sheriffs’
department, and provide no special services, such as detectives.
The township police
agencies offer a fairly diverse working environment for officers,
with traditional style
neighborhoods laid out in a grid pattern that include single
family and apartment
housing, shopping plazas with department stores, grocery
stores, small shops,
secondary highways with extensive commuter and commercial
traffic, and rural areas
with farms and rural housing. The third agency is the Lake
County Sheriff’s
Department. Deputies provide patrol services for a sizable rural
area as well as several
small towns and villages that employ no other police services.
All three agencies serve
a fairly homogenous population, and have few violent index
crimes.
The large police agency that participated in this study was the
River City Police
Department, specifically the North District (River City has five
patrol districts). This
agency is also located in Upstate New York. As indicated in
Table I, North District
is densely populated and is fairly typical of large-city areas.
Table I. Description of research locations (US Census data and New York State Division of Criminal Justice Services crime data, US Census Bureau)

Jurisdiction | Square miles | Patrol officers | Violent index crimes (2007) | Property crimes (burglary, larceny, car theft) | Population served | Race % (white, African American, others)
Lower Town P.D. | 64 | 14 | 17 (2 rapes, 1 robbery, 14 aggravated assaults) | 177 | 8,978 | 93, 3, 4
Upper Town P.D. | 9 | 17 | 16 (4 robberies, 12 aggravated assaults) | 334 | 19,038 | 97, 1, 2
Lake Co. | 552 | 60 | 79 (15 rapes, 14 robberies, 50 aggravated assaults) | 1,336 | 108,714 (a) | 90, 6, 4
North District | 9.6 | 96 | 1,063 (14 murders, 53 rapes, 533 robberies, 462 aggravated assaults) (c) | 5,230 | 78,700 | 44, 34, 7 (b)

Notes: (a) does not include the population (111,134) of three cities within the county that employ their own police agencies; (b) US Census data for 2000 for all of River City; (c) department data.
The population of North District is racially diverse compared to
the smaller
jurisdictions, and has a substantially higher number of violent
index and property
crimes than the other agencies.
Research design
A vignette research design employs aspects of a random experiment by incorporating each variable as a unique dimension within the vignette and randomly varying the level of each dimension between vignettes (Rossi, 1979; Rossi and Anderson, 1982).
Vignettes are then randomly assigned to respondents. This
design measures
respondents’ judgment or decision making as the level of each
dimension changes.
That is, as the level of one dimension changes, its influence in
the judgment or
decision-making process may shift in relation to another
dimension. The vignettes
used for this study were constructed along several variables
(discussed below), and
vignettes have been successfully used to examine police opinion
and decision making
in other work situations (Eterno, 2003; Hickman et al., 2001;
Phillips, 2009b; Phillips
and Sobol, 2010).
Vignettes possess aspects of a controlled, random experiment
and, therefore,
provide a benefit in studying the judgment of police officers in
traffic stop incidents:
collecting data on vehicles not stopped. When there is an
absence of data regarding
citizens not stopped, as is the case in almost all prior traffic
stop research, untangling
the significant aspects of those who are stopped from those who
are not is unworkable,
making it impossible to discover which dynamics explain
variations in police officer
decision making. Further, using vignettes provided a unique
opportunity to study
multiple factors that may influence a police officers’ decision to
stop a vehicle prior to
actually stopping a vehicle. Most studies of traffic stop decision
making collect data
after the stop has occurred.
Data collection
A total of 100 survey packets, each of which included randomly
constructed vignettes
exploring different activities police officers encounter
(domestic violence incidents, use
of force incidents, traffic stop incidents), were constructed.
Each packet contained two
randomly selected vignettes describing a driver and vehicle that
they encounter during
routine patrol. Police officers in the sample agencies were
provided with a randomly
selected survey packet. Several methods were used to improve
the validity of responses
because police officers may be reluctant to respond to outsiders
who ask questions
about their behavior. First, a cover letter informed the respondents that their answers would not be seen by police management. Second, officer identities would be kept anonymous.
Two methods were used to collect data in the smaller agencies
during the summer of
2005. First, survey packets were passed out to patrol deputies in
the Lake County
Sheriff’s Department during all roll-call periods over the course
of several days.
Deputies completed the surveys during that time and returned
them in a sealed
envelope to the researcher. In total, 39 surveys were passed out
and 38 were returned
completed. The Upper Town Police Department does not have a
routine roll-call period;
however, during the data collection period the department had
scheduled a department
staff meeting. The police chief allowed the researcher to
distribute surveys to police
officers during this meeting. A total of 13 survey packets were
distributed to the
available officers and all were returned completed. The second
method for collecting
data was used in the Lower Town Police Department because
Lower Town does not
have a routine roll-call period. Survey packets were left for the
officers in their
departmental mailboxes. Officers returned the surveys to the
police chief in a sealed
envelope, and they were returned in bulk to the researcher. In
total, ten surveys
were distributed and nine were completed.
The second data collection period occurred during the summer
of 2006. A graduate
student who works as an officer in River City distributed newly
constructed survey
packets to patrol officers in North District during all roll-call
periods where they were
completed and returned in a sealed envelope. Other than a brief
verbal explanation of
the study and the anonymity of the respondents, the graduate
student had no
interaction with the officers. The packets contained traffic stop
vignettes constructed
in an identical fashion as those used in the smaller agencies. In
total, 45 survey packets
were distributed and 42 were completed. A total of 102 police
officers completed
two vignettes and each completed vignette represented a case in
the data file. The total
number of complete vignettes from all respondents in the four
police agencies thus was
204. Table II provides a description of the variables used in this
study.
Dependent variable
Many studies of traffic stop decision making use multiple
dependent variables, such
as the original decision to stop a vehicle, the decision to search
the vehicle, and
how the stop ended (i.e. no action, warning, citation) (Engel and
Calnon, 2004; Petrocelli
et al., 2003). One deficiency when using a vignette design is
that it is difficult to include
“contingency” questions that would elicit subsequent decisions
as an incident
progresses through time. As a result, this study used only one
dependent variable: a
police officer’s self-reported likelihood of stopping a vehicle on
a five-point Likert scale
(1 = very unlikely to stop; 5 = very likely to stop).
Independent variables – vignette dimensions and officer
characteristics
The following is a review of the vignette dimensions used in
this study. For a detailed
discussion of the justification for these dimensions, see Phillips
(2009a). Research
vignettes described three driver characteristics. The first
dimension was the driver’s
Table II. Variable description

Variable | Range | M | SD
Dependent variable
Stop | 1-5 | 3.61 | 1.01
Independent variables
Sheriff | 0-1 | 0.37 | 0.48
Upper Town | 0-1 | 0.12 | 0.33
Lower Town | 0-1 | 0.08 | 0.28
Black | 0-1 | 0.34 | 0.47
Hispanic | 0-1 | 0.32 | 0.46
Sex | 0-1 | 0.53 | 0.49
Age teen | 0-1 | 0.36 | 0.48
Age_20 | 0-1 | 0.30 | 0.46
Vehicle type | 0-1 | 0.50 | 0.50
Tint | 0-1 | 0.50 | 0.50
Cell phone | 0-1 | 0.33 | 0.47
Speeding | 0-1 | 0.39 | 0.48
Experience | 0-35 | 10.17 | 6.78
race: white, black, and Hispanic. The second dimension was the
driver’s gender and
a third dimension was the driver's age. The driver's age is an
ordinal-level variable
describing a driver who appears to be in their late teens, late
20s, or late 30s (the
reference category). This description is intentionally vague
because police perception
of a driver, not the actual age of the driver, is considered
important to a police officer’s
decision to stop a vehicle (Ramirez et al., 2000). These age
categories were selected
because it was believed that police officers would be much less
likely to stop older
drivers (i.e. those who appear at least 40 years old), and a pre-
teen driver would almost
certainly be stopped.
The first vehicle characteristic was type of vehicle: a “new
SUV” or an “old 4-door
sedan.” A second vehicle characteristic that might draw the
attention of a police officer
is window tinting (Batton and Kadleck, 2004). This dimension
was dichotomized here:
the vehicle had tinted windows, or the dimension was left blank in the vignette, an acceptable method for varying the level of a dimension (Jacoby
and Cullen, 1999).
A specific traffic violation was included in all vignettes in order
to establish a legal
justification for the stop. Ramirez et al. (2000) argued that it
may be helpful to
include different types of violations to understand the role
traffic offenses play in police
decision making. Three traffic violation levels were used here
in the vignette
dimensions. First, a traffic violation was indicated as
“speeding.” A specific speed
was not included. Not all police vehicles are equipped with a
RADAR system to
determine the exact speed of a car, and it is anticipated that
simply indicating to a
police officer that a person is speeding will satisfy the amount
of information
necessary to establish probable cause for a stop. Second, the
2002 legislation in
New York State made it a traffic offense to talk on a hand-held
cell phone while
driving a vehicle. This offense was included as an intermediate-
level violation. The
third dimension described a broken tail light, a minor equipment
violation. A sample
vignette and dimension levels can be found in the Appendix.
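As an illustration of how such randomized vignettes can be assembled, the following is a minimal sketch in Python (the dimension levels mirror those described above, but the sentence template, variable names and random-assignment details are illustrative assumptions rather than the authors' actual instrument):

# Toy factorial-survey vignette generator: every dimension is varied randomly
# and independently, and vignettes are bundled into packets for respondents.
import random

DIMENSIONS = {
    "race": ["white", "black", "Hispanic"],
    "gender": ["male", "female"],
    "age": ["late teens", "late 20s", "late 30s"],
    "vehicle": ["new SUV", "old 4-door sedan"],
    "tint": ["with tinted windows", ""],   # blank level = dimension omitted
    "violation": ["speeding",
                  "talking on a hand-held cell phone",
                  "driving with a broken tail light"],
}

def make_vignette(rng: random.Random) -> str:
    level = {name: rng.choice(values) for name, values in DIMENSIONS.items()}
    tint = f" {level['tint']}" if level["tint"] else ""
    return (f"While on routine patrol you observe a {level['vehicle']}{tint}, "
            f"driven by a {level['race']} {level['gender']} who appears to be "
            f"in their {level['age']}, {level['violation']}.")

def build_packets(n_packets: int = 100, per_packet: int = 2, seed: int = 1) -> list:
    rng = random.Random(seed)
    return [[make_vignette(rng) for _ in range(per_packet)] for _ in range(n_packets)]

if __name__ == "__main__":
    for text in build_packets(n_packets=1)[0]:
        print(text)   # the two randomly constructed vignettes for one packet

Because the levels are drawn independently, any association between, say, driver age and the decision to stop in the analyzed responses can be attributed to the respondents' judgments rather than to the construction of the vignettes.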
Because the small police agencies involved in this study
employed almost no female
or minority officers, it was decided that asking additional
questions of a personal
nature in these agencies would threaten confidentiality and
might result in a reduced
response rate. The only officer characteristic that was collected
was the years of
experience.
Analytic strategy
Because each police officer completed two vignettes, the data
may have a clustered
structure. Clustering of observations may violate the assumption
of independence in
the variables, causing an artificially deflated standard error and
making it easier to
find significance effects (Williams, 2000). For this reason the
“cluster robust standard
error” option in STATA was utilized. This option provides a
more robust estimate of
the standard error because it adjusts for the potential clustering
of observations.
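For readers who want to reproduce this style of analysis outside STATA, here is a minimal sketch using Python's statsmodels (the data file and column names are hypothetical; the sketch fits the ordered logit only and does not reproduce the cluster-robust standard-error adjustment described above, which additionally groups the two vignettes completed by each officer):

# Minimal ordered logistic regression sketch with statsmodels.
# The CSV file and column names below are hypothetical placeholders.
# Note: the paper's analysis was run in STATA with cluster-robust standard
# errors grouped by officer; that adjustment is not reproduced here.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("vignette_responses.csv")   # one row per completed vignette

predictors = ["sheriff", "upper_town", "lower_town", "black", "hispanic",
              "sex", "age_teen", "age_20", "vehicle_type", "tint",
              "cell_phone", "speeding", "experience"]

# 'stop' holds the 1-5 Likert response; distr="logit" gives an ordered logit.
model = OrderedModel(df["stop"], df[predictors], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())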
Findings
As seen in Table III, police officers serving in the two smaller
townships were
significantly more likely to report stopping a vehicle described
in the vignettes
compared to officers who worked in the larger city area (North
District was the
reference group in the analysis). Although patrol deputies who
worked for the county
sheriff’s department were not significantly different in their
responses to vignettes than
officers in North District, these findings suggest that workload
dimensions may shape
police decision making in traffic stop incidents. That is, the
police officers working in
North District, an area with higher levels of serious crimes
when compared to the other
jurisdictions in this study, do not appear very concerned with
stopping vehicles for
traffic violations. Klinger’s (1997) suggestion that officers who
work in higher-
workload neighborhoods focus less on minor offenses appears to
be supported when
examined in the context of traffic stop situations described in
the vignettes.
Two other vignette dimensions were also related to the decision
to stop a vehicle.
First, if the vehicle was speeding, police officers were
significantly more likely to stop
the vehicle. This was the most serious traffic offense described
in the vignettes, and the
finding is interesting because the offense was simply described
in the vignette with
no supporting information (i.e. the speed was not confirmed
with RADAR). Second,
officers were more likely to indicate they would stop a teen-
aged driver when compared
to a driver who appeared to be in their 30s. None of the other
driver or vehicle
characteristics described in the vignettes were related to the
officer’s decision to stop a
vehicle.
Conclusion and discussion
This study was constructed in response to the body of research
suggesting that
neighborhood context may influence police decision making,
and the fact there are two
conflicting theories to explain variation of police behavior
across those contexts.
Klinger’s (1997) ecological theory posits that police officers
respond to components of
their work environment, including the area workload, and that
they must manage their
time more effectively. Fagan and Davies (2000) explained that
officers are more
aggressive when dealing with a neighborhood’s order
maintenance issues in order to
address more serious crimes in those areas. The findings from
this investigation
suggest that officers assigned to high-crime areas would be less
likely to deal with low-
level traffic violations described in vignettes, lending support to
Klinger’s framework.
Fagan and Davies' (2000) order maintenance explanation of
police officer decision
making should not be dismissed. They described police behavior
that was influenced
not simply by the environment but also by the police organization.
The New York City
Police Department administration expected aggressive street
intervention by street
officers. A second latent component of their study, which was
never explicitly
Table III. Ordered logistic regression for likelihood of traffic stop (N = 204)

Variable | Coefficient | Robust SE | Odds ratio
Sheriff | 0.20 | 0.49 | 1.22
Upper Town | 1.02* | 0.43 | 2.77
Lower Town | 1.96** | 0.30 | 7.09
Black | 0.00 | 0.37 | 1.00
Hispanic | 0.10 | 0.20 | 1.10
Sex | −0.38 | 0.33 | 0.68
Age teen | 0.35* | 0.17 | 1.42
Age_20 | 0.33 | 0.21 | 1.39
Vehicle type | 0.25 | 0.15 | 1.28
Tint | 0.49 | 0.38 | 1.63
Cell phone | 0.58 | 0.33 | 1.79
Speeding | 0.78** | 0.12 | 2.18
Experience | 0.00 | 0.02 | 1.00
Pseudo R² | 0.05 | |
Notes: *p < 0.05; **p < 0.01
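As a reading aid (an added note, not part of the original article): the odds-ratio column is simply the exponentiated coefficient, \( \mathrm{OR} = e^{\beta} \). For example, for the speeding dimension, \( e^{0.78} \approx 2.18 \), so the odds of an officer reporting that they would stop the vehicle roughly double when the vignette describes the driver as speeding, with the other dimensions held constant.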
 
Gas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxGas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxDr.Ibrahim Hassaan
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfSpandanaRallapalli
 
What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPCeline George
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...Nguyen Thanh Tu Collection
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for BeginnersSabitha Banu
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 

Recently uploaded (20)

ENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choomENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choom
 
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxGrade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
 
OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-design
 
Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17
 
Raw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptxRaw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptx
 
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdfAMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
 
Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17
 
Gas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxGas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptx
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdf
 
What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERP
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for Beginners
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 

How managers can make better decisions by going beyond data analysis

  • 3. ...is to take "forms" (miniature thick plastic tubes), heat them, use air pressure to mold them to full bottle size,
  • 4. cool them until they're rigid, and finally fill them with water. Thousands of bottling lines around the world are configured this way. Some of this cannot be other than it is: how hot the form has to be to stretch; the amount of air pressure required to mold the bottle; how fast the bottle can be cooled; how quickly the water can fill the bottle. These are determined by the laws of thermodynamics and gravity—which executives cannot do a thing to change. Still, there's an awful lot they can change. While the laws of science govern each step, the steps themselves don't have to follow the sequence that has
It was in part to remedy this shortcoming that the Ford Foundation supported the creation of academic journals and funded the establishment of doctoral programs at Harvard Business School, the Carnegie Institute of Technology (the predecessor of Carnegie Mellon), Columbia, and the University of Chicago. But is it true that management is a science? And is it right to equate intellectual rigor with data analysis? If the answers to those questions are no and no—as we will suggest in the following pages—then how should managers arrive at their decisions? We'll set out an alternative approach for strategy making and innovation—one that relies less on data analysis and more on imagination, experimentation, and communication. But first let's take a look back at where—or rather with whom—science started.
  • 5. IS BUSINESS A SCIENCE?
What we think of as science began with Aristotle, who as a student of Plato was the first to write about cause and effect and the methodology for demonstrating it. This made "demonstration," or proof, the goal of science and the final criterion for "truth." As such, Aristotle was the originator of the approach to scientific exploration, which Galileo, Bacon, Descartes, and Newton would formalize as "the Scientific Method" 2,000 years later. It's hard to overestimate the impact of science on society. The scientific discoveries of the Enlightenment—deeply rooted in the Aristotelian methodology—led to the Industrial Revolution and the global economic progress that followed. Science solved problems and made the world a better place. Small wonder that we came to regard great scientists like Einstein as latter-day saints. And even smaller wonder that we came to view the scientific method as a template for other forms of inquiry and to speak of "social sciences" rather than "social studies." But Aristotle might question whether we've allowed our application of the scientific method to go too far. In defining his approach, he set clear boundaries around what it should be used for, which was understanding natural phenomena that "cannot be other than they are." Why does the sun rise every day, why do lunar eclipses happen when they do, why do objects always fall to the ground? These things are beyond the control of any human, and science is the study of what makes them occur. However, Aristotle never claimed that all events
  • 6. were inevitable. To the contrary, he believed in free will and the power of human agency to make choices that can radically change situations. In other words, if people choose, a great many things in the world can be other than they are. "Most of the things about which we make decisions, and into which we
IN BRIEF
THE PROBLEM: The big-data revolution has reinforced the belief that all business decisions should be reached through scientific analysis. But this approach has its limits, and it tends to narrow strategic options and hinder innovation.
WHY IT HAPPENS: The scientific method is designed to understand natural phenomena that cannot be changed—the sun will always rise tomorrow. It is not an effective way to evaluate things that do not yet exist.
THE SOLUTION: To make decisions about what could be, managers should devise narratives about possible futures, applying the tools of metaphor, logic, and
  • 7. emotion first described by Aristotle. Then they must hypothesize what would have to be true for those narratives to happen and validate their hypotheses through prototyping.
dominated bottling for decades. A company called LiquiForm demonstrated that after asking, Why can't we combine two steps into one by forming the bottle with pressure from the liquid we're putting into it, rather than using air? And that idea turned out to be utterly doable. Executives need to deconstruct every decision-making situation into cannot and can parts and then test their logic. If the initial hypothesis is that an element can't be changed, the executive needs to ask what laws of nature suggest this. If the rationale for cannot is compelling, then the best approach is to apply a methodology that will optimize the status quo. In that case let science be the master and use its tool kits of data and analytics to drive choices. In a similar way, executives need to test the logic
  • 8. behind classifying elements as cans. What suggests that behaviors or outcomes can be different from what they have been? If the supporting rationale is strong enough, let design and imagination be the master and use analytics in their service. It's important to realize that the presence of data is not sufficient proof that outcomes cannot be different. Data is not logic. In fact, many of the most lucrative business moves come from bucking the evidence. Lego chairman Jørgen Vig Knudstorp offers a case in point. Back in 2008, when he was the company's CEO, its data suggested that girls were much less interested in its toy bricks than boys were: 85% of Lego players were boys, and every attempt to attract more girls had failed. Many of the firm's managers, therefore, believed that girls were inherently less likely to play with the bricks—they saw it as a cannot situation. But Knudstorp did not. The problem, he thought, was that Lego had not yet figured out how to get girls to play with construction toys. His hunch was borne out with the launch of the successful Lego
  • 9. Friends line, in 2012. The Lego case illustrates that data is no more than evidence, and it's not always obvious what it is evidence of. Moreover, the absence of data does not preclude possibility. If you are talking about new outcomes and behaviors, then naturally there is no prior evidence. A truly rigorous thinker, therefore, considers not only what the data suggests but also what within the bounds of possibility could happen. And that requires the exercise of imagination—a very different process from analysis. Also, the division between can and cannot is more fluid than most people think. Innovators will push that boundary more than most, challenging the cannot.
  • 10. BREAKING THE FRAME
The imagination of new possibilities first requires an act of unframing. The status quo often appears to be the only way things can be, a perception that's hard to shake. We recently came across a good example of the status quo trap while advising a consulting firm whose clients are nonprofit organizations. The latter face a "starvation cycle," in which they get generously funded for the direct costs of specific programs but struggle to get support for their indirect costs. A large private foundation, for instance, may fully fund the expansion of a charity's successful Latin American girls' education program to sub-Saharan Africa, yet underwrite only a small fraction of the associated operational overhead and of the cost of developing the program in the first place. This is because donors typically set low and arbitrary levels for indirect costs—
  • 11. usually allowing only 10% to 15% of grants to go toward them, even though the true indirect costs make up 40% to 60% of the total tab for most programs. The consulting firm accepted this framing of the problem and believed that the strategic challenge was figuring out how to persuade donors to increase the percentage allocated to indirect costs. It was considered a given that donors perceived indirect costs to be a necessary evil that diverted resources away from end beneficiaries. We got the firm's partners to test that belief by listening to what donors said about costs rather than selling donors a story about the need to raise reimbursement rates. What the partners heard surprised them. Far from being blind to the starvation cycle, donors hated it and understood their own role in causing it. The problem was that they didn't trust their grantees to manage indirect costs. Once the partners were liberated from their false belief, they soon came up with a wide range of process-oriented solutions that could help nonprofits build their competence at cost management and earn their donors' confidence. Although listening to and empathizing with stakeholders might not seem as rigorous or systematic as analyzing data from a formal survey, it is in fact a tried-and-true method of gleaning insights, familiar to anthropologists, ethnographers, sociologists, psychologists, and other social scientists. Many business leaders, particularly those who apply design thinking and other user-centric approaches to innovation, recognize the importance of qualitative, observational research in understanding human behavior. At Lego, for example, Knudstorp's initial questioning of
  • 12. gender assumptions triggered four years of ethnographic studies that led to the discovery that girls are more interested in collaborative play than boys are, which suggested that a collaborative construction toy could appeal to them. Powerful tool though it is, ethnographic research is no more than the starting point for a new frame. Ultimately, you have to chart out what could be and get people on board with that vision. To do that, you need to create a new narrative that displaces the old frame that has confined people. And the story-making process has principles that are entirely different from the principles of natural science. Natural science explains the world as it is, but a story can describe a world that does not yet exist.
CONSTRUCTING PERSUASIVE NARRATIVES
It may seem unlikely, but Aristotle, the same philosopher who gave us the scientific method, also set out methods for creating compelling narratives. In The Art of Rhetoric he describes a system of persuasion that has three drivers:
• Ethos: the will and character to change the current situation. To be effective, the author of the narrative must possess credibility and authenticity.
• Logos: the logical structure of the argument. This must provide a rigorous case for transforming problems into possibilities, possibilities into ideas, and ideas into action.
• Pathos: the capacity to empathize. To be capable of inspiring movement on a large scale, the author
  • 13. must understand the audience. A multibillion-dollar merger of two large insurance companies offers an example of how to use ethos, logos, and pathos. The two firms were longtime competitors. There were winners and losers in the deal, and employees at all levels were nervous and unsettled. To complicate matters, both firms had grown by acquisition, so in effect this was a merger of 20 or 30 different cultures. These smaller legacy groups had been independent and would resist efforts to integrate them to capture synergies. On top of that, the global financial crisis struck just after the merger, shrinking the industry by 8%. So the merged enterprise's leaders faced a double challenge: a declining market and a skeptical organizational culture. The normal approach to postmerger integration is rational and reductionist: Analyze the current cost structures of the two organizations and combine them into one smaller structure—with the attendant layoffs of "redundant" employees. However, the leader of the merged companies did not want to follow the usual drill. Rather, he wanted to build a new organization from the ground up. He supplied the ethos by articulating the goal of accomplishing something bigger and better than a standard merger integration. However, he needed the logos—a powerful and compelling case for a future that was different. He built one around the metaphor of a thriving city. Like a city, the new organization would be a diverse ecosystem
  • 14. that would grow in both planned and unplanned ways. Everybody would be part of that growth and contribute to the city. The logic of a thriving city captured the imagination of employees enough for them to lean into the task and imagine possibilities for themselves and their part of the organization. The effort also required pathos—forging an emotional connection that would get employees to commit to building this new future together. To enlist them, the leadership group took a new approach to communication. Typically, executives communicate postmerger integration plans with town halls, presentations, and e-mails that put employees on the receiving end of messages. Instead the leadership group set up a series of collaborative sessions in which units in the company held conversations about the thriving-city metaphor and used it to explore challenges and design the work in their sphere
  • 15. of activity. How would the claims department look different in the thriving city? What would finance look like? In effect, employees were creating their own mini-narratives within the larger narrative the leaders had constructed. This approach required courage because it was so unusual and playful for such a large organization in a conservative industry. The approach was a resounding success. Within six months, employee engagement scores had risen from a dismal 48% to a spectacular 90%. That translated into performance: While the industry shrank, the company's business grew by 8%, and its customer satisfaction scores rose from an average of 6 to 9 (on a scale of 1 to 10). This case illustrates the importance of another rhetorical tool: a strong metaphor that captures the arc of your narrative in a sentence. A well-crafted metaphor reinforces all three elements of persuasion. It makes logos, the logical argument, more compelling and strengthens pathos by helping the audience connect to that argument. And finally, a more compelling and engaging argument enhances the moral authority and credibility of the leader—the ethos.
WHY METAPHORS MATTER
We all know that good stories are anchored by powerful metaphors. Aristotle himself observed, "Ordinary words convey only what we know already; it is from metaphor that we can best get hold of something fresh." In fact, he believed that mastery of metaphor was the key to rhetorical
  • 16. success: "To be a master of metaphor is the greatest thing by far. It is…a sign of genius," he wrote. It's perhaps ironic that this proposition about an unscientific construct has been scientifically confirmed. Research in cognitive science has demonstrated that the core engine of creative synthesis is "associative fluency"—the mental ability to connect two concepts that are not usually linked and to forge them into a new idea. The more diverse the concepts, the more powerful the creative association and the more novel the new idea. With a new metaphor, you compare two things that aren't usually connected. For instance, when Hamlet says to Rosencrantz, "Denmark's a prison," he is associating two elements in an unusual way. Rosencrantz knows what "Denmark" means, and he knows what "a prison" is. However, Hamlet presents a new concept to him that is neither the Denmark he knows nor the prisons he knows. This third element is the novel idea or creative synthesis produced by the unusual combination. When people link unrelated concepts, product innovations often result. Samuel Colt developed the revolving bullet chamber for his famous pistol after working on a ship as a young man and becoming fascinated by the vessel's wheel and the way it could spin or be locked by means of a clutch. A Swiss engineer was inspired to create the hook-and-loop model of Velcro after walking in the mountains and noticing the extraordinary
  • 17. adhesive qualities of burrs that stuck to his clothing. Metaphor also aids the adoption of an innovation by helping consumers understand and relate to it. The automobile, for instance, was initially described as "a horseless carriage," the motorcycle as "a bicycle with a motor." The snowboard was simply "a skateboard for the snow." The very first step in the evolution that has made the smartphone a ubiquitous and essential device was the launch in 1999 of Research in Motion's BlackBerry 850. It was sold as a pager that could also receive and send e-mails—a comforting metaphor for initial users. One needs only to look at the failure of the Segway to see how much harder it is to devise a compelling narrative without a good metaphor. The machine, developed by superstar inventor Dean Kamen and hyped as the next big thing, was financed by hundreds of millions in venture
  • 18. capital. Although it's a brilliant application of advanced technology, hardly anyone uses it. Many rationalizations can be made for its failure—the high price point, the regulatory restrictions—but we would argue that a key reason is that the Segway is analogous with absolutely nothing at all. It is a little wheeled platform on which you stand upright and largely motionless while moving forward. People couldn't relate to it. You don't sit, as you do in a car, or pedal, as you do on a bicycle, or steer it with handles, as you do a motorcycle. Think of the last time you saw a Segway in use. You probably thought the rider looked laughably geeky on the contraption. Our minds don't take to the Segway because there is no positive experience to compare it to.
  • 19. We're not saying that an Aristotelian argument can't be made without a metaphor; it is just much harder. A horseless carriage is easier to sell than the Segway.
CHOOSING THE RIGHT NARRATIVE
When you're facing decisions in the realm of possibilities, it's useful to come up with three or four compelling narratives, each with a strong metaphor, and then put them through a testing process that will help you reach consensus around which one is best. What does that entail? In the cannot world, careful analysis of data leads to the optimal decision. But in the can world, where we are seeking to bring something into existence, there is no data to analyze. To evaluate your options, you need to do the following:
Clarify the conditions. While we have no way of proving that a proposed change will have the desired effect, we can specify what we think would have to be true about the world for it to work. By considering this rather than debating what is true about the world as it is, innovators can work their way toward a consensus. The idea is to have the group agree on whether it can make most of those conditions a reality—and will take responsibility for doing so. This was the approach pursued many years ago by a leading office furniture company that had developed a new chair. Although it was designed to be radically superior to anything else on the market, the chair was expensive to make and would need to be sold at twice an office chair's typical price. The quantitative market
  • 20. research showed that customers reacted tepidly to the new product. Rather than giving up, the company asked what would have to be true to move customers from indifference to passion. It concluded that if customers actually tried the chair, they would experience its breakthrough performance and become enthusiastic advocates. The company went to market with a launch strategy based on a customer trial process, and the chair has since become the world's most profitable and popular office chair. Soon after, the company's managers asked themselves the same question about a new office design concept that eliminated the need to build walls and install either flooring or ceilings to create office spaces. This product could be installed into the raw space of a new building, dramatically simplifying and lowering the cost of building out office space. It was clear that the company's customers, building tenants, would be interested. But for the new system to succeed, landlords would also have to embrace it. Unfortunately, the new system would eliminate the revenues they typically made on office build-outs, so it was unlikely that they would cooperate in applying it, despite its advantages to the tenants. The project was killed.
Create new data. The approach to experimentation in the can world is fundamentally different from the one in the cannot world. In the cannot world, the task is to access and compile the relevant data. Sometimes that involves simply looking it up—from a table in the Bureau of Labor Statistics database, for example. Other times, it means engaging in an effort to uncover it—such as through a survey. You may also have to apply accepted statistical tests to determine whether the data gathered demonstrates that the
  • 21. proposition—say, that consumers prefer longer product life to greater product functionality—is true or false. In the can world, the relevant data doesn't exist because the future hasn't happened yet. You have to create the data by prototyping—giving users something they haven't seen before and observing and recording their reactions. If users don't respond as you expected, you plumb for insights into how the prototype could be improved. And then repeat the process until you have generated data that demonstrates your innovation will succeed. Of course, some prototyped ideas are just plain bad. That's why it's important to nurture multiple narratives. If you develop a clear view of what would have to be true for each and conduct prototyping exercises for all of them, consensus will emerge about which narrative is most compelling in action. And involvement in the process will help the team get ready to assume responsibility for putting the chosen narrative into effect.
THE FACT THAT scientific analysis of data has made the world a better place does not mean that it should drive every business decision. When we face a context in which things cannot be other than they are, we can and should use the scientific method to understand that immutable world faster and more thoroughly than any of our competitors. In this context the development of more-sophisticated data analytics and the enthusiasm for big data are unalloyed assets. But when we use science in contexts in which things can be other than they are, we inadvertently convince ourselves that change isn't possible. And that will leave
  • 22. the field open to others who invent something better—and we will watch in disbelief, assuming it's an anomaly that will go away. Only when it is too late will we realize that the insurgent has demonstrated to our former customers that things indeed can be different. That is the price of applying analytics to the entire business world rather than just to the appropriate part of it. HBR Reprint R1705L
ROGER L. MARTIN is the director of the Martin Prosperity Institute and a former dean of the Rotman School of Management in Toronto, and a coauthor of Playing to Win: How Strategy Really Works (Harvard Business Review Press, 2013). TONY GOLSBY-SMITH is the CEO and founder of Second Road, a consulting firm based in Sydney, Australia, that is now part of Accenture Strategy.
Copyright 2017 Harvard Business Publishing. All Rights Reserved.
Risk Management Insight
FAIR (FACTOR ANALYSIS OF INFORMATION RISK)
Basic Risk Assessment Guide
All Content Copyright Risk Management Insight, LLC

NOTE: Before using this assessment guide
Using this guide effectively requires a solid understanding of FAIR concepts:
‣ As with any high-level analysis method, results can depend upon variables that may not be accounted for at this level of abstraction
‣ The loss magnitude scale described in this section is adjusted for a specific organizational size and risk capacity. Labels used in the scale (e.g., "Severe", "Low") may need to be adjusted when analyzing organizations of different sizes
‣ This process is a simplified, introductory version that may not be appropriate for some analyses

Basic FAIR analysis comprises ten steps in four stages:

Stage 1 – Identify scenario components
  1. Identify the asset at risk
  2. Identify the threat community under consideration
Stage 2 – Evaluate Loss Event Frequency (LEF)
  3. Estimate the probable Threat Event Frequency (TEF)
  4. Estimate the Threat Capability (TCap)
  5. Estimate Control Strength (CS)
  6. Derive Vulnerability (Vuln)
  7. Derive Loss Event Frequency (LEF)
Stage 3 – Evaluate Probable Loss Magnitude (PLM)
  8. Estimate worst-case loss
  9. Estimate probable loss
Stage 4 – Derive and articulate Risk
  10. Derive and articulate Risk

The FAIR factor taxonomy behind these steps decomposes Risk as follows:
  Risk
    Loss Event Frequency (LEF)
      Threat Event Frequency (TEF): Contact, Action
      Vulnerability (Vuln): Control Strength, Threat Capability
    Probable Loss Magnitude (PLM)
      Primary Loss Factors: Asset Loss Factors, Threat Loss Factors
      Secondary Loss Factors: Organizational Loss Factors, External Loss Factors
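To make the flow of the ten steps concrete, here is a minimal Python sketch of a worksheet object whose fields map onto the stages above. The class and field names are illustrative assumptions, not part of the FAIR guide; the derived fields are filled in at Steps 6, 7, and 10 using the lookup tables shown later.

    from dataclasses import dataclass
    from typing import Optional

    RATINGS = ("VL", "L", "M", "H", "VH")   # the five-point scale used throughout the guide

    @dataclass
    class BasicFairWorksheet:
        # Stage 1 - identify scenario components
        asset: str = ""                         # Step 1
        threat_community: str = ""              # Step 2
        # Stage 2 - evaluate Loss Event Frequency
        tef: Optional[str] = None               # Step 3 (estimated)
        tcap: Optional[str] = None              # Step 4 (estimated)
        cs: Optional[str] = None                # Step 5 (estimated)
        vuln: Optional[str] = None              # Step 6 (derived from TCap and CS)
        lef: Optional[str] = None               # Step 7 (derived from TEF and Vuln)
        # Stage 3 - evaluate Probable Loss Magnitude
        worst_case_loss: Optional[str] = None   # Step 8 (magnitude label)
        probable_loss: Optional[str] = None     # Step 9 (magnitude label)
        # Stage 4 - derive and articulate risk
        risk: Optional[str] = None              # Step 10 (derived from LEF and PLM)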
Stage 1 – Identify Scenario Components

Step 1 – Identify the Asset(s) at Risk
In order to estimate the control and value characteristics within a risk analysis, the analyst must first identify the asset (object) under evaluation. If a multilevel analysis is being performed, the analyst will need to identify and evaluate the primary asset (object) at risk and all meta-objects that exist between the primary asset and the threat community. This guide is intended for simple, single-level risk analyses and does not describe the additional steps required for a multilevel analysis.
Asset(s) at risk: ______________________________________________________

Step 2 – Identify the Threat Community
In order to estimate Threat Event Frequency (TEF) and Threat Capability (TCap), a specific threat community must first be identified. At minimum, when evaluating the risk associated with malicious acts, the analyst has to decide whether the threat community is human or malware, and internal or external. In most circumstances it's appropriate to define the threat community more specifically – e.g., network engineers, cleaning crew, etc. – and characterize the expected nature of the community. This document does not include guidance on how to perform broad-spectrum (i.e., multi-threat-community) analyses.
Threat community: ______________________________________________________
Characterization:
  • 28. Contributing factors: Contact Frequency, Probability of Action Very High (VH) > 100 times per year High (H) Between 10 and 100 times per year Moderate (M) Between 1 and 10 times per year Low (L) Between .1 and 1 times per year Very Low (VL) < .1 times per year (less than once every ten years) Rationale FAIR™ Basic Risk Assessment Guide All Content Copyright Risk Management Insight, LLC Step 4 – Threat Capability (Tcap) The probable level of force that a threat agent is capable of applying against an asset Contributing factors: Skill, Resources Very High (VH) Top 2% when compared against the overall threat population
  • 29. High (H) Top 16% when compared against the overall threat population Moderate (M) Average skill and resources (between bottom 16% and top 16%) Low (L) Bottom 16% when compared against the overall threat population Very Low (VL) Bottom 2% when compared against the overall threat population Rationale FAIR™ Basic Risk Assessment Guide All Content Copyright Risk Management Insight, LLC Step 5 – Control strength (CS) The expected effectiveness of controls, over a given timeframe, as measured against a baseline level of force Contributing factors: Strength, Assurance Very High (VH) Protects against all but the top 2% of an avg. threat population High (H) Protects against all but the top 16% of an avg. threat population
  • 30. Moderate (M) Protects against the average threat agent Low (L) Only protects against bottom 16% of an avg. threat population Very Low (VL) Only protects against bottom 2% of an avg. threat population Rationale FAIR™ Basic Risk Assessment Guide All Content Copyright Risk Management Insight, LLC Step 6 – Vulnerability (Vuln) The probability that an asset will be unable to resist the actions of a threat agent Tcap (from step 4): CS (from step 5): Vulnerability VH VH VH VH H M H VH VH H M L Tcap M VH H M L VL L H M L VL VL
Step 6 – Vulnerability (Vuln)
The probability that an asset will be unable to resist the actions of a threat agent.
TCap (from step 4):          CS (from step 5):

Vulnerability (rows = TCap, columns = Control Strength):

                  CS:  VL    L     M     H     VH
  TCap  VH             VH    VH    VH    H     M
        H              VH    VH    H     M     L
        M              VH    H     M     L     VL
        L              H     M     L     VL    VL
        VL             M     L     VL    VL    VL

Vuln (from matrix above):

Step 7 – Loss Event Frequency (LEF)
The probable frequency, within a given timeframe, that a threat agent will inflict harm upon an asset.
TEF (from step 3):           Vuln (from step 6):

Loss Event Frequency (rows = TEF, columns = Vulnerability):

                Vuln:  VL    L     M     H     VH
  TEF   VH             M     H     VH    VH    VH
        H              L     M     H     H     H
        M              VL    L     M     M     M
        L              VL    VL    L     L     L
        VL             VL    VL    VL    VL    VL

LEF (from matrix above):
  • 33. Modification Deny Access Magnitude Range Low End Range High End Severe (SV) $10,000,000 -- High (H) $1,000,000 $9,999,999 Significant (Sg) $100,000 $999,999 Moderate (M) $10,000 $99,999 Low (L) $1,000 $9,999 Very Low (VL) $0 $999 FAIR™ Basic Risk Assessment Guide All Content Copyright Risk Management Insight, LLC Step 9 – Estimate probable loss Estimate probable loss magnitude using the following three steps: ‣ Identify the most likely threat community action(s) ‣ Evaluate the probable loss magnitude for each loss form ‣ “Sum” the magnitudes Loss Forms
  • 34. Threat Actions Productivity Response Replacement Fine/Judgments Comp. Adv. Reputation Access Misuse Disclosure Modification Deny Access Magnitude Range Low End Range High End Severe (SV) $10,000,000 -- High (H) $1,000,000 $9,999,999 Significant (Sg) $100,000 $999,999 Moderate (M) $10,000 $99,999 Low (L) $1,000 $9,999 Very Low (VL) $0 $999 FAIR™ Basic Risk Assessment Guide All Content Copyright Risk Management Insight, LLC Stage 4 – Derive and Articulate Risk Step 10 – Derive and Articulate Risk
  • 35. The probable frequency and probable magnitude of future loss Well-articulated risk analyses provide decision-makers with at least two key pieces of information: ‣ The estimated loss event frequency (LEF), and ‣ The estimated probable loss magnitude (PLM) This information can be conveyed through text, charts, or both. In most circumstances, it’s advisable to also provide the estimated high-end loss potential so that the decision-maker is aware of what the worst-case scenario might look like. Depending upon the scenario, additional specific information may be warranted if, for example: ‣ Significant due diligence exposure exists ‣ Significant reputation, legal, or regulatory considerations exist Risk Severe H H C C C High M H H C C PLM Significant M M H H C Moderate L M M H H Low L L M M M Very Low L L M M M VL L M H VH
  • 36. LEF LEF (from step 7): PLM (from step 9): WCLM (from step 8): Key Risk Level C Critical H High M Medium L Low FAIR™ Basic Risk Assessment Guide All Content Copyright Risk Management Insight, LLC Code Galore Caselet: Using COBIT® 5 for Information Security Company Profile – Code Galore Background Information The Problems Your Role Your Tasks Figures
Code Galore Caselet: Using COBIT® 5 for Information Security
© 2013 ISACA. All rights reserved.

Agenda: Company Profile – Code Galore; Background Information; The Problems; Your Role; Your Tasks; Figures; Notes; Questions

Company Profile – Code Galore
• Start-up company founded in 2005
• One office in Sunnyvale, California, USA
• 10 remote salespeople and a few with space at resellers' offices
• Approximately 100 total staff; about one-third work in engineering

Background Information – What We Do
Code Galore is building comprehensive business function automation software that performs many functions (decision making in approaching new initiatives, goal setting and tracking, financial accounting, a payment system, and much more). The software is largely the joint brainchild of the Chief Technology Officer (CTO) and a highly visionary Marketing Manager who left the company a year ago.

Background Information – Financials
The company is financed 100% by investors who are extremely anxious to make a profit. Investors have invested more than US $35 million since inception and have not received any returns. The organization expected a small profit in the last two quarters; however, the weak economy led to the cancellation of several large orders. As a result, the organization was in the red each quarter by approximately US $250,000.
Code Galore is a privately held company with a budget of US $15 million per year. Sales last year totaled US $13.5 million (as mentioned earlier, the company came within US $250,000 of being profitable each of the last two quarters). The investors hold the preponderance of the company's stock; share options are given to employees in the form of stock options that can be purchased for US $1 per share if the company ever goes public. Code Galore spends about five percent of its annual budget on marketing. Its marketing efforts focus on portraying other financial function automation applications as 'point solutions' in contrast to Code Galore's product.

Background Information – Org. Structure
Figure 1—Code Galore Organisational Chart (roles shown: CEO; CSO; VP, Finance; VP, Business; CTO; VP, Human Resources; Security Administrator; Sales Mgr; Accounting Dir.; Sr. Financial Analyst; Infrastructure Mgr.; Sys. Dev. Mgr.; HR Manager)

The board of directors:
• Consists of seasoned professionals with many years of experience in the software industry
• Is scattered all over the world and seldom meets, except by teleconference
• Is uneasy with Code Galore being stretched so thin financially, and a few members have tendered their resignations within the last few months

The CEO:
• Is the former chief financial officer (CFO) of Code Galore, who replaced the original CEO when he resigned to pursue another opportunity two years ago
• Has a good deal of business knowledge and a moderate amount of experience as a C-level officer, but no prior experience as a CEO
• As a former CFO, tends to focus more on cost cutting than on creating a vision for developing more business and getting better at what Code Galore does best

Background Information – Operational
Engineers perform code installations. The time to get the product completely installed and customized to the customer's environment can exceed one month, with costs higher than US $60,000 to the customer. Labour and purchase costs are too high for small and medium-sized businesses. So far, only large companies in the US and Canada have bought the product. C-level officers and board members know that they have developed a highly functional, unique product for which there is really no competition. They believe that, in time, more companies will become interested in this product, but the proverbial time bomb is ticking. Investors have stretched themselves to invest US $35 million in the company and are unwilling to invest much more.

Background Information – Industry
Business function automation software is a profitable area for many software vendors because it automates tasks that previously had to be performed manually or that software did not adequately support. The business function automation software arena has many products developed by many vendors. However, Code Galore is a unique niche player that does not really compete (at least on an individual basis) with other business automation software companies.

Background Information – Products
The product is comprehensive—at least four other software products would have to be purchased and implemented to cover the range of functions that Code Galore's product covers. Additionally, the product integrates information and statistics throughout all functions—each function is aware of what is occurring in the other functions and can adjust what it does accordingly, leading to better decision aiding.

Background Information – Sales
Sales have been slower than expected, mainly due to a combination of the economic recession and the high price and complexity of the product. The price is not just due to the cost of software development; it also is due to the configuration labour required to get the product running suitably for its customers.
  • 44. Acquisition Code Galore is in many ways fighting for its life, and the fact that, four months ago, the board of directors made the decision to acquire a small software start-up company, Skyhaven Software, has not helped the cash situation. Skyhaven consists of approximately 15 people, mostly programmers who work at the company’s small office in Phoenix, Arizona, USA. Originally, the only connection between your network and Skyhaven’s was an archaic public switched telephone network (PSTN). Setting up a WAN Two months ago, your company’s IT director was tasked with setting up a dedicated wide area network (WAN) connection to allow the former Skyhaven staff to remotely access Code Galore’s internal network and vice versa. You requested that this implementation be delayed until the security implications of having this new access route into your network were better understood, but the CEO denied your request on the grounds that it would delay a critical business initiative, namely getting Skyhaven’s code integrated into Code Galore’s. 15 The Problems Information Security More recently, you have discovered that the connection does not require a password for access and that, once a connection to the internal network is established from outside the network, it is possible to connect to every server within the network, including the server that holds Code Galore’s source code and software library and the server that houses employee payroll, benefits and medical insurance information.
  • 45. Fortunately, access control lists (ACLs) limit the ability of anyone to access these sensitive files, but a recent vulnerability scan showed that both servers have vulnerabilities that could allow an attacker to gain unauthorised remote privileged access. You have told the IT director that these vulnerabilities need to be patched, but because of the concern that patching them may cause them to crash or behave unreliably and because Code Galore must soon become profitable or else, you have granted the IT director a delay of one month in patching the servers. 16 The Problems – Overview Bots What now really worries you is that, earlier today, monitoring by one of the security engineers who does some work for you has shown that several hosts in Skyhaven’s network were found to have bots installed in them. Source Code Furthermore, one of the Skyhaven programmers has told you that Skyhaven source code (which is to be integrated into Code Galore’s source code as soon as the Skyhaven programmers are through with the release on which they are currently working) is on just about every Skyhaven machine, regardless of whether it is a workstation or server. 17 The Problems – Overview Code Galore vs. Skyhaven Employee knowledge Code Galore employees are, in general, above average in their
  • 46. knowledge and awareness of information security, due in large part to an effective security awareness programme that you set up two months after you started working at Code Galore and have managed ever since. You offer monthly brown bag lunch events in a large conference room, display posters reminding employees not to engage in actions such as opening attachments that they are not expecting, and send a short monthly newsletter informing employees of the direction in which the company is going in terms of security and how they can help. Very few incidents due to bad user security practices occurred until Skyhaven Software was acquired. Skyhaven’s employees appear to have almost no knowledge of information security. You also have discovered that the Skyhaven employee who informally provides technical assistance does not make backups and has done little in terms of security configuration and patch management. 18 The Problems – Overview 19 Your Role Hired two years ago as the only Chief Security Officer (CSO) this company has ever had. Report directly to the Chief Executive Officer (CEO). Attend the weekly senior management meeting in which goals are set, progress reports are given and issues to be resolved are discussed. The Information Security Department consists of just you; two members of the security engineering team from software are available eight hours each week. 10 years of experience as an information security manager, five of which as a CSO, but you have no previous experience in the
  • 47. software arena. Four years of experience as a junior IT auditor. Undergraduate degree in managing information systems and have earned many continuing professional education credits in information security, management and audit areas. Five years ago, you earned your CISM certification. The focus here is not on a business unit, but rather on Code Galore as a whole, particularly on security risk that could cripple the business. Due primarily to cost-cutting measures the CEO has put in place, your annual budget has been substantially less than you requested each year. Frankly, you have been lucky that no serious incident has occurred so far. You know that in many ways your company has been tempting fate. You do the best you can with what you have, but levels of unmitigated risk in some critical areas are fairly high. Your Role and the Business Units 20 Mr. Wingate’s focus on cost cutting is a major reason that you have not been able to obtain more resources for security risk mitigation measures. He is calm and fairly personable, but only a fair communicator, something that results in your having to devote extra effort in trying to learn his expectations of your company’s information
  • 48. security risk mitigation effort and keeping him advised of risk vectors and major developments and successes of this effort. 21 Your Role and the CEO, Ernest Wingate Code Galore’s IT director is Carmela Duarte. She has put a system of change control into effect for all IT activities involving hardware and software. This system is almost perfect for Code Galore—it is neither draconian nor too lax and very few employees have any complaints against it. You have an excellent working relationship with her, and although she is under considerable pressure from her boss, the CTO, and the rest of C-level management to take shortcuts, she usually tries to do what is right from a security control perspective. She is working hard to integrate the Skyhaven Software network into Code Galore’s, but currently, there are few resources available to do a very thorough job. She would also do more for the sake of security risk mitigation if she had the resources. Carmela has worked with Code Galore since 2006, and she is very much liked and respected by senior management and the employees who work for her. 22 Your Role and the IT Director, Carmela Duarte
  • 49. You believe that Code Galore's (but not Skyhaven Software's) security risk is well within the risk appetite of the CEO and the board of directors. You have a good security policy (including acceptable use provisions) and standards in place, and you keep both of them up to date. You have established a yearly risk management cycle that includes asset valuation, threat and vulnerability assessment, risk analysis, controls evaluation and selection, and controls effectiveness assessment, and you are just about ready to start a controls evaluation when you suddenly realise that something more important needs to be done right away (outlined in The Problem section). 23 Your Tasks Using the figure 4 template, you need to modify the qualitative risk analysis that you performed six months ago to take into account the risk related to Skyhaven Software. The major risk events identified during this risk analysis are shown in figure 2. You must not only head this effort; for all practical purposes, you will be the only person from Code Galore working on it. 24 Your Tasks – Qualitative Risk Analysis Your revision of the last risk analysis will not only bring Code Galore up to date concerning its current risk landscape, but will also provide the basis for your requesting additional resources
  • 50. to mitigate new, serious risk and previously unmitigated or unsuitably mitigated risk. You may find that some risk events are lower in severity than before, possibly to the point that allocating further resources to mitigate them would not be appropriate. This may help optimise your risk mitigation investments. To the degree that you realistically and accurately identify new and changed risk, you will modify the direction of your information security practice in a manner that, ideally, lowers the level of exposure of business processes to major risk and facilitates growth of the business. Failure to realistically and accurately identify new and changed risk will result in blindness to relevant risk that will lead to unacceptable levels of unmitigated risk. 25 Your Tasks – Qualitative Risk Analysis You must revise the most recent risk analysis, not only by reassessing all the currently identified major risk, but also by adding at least three risk events that were not previously identified. COBIT 5 provides tools that might be helpful in determining the best approach to reassessing and prioritising the major risk events, in EDM03, Ensure risk optimisation. You must also provide a clear and complete rationale for the risk events, their likelihood and impacts (outlined in the Alternatives With Pros and Cons of Each section). 26 Your Tasks – Qualitative Risk Analysis
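To make the reassessment concrete, here is a minimal sketch (not part of the ISACA caselet or its figure 4 template) of how a qualitative risk register could be organised in code: each risk event gets an ordinal likelihood and impact rating, a combined severity drives prioritisation, and newly identified Skyhaven-related events are added alongside the existing ones. The scale values, severity threshold and specific ratings below are illustrative assumptions, not figures from the caselet.

```python
# Hypothetical qualitative risk register for the Code Galore exercise.
# Scales, ratings and the review threshold are illustrative assumptions.
from dataclasses import dataclass

LEVELS = {"low": 1, "medium-low": 2, "medium": 3, "medium-high": 4, "high": 5}

@dataclass
class RiskEvent:
    name: str
    likelihood: str   # one of LEVELS
    impact: str       # one of LEVELS
    rationale: str

    @property
    def severity(self) -> int:
        # Simple likelihood x impact product used only for ranking.
        return LEVELS[self.likelihood] * LEVELS[self.impact]

register = [
    RiskEvent("Prolonged IT outage", "low", "medium",
              "Assumed improved recovery posture (hypothetical rating)"),
    RiskEvent("Unauthorised access via the Skyhaven WAN link", "high", "high",
              "No password on the connection; reachable servers hold source code and HR data"),
    RiskEvent("Bot-infected Skyhaven hosts spread malware", "medium-high", "high",
              "Bots already observed on several Skyhaven machines"),
    RiskEvent("Loss or theft of unprotected Skyhaven source code", "medium", "high",
              "Source code sits on nearly every Skyhaven workstation and server"),
]

# Rank the register so the request for additional resources targets the worst exposures first.
for risk in sorted(register, key=lambda r: r.severity, reverse=True):
    flag = "mitigate now" if risk.severity >= 12 else "monitor"
    print(f"{risk.severity:>2}  {risk.name}: {risk.likelihood}/{risk.impact} -> {flag}")
```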
  • 51. The rationale for each security-related risk that you select must include a discussion of the pros and cons associated with identifying and classifying each as a medium-low risk or higher. For example, suppose that you decide that a prolonged IT outage is no longer a medium- to low-level risk, but instead is now a low risk. The pros (purely hypothetical in this case) may be that outage-related risk events are now much lower than before due to, for example, the implementation of a new backup and recovery system that feeds data into an alternative data center (not true in this caselet). In this case, allocating additional resources would therefore be a waste of time and money. 27 Your Tasks – Pros and Cons On the con side, lowering the severity of a prolonged IT outage risk may result in underestimation of this source of risk, which could result in failing to allocate resources and in a much higher amount of outage-related loss and disruption than Code Galore could absorb, given its somewhat precarious state. 28 Your Tasks – Pros and Cons
  • 52. Exhibits (slides 29–31): Figure 2—Major Risk; Figure 3—Network Diagram; Figure 4—Risk Analysis Template. Since Code Galore is in the business function automation software arena, it should consider using business process automation (BPA), a strategy a business uses to automate processes in order to contain costs. It consists of integrating applications, restructuring labor resources and using software applications throughout the organization. Code Galore is in a very difficult situation. Its existence is uncertain, and money is critical right now. Yet, this company has opened itself up to significant levels of security risk because of acquiring Skyhaven Software and the need for former Skyhaven programmers to access resources within the corporate network. Worse yet, even if the chief security officer (CSO) in this scenario correctly identifies and assesses the magnitude of
  • 53. security risk from acquiring Skyhaven and opening the Code Galore network to connections from the Skyhaven network and prescribes appropriate controls, given Code Galore's cash crunch, not many resources (money and labour) are likely to be available for these controls. 32 Notes All the CSO may be able to do is document the risk and make prioritised recommendations for controls, waiting for the right time, when the company's financial situation improves. If an information security steering committee exists, the CSO must keep this committee fully apprised of changes in risk and solicit input concerning how to handle this difficult situation. At the same time, the CSO should initiate an ongoing effort (if no such effort has been initiated so far) to educate senior management and key stakeholders concerning the potential business impact of the new risk profile. (Note: The kind of situation described in this caselet is not uncommon in real-world settings.) 33 Notes What are the most important business issues and goals for Code Galore? What are the factors affecting the problem related to this case? What are the managerial, organizational, and technological issues and resources related to this case? What role do different decision makers play in the overall
  • 54. planning, implementing and managing of information technology/security applications? What are some of the emerging IT security technologies that should be considered in solving the problem related to the case? 34 Discussion Questions 1-5 In what major ways and areas can information security help the business in reaching its goals? Which element of the confidentiality, integrity and availability (CIA) triad is most critical to Code Galore's business goals, and why? Change leads to risk, and some significant changes have occurred. Which of these changes leads to the greatest risk? Imagine that three of the greatest risk events presented themselves in worst-case scenarios. What would be some of these worst-case scenarios? How can the CSO in this scenario most effectively communicate to senior management the newly identified risk events and the previously identified risk events that have grown because of the changes? 35 Discussion Questions 6-10
  • 55. DATA The Best Approach to Decision Making Combines Data and Managers' Expertise by Paolo Gaudiano JUNE 20, 2017 Data is now the critical tool for managing many corporate functions, including marketing, pricing, supply chain, operations, and more. This movement is being further fueled by the promise of artificial intelligence and machine learning, and by the ease of collecting and storing data about every facet of our daily lives. But is the pendulum starting to swing too far? As a practitioner and teacher of predictive analytics, my greatest concern is what I call the "big data, little brain" phenomenon: managers who rely excessively on data to guide their decisions, abdicating their knowledge and experience. In a typical big data project, a manager engages an internal or external team to collect and process data, hoping to extract insights related to a particular business problem. The big data team has the expertise needed to wrangle raw data into usable form and to select algorithms that can identify statistically significant patterns. The results are then presented
  • 56. to the manager through charts, visualizations, and other types of reports. This scenario is problematic because most managers are not experts in data science, and most data scientists are not business experts. Addressing this dichotomy requires individuals who can “serve as liaisons” between the two, as Todd Clark and Dan Wiesenfeld suggested in a recent HBR article. This, however, is simply a palliative that does not resolve the underlying problem. As Tom Davenport wrote in HBR in 2006, the year before publishing his seminal book, Competing on Analytics, “For analytics-minded leaders, then, the challenge boils down to knowing when to run with the numbers and when to run with their guts.” Rather than reducing reliance on intuition, the advanced methodologies of big data require managers to use even more intuition to make sense of the growing number of outputs and recommendations being generated by data models. Furthermore, the predictive models created by big data methodologies do not incorporate the manager’s unique knowledge of the business. This is tantamount to someone collecting a lot of data and then deciding to throw away half of it — except in this case you are arguably throwing away the more valuable half, because the manager has specific knowledge of the business, while the data science approaches are generic. How can we effectively combine data science and business expertise? In a 2002 HBR article titled “Predicting the Unpredictable,” my business partner Eric Bonabeau introduced the concept of agent-
  • 57. based simulation (ABS), which at that time was a relatively novel approach to solving complex business problems through computer simulations. Fifteen years later, Icosystem (Bonabeau's company, which I am still a core member of) and a number of others have demonstrated the power of ABS as a business management tool. For example, Bonabeau's article described a project with Eli Lilly to develop a new way of managing drug development pipelines. In 2008 Bonabeau and two members of the Eli Lilly R&D leadership published an HBR article in which they reported that the new approach had been able to deliver molecules to Phase II trials "at almost twice the speed and less than a third of the cost of the standard process." Although ABS was first created as a tool for social science research about four decades ago, it is only now starting to gain widespread adoption because of the dramatic increase in available computing power. For instance, Icosystem developed a simulation of the daily behavior of more than 300,000
  • 58. sailors in the U.S. Navy from recruitment to retirement. This type of 20-year simulation can run on a laptop in less than one minute, and it’s enabled the Navy to test in one day more scenarios than they would normally be able to test in one year. But what about the “big data, little brain” problem? One of the most appealing aspects of ABS is that it combines domain expertise and data. The domain expertise is used to define the structure of the simulation, which captures the day-to-day behaviors and interactions unique to each business problem. The data is used partly to refine the details of the simulation and partly to ensure that, as the simulation runs, the resulting outcomes match real-world results. With this approach, the manager’s expertise regains the primary role, and the results of the simulation can be analyzed by the manager and data scientist together, as they both understand the workings of the simulation. Besides increasing transparency, combining domain expertise and data also increases predictive accuracy. Back in 2014 a leading automaker worked with an ABS marketing analytics platform to plan the launch of a new model. The ABS recommended launching the new model six months earlier than the client had planned. In 2016 the automaker launched the new model as recommended; a year later it found that ABS had predicted monthly sales for the first year with 93% accuracy. By combining data and the manager’s expertise into a predictive model, ABS solves complex
  • 59. problems in a transparent way with a high degree of predictive accuracy. The increased availability of commercial ABS tools and didactic materials suggests that this new approach is poised to revolutionize business management. Paolo Gaudiano is president and chief technology officer of Icosystem Corporation, a leader in the theory and application of complexity science, and he recently co-founded Aleria, which uses the same methodology to help organizations quantify the link between diversity and performance. He also teaches a graduate course on Business Complexity at the City College of New York. Follow him on Twitter @icopaolo.
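To make the idea of combining domain structure with data calibration concrete, here is a minimal agent-based simulation sketch in Python. It is an illustration only, not Icosystem's platform or the Navy and automaker models described in the article above. The per-agent behaviour rule and the retention parameter are assumptions; in the approach the article describes, the rule would come from the manager's domain expertise and the parameter would be tuned until simulated outcomes match historical data.

```python
# Toy agent-based simulation: illustrative only, not the models described in the article.
# Domain expertise supplies the per-agent rule; data supplies the target used to calibrate it.
import random

def simulate_retention(n_agents: int, leave_prob: float, years: int, seed: int = 0) -> float:
    """Fraction of agents still 'employed' after the given number of years."""
    rng = random.Random(seed)
    remaining = n_agents
    for _ in range(years):
        # Assumed domain rule: each remaining agent independently leaves with leave_prob per year.
        remaining -= sum(1 for _ in range(remaining) if rng.random() < leave_prob)
    return remaining / n_agents

def calibrate(target_retention: float, n_agents: int = 10_000, years: int = 5) -> float:
    """Pick the leave probability whose simulated outcome best matches observed data."""
    candidates = [p / 100 for p in range(1, 40)]
    return min(candidates,
               key=lambda p: abs(simulate_retention(n_agents, p, years) - target_retention))

if __name__ == "__main__":
    observed_5yr_retention = 0.60          # placeholder for real historical data
    p = calibrate(observed_5yr_retention)
    print(f"calibrated annual leave probability: {p:.2f}")
    # Once calibrated, the same model can be rerun under "what if" scenarios a manager cares about.
    print(f"projected 10-year retention: {simulate_retention(10_000, p, 10):.2f}")
```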
  • 60. Police decision making: an examination of conflicting theories Scott W. Phillips and James J. Sobol Criminal Justice Department, Buffalo State College, Buffalo, New York, USA Abstract Purpose – The purpose of this paper is to compare two conflicting theoretical frameworks that predict or explain police decision making. Klinger's ecological theory proposes that an increased level of serious crimes in an area decreases the likelihood an officer will deal with order-maintenance issues, while Fagan and Davies suggest an increase in low-level disorder will increase order maintenance behavior of police officers. Design/methodology/approach – Using a vignette research design, the authors examine factors that may contribute to police officers' decision to make a traffic stop in four jurisdictions with varying levels of serious crime. Ordered logistic regression with robust standard errors was used in the analysis. Findings – Analysis of the findings demonstrates that officers who work in higher crime areas are less likely to stop a vehicle, as described in the vignettes. Additional predictors of the decision to stop include teenaged drivers and speeding vehicles. Research limitations/implications – The current research is limited to an adequate but fairly small sample size (n = 204) and a research design that examines hypothetical scenarios of police
  • 61. decision making. Further data collection across different agencies with more officers and more variation in crime levels is necessary to extend the current findings. Originality/value – This paper adds to the literature in two primary ways. First, it compares two competing theoretical claims to examine a highly discretionary form of police behavior and second, it uniquely uses a vignette research design to tap into an area of police behavior that is difficult to study (e.g. the decision not to stop). Keywords United States of America, Police, Policing, Decision making, Workload, Traffic stops, Vignettes Paper type Research paper Introduction Police work requires officers to deal with a substantial amount of non-criminal activity, such as resolving disputes (Johnson and Rhodes, 2009) or dealing with problems or very low-level offenses that fall into the broad description “order maintenance activities” (Walker and Katz, 2005). There are potential benefits when officers deal with these low-level problems or offenses, such as reducing the chance of further crime or increasing the officer’s environmental knowledge to improve problem solving (Walker and Katz, 2005). One of the most common types of order maintenance activities takes place in the form of a traffic stop (Walker and Katz, 2005), which could be seen as “a form of order maintenance where the officer has taken action
  • 62. against a suspected individual in order to prevent crime" (Vito and Walsh, 2008, p. 93). Many traffic stop studies were conducted in the past decade to determine if police officers were using race in their decision to stop a vehicle (e.g. Gaines, 2006; Meehan
  • 63. and Ponder, 2002a, b; Mosher et al., 2008; Petrocelli et al., 2003; Schafer et al., 2004; Smith and Petrocelli, 2001; Withrow, 2004a, b). Some of these studies included neighborhood characteristics in their examination of police decision making, such as an officer's perception of the racial makeup of a patrol area (Albert et al., 2005) and the crime rate of a neighborhood (Petrocelli et al., 2003; Withrow, 2004a, b). Whereas many studies of traffic stop decision making were unguided by theory (Engel et al., 2002), the work of Klinger (1997) and Fagan and Davies (2000) provides theoretical guidance to predict or explain how patrol area can impact an officer's decision to stop a person. Both theoretical frameworks, however, offer different predictions based on the type of criminal activity in the patrol area. A postulate within Klinger's (1997) ecological theory assumes that workload influences when formal legal authority is applied. That is, police officers reserve their attention for more serious crimes in areas with higher crime rates or more serious criminal behavior. Conversely, Fagan and Davies's (2000) theory asserts that officers are more aggressive in response to low-level order maintenance problems. This research sought to examine the relationship between the work location of a police officer and its impact on a police officer's judgment to
  • 64. stop a vehicle to determine which theoretical framework is supported. There were at least two justifications for this study. First, because police department policy is often based on theoretically framed research (Engel et al., 2002), it is important to assess components of these two competing theoretical frameworks to determine which is empirically sound when explaining police behavior. Further, if theories are necessarily incomplete (Bernard and Ritti, 1990), the present study may uncover conditions unique to each so they can be appropriately adjusted. Theory Researchers argued that studies of traffic stop decision making should not be viewed as scientific research because they failed to explicitly state the guiding theory of their research (Engel et al., 2002). To address this concern, the present inquiry attempted to shed light on police officer behavior during their routine patrol duties using two conflicting theoretical perspectives. One suggests that low-level offenses increase the likelihood that police officers will stop a person in an effort to deal proactively with bigger problems. The second proposes that officers are less likely to deal with low-level offenses and reserve their limited time and resources to deal with more serious crimes. Order maintenance policing Fagan and Davies (2000) provide a detailed discussion of police decision making,
  • 65. explaining why the 1982 Broken Windows theory of Wilson and Kelling, which focussed on police response in disorderly places, morphed into a policing tactic that focussed on people. It had been theorized that social disorganization in the form of an increased poverty rate, predominately decreased age distribution (i.e. younger population), and population turnover lead to increased crime rates across neighborhoods. Fagan and Davies (2000) explained that while social disorganization predicted rates of disorder in an area (e.g. loitering, public drinking), social disorganization does not predict homicide rates and only weakly predicted robbery rates. Therefore, efforts to control serious crime via disorder policing are unlikely to be effective. Based on the notion that proactive enforcement of minor crimes and disorder would reduce serious crime, the New York City Police Department (NYPD) increased their use
  • 66. of life” issues, such as public drinking or panhandling, under the assumption that police enforcement of laws against these types of crime would reduce more serious criminal behavior. Fagan and Davies (2000) reported that the NYPD order maintenance policing policy was intended to address gun-crimes as one of the more serious criminal behaviors that could be deterred through aggressive enforcement of disorder crimes[1]. A successful order maintenance policing approach would require a “pro-active interdiction” (Fagan and Davies, 2000, p. 475) of anyone suspected of violating even minor offenses. This tactic, however, led to an increased use of “pretextual” stops, where police officers would scrutinize a person for any type of minor offense in order to establish a minimal level of reasonable suspicion to stop and frisk the person in the hopes of discovering a more serious offense (Fagan and Davies, 2000). Ecological workload A neighborhood attribute that likely has a direct impact on police officers’ decision making is the actual workload of the officers who patrol a neighborhood. Klinger (1997) suggested that as the number of calls-for-service and deviance levels of a work location increase, officers have less time to deal with citizens’ complaints. When an officer has less time available for dealing with the work in their patrol area, officers must manage their time by prioritizing the tasks they focus on. This formula pushes the officer’s
  • 67. “towards leniency as deviance increases” (Klinger, 1997, p. 293). When a police officer works in a location that has fewer calls-for-service or lower levels of deviance, the officer is free to use as much time as needed to deal with an incident (Klinger, 1997). Klinger (1997) relies on the work of Donald Black to build his discussion of “leniency,” which Klinger described as the amount of law a police officer applies to an incident. For example, a police officer could spend a substantial amount of time stopping vehicles but never issue a citation. Strictly speaking, not issuing a ticket would be considered a lenient response by the officer. Still, the workload aspect of Klinger’s (1997) ecological theory clearly implies that “leniency” is the actual attention or effort that a police officer devotes toward a problem. As a result, when Klinger stated police officers will be lenient “for increasingly serious crimes as levels of district deviance increases” (Klinger, 1997, p. 293), it is reasonable to assume that police officers who patrol areas with more serious crime would be less likely to focus their attention on traffic stops. Klinger’s work has yet to receive consistent empirical support. For example, Sobol (2010) examined postulates of Klinger’s theory and conceptualized workload as the amount of time officers had “assigned” vs “unassigned” to explain the vigor with which the police used their formal authority. Surprisingly,
  • 68. Sobol found that workload and district crime were negatively correlated (r = -0.16) and that workload did not significantly affect the vigor with which the police used their formal legal authority. Other research shows that neighborhood characteristics influence an officer's decision to "translate" a call-for-service into an official crime report; however, "neighborhood influences vary by crime type" (Varano et al., 2009, p. 560). Looking at studies on traffic stops specifically, to date, no study has included a workload variable in their analysis, but a few studies offer what might be considered reasonable surrogates that help build a foundation for this approach. Contrary to what might be expected within Klinger's (1997) ecological theory, Roh and Robinson (2009) reported that patrol beats with more crime (i.e. hot spots) are related to an increased likelihood of a traffic stop. In addition,
  • 69. traffic stops. Literature review Background: traffic stop research A number of different factors might influence the decision to stop a vehicle: neighborhood aspects, characteristics of the driver, organization influences, and legal factors. Each will be briefly discussed below in order to frame an understanding of the present research. Scholars have advised that a greater understanding of traffic stop behavior is limited because many studies rely on data from one large police department or jurisdiction (Mosher et al., 2008; Novak, 2004; Parker et al., 2004). Police behavior often occurs in a beat or neighborhood context and the use of race in the decision to stop a vehicle “could possibly be more prevalent in racially homogeneous communities” (Novak, 2004, p. 73). Smith and Petrocelli (2001) found that the Part-I crime rate of an area was not related to the police decision to stop a vehicle. Later, Petrocelli et al. (2003) examined multiple neighborhood characteristics in the decision to stop a vehicle, including percent black population of the neighborhood, percent of families below poverty line, percent unemployed in neighborhood, mean family income, and Part-I crimes per 1,000 population. They found that police tended to make more stops in neighborhoods with higher-crime rates. Alpert et al. (2007) examined the racial makeup
  • 70. of neighborhoods where traffic stops occurred and found no connection between racial composition and police stops. Withrow (2004a) found that drivers stopped during the night and driving in higher-crime areas were more likely to be black drivers. Similarly, when using in-car computer queries as a measure of surveillance, Meehan and Ponder (2002b) found that officer scrutiny “significantly increases as [African Americans] travel farther from ‘black’ communities and into whiter neighborhoods” (p. 422). Some research has found that a driver’s race is related to the police officer’s decision to stop a vehicle. Several studies reported a relationship between black drivers and the decision to stop a vehicle (Miller, 2008; Warren et al., 2006), while others have found only a weak (Novak, 2004) or no relationship (Phillips, 2009a) between black drivers and the decision to stop a vehicle. Driver age, however, was found to be related to the decision to stop (Miller, 2008). Further, the driver’s gender (i.e. male) was significantly related to police decision to stop a driver (Miller, 2008; Warren et al., 2006). Early research into the influence of police organizations on an officer’s decision making suggested management style and agency size may impact traffic stop behavior. Wilson (1978) posited that officers who worked in agencies with a legalistic management style “will issue more traffic tickets at a higher rate” (p. 172). Others
  • 71. (Brown, 1981; Mastrofski et al., 1987) found that police officers working in agencies of differing size behave differently in traffic stop situations. More recently, Mosher et al. (2008) reported that most prior research of police decision making in traffic stop situations takes place in only one jurisdiction. This drawback does not allow researchers to determine if organizational characteristics influence the decision making of police officers. Phillips (2009a) analyzed the responses of police officers in two small agencies against sheriff's deputies and found that sheriff's deputies were significantly less likely to stop a vehicle. His study is limited because he collected data in only three law enforcement agencies and the number of officers in this study was small.
  • 72. be stopped. Novak (2004) reported that white drivers are more likely to be stopped for moving violations, unsafe driving, and speeding. It has been suggested that a measure of vehicle characteristics or quality that is involved in a traffic stop should be studied because some cars may be customized in a manner the draws the attention of police officers (Batton and Kadleck, 2004; Ramirez et al., 2000). While vehicle quality has never been clearly operationalized in prior studies, it is suggested that “car effect” (Batton and Kadleck, 2004) could include a poor quality vehicle (Engel and Calnon, 2004) or an older vehicle (Miller, 2008; Warren et al., 2006), Alpert et al. (2007) found that vehicle age had no impact on the decision to stop a vehicle; other research indicated older vehicles were related to the decision to stop a vehicle (Miller, 2008). Phillips (2009a), however, found that a newer vehicle was related to the decision to stop the vehicle. This study As the literature review demonstrated, the decision making of street-level police officers in traffic stop incidents may be influenced by different factors. The few studies that incorporated a neighborhood crime-rate variable (Petrocelli et al., 2003; Withrow, 2004a) found a positive relationship between this dimension and the decision to stop a vehicle. These results tended to support the framework provided by Fagan and Davies (2000). Such findings, however, may be difficult to generalize
  • 73. since their data were collected from one large urban police department (e.g. NYPD). In addition, Klinger’s (1997) discussion provides a general theoretical framework for police behavior, but does not consider the other variables that may mediate the influence of area, such as organizational size, the agencies management style, or the type of law enforcement agency (i.e. local, county, or state). This study sought to examine assumptions from the two competing theoretical models to explain police decision making in traffic stop situations. It offers an empirical examination of the influence of “neighborhood,” as suggested by both Fagan and Davies and Klinger, on the judgment of police officers in traffic stop situations while controlling for various aspects of the incident, including driver characteristics and legal aspects. Two features of this study contribute to our understanding of police decision making. First, data were collected in multiple police agencies of varying sizes, which can help minimize the problem of “aggregation bias” in most other studies of only one large jurisdiction (Mosher et al., 2008, p. 46). Like other studies, however, the respondents do comprise a convenience sample. Second, a vignette research design is used (Rossi, 1979; Rossi and Anderson, 1982), allowing the inclusion of multiple variables into vignettes to examine the decision making of a police officer to stop a vehicle. An additional benefit to the
  • 74. vignette research design is that it may minimize. Withrow (2004b) stated “because there is no record of the individuals not stopped,” most designs cannot determine the influence any variable on getting stopped (p. 229, emphasis in original). The vignette research design minimizes this problem because the design allows for the inclusion of multiple variables and can control for those cases where a person is not stopped by officers. 555 Police decision making Data and methods Study location Data used in this study were collected from police officers in four police agencies in New York State. Table I provides general information on the agencies and jurisdiction. The study locations can be roughly divided into two groups. One is the work district of a large urban police agency with neighborhoods of concentrated population and higher levels of serious crime and disorder, while the other three study locations consist of two small township agencies and a county sheriff’s department with very low crime levels. Including officers from two small agencies and a county sheriff’s department
  • 75. distinguishes this research from other studies of the police and traffic stop decision making because the studies cited in this paper used data primarily collected in large agencies or suburban areas near larger cities. The Lower Town Police Department and the Upper Town Police Departments (all department names are pseudonyms) serve townships and employ part-time and full-time police officers. These townships border each other, as well as a city of approximately 50,000 people (not part of this study). Police officers in the townships furnish routine patrol services, are dispatched to calls by the county sheriffs’ department, and provide no special services, such as detectives. The township police agencies offer a fairly diverse working environment for officers, with traditional style neighborhoods laid out in a grid pattern that include single family and apartment housing, shopping plazas with department stores, grocery stores, small shops, secondary highways with extensive commuter and commercial traffic, and rural areas with farms and rural housing. The third agency is the Lake County Sheriff’s Department. Deputies provide patrol services for a sizable rural area as well as several small towns and villages that employ no other police services. All three agencies serve a fairly homogenous population, and have few violent index crimes. The large police agency that participated in this study was the
  • 76. River City Police Department, specifically the North District (River City has five patrol districts). This agency is also located in Upstate New York. As indicated in Table I, North District is densely populated and is considered fairly common as large city areas go. Jurisdiction Square miles Patrol officers Violent index crimes (2007) Property crimes (burglary, larceny, car theft) Population served Race (white, African American, Others) % Lower Town
  • 77. Table I. Description of research locations (US Census data and New York State Division of Criminal Justice Services crime data, US Census Bureau)
Jurisdiction | Square miles | Patrol officers | Violent index crimes (2007) | Property crimes (burglary, larceny, car theft) | Population served | Race (white, African American, others) %
Lower Town P.D. | 64 | 14 | 17 (2 rapes, 1 robbery, 14 aggravated assaults) | 177 | 8,978 | 93, 3, 4
Upper Town P.D. | 9 | 17 | 16 (4 robberies, 12 aggravated assaults) | 334 | 19,038 | 97, 1, 2
Lake Co. | 552 | 60 | 79 (15 rapes, 14 robberies, 50 aggravated assaults) | 1,336 | 108,714 (a) | 90, 6, 4
North District | 9.6 | 96 | 1,063 (14 murders, 53 rapes, 533 robberies, 462 aggravated assaults) (c) | 5,230 | 78,700 | 44, 34, 7 (b)
Notes: (a) Does not include the population (111,134) of three cities within the county that employ their own police agencies; (b) US Census data for 2000 for all of River City; (c) department data
  • 78. The population of North District is racially diverse compared to the smaller jurisdictions, and has a substantially higher number of violent index and property crimes than the other agencies. Research design A vignette research design employs aspects of a random experiment by incorporating each variable as a unique dimension within the vignette and randomly varying the level of each dimension between vignettes (Rossi, 1979; Rossi and Anderson, 1982). Vignettes are then randomly assigned to respondents. This design measures respondents' judgment or decision making as the level of each dimension changes. That is, as the level of one dimension changes, its influence in the judgment or decision-making process may shift in relation to another dimension. The vignettes
  • 79. used for this study were constructed along several variables (discussed below), and vignettes have been successfully used to examine police opinion and decision making in other work situations (Eterno, 2003; Hickman et al., 2001; Phillips, 2009b; Phillips and Sobol, 2010). Vignettes possess aspects of a controlled, random experiment and, therefore, provide a benefit in studying the judgment of police officers in traffic stop incidents: collecting data on vehicles not stopped. When there is an absence of data regarding citizens not stopped, as is the case in almost all prior traffic stop research, untangling the significant aspects of those who are stopped from those who are not is unworkable, making it impossible to discover which dynamics explain variations in police officer decision making. Further, using vignettes provided a unique opportunity to study multiple factors that may influence a police officer's decision to stop a vehicle prior to actually stopping a vehicle. Most studies of traffic stop decision making collect data after the stop has occurred. Data collection A total of 100 survey packets, each of which included randomly constructed vignettes exploring different activities police officers encounter (domestic violence incidents, use of force incidents, traffic stop incidents), were constructed. Each packet contained two randomly selected vignettes describing a driver and vehicle that
  • 80. they encounter during routine patrol. Police officers in the sample agencies were provided with a randomly selected survey packet. Several methods were used to improve the validity of responses because police officers may be reluctant to respond to outsiders who ask questions about their behavior. A cover letter informed the respondents that their answers would not be seen by police management. Second, officer identities would be kept anonymous. Two methods were used to collect data in the smaller agencies during the summer of 2005. First, survey packets were passed out to patrol deputies in the Lake County Sheriff’s Department during all roll-call periods over the course of several days. Deputies completed the surveys during that time and returned them in a sealed envelope to the researcher. In total, 39 surveys were passed out and 38 were returned completed. The Upper Town Police Department does not have a routine roll-call period; however, during the data collection period the department had scheduled a department staff meeting. The police chief allowed the researcher to distribute surveys to police officers during this meeting. A total of 13 survey packets were distributed to the available officers and all were returned completed. The second method for collecting data was used in the Lower Town Police Department because Lower Town does not
  • 81. 557 Police decision making have a routine roll-call period. Survey packets were left for the officers in their departmental mailboxes. Officers returned the surveys to the police chief in a sealed envelope, and they were returned in bulk to the researcher. In total, ten surveys were distributed and nine were completed. The second data collection period occurred during the summer of 2006. A graduate student who works as an officer in River City distributed newly constructed survey packets to patrol officers in North District during all roll-call periods where they were completed and returned in a sealed envelope. Other than a brief verbal explanation of the study and the anonymity of the respondents, the graduate student had no interaction with the officers. The packets contained traffic stop vignettes constructed in an identical fashion as those used in the smaller agencies. In total, 45 survey packets were distributed and 42 were completed. A total of 102 police officers completed two vignettes and each completed vignette represented a case in the data file. The total number of complete vignettes from all respondents in the four police agencies thus was 204. Table II provides a description of the variables used in this
  • 82. study. Dependent variable Many studies of traffic stop decision making use multiple dependent variables, such as the original decision to stop a vehicle, the decision to search the vehicle, and how the stop ended (i.e. no action, warning, citation) (Engel and Calnon, 2004; Petrocelli et al., 2003). One deficiency when using a vignette design is that it is difficult to include “contingency” questions that would elicit subsequent decisions as an incident progresses through time. As a result, this study used only one dependent variable: a police officer’s self-reported likelihood of stopping a vehicle on a five-point Likert scale (1¼very unlikely to stop traffic; 5¼very likely to stop traffic). Independent variables – vignette dimensions and officer characteristics The following is a review of the vignette dimensions used in this study. For a detailed discussion of the justification for these dimensions, see Phillips (2009a). Research vignettes described three driver characteristics. The first dimension was the driver’s Variables Range M SD Dependent variables Stop 1-5 3.61 1.01 Independent variables Sheriff 0-1 0.37 0.48 Upper Town 0-1 0.12 0.33 Lower Town 0-1 0.08 0.28
  • 83. Black 0-1 0.34 0.47 Hispanic 0-1 0.32 0.46 Sex 0-1 0.53 0.49 Age teen 0-1 0.36 0.48 Age_20 0-1 0.30 0.46 Vehicle type 0-1 0.50 0.50 Tint 0-1 0.50 0.50 Cell phone 0-1 0.33 0.47 Speeding 0-1 0.39 0.48 Experience 0-35 10.17 6.78 Table II. Variable description 558 PIJPSM 35,3 race: white, black, and Hispanic. The second dimension was the driver’s gender and a third dimension is the driver’s age. The driver’s age is an ordinal-level variable describing a driver who appears to be in their late teens, late 20s, or late 30s (the reference category). This description is intentionally vague because police perception of a driver, not the actual age of the driver, is considered important to a police officer’s decision to stop a vehicle (Ramirez et al., 2000). These age categories were selected because it was believed that police officers would be much less likely to stop older drivers (i.e. those who appear at least 40 years old), and a pre-
  • 84. teen driver would almost certainly be stopped. The first vehicle characteristic was type of vehicle: a "new SUV" or an "old 4-door sedan." A second vehicle characteristic that might draw the attention of a police officer is window tinting (Batton and Kadleck, 2004). This dimension was dichotomized here: the vehicle had tinted windows, or the dimension was left blank in the vignette, an acceptable method for varying the level of a dimension (Jacoby and Cullen, 1999). A specific traffic violation was included in all vignettes in order to establish a legal justification for the stop. Ramirez et al. (2000) argued that it may be helpful to include different types of violations to understand the role traffic offenses play in police decision making. Three traffic violation levels were used here in the vignette dimensions. First, a traffic violation was indicated as "speeding." A specific speed was not included. Not all police vehicles are equipped with a RADAR system to determine the exact speed of a car, and it was anticipated that simply indicating to a police officer that a person is speeding would satisfy the amount of information necessary to establish probable cause for a stop. Second, the 2002 legislation in New York State made it a traffic offense to talk on a hand-held cell phone while driving a vehicle. This offense was included as an intermediate-level violation. The
  • 85. third dimension described a broken tail light, a minor equipment violation. A sample vignette and dimension levels can be found in the Appendix. Because the small police agencies involved in this study employed almost no female or minority officers, it was decided that asking additional questions of a personal nature in these agencies would threaten confidentiality and might result in a reduced response rate. The only officer characteristic that was collected was the years of experience. Analytic strategy Because each police officer completed two vignettes, the data may have a clustered structure. Clustering of observations may violate the assumption of independence in the variables, causing an artificially deflated standard error and making it easier to find significant effects (Williams, 2000). For this reason, the "cluster robust standard error" option in STATA was utilized. This option provides a more robust estimate of the standard error because it adjusts for the potential clustering of observations. Findings As seen in Table III, police officers serving in the two smaller townships were significantly more likely to report stopping a vehicle described in the vignettes compared to officers who worked in the larger city area (North District was the reference group in the analysis). Although patrol deputies who worked for the county sheriff's department were not significantly different in their responses to vignettes from officers in North District, these findings suggest that workload dimensions may shape police decision making in traffic stop incidents. That is, the police officers working in
  • 86. worked for the county sheriff’s department were not significantly different in their responses to vignettes than officers in North District, these findings suggest that workload dimensions may shape police decision making in traffic stop incidents. That is, the police officers working in 559 Police decision making North District, an area with higher levels of serious crimes when compared to the other jurisdictions in this study, do not appear very concerned with stopping vehicles for traffic violations. Klinger’s (1997) suggestion that officers who work in higher- workload neighborhoods focus less on minor offenses appears to be supported when examined in the context of traffic stop situations described in the vignettes. Two other vignette dimensions were also related to the decision to stop a vehicle. First, if the vehicle was speeding, police officers were significantly more likely to stop the vehicle. This was the most serious traffic offense described in the vignettes, and the finding is interesting because the offense was simply described in the vignette with no supporting information (i.e. the speed was not confirmed with RADAR). Second,
  • 87. officers were more likely to indicate they would stop a teen-aged driver when compared to a driver who appeared to be in their 30s. None of the other driver or vehicle characteristics described in the vignettes were related to the officer's decision to stop a vehicle. Conclusion and discussion This study was constructed in response to the body of research suggesting that neighborhood context may influence police decision making, and the fact that there are two conflicting theories to explain variation of police behavior across those contexts. Klinger's (1997) ecological theory posits that police officers respond to components of their work environment, including the area workload, and that they must manage their time more effectively. Fagan and Davies (2000) explained that officers are more aggressive when dealing with a neighborhood's order maintenance issues in order to address more serious crimes in those areas. The findings from this investigation suggest that officers assigned to high-crime areas would be less likely to deal with low-level traffic violations described in vignettes, lending support to Klinger's framework. Fagan and Davies's (2000) order maintenance explanation of police officer decision making should not be dismissed. They described police behavior that was influenced not simply by the environment, but also by the police organization. The New York City
  • 88. Police Department administration expected aggressive street intervention by street officers. A second latent component of their study, which was never explicitly
Table III. Ordered logistic regression for likelihood of traffic stop (N = 204)
Variable | Coefficient | Robust SE | Odds ratio
Sheriff | 0.20 | 0.49 | 1.22
Upper Town | 1.02* | 0.43 | 2.77
Lower Town | 1.96** | 0.30 | 7.09
Black | 0.00 | 0.37 | 1.00
Hispanic | 0.10 | 0.20 | 1.10
Sex | -0.38 | 0.33 | 0.68
Age teen | 0.35* | 0.17 | 1.42
Age_20 | 0.33 | 0.21 | 1.39
Vehicle type | 0.25 | 0.15 | 1.28
Tint | 0.49 | 0.38 | 1.63
Cell phone | 0.58 | 0.33 | 1.79
Speeding | 0.78** | 0.12 | 2.18
Experience | 0.00 | 0.02 | 1.00
Pseudo R2 | 0.05 | |
Notes: *p < 0.05, **p < 0.01
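As a rough illustration of the kind of model behind Table III (not the authors' STATA code), the sketch below fits an ordered logit of the five-point stop likelihood on the vignette dimensions using statsmodels. The data file and column names are assumptions, and the paper's cluster-robust standard errors, which account for the two vignettes completed by each officer, are not reproduced by this minimal sketch.

```python
# Minimal ordered-logit sketch; the CSV file and variable names are assumptions,
# and the paper's cluster-robust standard errors are not reproduced here.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical data file: one row per completed vignette (204 rows in the study).
df = pd.read_csv("vignette_responses.csv")

predictors = ["sheriff", "upper_town", "lower_town", "black", "hispanic", "sex",
              "age_teen", "age_20", "vehicle_type", "tint", "cell_phone",
              "speeding", "experience"]

# Five-point Likert outcome (1 = very unlikely to stop ... 5 = very likely to stop).
model = OrderedModel(df["stop"], df[predictors], distr="logit")
result = model.fit(method="bfgs", disp=False)

print(result.summary())
# Odds ratios comparable to Table III are obtained by exponentiating the coefficients.
print(np.exp(result.params.loc[predictors]))
```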