Behavioral Safety

Human Error
A closer look at safety's next frontier
By Dan Petersen

www.asse.org DECEMBER 2003 PROFESSIONAL SAFETY 25
IN THE U.S., ORGANIZED ATTEMPTS to prevent or
control workplace injuries have existed for a long time,
probably starting in the railroad industry in the 1800s.
But the real attempts in general industry were rela-
tively weak until the early 1900s. With the passage of
workers’ compensation laws in several states between
1908 and 1911, injuries became a cost to organizations.
This provided the impetus to do something about it.
The financial incentive gave birth to “safety pro-
grams” and “safety engineers,” and to both the
National Safety Council and ASSE. What began in the
1900s is starkly different from the safety systems of
today. As technology has changed, as management
theories have evolved, so has safety programming. It is
true that safety seems to change more slowly than
technology or management concepts; it seems to lag by
20 or 30 years, and is influenced less by good research
and experimentation than by the economy, government dictates and people selling new “solutions.”
Consider the many safety approaches that have
been used:
•physical condition approach, 1911 to present;
•industrial hygiene approach, 1931 to present;
•“unsafe act” approach, 1931 to present;
•management approach, 1950s to present;
•noise control approach, 1954 to present;
•audit approach, 1950s to present;
•system safety approach, 1960s and currently;
•OSHA physical condition approach, 1971 to
present;
•OSHA industrial hygiene approach (when the
OSHA chief was an industrial hygienist);
•other OSHA approaches, depending on the year
and the current emphasis;
•ergonomic approach, anticipating an OSHA
standard;
•“safety program” approach, anticipating an
OSHA standard;
•environmental approach;
•total quality management thrust, using statisti-
cal process control;
•behavioral approach.
None of these were fads—they were real attempts
to control losses. They are all still used today as part
of safety technology. They are layers of things that
must be done. Since safety staffs are now much
smaller, management staffs have been cut and
employee staffs downsized, choices must be made.
In all of these approaches, one thing that has not
been tried is understanding the true cause of most
injuries—human error. Chapanis begins one of his
articles with the following case history:
In March 1962, a shocked nation read that six
infants died in the maternity ward of the
Binghamton (NY) General Hospital
because they had been fed formulas prepared
with salt instead of sugar. The error was traced
to a practical nurse who had inadvertently
filled a sugar container with salt from one of
two identical, shiny, 20-gallon containers stand-
ing side by side under a low shelf, in dim light,
in the hospital’s main kitchen. A small paper
bag pasted to the lid of one container bore the
word “Sugar” in plain handwriting. The tag on
the other was torn, but one could make out the
letters “S_lt” on the fragments that remained.
As one hospital board member put it, “Maybe
that girl did mistake salt for sugar, but if so, we
set her up for it just as surely as if we’d set a
trap.” When a system fails it does not fail for
any one reason. It usually fails because the kinds
of people who are trying to operate the system,
with the amount of training they have had, are
not able to cope with the way the system is
designed, following proce-
dures they are supposed to
follow, in the environment in
which the system has to
operate (Chapanis).
Dan Petersen, Ph.D., P.E., CSP, is a consultant specializing in safety management and organizational behavior. He holds a B.S. in Industrial Engineering, an M.S. in Industrial Psychology and a Ph.D. in Organizational Behavior. A renowned author and speaker, Petersen is a professional member of ASSE's Arizona Chapter and a member of the Society's Management Practice Specialty.

Peters provides this definition of human error:

In theory, we would want to use a broadly oriented definition which states that a human error consists of any significant deviation from a previously established, required or expected standard of human performance.

In practice, the term may have any one of several specific meanings depending upon the nature of contractual agreements, the unique requirements of a particular program, the customary error classification procedures, and the emotional connotations involved with the use of a term which might be incorrectly perceived as possibly placing the blame on individuals or their immediate supervision.

In the reality of situations where arguments of precisely what is or is not a human error are of less importance than what can be done to prevent them, the operational definition may be restricted to those errors a) which occur within a particular set of activities; b) which are of some significance or criticality to the primary operation under construction; c) [which] involve a human action of commission or omission; and d) about which there is some feasible course of action which can be taken to correct or prevent their reoccurrence (Peters).

As an industrial engineering undergraduate, the author studied work simplification, plant layout and motion study, not to reduce error, but to increase productivity. Years later, in graduate work in psychology, the author became acquainted with human factors concepts, which seemed a natural fit for the safety profession. That was 1971, and for some reason, the profession found OSHA and its standards to be considerably more interesting. From a human factors standpoint, safety has lost 30 years of possible progress in reducing human error.

Each reader can surely think of everyday examples of human error. Consider these types of errors:
•Design errors, as in the Pinto gas tank, where, in an apparent attempt to save a few dollars, human lives were lost; or the DC-10 aircraft, whose jet engines fell off in flight and lives were lost.
•Communication errors, as in the crash of two 747s on an island in the Atlantic, producing the worst loss of life in an air crash in history.
•Management system errors, as with Bhopal, Chernobyl, the space shuttles and others.

Sometimes, people are fascinated by, and pay more attention to, the catastrophic losses. While their causes are usually the same as those of incidents that produce only minor losses, they get everyone's attention. Consider the publicity accorded the Exxon Valdez incident, which remained in the headlines for six months (although not a single person was ever injured). Or the Three Mile Island incident, in which all the fail-safe systems worked, yet it nearly destroyed the organization.

Categories of Human Error
Years ago, Nuberg provided a picture of the spectrum of error outcomes (Figure 1).

Figure 1: Error Spectrum. The spectrum runs from work free of mistake and error; to minor errors, mistakes and slight blemishes; to errors causing delay, rework, rejects and waste; to damage to property, material loss and process delay; to errors causing injury; and finally to acts of negligence and deliberate destruction (theft, arson, pollution). Source: Nuberg.

Kletz offers four different types of error:
•Errors due to slips or aberrations. Example: Opening equipment that has been under pressure.
•Errors that could be prevented by better training or instructions.
•Errors due to a lack of physical ability. Example: Someone is asked to do the physically difficult or impossible.
•Errors due to a person being asked to do the mentally difficult or impossible (Kletz).

If a person is asked to detect a rare event, s/he may fail to notice when it occurs, or may not believe that it is genuine. The danger is greatest when this person has little else to do. For example, it is very difficult for night security personnel to remain alert when nothing has been known to happen and when there is nothing to occupy their minds and keep them alert.

There is another category that is perhaps somewhat different: errors due to the perception of peer thought. Jerry Harvey illustrates this category in his key essay, “The Abilene Paradox: The Management of Agreement” (from The Abilene Paradox). Famous examples might include what happened at the Bay of Pigs or the Watergate cover-up. This concept is more commonly referred to as “groupthink,” as researched by Janis.

Categories of Causes
Think of human error in terms of causes that fit into three categories, as illustrated by the accident causation model in Figure 2. The model was developed as a result of inputs from many people. It was based on a human factors model of accident causation developed at the University of Arizona by Dr. Russell Ferrell, professor of human factors. From this base, a different model was developed for presentation to a conference on safety management concepts held by the National Safety Management Society in December 1977. The model was then restructured into the one shown in Figure 2 (Petersen).

The model states that an injury or other type of financial loss to the company is the result of an accident or incident. The incident is the result of 1) a system's failure and 2) a human error. Systems failure is concerned with most questions that traditional safety management might cover, such as:
•What is management's stated policy on safety?
•Who is designated as responsible and to what degree?
•Who has what authority, and the authority to do what?
•Who is held accountable? How?
•How are those responsible for safety measured for performance?
•What systems are used for inspections to find out what went wrong?
•What systems are used to correct things found wrong?
•How are new people oriented?
•Is sufficient training given?
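Peters' four-part operational definition quoted earlier reads naturally as a screening rule: a recorded deviation counts as an actionable human error only if all four criteria hold. The sketch below is illustrative only; the class, field names and example data are hypothetical, not from Peters.

```python
from dataclasses import dataclass

@dataclass
class Deviation:
    """A recorded deviation from an expected standard of performance."""
    in_scope_activity: bool  # (a) occurred within the set of activities under study
    significant: bool        # (b) of some significance or criticality to the operation
    human_act: bool          # (c) involved a human act of commission or omission
    correctable: bool        # (d) some feasible corrective action exists

def is_actionable_human_error(d: Deviation) -> bool:
    """Peters' operational restriction: all four criteria must hold."""
    return d.in_scope_activity and d.significant and d.human_act and d.correctable

# A deviation with no feasible corrective action falls outside the
# operational definition, even if it was clearly a human act.
print(is_actionable_human_error(Deviation(True, True, True, False)))  # False
print(is_actionable_human_error(Deviation(True, True, True, True)))   # True
```

The point of the conjunction is Peters' own: arguing over what is or is not "really" a human error matters less than isolating the deviations about which something can actually be done.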
Figure 2: The Accident Causation Model. An injury or loss results from an accident or incident, which is the product of systems failure and human error. Systems failure covers policy, responsibility, authority, accountability, measurement, inspection, correction, investigation, orientation, training, selection, standard operating procedures, hazard recognition, records and medical programs. Human error results from overload (a mismatch of capacity with load in a given state), a logical decision to err in the situation (driven by proneness, mental problems, an unconscious desire to err, or a perceived low probability of the incident occurring or of a loss resulting), or traps (incompatible displays or controls, poor workstation design, defeated stereotypes and expectations, and mismatches of size, force and reach). Capacity reflects natural endowment, physical condition, state of mind, knowledge and skill, drugs or alcohol, and pressure or fatigue. Load reflects the task, information processing demands, the environment, worry, stress and life change units. State reflects motivation, attitude, arousal, biorhythms, peer pressure, the measures of the boss, the priorities of management and the person's value system. Source: Petersen, 30.
Traps
The third cause of human error is the traps that are
left for the worker. This primarily involves human
factors concepts. One trap is incompatibility. The
worker errs because his/her work situation is incom-
patible with the worker’s physique or with what
s/he is used to. The second trap is the design of the
workplace—it is conducive to human error. A third
trap is the culture of the organization—what behaviors it encourages or discourages. Certain situations
are “error-provocative” (a notion examined later in
this article). This concept is perhaps the most impor-
tant single contribution made by error-reduction the-
orists. It leads to the conclusion that much more
progress can be made by changing the situation than
by preaching or disciplining. Human errors at lower
levels of the organization are symptoms of things
that are wrong in the organization at higher levels.
Human error can be reduced by changing the sit-
uation. This change is accomplished by assistance
from the outside (staff safety, line management, etc.),
working within a corporate philosophy, through
study of the situation and through participation of
the individual worker.
Culture & Safety
A similar model is shown in Figure 3. Instead of a
simple chain of events model (A causes B, which
causes C, etc.), we have a fault-tree type configura-
tion to be better attuned to safety technology, and a
number of factors are added.
Perhaps the most important point in this model is
that everything done to reduce human error—and
thus losses—is dependent on the employees' perception of the organization's culture.
This model incorporates the overriding thought:
Culture is the key. Or, said another way, perception
of culture (which by definition is culture), is what
makes or breaks safety—what makes any element of
the process work or fail.
The model can be explained this way: Management—through its vision, honest real values, systems
of measurement and reward and daily decisions—
creates organizational culture. In that culture, various
processes and procedures attempt to operate. Their
intent is to control losses. Some are overall (sys-
temwide)—for example, whether people are held
accountable for performance; some are specific—for
example, how a supervisor’s training is selected.
The field of safety largely ignored the concept of
culture throughout the 1980s. As management
attempted to improve culture through changing
styles of leadership, employee participation, etc.,
SH&E professionals tended to change their
approaches very little, keeping the same tools, using
the same elements in their safety “programs” that
they had always used. Safety programs typically
consisted of the usual things: Meetings, inspections,
accident investigations, using job safety analyses
(JSAs), etc. These tools were perceived as the essen-
tial elements of a safety program. In fact, OSHA pub-
lished a guideline in the 1980s suggesting that all
companies follow all of these practices (OSHA).
The systems-failure questions continue:
•How are people selected?
•What are the standard operating procedures? What standards are used?
•How are the hazards recognized?
•What records are kept and how are they used?
•What is the medical program?
The second, always-present cause of an incident or accident is human error. Human error results from one or a combination of three things: 1) overload, which is defined as a mismatch between a person's capacity and the load placed on him/her in a given state; 2) a decision to err; and 3) traps that are left for the worker in the workplace.
Overload
The human being cannot help but err if given a
heavier workload than s/he has the capacity to han-
dle. This overload can be physical, physiological or
psychological. To deal with overload as an accident
cause, one must look at an individual’s capacity,
workload and current state. To deal with overload
as an organizational cause, one must identify the
controls available for dealing with capacity, work-
load and state.
A human being’s capacity refers to physical,
physiological and psychological endowments (what
the person is naturally capable of); current physical
condition (and physiological and psychological con-
dition); current state of mind; current level of knowl-
edge and skill relevant to the task at hand; and
temporarily reduced capacity owing to factors such
as drugs or alcohol use, pressure or fatigue.
Load refers to the task and what it takes physi-
cally, physiologically and psychologically to per-
form it. Load also refers to the amount of
information processing the person must perform;
the working environment; the amount of worry,
stress and other psychological pressure; and the per-
son’s home life and total life situation. Load refers to
a person’s work situation per se, and to work haz-
ards s/he faces daily. State refers to a person’s level
of motivation, attitude, arousal and to his/her biorhythmic state.
In today’s environment, due to trends such as
downsizing, outsourcing, increase in span of control,
employee ownership concepts, self-directed work
teams and employee involvement, there has never
been more overload on the worker—or the manager.
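The overload relationship described above—a mismatch of capacity with load in a given state—can be sketched as a toy model. The numeric scales and the idea of scaling capacity by state are assumptions for illustration, not part of Petersen's model.

```python
def effective_capacity(base_capacity: float, state: float) -> float:
    """Capacity available right now: base endowment, knowledge and skill,
    reduced by current state (fatigue, stress, low arousal), where state
    is a fraction between 0 and 1. The scaling is an assumed simplification."""
    return base_capacity * state

def is_overloaded(base_capacity: float, load: float, state: float) -> bool:
    """Overload: the load placed on the person exceeds what his/her
    capacity can handle in the current state."""
    return load > effective_capacity(base_capacity, state)

# The same load that is manageable when rested (state 1.0)
# becomes an overload when the person is fatigued (state 0.5).
print(is_overloaded(base_capacity=10.0, load=8.0, state=1.0))  # False
print(is_overloaded(base_capacity=10.0, load=8.0, state=0.5))  # True
```

The design implication matches the text: overload can be attacked from any of the three terms—raise capacity (selection, training), cut load (task and workstation design) or improve state (fatigue management, stress reduction).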
Decision to Err
In some situations it seems logical to the worker
to choose the unsafe act. Reasons for this might
include:
1) Because of the worker’s current motivational
field, it makes more sense to operate unsafely than
safely. Peer pressure, pressure to produce and many
other factors might make unsafe behavior seem
preferable.
2) Because of the worker’s mental condition, it
serves him/her to have an accident.
3) The worker just does not believe s/he will have
an accident (low perceived probability).
Several states enacted laws requiring companies to do these things. These traditional elements were regarded as a “safety program.”

While OSHA and the states were going down the “essential element” track to safety (as was much of the SH&E profession), several researchers began to suggest totally different answers to the safety problem. Most of their research results were confident in saying that “there are no essential elements”—what works in one organization may not work in another. Each organization must determine for itself what will work—there are no magic pills. The answer seems clear: It is the culture of the organization that determines what will work in that organization. Certain cultures do, in fact, have safety as a key component, whereas other cultures make it clear that safety is unimportant. In the latter, almost nothing will work: Meetings will be boring, JSAs perceived only as paperwork and so on.

The culture of the organization sets the tone for everything in safety. A positive safety culture says that everything done about safety is important. In a participative culture, the organization says to the worker, “We want and need your help.” Some cultures urge creativity and innovation; others stifle it. Some cultures tap employees for ideas and help; some force employees to never use their brains at work.

Culture is a major factor in the causes of human error as well. Considerations that determine the culture include:
•How decisions are made: Does the organization spend its available money on people? On safety? Or are these ignored for other things?
•How people are measured: Is safety measured as tightly as production? What is measured tightly is what is important to management.
•How people are rewarded: Is there a larger reward for productivity than for safety? This indicates management's true priorities.
•Is teamwork fostered? Or is it “them vs. us”? In safety, is it “police vs. policed”?
•What is the history? What are the traditions?
•Who are the corporate heroes and why?
•Is the safety system intended to save lives or to comply with regulations?
•Are supervisors required to perform safety tasks daily? This says that safety is a true value.
•Do big bosses wander around and talk to people?
•Is using the brain allowed on the work floor?
•Has the company downsized?
•Is the company profitable? Too much? Too little?

These are only a few of the many things that set the culture. It is more important to understand what the culture is than to understand why it is that way. After culture considerations come the specifics: how that culture translates itself into day-to-day operations. Beyond the elements that transfer culture into day-to-day operations, there are day-to-day issues that define culture from the design standpoint.

Figure 3: Second Causation Model. Management sets vision, holds values and makes decisions, which create a culture in which processes, systems and programs work or fail. Organizationwide systems include defined responsibilities, clear authorities, tight accountability (valid measures, sufficient rewards), SOPs in place, adequate training, clear priorities, established teamwork, positive reinforcement, open communication, law compliance and assessed risks. Specific systems—design and ergonomics, selection, training, JSAs, life change units (LCUs), motivation and styles, group norms and measures, and rewards—deal with the causes of human error: traps, overload (capacity, load, state) and decisions to err. System causes and human error causes combine to produce incidents and losses, against which cost containment operates. Source: Petersen.

Design Traps
Many writers and researchers over the years have examined the worker-machine system. The human error approach concentrates on the worker component by examining four subsystems:
•sensing;
•information processing;
•responding;
•human expectations.

The Worker's Sensing Subsystem
One of the worker's functions is that of a sensor—an information seeker. Humans have more than the five senses. To use the various senses in the work environment, people build information displays—devices that gather needed information and translate that information into inputs the human brain can perceive. These displays fall into two general classes: symbolic and pictorial.

Symbolic displays represent the information in a form that bears no resemblance to what it is measuring. Examples include the speedometer, thermometer, pressure gauge and altimeter. In pictorial displays, the geometric and spatial relationships are shown as they exist. Examples include maps, pictures and television.

The two most common types of symbolic displays are visual and auditory. Much attention has been given to the design of these types of displays, and some general principles have emerged:
1) Simplicity. The purpose of the display will dictate its design, but as a general principle, the simplest design is best.
2) Compatibility. The principle of compatibility holds that the motion of the display should be compatible with (or in the same direction as) the motion of the machine and its control mechanism.
3) Arrangement. As the design of the display is important, so too is its location or arrangement in relation to other displays. A poor arrangement of displays can be a source of error. Sometimes dials must be arranged in groups on a large control panel. If all the dials must be read at the same time, they should point in the same direction within the desired range. This will reduce check-reading time and increase accuracy.
4) Coding. All displays should be coded or labeled so that the operator can tell immediately what mechanism the display refers to, what units are measured and what the critical range is.

Some of these principles are illustrated fairly easily. For example, Figure 4 illustrates the principle of simplicity: Of the two electric meters shown, the top one is considerably easier to read and is much less likely to cause a sensing error (Van Cott and Kincade; Woodson and Conover). In Figure 5, the drawing on the right illustrates the principle of compatibility: To regulate heat, the user simply turns the pointer clockwise to “Hi.” The principle of arrangement is shown in Figure 6; the configuration on the left is better than the one on the right and is much less likely to cause errors.

Figure 4: Simplicity: Electric Meters. Of the two electric meters shown, the top one is considerably easier to read and much less likely to cause a sensing error. Source: F. Vilardo.

Figure 5: Compatibility: Heat Regulators. The drawing on the right illustrates the principle of compatibility: To regulate heat, the user simply turns the pointer clockwise to “Hi.” Source: F. Vilardo.

Figure 6: Arrangement: Indicators. The configuration on the left is better than the one on the right, and is much less likely to cause errors. Source: F. Vilardo.

Vilardo provides several principles for auditory displays:
Situationality. The design of auditory displays should consider other relevant characteristics of the environment in which the system is to function (e.g., noise levels, types of responses controlled by the auditory signal).
Compatibility. Where feasible, signals should “explain” and exploit learned or natural relationships on the part of the user, such as high frequencies being associated with “up” or “high,” and wailing signals indicating emergency.
Approximation. Two-stage signals should be considered when complete information is to be displayed and a verbal signal is not feasible. The two stages would consist of a) attention-demanding signals to attract attention and identify a general category of information; and b) designation signals that follow the attention-demanding signals to designate the precise information within the general category.
Dissociability. Auditory signals should be easily discernible from other sounds (be they meaningful or noise).
Parsimony. Input signals should not provide more information to the user than is necessary to execute the proper response.
Forced entry. When more than one kind of information is to be presented, the signal must prevent the receiver from listening to just one aspect of the total signal.
Invariance. The same signal should designate the same information at all times.

The Worker's Information Processing Subsystem
The information-processing system involves these kinds of processes:
1) Information storage, including both long- and short-term memory.
2) Information retrieval and processing, including:
•recognition or detection of signals or stimuli;
•recall, including previously learned factual information and information in short-term storage;
•information processing: categorizing, calculating, coding, computing, etc.;
•problem solving and decision making;
•controlling of physical responses.
Each of these highly complex areas is a major field of study in itself. Clearly, it would be easy to overload this subsystem in certain difficult or stressful situations.

The Worker's Responding Subsystem
The third function the worker serves in the worker-machine system is that of a controller. This function is the response that the worker must make to any given stimulus. Just as principles exist for designing displays that are less likely to cause error, similar principles exist for designing controls.
1) Compatibility. Control movement should be designed to be compatible with the display and matching movement.
2) Arrangement. This principle provides for the grouping of elements or components according to their functions (those having related functions are grouped together) and for grouping in terms of how critical they are in carrying out a set of operations. It also suggests that each item be in its optimum location in terms of some criterion of usage (e.g., convenience, accuracy, speed, strength to be applied). Items used in sequence should be placed close together, and items used most frequently should be placed closer to the operator than those used less frequently.
3) Coding. Whenever possible, all controls should be coded in some way. A good coding system uses shape, texture, location, color and operation. Also, all controls and displays should be labeled. Labeling is crucial if the operators change often or the equipment is shared.

Human Expectations
Any situation that calls for a movement contrary to the established stereotype is bound to produce errors. The designer is calling for errors (building traps) by asking the worker—in a given unique situation—to change a behavior pattern that can be described as a habit. People expect things to be, or work, in a certain manner. Following are some human stereotypes that should be considered in the design of both displays and controls:
•Handles used for controlling liquids are expected to turn clockwise for off and counterclockwise for on.
•Knobs on electrical equipment are expected to turn clockwise for on or to increase current, and counterclockwise for off or to decrease current. (Note: This is opposite to the stereotype for liquids.)
•Certain colors are associated with traffic, operation of vehicles and safety.
•For vehicles in which the operator is riding, the operator expects a clockwise control motion to result in a similar motion of the vehicle, and vice versa.
•Sky-earth impressions carry over into colors and shadings: Light shadows and bluish colors are related to the sky or up, whereas dark shades and earth colors are related to the ground or down.
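The coding principle for controls above (shape, texture, location, color, operation) implies that no two controls on a panel should share the same full coding combination; otherwise they are distinguishable only by location, inviting a selection error. A minimal check, using hypothetical control records invented for illustration:

```python
from collections import Counter

def coding_conflicts(controls: dict) -> list:
    """Return coding combinations shared by more than one control.
    Each control maps its label to a (shape, texture, color) tuple;
    a duplicate tuple means two controls carry identical coding,
    which is exactly the kind of trap the salt/sugar case illustrates."""
    counts = Counter(controls.values())
    return [code for code, n in counts.items() if n > 1]

# Hypothetical panel: the feed pump and drain controls share a coding.
panel = {
    "feed pump": ("round", "knurled", "red"),
    "drain":     ("round", "knurled", "red"),  # identical coding: a trap
    "vent":      ("lever", "smooth", "yellow"),
}
print(coding_conflicts(panel))  # [('round', 'knurled', 'red')]
```

An empty result does not prove the panel is safe, of course; it only confirms that at least one cue besides location separates every pair of controls.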
…tributes to it. It is likely that the reason so many studies of accident causation turn up with such marginally low relationships is the unstable or unreliable nature of the accident measure itself.
6) Design characteristics that increase the proba-
bility of error include a job, situation or system that:
a) violates operator expectations;
b) requires a performance beyond what an
operator can deliver;
c) induces fatigue;
d) provides inadequate facilities or informa-
tion for the operator;
e) is unnecessarily difficult or unpleasant;
f) is unnecessarily dangerous (Chapanis).
It is obvious that the principles of human error
control are synonymous with the principles of safety
management—yet these two areas have avoided
each other for years. At ASSE’s Human Error
Symposium last spring, many researchers and speak-
ers showed that the causes of aircraft incidents were
the same as the causes of medical incidents were the
same as the causes of industrial incidents, etc. These
fields need to learn from each other; safety in partic-
ular must learn from researchers in other fields.
It should be noted that the information and fig-
ures mentioned here are not new. Many are from the
1960s. Through research, the causes of human error
have been known since then. SH&E professionals
are only now beginning to think about how the field
31. can and should use this knowledge. �
References
Chapanis, A. Quoted in W. Johnson, New Approaches to Safety
in Industry. London: InComTec, 1972.
Goldberg, M.H. The Blunder Book. New York: William Morrow
& Co., 1984.
Harvey, J. The Abilene Paradox and Other Meditations on
Management. Lexington, MA: Lexington Books, 1989.
Janis, I.L. Victims of Group Think. Boston: Houghton Mifflin,
1972.
Johnson, W. The Management Oversight and Task Tree—
MORT.
Washington, DC: U.S. Government Printing Office, 1973.
Kletz, T. An Engineer’s View of Human Error. Rugby, England:
Institution of Chemical Engineers, 1985.
Nuberg, A. “Reducing Damage Accidents.” In Selected Read-
ings in Safety. J.T. Widner, ed. Macon, GA: Academy Press,
1973.
OSHA. Safety & Health Program Management Guidelines;
Issuance of Voluntary Guidelines. Federal Register. 54(1989):
3904-3916.
Peters G. “Human Error: Analysis and Control.” The Journal of
ASSE. Jan. 1966: 9-15.
Petersen, D. Human Error Reduction and Safety Management,
3rd
Ed. New York: John Wiley & Sons, 1996.
32. Van Cott, H.P. and R.G. Kincade, eds. Human Factors
Engineering Guide to Equipment Design. Washington, DC: U.S.
Government Printing Office, 1972.
Vilardo, F. “Human Factors Engineering: What Research
Found for Safety Men in
Pandora’s Box.” National
Safety News. Aug. 1968.
Woodson, W. and
D.W. Conover. Human
Engineering Guide for
Equipment Designers.
Berkeley, CA: University
of California Press, 1965.
Zebroski, E. “Lessons
Learned from Man-Made
Catastrophes.” In Risk
Management, R.A. Knief,
ed. New York: Hemi-
sphere Publishing, 1991.
greenish or brownish colors are related to the
ground or down.
•Things that are farther away are expected to
look smaller.
•Coolness is associated with blue and blue-green
colors, warmth with yellows and reds.
•Very loud sounds or sounds repeated in rapid
succession, and visual displays that move rapidly or
are very bright, imply urgency and excitement.
33. •Very large objects or dark objects imply heaviness.
Small or light-colored objects appear light in weight.
Large, heavy objects are expected to be at the bottom.
Small, light objects are expected to be at the top.
•People expect normal speech sounds to be in
front of them and at approximately head height.
•Seat belts are expected to be a certain level when
a person sits down.
When design situations violate principles,
processes or expectations, human error is designed
into the systems; thus, error is bound to occur. Often,
human error occurs not because humans are stupid
or clumsy, but because management systems or
designs have trapped them into error.
Chapanis provides some interesting observations:
1) Many situations are error-provocative.
2) Given a population of human beings with
known characteristics, it is possible to design tools,
appliances and equipment that best match their
capacities, limitations and weaknesses.
3) The improvement in system performance that
can be realized from the redesign of equipment is
usually greater than the gains that can be realized
from the selection and training of personnel.
4) For purposes of person-machine systems design,
there is no essential difference between an error and
an accident. The important thing is that both an error
and an accident identify a troublesome situation.
34. 5) The advantages of analyzing error-provocative
situations are:
a) It is easier to collect data on errors and near-
hits than on accidents.
b) Errors occur more frequently than accidents.
In short, this means that more data are available.
c) Even more important than the first two
points is that error-provocative situations provide
clues about what one can do to prevent errors or
accidents before they occur.
d) The study of errors and near-hits usually
reveals those situations that may result in an acci-
dent but have not as yet. In short, by studying
error-provocative situations, we can uncover dan-
gerous or unsafe designs before an accident occurs.
e) If we accept that the essential difference
between an error and an accident is largely a mat-
ter of chance, it follows that any measure based
on accidents alone—such as the number of dis-
abling injuries, injury frequency rates, injury
severity rates, number of first-aid cases, etc.—is
contaminated by a large proportion of pure error
variability. In statistical terms, the reliability of
any measure is inversely related to the amount of
random—or pure error—variance that con-
(2)
Front companies can be used to conduct fraudulent international
commercial trade to layer and integrate illegal proceeds. These
companies are considered effective in the money laundering cycle
for two reasons. First, to operate, front companies do not
necessarily require the complicity of a financial institution.
Second, detection is more difficult if front companies also
conduct legitimate business, particularly businesses exempt from
Currency Transaction Reports (CTRs), such as liquor stores or
restaurants. Any cash-rich business can serve as a front company,
such as cheque-cashing businesses, travel agencies, liquor stores
and restaurants. Import/export companies can also serve as front
companies; they typically use three operations to launder money,
namely double invoicing, under- or over-valuation of goods, and
the financing of exports. Paying inflated prices for imported
goods is also a common laundering technique used by front
companies.
Offshore centers' secrecy laws and the sheer volume of wire
transfer transactions can hamper subsequent investigations. The
Cayman Islands, the Bahamas, Panama and the Netherlands Antilles
are examples of offshore financial centers. As of 2003, the
Cayman Islands (population 36,000) had more than 500 insurance
companies, 2,200 mutual funds, 60,000 businesses and 600 banks
and trust companies with almost $800 billion in assets.
In the United States, one of the cornerstones of bank reporting
requirements is the Bank Secrecy Act (BSA) of 1970. The Office of
the Comptroller of the Currency (OCC) is responsible for BSA
compliance examinations of 3,600 national banks. The central
feature of the BSA is that certain banks, financial institutions
and non-bank financial institutions are required to report
financial transactions over $10,000. However, at the time the act
was passed, money laundering itself and the structuring of
transactions were not criminal offenses.
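The two patterns implied by the $10,000 threshold, reportable transactions and "structuring" (keeping deposits just under the limit), can be sketched as follows. This is a toy illustration: the field names, the 90 percent "near-threshold" band and the three-deposit cutoff are assumptions for the sketch, not BSA rules.

```python
# Toy sketch of the two patterns around the $10,000 CTR threshold:
# (1) transactions that individually exceed the threshold, and
# (2) "structuring" -- several deposits kept just under it.
from collections import defaultdict

CTR_THRESHOLD = 10_000

def reportable(transactions):
    """Transactions that individually exceed the CTR threshold."""
    return [t for t in transactions if t["amount"] > CTR_THRESHOLD]

def possible_structuring(transactions, near=0.9, min_count=3):
    """Flag accounts with several deposits just under the threshold."""
    near_misses = defaultdict(int)
    for t in transactions:
        if near * CTR_THRESHOLD <= t["amount"] <= CTR_THRESHOLD:
            near_misses[t["account"]] += 1
    return {acct for acct, n in near_misses.items() if n >= min_count}

txns = [
    {"account": "A", "amount": 9_500},
    {"account": "A", "amount": 9_800},
    {"account": "A", "amount": 9_900},
    {"account": "B", "amount": 12_000},
]
print([t["amount"] for t in reportable(txns)])  # [12000]
print(possible_structuring(txns))               # {'A'}
```

Real monitoring systems also aggregate over time windows and across branches; the sketch only shows the basic shape of the two rules.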
The Bank Secrecy Act did allow for civil penalties to be imposed
on non-compliant banks. The first major penalty was imposed on
the Bank of Boston in 1985. A review of the bank's activities in
the 1970s revealed that it had failed to file currency
transaction reports on 1,163 currency transactions valued at
$1.2 billion. Despite the case ending up in the US Supreme Court,
the Bank of Boston paid a $500,000 fine. The Bank of New England
was also found guilty of 31 offenses and fined $1.2 million, and
Crocker National Bank failed to report 7,877 transactions and was
fined $2.25 million. After the Bank Secrecy Act was passed, it
was still possible to avoid reporting requirements by going to
casinos rather than traditional financial institutions. In 1985,
casinos became subject to Bank Secrecy Act requirements, and in
1986 the Money Laundering Control Act (MLCA) was passed. Under
the MLCA, structuring transactions and money laundering became
crimes. The act also provided for both civil and criminal
forfeiture of funds or property implicated in the laundering.
The Financial Action Task Force (FATF) estimates that money
laundering could comprise 2 percent of global GDP.
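As a back-of-envelope check (the global GDP figure of roughly $36 trillion for the early 2000s is an assumption for illustration, not from the text), the FATF's 2 percent figure is consistent with the commonly cited range of $500 billion to $1 trillion per year:

```python
# Back-of-envelope: 2 percent of an assumed ~$36 trillion global GDP.
# The GDP figure is an illustrative assumption, not a cited statistic.

GLOBAL_GDP = 36e12   # assumed global GDP, USD, early 2000s
FATF_SHARE = 0.02    # FATF estimate: 2 percent of global GDP

laundered = GLOBAL_GDP * FATF_SHARE
print(f"${laundered / 1e9:.0f} billion per year")
```

The result, on the order of $700 billion, falls inside the $500 billion to $1 trillion range quoted in the banking estimates below.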
There have been alternative estimates of the size and extent of
the money laundering problem. Quirk (1996) provides a wide range
of estimates of the size of underground economies as a percentage
of Gross Domestic Product (GDP)—for example, Australia 4-12
percent, Germany 2-11 percent, Italy 10-33 percent, Japan 4-15
percent and the United States 4-33 percent. Walter (1990) claims
that the underground economy accounts for anywhere between 7 and
10 percent of a developed country's GDP; a more severe example is
Italy's underground economy, estimated to account for between 10
and 50 percent of GDP. In less developed countries the size of
the underground economy may be significantly higher. Kumar (1996)
provides an estimate for India, where the informal economy has
been estimated to account for between 18 and 21 percent of Gross
National Product (GNP). As to the size of the global underground
economy, Kumar estimates gray- and dirty-money activity to be
worth approximately $6 trillion. Tanzi (1996) presents an
estimate of US$300-500 billion of "dirty money" entering the
international capital market each year.
Despite the variation in these estimates, what becomes apparent
is that macroeconomic policymakers must take the effects of money
laundering into account. Simply put, money laundering can shift
money demand from one country to another and consequently produce
misleading monetary data. Misleading monetary data will in turn
have adverse consequences for interest rates, exchange rates and
asset prices. Gradually, the governance of the banking system is
undermined, financial markets become corrupted and public
confidence in the international financial system is eroded. This
in turn increases the risk and instability of the financial
system. Overall, these effects eventually reduce the growth rate
of the world economy. Quirk (1996), in a study of 18 industrial
countries, finds that significant reductions in annual GDP growth
rates were associated with increases in laundered criminal
proceeds between 1983 and 1990.
(1)
The characteristics of organized crime are quite evident in money
laundering: it is a group activity which is long-term and
continuing; a criminal activity often carried out by more than
one person; an activity carried out on a large scale irrespective
of national boundaries; and it generates proceeds which are often
made available for illicit use. Criminals involved in money
laundering commit three basic types of crime: crimes of passion
or honor; crimes of violence or vandalism; and economic crimes,
i.e., crimes committed to make money. Most often, crimes are
committed for two reasons: for kicks (to prove that they can get
away with it) and out of greed for quick money (they believe they
can make more money from the crime than from the same amount of
legitimate endeavor).
III. Money Laundering: How Is It Done?
Current estimates are that US$500 billion to US$1 trillion in
criminal proceeds are laundered through banks worldwide each
year, with about half of that circulated through US banks [US
Senate, Nov. 9, 1999 & Feb. 2001]. As banks are widely used for
money laundering, they have a central role to play in curbing the
menace. Accordingly, money laundering activities have been the
subject of eight prior investigations in the US. According to the
Permanent Subcommittee's report, six major factors create money
laundering vulnerabilities: the role of private bankers as client
advocates, powerful customers, a corporate culture of secrecy,
secrecy jurisdictions, a corporate culture of lax controls and
the competitive nature of the industry.
3.1. Private Bankers as Client Advocates
Private bankers are the linchpin of the private banking system.
They are trained to serve their clients and to set up accounts
and move money around the world using sophisticated financial
systems and secrecy tools. Private banks encourage their bankers
to develop personal relationships with their clients, visiting
clients' homes, attending weddings and graduations, and arranging
their financial affairs. The result is that private bankers may
feel loyalty to their clients for both professional and personal
reasons, leading them to miss or minimize warning signs. In
addition, private bankers use their expertise in bank systems to
evade what they may perceive as unnecessary "red tape" hampering
the services their clients want, thereby evading controls
designed to detect or prevent money laundering.
3.2. Powerful Clients
Private bank clients are, by definition, wealthy. Many also exert
political or economic influence, which may make banks anxious to
satisfy their requests and reluctant to ask hard questions. If a
client is a government official with influence over the bank's
in-country operations, the bank has added reason to avoid giving
offense. Moreover, verifying information about a foreign client's
assets, business dealings and community standing can be difficult
for US banks. The Federal Reserve Board found in its private
banking review that foreign clients were particularly difficult
for private banks to assess due to a lack of independent
databases of information, such as credit reports. While private
banks routinely claim that their private bankers gain intimate
knowledge of their clients, the case histories demonstrate that
too often this is not true. For example, in one case, a private
banker was unaware for more than three years that he was handling
the accounts of an African head of state.
3.3. Culture of Secrecy
A culture of secrecy pervades the private banking industry.
Numbered accounts at Swiss banks are but one example. There are
other layers of secrecy that private banks and clients routinely
use to mask accounts and transactions. For example, private banks
routinely create shell companies and trusts to shield the
identity of the beneficial owner of a bank account. Private banks
also open accounts under code names and will, when asked, refer
to clients by code names or encode account transactions.
3.4. Secrecy Jurisdictions
In addition to shell corporations and code names, a number of
private banks also conduct business in secrecy jurisdictions such
as Switzerland and the Cayman Islands, which impose criminal
sanctions on the disclosure of bank information related to
clients and restrict US bank oversight. These secrecy laws are so
tight that they even restrict internal bank oversight.
3.5. Culture of Lax Controls
In addition to secrecy, private banking operates in a corporate
culture that is at times indifferent or resistant to anti-money
laundering controls, such as due diligence requirements and
account monitoring. The problem begins with the private banker,
who, in most private banks, is responsible for the initial
enforcement of anti-money laundering controls. It is the private
banker who is charged with researching the background of
prospective clients, and it is the private banker who is asked in
the first instance to monitor existing accounts for suspicious
activity. But it is also the job of the private banker to open
accounts and expand client deposits. According to John Reed,
co-chairman of Citigroup (with 30 years of banking experience),
private bankers tend to become advocates for their clients and
lose the detachment needed to monitor their transactions. He also
observed that private bankers often don't have the temperament or
discipline needed to ask clients detailed questions about their
funds and transactions and to record the information provided in
proper form.
The fundamental problem is that private bankers are being asked
to fill contradictory roles: to develop a personal relationship
with a client and increase the client's deposits with the bank,
while also monitoring the client's accounts for suspicious
activity and questioning specific transactions. Human nature
makes these contradictory roles difficult to perform, and
anti-money laundering duties often suffer.
Private banks have dealt with this problem by setting up systems
to ensure that private banker activities are reviewed by third
parties, such as supervisors, compliance personnel or auditors.
The subcommittee staff investigation has found, however, that
while strong oversight procedures exist on paper, in practice
private bank oversight is often absent, weak or ignored.
3.6. Cutthroat Competition
Another factor creating money laundering concerns is the ongoing
competition among private banks for clients, due to the
profitability of the business. A Federal Reserve report on
private banking states: "As the target market for private banking
is growing, so is the level of competition among institutions
that provide private banking services" [FRB, 1997]. Private banks
interviewed by the subcommittee staff confirmed that the markets
remain highly competitive; most also reported plans to expand
operations. The dual pressures of competition and expansion are
disincentives for private banks to impose tough anti-money
laundering controls that may discourage new business or cause
existing clients to move to other institutions.