Human Performance Engineering: A Practical Approach to
Application of Human Factors
P.M. Haas, Ph.D.
Concord Associates, Inc.
Knoxville, TN
NPRA National Safety Conference
April 28-30, 1999
Dallas, Texas

ABSTRACT
We cannot “engineer” humans, but we can engineer human performance. To assure
maximum safety and effective operations, we must apply the same degree of rigor
and systematic analysis to engineering human performance as we do to engineering
process and equipment performance. The field of Human Factors offers principles,
methods and data for designing the “human-machine interface” to improve the
safety, effectiveness, efficiency, and ease of use of engineered systems by human
operators. Human Factors is primarily a design discipline, and is most effective
when applied as part of a total system engineering design effort for a new system.
Unfortunately, design and development of totally new industrial facilities within the
U.S. has been very limited for a number of years while demands for enhanced
safety and performance at existing facilities have increased dramatically. The
petrochemical, refining, and other industries require from Human Factors specialists
a practical, systematic approach to improving human performance in existing
facilities without extensive system re-design. This paper presents an integrated,
practical approach, referred to as “Human Performance Engineering” (HPE) for
doing just that – improving human performance, and thereby improving overall
system performance, in an operational facility. It outlines a practical guide for
engineering the “human side” of the system (i.e., for building the underlying
structures and management systems that will produce and support safe and effective
human performance). Implementation of a comprehensive HPE program will
dramatically reduce human error and enhance process safety. It also will help to
establish and support a culture of continual improvement in human performance, a
culture of “operational excellence”.
SYSTEMS ENGINEERING
Human Factors as a discipline has been most effective when practiced as an integral part
of the systems engineering design process. From the systems engineering perspective,
the system (Figure 1) includes not only equipment or hardware, but also software, people,
administrative controls, procedures, job aids, facilities, and support organizations - all
immersed in and influenced by organizational culture and the social/technical
environment surrounding the system. The internal and external customers drive the
system mission and performance specifications. The essence of the systems view is that
all of the parts of the system need to be designed and utilized in a way that leads to
successful performance of the total system. The systems engineering process starts in the
early conceptual stages of design by specifying the system mission, goals and overall
performance requirements. It continues with the definition and allocation of functions,
trade-off studies, and design analysis, or synthesis, to obtain and document specific
performance requirements for each element of the system, including the human element.
It also specifies the design of interfaces and interrelationships among the different
elements (e.g., the human-machine interface). Systems engineering is a process for
optimization to the top-level goals and performance requirements. These top-level
requirements include safety, reliability, operability, maintainability, environmental
responsibility, and other dimensions or system characteristics.

Figure 1. The “Socio-Technical” System – hardware, software, administrative controls, procedures, and job aids, together with the personnel subsystem (job/organization design; selection and staffing; training and qualification; supervision; motivation and attitudes; individual/team performance), all supported by facilities and support organizations and immersed in the corporate/organizational culture and the social and physical environment.

Concord Associates, Inc.

Page 2 of 22
It is our belief that applications of systems engineering concepts and approaches in future
design of petrochemical facilities could dramatically improve safety and effectiveness of
operations. However, we recognize that very few totally new systems are being designed
and built in the U.S. at this time. The immediate concern is what we can do to improve
performance of existing systems. We believe we can apply systems engineering
principles to engineer (re-engineer) the performance of existing systems. In our company
we refer to this as systems performance engineering (Figure 2).

Figure 2. Systems Performance Engineering – an iterative loop: define goals, objectives, and requirements; develop and validate measures; measure performance; if performance is not acceptable (“OK?”), develop and implement modifications and measure the effectiveness of the modifications, repeating until performance is acceptable.
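The loop in Figure 2 can be sketched in code. This is purely illustrative (the function and its arguments are hypothetical, not a Concord tool), and assumes performance can be reduced to a measurable quantity compared against a goal:

```python
# Illustrative sketch of the Figure 2 systems performance engineering loop:
# define goals, develop and validate measures, measure performance, and
# develop/implement modifications until performance meets the goals.

def performance_engineering_loop(goal, measure, modify, max_cycles=10):
    """Iterate measure -> compare ("OK?") -> modify until the goal is met."""
    performance = measure()
    for _ in range(max_cycles):
        if performance >= goal:      # the "OK?" decision in Figure 2
            break
        modify()                     # develop and implement modifications
        performance = measure()      # measure effectiveness of modifications
    return performance
```

In practice `measure` and `modify` stand in for whole programs of work (performance measurement, root cause analysis, procedure upgrades, training changes); the point of the sketch is only the closed-loop structure.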

HUMAN PERFORMANCE ENGINEERING
Further, we believe that the most fruitful area for improving the performance of existing
systems is human performance. We refer to the application of system performance
engineering to the human element of the system as human performance
engineering. The basic technical discipline in human performance engineering is, of
course, human factors, or ergonomics, which is a specialty focused on design of the
“human-machine interface”. However, the term “human performance engineering” is
intended to convey a considerably broader perspective and includes areas such as training
system design and administrative controls (conduct of operations) that typically are
viewed as human resource or management issues. Human performance engineering
addresses all factors that influence human performance. The term also is intended to
emphasize the notion that we can engineer – apply scientific principles to plan, design,
construct, and manage – human performance. Human performance engineering offers a
systematic and comprehensive framework for improving human performance and thereby
improving overall system performance. While the activities comprising human
performance engineering can be organized in different ways, a comprehensive program
will address at least the following six areas, which are discussed in the subsequent
paragraphs:
1. Human Error Assessment - “prospective” analysis for process hazards analysis
and “retrospective” analysis for incident investigation
2. Conduct of Operations - disciplined professionalism
3. Personnel Subsystems - selection, training and qualifications
4. Procedures - design for effective use
5. Ergonomics - human-machine-interface design
6. Measurement and Feedback - continuous measurement of human,
organizational, and management system performance; learning from successes
and failures; and feedback to all personnel.

AREA 1: HUMAN ERROR ASSESSMENT
Human error assessment is used by safety analysts and engineers in two ways: 1)
prospectively to identify the likelihood of human errors initiating, exacerbating, or
mitigating accident events; and 2) retrospectively to identify human error-related causes
of events. In the past, human error assessment in either mode often has been rather
superficial. Many PHAs claim to have included “operator action” but in fact primarily
consider operator action as mitigating factors, and often with a 100% likelihood of
success. Many root cause analyses have claimed to address human error, but in effect dig
no deeper into causal factors than merely identifying “human error” as the root cause.
Quantitative vs. Qualitative HRA
One of the reasons for past reluctance to employ human reliability analysis (HRA)
techniques may be that most of the techniques have been developed and presented in the
context of quantitative risk assessment (QRA). Table 1 lists some of the better known
HRA techniques that have been used in the U.S.

Table 1. Human Reliability Analysis Techniques

Technique                                      Primary Applications
OHPRA (Concord Associates, Inc.)               Petrochemical industry
THERP (Sandia Lab)                             Nuclear power, weapons assembly
SLIM, SLIM-MAUD (Brookhaven Lab)               Nuclear power operations
Simulation Models (e.g., SAINT, Siegel,        Military systems design
  HOS, MicroSaint)
OAT, OAET (NRC, INEL)                          Nuclear power operations
HEART (British Gov’t)                          Industrial, nuclear
MAPPS (NRC, Oak Ridge Nat’l Lab)               Nuclear power maintenance
ATHEANA (NRC)                                  Nuclear power operations

Originating in the military and aerospace community and most widely and recently used by the nuclear power
community, QRA methods are often perceived as being very detailed, intensive and
expensive. Quantitative HRA methods associated with QRA may be viewed to some
extent in the same vein – too detailed, too expensive, and lacking a sufficient “data base”
to warrant the complexity and cost.
In fact, qualitative HRA can be very powerful, practical, and cost-effective for both
prospective analyses, such as PHA and retrospective analyses, such as root cause
investigation. Subjective, “comparative” estimates of likelihood of human error are
relatively straightforward to obtain. Experts can make judgements that are meaningful
and precise enough to identify problems and make decisions regarding necessary
improvements, without performing a detailed quantitative analysis. There are a number
of techniques, including the qualitative portions of some of the quantitative approaches
included in Table 1 above, that can be used very effectively to identify important
“error-likely situations” without extensive quantitative analysis. One technique, developed by
Concord as a qualitative tool specifically for process industries, is the Operational Human
Performance Reliability Analysis (OHPRA) method [1].
OHPRA – A Practical Human Performance Model for Petrochemical Operations
The conceptual model for the OHPRA operations module [2] is illustrated in Figure 3 (there
is a similar one for maintenance activities). It assumes that operator tasks can be treated
as consisting of a “cognitive” portion, in which the operator correctly or incorrectly
diagnoses the situation and chooses the goal(s) and action(s) to be taken; and an “action”
portion, in which the chosen actions are executed. Success on a task requires success on
both portions. Likelihood of success on the cognitive portion considers cues and
information available to the operator for diagnosis and decision making, knowledge and
abilities level, and stresses such as time demands. The model assumes that given: 1) an
adequate cue, 2) appropriate operator knowledge and abilities, and 3) a manageable level
of stress, the operator will choose the correct action. If the operator does not choose the
correct action, the entire task is failed.
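The success logic above can be sketched in code. The qualitative levels (“zero”/“low”/“medium”/“high”) are those used in the OHPRA rating process, but the functions and their signatures are an illustrative reading of the model, not Concord's implementation:

```python
# Qualitative sketch of the OHPRA success logic: a task succeeds only if both
# the cognitive portion and the action portion succeed, so the task's overall
# likelihood of success is no better than that of the weaker portion.

LEVELS = ["zero", "low", "medium", "high"]   # qualitative success likelihoods

def cognitive_likelihood(adequate_cue, adequate_ksa, manageable_stress):
    """Model assumption: given an adequate cue, appropriate knowledge and
    abilities, and manageable stress, the operator chooses the correct
    action; otherwise the entire task is failed."""
    return "high" if (adequate_cue and adequate_ksa and manageable_stress) else "zero"

def task_likelihood(cognitive, action):
    """Success requires success on both portions: take the weaker rating."""
    return LEVELS[min(LEVELS.index(cognitive), LEVELS.index(action))]
```

For example, a task whose cognitive portion rates “high” but whose action portion rates only “medium” (say, marginal equipment operability) rates “medium” overall.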
The four factors considered under “opportunity to perform” are as follows:

Access to operating area
• Covers elements such as habitability, people resources, actions that place the operator at risk, etc.
• The ability of the operator to gain access to the needed equipment, controls, information, etc., to perform the manual actions called for, including the personnel safety hazards that may be involved.

Equipment availability/operability
• Deals with systems’ and components’ ability to function or be utilized by the operator.
• The operability of the necessary equipment, controls, information systems, special tools, etc., needed to support the operator’s manual actions.

Figure 3. The OHPRA Operations Model – the unit operator’s task comprises a cognitive portion (whether the correct action is chosen, given the cue to action, knowledge and abilities, system information available, time to act, and stress) and an action portion (the “opportunity to perform”, determined by access to the operating area, equipment operability/availability, system information available, and time to act); failure on either portion fails the task, and success requires success on both.
Time to act
• Deals with measured or predicted time envelopes to complete specific actions, external influences that modify performance times, etc.
• The window of time that bounds the duration available for the operator to perform actions.

System information available
• Deals with instrumentation, status monitoring, information reporting, etc., that provide the information the operator needs to make decisions and confirm the status of actions taken.
• Operator inputs, including cues to action and system information feedback, which enable the operator to process what, when, and how actions are to be taken. Information includes process instrumentation, procedural guidance, visual observations, shift crew communications, etc.
Note that there is a feedback loop from the action phase back to the cognitive phase that
is dependent on information feedback from the system to the operator when actions are
taken. Part of the assessment is to evaluate the quality of operator feedback from alarm
and display systems, etc.
A structured subjective evaluation process, with detailed questions, guides the analyst
through consideration of each of the factors and “sub-factors” to arrive at an estimate of
“zero”, “low”, “medium” or “high” as the likelihood of success. Management can then
set its own “criterion level” for the level of success likelihood that warrants action.
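The criterion-level screen can be sketched as follows; the factor names come from the model above, but the code itself is an illustrative sketch, not part of the OHPRA guides:

```python
# Sketch of the criterion-level screen: management sets a minimum acceptable
# success likelihood, and any factor assessed below that criterion is flagged
# as warranting action.

RANK = {"zero": 0, "low": 1, "medium": 2, "high": 3}

def flag_factors(ratings, criterion="medium"):
    """Return the factors whose assessed likelihood falls below the criterion."""
    return [factor for factor, level in ratings.items()
            if RANK[level] < RANK[criterion]]

ratings = {"access to operating area": "high",
           "equipment operability": "low",
           "system information available": "medium",
           "time to act": "zero"}
# At the default criterion, "equipment operability" and "time to act" are flagged.
```

Raising the criterion to “high” widens the net, which is the intended management lever: the same assessment supports either a compliance-level screen or a stricter operational-excellence screen.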

The OHPRA approach makes use of the judgement and experience of subject matter
experts (SMEs), in particular the operations and maintenance personnel in the facility
being assessed who have in-depth knowledge of that facility’s processes and systems.
The OHPRA analyst is a facilitator who has both operations/maintenance experience and
an understanding of human reliability analysis. The conceptual model described above
and the associated detailed guides are used not as a rigid “checklist”, but as an aid for
both the facilitator and the SMEs. Interviews and in-plant walk-downs are conducted
both to provide context and to validate expert input. Additional supporting information is
obtained from multiple sources, e.g., results from process hazards analysis; unit operating,
startup, shutdown, and safing procedures; maintenance procedures; incident reports;
unit-specific and general employee training materials; unit process diagrams; operations
and maintenance departmental directives; and federal and state regulations (OSHA, EPA,
etc.).
The OHPRA analysis is focused on those individuals who can come in direct physical
contact with the unit processes, namely operators and maintenance personnel. Other
departments or services, such as management, engineering, planning and scheduling,
warehouse, industrial hygiene, safety, quality assurance, etc., are not excluded from
consideration; their contribution to risk is assessed as an influence on the operations or
maintenance worker.
OHPRA and the qualitative portions of similar HRA methods can be used effectively by
knowledgeable subject matter experts with a modest amount of training. Such
approaches provide a systematic framework for assessment of human error and the
factors influencing human performance. They can provide meaningful assessment of the
likelihood of error and can suggest areas of improvement for reducing that likelihood
without the time and expense of detailed quantitative analysis.
Human Error Assessment in Root Cause Analysis
Most companies now routinely employ a structured root cause analysis process to
identify causes of significant incidents. The methods may be as “simple” as “why trees”,
but typically involve some sort of logic tree structure combined with well-established
techniques such as “effects and causal factors analysis”, “barrier analysis”, and “change
analysis”. Many companies also are incorporating techniques or additional modules in
their root cause analysis protocols to dig deeper and ferret out the underlying systemic
factors that lead to significant human error (i.e., factors that create conditions in which
human error is more likely and/or more consequential). The OHPRA model is one such
human error assessment technique that can be very powerful, particularly when the event
and the human actions involved are complex and causal factors are difficult to identify.
The choice of approaches is, in our view, not as important as the skills and experience of
the analysts, nor as important as the management commitment and culture that support
root cause analysis, critical self-evaluation and peer evaluation, and open communication
in a non-punitive environment.

AREA 2: CONDUCT OF OPERATIONS
Intelligence, flexibility, and adaptability are strengths of human beings. They allow us to
respond to new situations, evaluate alternatives, adjust to adverse conditions, make
judgments with less-than-complete information, and to perform tasks that machines, even
computers, cannot do very well. However, these same qualities can lead to a high degree
of variability in human performance in process systems. And, that variability can be a
significant cause of inefficiencies and errors. Inconsistency in performance from facility-to-facility, day-to-day, shift-to-shift and person-to-person tends to increase the likelihood
for error. Thus there is a tradeoff between establishing formal, highly structured controls
on human performance and allowing humans the flexibility to do what they do best –
think. We want intelligent and qualified operators to run the plant responsibly and
responsively. We don’t want robots for operators, but we also don’t want completely
“seat-of-the-pants flying”.
A good Conduct of Operations (ConOps) program can be viewed (Figure 4) as a control
system that appropriately tightens the boundaries on allowed human performance. It
permits appropriate variability for practical operations, while maintaining performance
within a desired safety envelope. A good ConOps program also encourages and supports
a culture of self-discipline and professionalism, which is the core of a safety culture.

Figure 4. Conduct of Operations Policies Act to Control Human Variability – nested bands of performance around the optimum: operational excellence, bounded by minimum performance (compliance), bounded by the safety boundary, with the design limit beyond.
Formal documentation of a ConOps policy, together with training on ConOps requirements, is a
powerful management tool. Documentation should include requirements and expectations
for safe and effective performance in both routine, day-to-day operations and
emergencies. Examples of items to be included are:

• compliance with written procedures
• use of “in-hand” or “reference” procedures
• modifying procedures
• exchange of information at shift turnover
• use of “closed-loop” communications
• required reading
• on-shift training
• control room/area activities
• control of equipment and system status
• inhibiting safety systems
• completing operating logs
• emergency response drills
• equipment and piping labeling.

AREA 3: PERSONNEL SUBSYSTEMS
The foundation for excellence in human performance is the underlying knowledge, skills,
abilities and attitudes of the work force. In order to apply systems performance
engineering to the area of personnel qualifications we first must recognize that “personnel
subsystems” are an integral part of the total system. Further, we need to understand that
requirements for the human performance necessary to meet system performance
requirements can be identified, made explicit, engineered, measured, and demonstrated,
just as can hardware or software requirements. Performance-based design of the
personnel subsystem is a key to excellence in operations.
The ISD Process
The general framework that has proven most effective for performance-based design of
training and qualifications programs is the Instructional Systems Design (ISD) process
first publicized in this country by the U.S. Air Force [3]. It has been referred to as the
Systems (or Systematic) Approach to Training (SAT) and other similar names, but all of
these approaches fundamentally are representations of the ISD process. The process
typically is described at the highest level as consisting of five interrelated steps or phases
(Figure 5). In practice, the activities in these phases are continuous and iterative.
The analysis phase identifies and documents the performance required and the
underlying human capabilities – knowledge, skills and abilities (KSAs) – necessary to
accomplish the functions that have been allocated to humans. The primary activity in the
analysis phase, and a critical activity for the entire human performance engineering
program, is Job/Task Analysis (JTA). JTA (in varying forms) provides the fundamental
information base that supports job design, training and qualification, procedures, human-machine interface, human error assessment, and human-performance measurement. JTA
basically identifies who does what and how well they need to do it. It identifies the
specific tasks that individuals must perform to accomplish assigned functions under the
complete range of system states (startup, normal operations, abnormal/emergency
operations, shutdown). JTA also provides information on the key factors that influence
human performance – the system information required by the operator, the environment
and conditions of operation, necessary tools/aids, etc. JTA provides the information
needed to determine the required KSAs and the criteria, or measures, of performance.

Figure 5. The Instructional Systems Design (ISD) Process (a cycle of five interrelated phases: Analysis, Design, Development, Implementation, and Evaluation)
Training is then designed to assure that the necessary KSAs, the necessary performance
capability, have been attained and are maintained. Thus training is focused on the
performance required to do the job as specified, and only on that performance.
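The kind of record a JTA produces can be sketched as a simple data structure. The field names and the example task are illustrative assumptions, not a standard JTA schema:

```python
# Hypothetical sketch of a Job/Task Analysis record: who does what, under
# which system state, how well it must be done, and the factors influencing
# performance (information required, tools/aids).

from dataclasses import dataclass, field

@dataclass
class TaskRecord:
    task: str                                   # what is done
    performer: str                              # who does it
    system_state: str                           # startup, normal, abnormal/emergency, shutdown
    ksas: list = field(default_factory=list)    # knowledge, skills, abilities required
    criteria: list = field(default_factory=list)              # how well it must be done
    information_required: list = field(default_factory=list)  # inputs the operator needs
    tools_aids: list = field(default_factory=list)

record = TaskRecord(
    task="Isolate feed pump for maintenance",
    performer="unit operator",
    system_state="shutdown",
    ksas=["pump isolation sequence", "lockout/tagout practice"],
    criteria=["zero energy verified before work begins"],
    information_required=["discharge pressure indication"],
    tools_aids=["lockout kit"],
)
```

A collection of such records, covering the full range of system states, is the information base that the later ISD phases, procedure design, and human error assessment all draw on.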
Major activities in the design phase of ISD include: (1) assessing the “entry-level”
capabilities of the trainee population, identifying the gap between entry-level and desired
level, and identifying where training is the preferred solution for closing that gap (rather
than, say, increasing entry-level requirements); (2) determining the best “setting” for
training (e.g., simulator, classroom or on-the-job); (3) developing terminal learning
objectives; (4) developing the training/evaluation standards that assure linkage of the
training to the objectives (this involves determining the enabling objectives that support
the terminal objective, performance tests or measures, KSAs being tested in each task
element, conditions and standards for the test, scoring methods, etc.); and, (5)
documenting the training plan for the specific training program being designed.
The development phase is where the training content is developed/assembled and the
training materials are produced. Development involves refining the media selection,
developing student and instructor lesson plans, developing lesson materials and training
support materials, developing test items, and conducting training tryouts, or “pilot
courses”.
The implementation phase is the actual delivery of the training, including testing and in-training evaluation. Activities include pre-testing trainees, instructor preparation,
delivering the lessons, evaluating trainee performance, collecting training effectiveness
evaluation data, and documenting completed training.

Evaluation is actually a continuous and iterative process throughout the entire training
process. The goal is to continually measure and improve the effectiveness of the entire
training process. It involves identification of appropriate measures, collecting
information, analyzing information to identify systemic problems, and implementing
corrective actions. Indicators of training effectiveness come from multiple sources at
different levels: trainee test performance, test statistics and trends, trainee evaluations,
instructor evaluations, post-training surveys, supervisor feedback, on-the-job performance
observations, incident reports, and other sources of information.

AREA 4: PROCEDURES
Why do we write procedures? How do we expect procedures to be used? Procedures are
an operator aid. They are designed to help qualified, well-trained operators perform
required tasks correctly and efficiently. Procedures are intended to be used on the job.
NOTE: Procedures are an operator aid, NOT a lesson plan.
The purpose of training is to bring the individual’s knowledge, skills, abilities and
attitudes from their “entry-level” state to the level required to meet overall system
performance requirements. Training should provide the “why” of the procedure: the
process knowledge, understanding of system response, principles of equipment design
and operation, safety implications of operator actions, and the many other basic elements
of knowledge that are essential to an effective operator.
most certainly also should provide the “how” of procedures – practice on exactly what
steps are to be carried out to perform specific tasks in the optimum manner. Thus
training on procedures is absolutely essential. And, using the actual procedure as part of
training is important.
However, procedures are NOT primarily training material. Indeed, properly designed
procedures usually make poor training materials, and well-designed training materials do
not make good procedures.
All operations need to be in conformance with the approved procedure. In some cases,
where tasks involve many steps or high complexity, where errors have significant
consequence, etc., every operator should be required to have the procedure “in-hand”,
and in some cases step-by-step check-off or sign-off should be required. In other cases,
where experienced operators perform the task regularly and the complexity and
consequences are not as high, reference procedures are appropriate. These can be used as
required or optional reading prior to performing an infrequently performed task.
We recommend using a “risk-based” assessment to categorize procedures according to
their required “level of usage”. Such an evaluation process should include at least the
usual “Criticality/Frequency/Difficulty” ratings of tasks that are commonly performed as
part of the training needs assessment. What are the consequences of an error, what is the
frequency with which this task is performed, and how complex is the task? Figure 6
illustrates a spreadsheet tool we have used to classify procedures. In this example there
are only three “levels” of procedures: (1) “critical”, which are to be used “in-hand” at all
times, perhaps with some or all portions requiring check-off; (2) “reference” to be
referred to prior to completing the task and used in hand as necessary; or (3) no written
procedure is required.
FREQUENCY OF USE
Frequency          Point Value
Daily              3
Weekly             2
Monthly            1
Semi-Annually      2
Annually           3
Frequency Score = Point Value x Weighting Factor (0.8)

TASK COMPLEXITY (rate each descriptor on a 1-3 point scale)
Information Access      (How difficult is it to get information to perform the task?)
Mental Loading          (How great is the “mental workload” to complete the task?)
Physical Loading        (How complex are the physical actions required?)
Communication Loading   (How complex and demanding are communication requirements?)
Stress                  (How great is the psychological stress due to time demands,
                        abnormal conditions, or exposure to hazardous conditions?)
Complexity Score = Total of the five ratings ÷ 5

CONSEQUENCE
Descriptor   Examples                                                                     Point Scale
None         (No Safety, Health, Quality or Environmental Concerns)                       0
Minor        (First Aid, Minor Environmental Impact, Decrease in Product Quality, etc.)   2
Moderate     (Loggable Injury, Some Environmental Damage, etc.)                           4
Major        (Lost Time Injury, Moderate Equipment Damage, Major Drop in Quality, etc.)   6
Extreme      (Fatality, Significant Equipment Damage, Severe Environmental Damage, etc.)  8

TOTAL SCORE = Frequency Score + Complexity Score + Consequence Score

PROCEDURE CLASSIFICATION
IF SCORE IS:    THEN CLASSIFY AS:
0 to 5          NO PROCEDURE
>5 to 8         REFERENCE
>8              CRITICAL

Figure 6. Illustration of a Risk-Based Assessment Tool to Categorize Procedure Usage
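The scoring logic in Figure 6 can be sketched in code. This is an illustrative reading of the spreadsheet, with one assumption made explicit: the total score is taken to be the sum of the frequency, complexity, and consequence scores.

```python
# Sketch of the Figure 6 risk-based procedure classification.
# Assumption: the total score is the sum of the three component scores.

FREQUENCY_POINTS = {"daily": 3, "weekly": 2, "monthly": 1,
                    "semi-annually": 2, "annually": 3}
FREQUENCY_WEIGHT = 0.8
CONSEQUENCE_POINTS = {"none": 0, "minor": 2, "moderate": 4,
                      "major": 6, "extreme": 8}

def classify_procedure(frequency, complexity_ratings, consequence):
    """Return (total_score, classification) for a task.

    complexity_ratings: the five 1-3 ratings (information access, mental
    loading, physical loading, communication loading, stress).
    """
    freq_score = FREQUENCY_POINTS[frequency] * FREQUENCY_WEIGHT
    complexity_score = sum(complexity_ratings) / 5   # average of the five ratings
    total = freq_score + complexity_score + CONSEQUENCE_POINTS[consequence]
    if total <= 5:
        level = "NO PROCEDURE"
    elif total <= 8:
        level = "REFERENCE"
    else:
        level = "CRITICAL"
    return total, level

# Example: a monthly, moderately complex task with major consequences.
score, level = classify_procedure("monthly", [2, 2, 1, 2, 3], "major")
```

Note how the weighting pulls in both directions: very frequent tasks score high because errors have many opportunities to occur, and very infrequent tasks score high because skills decay, which matches the symmetric point values in the frequency table.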
Human Factors Guidelines for Procedure Writing
If we demand procedural compliance, we must design procedures that give clear, precise
directions, that are easy to understand, and easy to use. Human factors guidelines for
procedure writing have captured the knowledge from cognitive science, information
technology, learning, and related fields as well as actual experience on errors committed
in following procedures. They provide practical guidance on how to present operators
with the necessary information in the most usable format. Guidelines for formatting, for
place-keeping, for caution and warning statements, for action steps, etc., are available [4] to
help write procedures that reduce the likelihood for errors in executing tasks under even
the most severe conditions. Application of these guidelines can dramatically reduce the
frequency of errors of omission and errors of commission in executing procedures.
We encourage clients to involve a broad, representative spectrum of operators with human
factors specialists to develop a procedure writers’ guide that standardizes procedure
writing practice throughout the facility and even throughout the corporation.
Standardization helps to make the procedure writing and management of change process
more cost effective, and helps to minimize problems when people are transferred from
one unit or site to another.
Verification and Validation
Procedure verification has to do with the technical accuracy of the procedures. It requires
input from personnel in engineering, safety management, or other areas who understand
process design, control systems, plant response to off-normal conditions and accidents,
etc. Most facilities provide some sort of engineering or technical review of procedures
for purposes of verification. Fewer facilities have systematic programs to validate
procedures.
Validation has to do with clarity, understandability, and usability, which depend on the
user and on the context and conditions under which the procedure is used. It is risky to have
procedures designed by engineers and used by operators. In general, operators and
engineers understand and visualize systems differently, have different purposes in mind,
have different levels of formal education, different affinities for abstract reasoning,
different cultures, different language, etc. And, operators under actual operating plant
conditions, especially abnormal conditions, are under a much higher level of stress or
system demands than is an engineer at a desk. The process of validation by “simulation”
(walk down, in the plant, with the least qualified operator authorized to use the procedure
independently, and with as near to actual conditions, time constraints, etc. as possible)
can dramatically reduce the likelihood for errors in procedures. Plants that routinely
conduct thorough validations rarely find a procedure that is “perfect” as written “at the
desk” and cannot be improved by validation.
Implementation of a systematic, user-centered process for design, development,
implementation, and continual evaluation will produce procedures that are consistent in
structure, easily understood, and easily followed by knowledgeable operators.
Technically accurate, up-to-date procedures that are human engineered for effective use
are one of the primary defenses against operational errors. Well-designed procedures that
are used and do not simply sit on the shelf WILL reduce human errors and enhance
process safety.

AREA 5: ERGONOMICS
Human factors specialists promote the viewpoint that machines are designed to help
humans accomplish certain goals and therefore should be designed to match human
capabilities and limitations, behavioral tendencies, and preferences. Figure 7 illustrates
the issues addressed by human factors engineers in the systems engineering design
process, from top-level system/mission goals, through function allocation and analysis,
job/organizational design, personnel subsystem design, and human interface design.
Practical Applications of Human Factors for Process Safety Improvement
The field of Human Factors draws from a broad spectrum of disciplines and scientific and
engineering specialty areas. Perception, cognition, learning, memory, motivation,
anthropometry, and speech communication are some of the required subspecialties in
psychology and physiology. Engineering expertise in design of visual, auditory, and
tactile displays, controls, biomechanics, noise, illumination, vibration, and other areas
also is required. Very often, the expertise in psychology and engineering is shared
among team members, and human factors specialists usually work best in a team
environment providing input and feedback to engineering designers.

[Figure 7 pairs each stage of the design process with the human factors issues
addressed there:
• System: mission success, system performance requirements, user needs/satisfaction
• Functions: requirements analysis; allocation to human, hardware, software, facilities
• Tasks/jobs: task requirements, conditions, demands on humans (time, accuracy,
precision, perception, vigilance)
• Personnel: capabilities and limitations (knowledge, skills, abilities, attitudes;
physiological, psychological, cognitive, social)
• Interface: facilities, workspace, controls, displays, habitability, accessibility,
protective equipment]
Figure 7. Human Factors Activities in Systems Engineering Design
The plant manager or process safety manager committed to improving safety and plant
performance by improving human performance rarely has this kind of specialized
expertise readily available at the plant site, and only in relatively few corporations is it
accessible from corporate headquarters. Fortunately, much of the accumulated
knowledge and experience from application of human factors (or failure to apply human
factors) in design has been codified by human factors specialists in design guidelines, and
further synthesized into checklists and evaluation guides5,6,7 that address each of the basic
areas of human factors design, such as:
• Workspace layout
• Controls and display design
• Alarms and annunciators
• Environmental factors such as noise, illumination, temperature
• Communication systems
• Equipment and piping labeling and color coding
• Protective clothing
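As an illustration of how such checklists can be applied in-plant, the sketch below records ratings against checklist items and groups the less-than-adequate findings by area for the evaluation report. The items, wording, and findings are invented for illustration, not taken from the cited guides.

```python
# Illustrative use of a human factors design checklist in-plant.
# Checklist items and ratings are hypothetical.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    area: str          # e.g. "Labeling", "Alarms and annunciators"
    criterion: str     # the good-practice statement being checked
    adequate: bool
    note: str = ""

def summarize(findings):
    """Group less-than-adequate findings by area for the evaluation report."""
    issues = {}
    for item in findings:
        if not item.adequate:
            issues.setdefault(item.area, []).append(item.criterion)
    return issues

findings = [
    ChecklistItem("Labeling", "Valve labels match P&ID identifiers", False,
                  "Three field valves unlabeled"),
    ChecklistItem("Labeling", "Color coding consistent across units", True),
    ChecklistItem("Alarms", "Alarm priority is visually distinct", False),
]
report = summarize(findings)
# report maps each area to its potential error-inducing conditions
```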

Since most human-equipment interfaces in modern technology involve a computer, a
major area of human factors engineering has evolved into its own specialty field,
Human-Computer Interaction, with its own university courses, professional
organizations, cross-disciplinary specialists, and specific design guidelines8,9.
These human factors and human-computer-interaction guidelines and checklists do not
provide the expertise of an experienced human factors design engineer, but they do
provide a practical way to compare existing design with good design practice. With a
modest amount of human factors training and practice, individuals knowledgeable of the
system can use them effectively to at least identify less-than-adequate human-machine
design and potential error-inducing conditions. Often, modifications for
problem. In other cases, it will be necessary to bring in the deep expertise of a skilled
human factors engineer. But, even in those cases, solutions will be facilitated by having
in-plant personnel familiar with human factors principles and guidelines who can serve as
an interface providing process systems knowledge to the human factors expert and
practical feedback to plant personnel. Small investments in human factors training and
in-plant practice can pay great dividends in error-reduction, process safety, personnel
safety, and overall operational performance.

AREA 6: MEASUREMENT AND FEEDBACK
In the early to mid-1990s, Peter Senge and other authors10,11,12 began to emphasize the
critical need for “systems thinking” in management and organizations, and the ideal of a
“learning organization”. A key characteristic of a learning organization is a culture that
supports and thrives on learning from experience – from successes and from failures.
Certainly such a culture is key to employing a systems approach to reducing human error
and improving process safety. Incident investigation and root-cause analysis, discussed
earlier, are the central pillars of a system for experiential learning. However, we
encourage clients to develop a comprehensive program of performance measurement,
causal analysis, and retrospective analysis of multiple sources of operational information.
Some examples of additional sources of operational experience information are:
• Near-miss reports, self reporting of corrected errors, successful interventions
• Narrative and operating log reviews
• Maintenance requests, trends, equipment histories, equipment failure reports
• Process hazards analysis, risk assessments, safety reviews
• Internal and external evaluations and audits
• Training evaluations and feedback
• Emergency drills
• Review of applicable events at other sites, other companies, other industries
• “Behavioral” safety programs in which safety behaviors are systematically
sampled
• Safety meetings, quality meetings, and other open discussion forums
• Systematic involvement of senior operations staff to capture experience
• Performance measures.
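The sources listed above can be pooled into a single structure for trend analysis. The sketch below is a minimal illustration; the record fields, source names, and causal-factor labels are hypothetical.

```python
# Illustrative pooling of operational-experience records from multiple
# sources for trend analysis. Data and field names are hypothetical.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ExperienceRecord:
    source: str         # e.g. "near-miss report", "audit finding"
    date: str
    causal_factor: str  # assigned during causal analysis

def top_causal_factors(records, n=3):
    """Rank causal factors across all sources, an input to lessons learned."""
    return Counter(r.causal_factor for r in records).most_common(n)

records = [
    ExperienceRecord("near-miss report", "1999-01-12", "procedure not followed"),
    ExperienceRecord("audit finding", "1999-02-03", "inadequate labeling"),
    ExperienceRecord("incident report", "1999-02-20", "procedure not followed"),
]
trends = top_causal_factors(records)
```

The value of aggregation is exactly what the surrounding text argues: a factor that appears minor in any one source can emerge as a dominant trend across all of them.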

The quality of information available in these sources, of course, depends on time
available, training, and management support. A good information management system to
aid not only collection, but analysis, synthesis, and dissemination of information is
essential. Above all, success in the lessons learned program depends on maintaining a
culture that values learning, that rewards self-critical evaluation, and that fosters open,
punishment-free communication. Figure 8 illustrates the information flow in such a
comprehensive operational experience feedback program. Performance measurement is a
special area that bears further discussion.
[Figure 8 shows operational experience sources (audits, self-assessments, performance
measures, incident reports, near-miss reports, industry events, government reports,
emergency response drills, training evaluations, self-reports, and PHA findings)
feeding an analysis stage (root cause, performance trends, reliability analysis,
human error analysis, equipment failure analysis, training needs, and systems
analysis) whose output is lessons learned, all supported by an information
management system.]

Figure 8. Operational Performance Feedback System

Human Performance Measurement
No engineer or CEO would think of constructing a process plant without clear
performance specifications and comprehensive measurement of process variables. Yet
management systems and human performance systems in the past routinely have been
“engineered” with virtually no clear performance requirements or measures. The total
quality management movement of the 1980s has dramatically changed viewpoints on the
need for measures of management systems, and most companies have invested
substantial resources in development of performance measures for organizations and
processes. Managers today recognize the power of measures to not only assess
performance, but to lead people to the desired levels of performance. Most organizations
recognize that a “family of measures”13 covering multiple levels and involving multiple
types of measures – input measures, process measures, output measures, outcome
measures – is required to provide a complete picture of performance.
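A family of measures can be organized by the four measure types named above. The sketch below shows the structure only; the measure names and values are invented for illustration.

```python
# Illustrative "family of measures" spanning the four measure types.
# Measure names and values are hypothetical.
measures = {
    "input":   {"training hours per operator": 42},
    "process": {"procedures validated this quarter": 17},
    "output":  {"near-misses reported per month": 9},
    "outcome": {"recordable incidents year-to-date": 1},
}

def coverage(family):
    """Check that all four measure types are represented, since any one
    type alone gives an incomplete picture of performance."""
    required = {"input", "process", "output", "outcome"}
    return required <= set(family)
```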
Rummler and Brache14 discuss three levels of performance measurement and three areas
of performance for each level (see Figure 9). A comprehensive performance
measurement program will address goals, design, and management for organizations, for
processes, and for job/individuals. The American Institute for Chemical Engineers
Center for Chemical Process Safety (CCPS) program for development of measures for
Process Safety Management (PSM) 15 is an example of a state-of-the-art performance
measurement system that addresses multiple levels in the area of process safety
management.

[Figure 9 shows a 3 x 3 matrix: three levels of performance (Organization, Process,
Job/Individual) against three areas of performance (Goals, Design, Management),
giving Organization Goals, Organization Design, Organization Management, and so on
for each level.]

Figure 9. Multiple Levels of Measures

It is being constructed using the Authority Referenced Measures System
(ARMS) approach originally developed by Connelly16 to construct an integrated set of
“real-time” measures based on capturing and quantifying judgment of CCPS authorities
in process safety management. There are three basic steps to building a measure using
ARMS:
1. Extracting the authority’s knowledge and judgment to define a structure for the
measure (factors, subfactors, and indicators of importance to the performance),
2. Capturing the authority’s judgment and preferences by rating example
performances,
3. Synthesizing a mathematical measure that is capable of reproducing
performance ratings like the authority would make given the same examples of
performance.
Usually, a single authority is used to build a “strawman” measure that can then be
reviewed and modified by other authorities. The authorities’ judgment is compared
to existing structures and available data to further assure completeness and
consistency. For example, in the case of a job/individual measure, the JTA performed
as part of a formal training analysis would provide the basic data for job performance
measurement, and the authority’s subjective measure should be consistent with and
compatible with the JTA data base.
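The synthesis step (step 3 above) can be illustrated with a small least-squares fit. The weighted-sum form below is an assumption made for illustration; the paper does not specify the mathematical form ARMS uses, and the factor names, example scores, and authority ratings are hypothetical.

```python
# Sketch of synthesizing a measure from an authority's example ratings,
# assuming (not from the paper) a weighted-sum form. Data is hypothetical.

def fit_weights(examples, ratings):
    """Least-squares fit of factor weights so the weighted sum of factor
    scores reproduces the authority's overall ratings of the examples."""
    n = len(examples[0])
    # Normal equations: (A^T A) w = A^T r
    ata = [[sum(e[i] * e[j] for e in examples) for j in range(n)] for i in range(n)]
    atr = [sum(e[i] * r for e, r in zip(examples, ratings)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        pivot = max(range(col, n), key=lambda row: abs(ata[row][col]))
        ata[col], ata[pivot] = ata[pivot], ata[col]
        atr[col], atr[pivot] = atr[pivot], atr[col]
        for row in range(col + 1, n):
            f = ata[row][col] / ata[col][col]
            for j in range(col, n):
                ata[row][j] -= f * ata[col][j]
            atr[row] -= f * atr[col]
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (atr[i] - sum(ata[i][j] * w[j] for j in range(i + 1, n))) / ata[i][i]
    return w

def score(weights, factor_scores):
    """Apply the synthesized measure to a new performance."""
    return sum(w * s for w, s in zip(weights, factor_scores))

# Hypothetical authority ratings of four example performances on three
# factors (e.g., procedure use, communications, situation awareness).
examples = [[0.9, 0.8, 0.7], [0.5, 0.6, 0.4], [0.8, 0.9, 0.9], [0.3, 0.4, 0.5]]
ratings = [0.81, 0.50, 0.86, 0.39]
weights = fit_weights(examples, ratings)
```

Once fitted, the measure can rate new performances the authority never saw, which is what makes the subsequent peer review and validation of the strawman meaningful.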
The review of the strawman measure usually is conducted first in a small-group
environment and then with a broader audience, to ensure completeness and to identify
critical differences in rating strategies. For example, a performance measure built for
the Senior Operator position used one Senior Operator to build the strawman, three
Senior Operators to provide peer review and “validation”, review by supervisors and
operators to provide a perspective from individuals above and below the Senior
Operator in the organization, and review by Senior Operators at other facilities in the
corporation to assure the measure was applicable to all facilities.
After initial development, a trial field use is recommended to test both effectiveness
and practicality of the measure, and to further obtain “buy-in” from the population
being measured. After necessary adjustment, the measure is fully implemented with
continual review and evaluation to adjust the measure for changing conditions.
The same basic ARMS process is used to build measures of organizational effectiveness,
program/process effectiveness, and job/individual effectiveness. The result is an
integrated set of measures that provides continual feedback on human and organizational
effectiveness. The measurement process provides early feedback on developing
problems and serves as an ongoing means of communication about performance
throughout the organization.
The routine, continuous feedback from the performance measurement process, combined
with causal analysis from events and retrospective analysis of other sources provides a
comprehensive base of information for learning and continuous improvement in safety
and operational effectiveness.

IMPLEMENTATION OF THE HPE PROCESS
Figure 10 illustrates a typical sequence we have used to implement a human performance
engineering program in an operating facility. The initial step is an assessment of the
existing state of the “human systems”. A team with expertise in each of the six areas
summarized above works intensively with facility personnel at all levels to perform a
diagnostic evaluation of current human performance systems. The evaluation includes
review of program documentation, review of incidents, interviews with management and
staff at multiple levels, evaluation “checklists” and similar tools, in-plant walk-downs,
and independent observation of on-the-job performance. The assessment focuses on the
effectiveness of underlying management systems and processes that support human
performance, but also captures information on specific problems of implementation. The
evaluation typically requires a week to ten days at a process plant. Strengths and
weaknesses are assessed, and a summary report of identified issues with recommended
solutions is prepared.
Some of the specific “implementation” problems can be addressed immediately, often
with very limited cost. Often, there are structural or “systemic” problems that need to be
resolved before specific improvement actions can be expected to be successful. Training
development, procedures design and use, and other areas of human performance typically
have not been developed under as rigorous requirements and formal structure as has been
the hardware and process side of the system, though there has been substantial progress
in the past five years as PSM programs are maturing. Even at facilities with rather
sophisticated and mature PSM programs, we often find two areas that have not been
adequately addressed – ConOps and Measures. These two areas address the most basic
of management requirements: (1) to tell people exactly what you expect of them, and (2)
to follow up, find out whether they are performing as expected, and provide them
feedback. Improvements in these areas can produce substantial performance
improvement without extensive rework or cost.

[Figure 10 shows the implementation flow: assess existing systems and produce an
evaluation report; systemic problems feed program requirements and system design
across the six areas (human error, ConOps, training, procedures, HMI,
measures/feedback); specific problems receive near-term fixes through the MOC
program; both paths converge on a continuous improvement program.]
Figure 10. Implementing an HPE Program at an Operating Facility
The final result of the human performance engineering implementation is a continuous
performance improvement effort following the general model presented earlier in Figure
2. This “systems engineering performance” model is used to continually assess
performance requirements, measure performance, and make improvements as necessary.
It is a framework for maintaining the highest levels of human performance, safety, and
operations excellence.

CONCLUSION
Long-term fundamental improvement in human performance requires identification and
resolution of systemic problems that lead to human error. This paper has presented a
comprehensive framework for identifying weaknesses and developing improvements to
the underlying structures that influence human performance. We refer to this framework
as Human Performance Engineering, which is application of the systems engineering
viewpoint to human performance in an operating system. The paper discussed six areas
of integrated technical activity that comprise a practical HPE program.
Many of the human performance issues in these six areas have been addressed by
chemical process and petroleum facilities, in some cases in considerable depth, using
various industry process safety management (PSM) frameworks17,18,19,20. However, they
have not always been integrated and presented in a way that provides a comprehensive
solution to the problem of human error and its impact on safety and operational
performance. Many managers are expressing dismay that they have made substantial
investments in PSM initiatives and continue to have accidents. As they reflect, they are
realizing that their efforts focused heavily on equipment and process issues and far less
on the human side of the system. Many are becoming keenly aware that most significant
incidents involve human error as the root cause or a significant contributing causal factor.
Yet few have the technical background or guidance to determine exactly what to do to
reduce human error.
We believe that implementation of a comprehensive human performance engineering
program with the elements outlined in this paper will minimize the frequency and
consequences of human error and thereby enhance process safety. Further, it will help
establish a culture of continual improvement in human performance, a culture of
“operational excellence”, which can result in dramatic improvements in overall system
effectiveness and productivity.

REFERENCES
1. Haas, P.M., P.J. Swanson, and E.M. Connelly, “Operational Human Performance
Reliability Assessment,” Proceedings of the Sixteenth Reactor Operations
International Topical Meeting, Long Island, NY, 1993.
2. Swanson, P.J. and P.M. Haas, “Operational Human Performance Reliability Analysis
(OHPRA), A Tool for the Assessment of Human Factors Contribution to Safe and
Reliable Process Plant Operation,” CA/TR-93003, Concord Associates, Inc., 1993.
3. Handbook for Designers of Instructional Systems, AFP50-58, U.S. Department of the
Air Force, 1974.
4. Guidelines for Writing Effective Operating and Maintenance Procedures, Center for
Chemical Process Safety, AIChE, New York, NY, 1996.
5. Human System Interface Design Review Guideline, NUREG-0700, U.S. Nuclear
Regulatory Commission, 1996.
6. American National Standard for Human Factors Engineering of Visual Display
Terminal Workstations, ANSI/HFS 100-1988, The Human Factors Society, Santa
Monica, CA, 1988.
7. Human Engineering Design Criteria for Military Systems, Equipment, and Facilities.
MIL-STD-1472D, U.S. Department of Defense.
8. Brown, C.M., Human-Computer Interface Design Guidelines, Ablex Publishing
Corp., Norwood, NJ, 1988.
9. Gilmore, W.E., Gertman, D.I., Blackman, H.S., User-Computer Interface in Process
Control, Academic Press, San Diego, CA, 1989.
10. Senge, Peter M., The Fifth Discipline: The Art and Practice of a Learning
Organization, Currency Doubleday, New York, NY, 1990.
11. Senge, Peter M., Art Kleiner, Charlotte Roberts, Richard B. Ross, and Bryan J. Smith,
The Fifth Discipline Fieldbook: Strategies and Tools for Building a Learning
Organization, Doubleday, New York, NY, 1994.
12. Chawla, Sarita, and John Renesch, Editors, Learning Organizations: Developing
Cultures for Tomorrow’s Workplace, Productivity Press, Portland, OR, 1995.
13. Thor, Carl G., The Measure of Success: Creating a High Performance Organization,
Oliver Wight Publications, Inc., Essex Junction, VT, 1994.

Concord Associates, Inc.

Page 21 of 22
14. Rummler, Geary A., and Alan P. Brache, Improving Performance: How to Manage
the White Space on the Organization Chart, Second Edition, Jossey-Bass Publishers,
San Francisco, CA, 1995.
15. Campbell, D.J., E.M. Connelly, J.S. Arendt, B.G. Perry, and S. Schreiber,
“Performance Measurement of Process Safety Management Systems,” International
Conference and Workshop on Reliability and Risk Management, Center for Chemical
Process Safety, AIChE, 1998.
16. Connelly, E.M., “A Theory of Human Performance Assessment,” Proceedings of the
Human Factors Society, 31st Annual Meeting, 1987.
17. Guidelines for Technical Management of Chemical Process Safety, Center for
Chemical Process Safety, AIChE, New York, 1989.
18. Process Safety Management, Chemical Manufacturers Association, Washington, DC,
1985.
19. 29 CFR 1910.119, Process Safety Management of Highly Hazardous Chemicals,
Occupational Safety and Health Administration, Washington, DC, 1992.
20. American Petroleum Institute Recommended Practice 750, Management of Process
Hazards, 1st ed., Washington, DC, 1990.

Hpe program rating #5 ops performance feedback
 
Hpe program rating #4 human factors engineering
Hpe program rating #4 human factors engineeringHpe program rating #4 human factors engineering
Hpe program rating #4 human factors engineering
 
Hpe program rating #3 training
Hpe program rating #3 trainingHpe program rating #3 training
Hpe program rating #3 training
 
Hpe program rating #2 procedures
Hpe program rating #2 proceduresHpe program rating #2 procedures
Hpe program rating #2 procedures
 
Hpe program rating #1 con ops
Hpe program rating #1 con opsHpe program rating #1 con ops
Hpe program rating #1 con ops
 
Good practice note procedure validation
Good practice note   procedure validationGood practice note   procedure validation
Good practice note procedure validation
 
Twelve Things ... Use Procedures
Twelve Things ... Use ProceduresTwelve Things ... Use Procedures
Twelve Things ... Use Procedures
 
Hf procedure writing part a
Hf procedure writing part aHf procedure writing part a
Hf procedure writing part a
 

Recently uploaded

Buy Verified PayPal Account | Buy Google 5 Star Reviews
Buy Verified PayPal Account | Buy Google 5 Star ReviewsBuy Verified PayPal Account | Buy Google 5 Star Reviews
Buy Verified PayPal Account | Buy Google 5 Star Reviews
usawebmarket
 
Cracking the Workplace Discipline Code Main.pptx
Cracking the Workplace Discipline Code Main.pptxCracking the Workplace Discipline Code Main.pptx
Cracking the Workplace Discipline Code Main.pptx
Workforce Group
 
Lookback Analysis
Lookback AnalysisLookback Analysis
Lookback Analysis
Safe PaaS
 
What are the main advantages of using HR recruiter services.pdf
What are the main advantages of using HR recruiter services.pdfWhat are the main advantages of using HR recruiter services.pdf
What are the main advantages of using HR recruiter services.pdf
HumanResourceDimensi1
 
20240425_ TJ Communications Credentials_compressed.pdf
20240425_ TJ Communications Credentials_compressed.pdf20240425_ TJ Communications Credentials_compressed.pdf
20240425_ TJ Communications Credentials_compressed.pdf
tjcomstrang
 
Affordable Stationery Printing Services in Jaipur | Navpack n Print
Affordable Stationery Printing Services in Jaipur | Navpack n PrintAffordable Stationery Printing Services in Jaipur | Navpack n Print
Affordable Stationery Printing Services in Jaipur | Navpack n Print
Navpack & Print
 
The-McKinsey-7S-Framework. strategic management
The-McKinsey-7S-Framework. strategic managementThe-McKinsey-7S-Framework. strategic management
The-McKinsey-7S-Framework. strategic management
Bojamma2
 
Accpac to QuickBooks Conversion Navigating the Transition with Online Account...
Accpac to QuickBooks Conversion Navigating the Transition with Online Account...Accpac to QuickBooks Conversion Navigating the Transition with Online Account...
Accpac to QuickBooks Conversion Navigating the Transition with Online Account...
PaulBryant58
 
BeMetals Presentation_May_22_2024 .pdf
BeMetals Presentation_May_22_2024   .pdfBeMetals Presentation_May_22_2024   .pdf
BeMetals Presentation_May_22_2024 .pdf
DerekIwanaka1
 
Sustainability: Balancing the Environment, Equity & Economy
Sustainability: Balancing the Environment, Equity & EconomySustainability: Balancing the Environment, Equity & Economy
Sustainability: Balancing the Environment, Equity & Economy
Operational Excellence Consulting
 
Attending a job Interview for B1 and B2 Englsih learners
Attending a job Interview for B1 and B2 Englsih learnersAttending a job Interview for B1 and B2 Englsih learners
Attending a job Interview for B1 and B2 Englsih learners
Erika906060
 
What is the TDS Return Filing Due Date for FY 2024-25.pdf
What is the TDS Return Filing Due Date for FY 2024-25.pdfWhat is the TDS Return Filing Due Date for FY 2024-25.pdf
What is the TDS Return Filing Due Date for FY 2024-25.pdf
seoforlegalpillers
 
Skye Residences | Extended Stay Residences Near Toronto Airport
Skye Residences | Extended Stay Residences Near Toronto AirportSkye Residences | Extended Stay Residences Near Toronto Airport
Skye Residences | Extended Stay Residences Near Toronto Airport
marketingjdass
 
PriyoShop Celebration Pohela Falgun Mar 20, 2024
PriyoShop Celebration Pohela Falgun Mar 20, 2024PriyoShop Celebration Pohela Falgun Mar 20, 2024
PriyoShop Celebration Pohela Falgun Mar 20, 2024
PriyoShop.com LTD
 
Enterprise Excellence is Inclusive Excellence.pdf
Enterprise Excellence is Inclusive Excellence.pdfEnterprise Excellence is Inclusive Excellence.pdf
Enterprise Excellence is Inclusive Excellence.pdf
KaiNexus
 
Set off and carry forward of losses and assessment of individuals.pptx
Set off and carry forward of losses and assessment of individuals.pptxSet off and carry forward of losses and assessment of individuals.pptx
Set off and carry forward of losses and assessment of individuals.pptx
HARSHITHV26
 
RMD24 | Retail media: hoe zet je dit in als je geen AH of Unilever bent? Heid...
RMD24 | Retail media: hoe zet je dit in als je geen AH of Unilever bent? Heid...RMD24 | Retail media: hoe zet je dit in als je geen AH of Unilever bent? Heid...
RMD24 | Retail media: hoe zet je dit in als je geen AH of Unilever bent? Heid...
BBPMedia1
 
Cree_Rey_BrandIdentityKit.PDF_PersonalBd
Cree_Rey_BrandIdentityKit.PDF_PersonalBdCree_Rey_BrandIdentityKit.PDF_PersonalBd
Cree_Rey_BrandIdentityKit.PDF_PersonalBd
creerey
 
Putting the SPARK into Virtual Training.pptx
Putting the SPARK into Virtual Training.pptxPutting the SPARK into Virtual Training.pptx
Putting the SPARK into Virtual Training.pptx
Cynthia Clay
 
Brand Analysis for an artist named Struan
Brand Analysis for an artist named StruanBrand Analysis for an artist named Struan
Brand Analysis for an artist named Struan
sarahvanessa51503
 

Recently uploaded (20)

Buy Verified PayPal Account | Buy Google 5 Star Reviews
Buy Verified PayPal Account | Buy Google 5 Star ReviewsBuy Verified PayPal Account | Buy Google 5 Star Reviews
Buy Verified PayPal Account | Buy Google 5 Star Reviews
 
Cracking the Workplace Discipline Code Main.pptx
Cracking the Workplace Discipline Code Main.pptxCracking the Workplace Discipline Code Main.pptx
Cracking the Workplace Discipline Code Main.pptx
 
Lookback Analysis
Lookback AnalysisLookback Analysis
Lookback Analysis
 
What are the main advantages of using HR recruiter services.pdf
What are the main advantages of using HR recruiter services.pdfWhat are the main advantages of using HR recruiter services.pdf
What are the main advantages of using HR recruiter services.pdf
 
20240425_ TJ Communications Credentials_compressed.pdf
20240425_ TJ Communications Credentials_compressed.pdf20240425_ TJ Communications Credentials_compressed.pdf
20240425_ TJ Communications Credentials_compressed.pdf
 
Affordable Stationery Printing Services in Jaipur | Navpack n Print
Affordable Stationery Printing Services in Jaipur | Navpack n PrintAffordable Stationery Printing Services in Jaipur | Navpack n Print
Affordable Stationery Printing Services in Jaipur | Navpack n Print
 
The-McKinsey-7S-Framework. strategic management
The-McKinsey-7S-Framework. strategic managementThe-McKinsey-7S-Framework. strategic management
The-McKinsey-7S-Framework. strategic management
 
Accpac to QuickBooks Conversion Navigating the Transition with Online Account...
Accpac to QuickBooks Conversion Navigating the Transition with Online Account...Accpac to QuickBooks Conversion Navigating the Transition with Online Account...
Accpac to QuickBooks Conversion Navigating the Transition with Online Account...
 
BeMetals Presentation_May_22_2024 .pdf
BeMetals Presentation_May_22_2024   .pdfBeMetals Presentation_May_22_2024   .pdf
BeMetals Presentation_May_22_2024 .pdf
 
Sustainability: Balancing the Environment, Equity & Economy
Sustainability: Balancing the Environment, Equity & EconomySustainability: Balancing the Environment, Equity & Economy
Sustainability: Balancing the Environment, Equity & Economy
 
Attending a job Interview for B1 and B2 Englsih learners
Attending a job Interview for B1 and B2 Englsih learnersAttending a job Interview for B1 and B2 Englsih learners
Attending a job Interview for B1 and B2 Englsih learners
 
What is the TDS Return Filing Due Date for FY 2024-25.pdf
What is the TDS Return Filing Due Date for FY 2024-25.pdfWhat is the TDS Return Filing Due Date for FY 2024-25.pdf
What is the TDS Return Filing Due Date for FY 2024-25.pdf
 
Skye Residences | Extended Stay Residences Near Toronto Airport
Skye Residences | Extended Stay Residences Near Toronto AirportSkye Residences | Extended Stay Residences Near Toronto Airport
Skye Residences | Extended Stay Residences Near Toronto Airport
 
PriyoShop Celebration Pohela Falgun Mar 20, 2024
PriyoShop Celebration Pohela Falgun Mar 20, 2024PriyoShop Celebration Pohela Falgun Mar 20, 2024
PriyoShop Celebration Pohela Falgun Mar 20, 2024
 
Enterprise Excellence is Inclusive Excellence.pdf
Enterprise Excellence is Inclusive Excellence.pdfEnterprise Excellence is Inclusive Excellence.pdf
Enterprise Excellence is Inclusive Excellence.pdf
 
Set off and carry forward of losses and assessment of individuals.pptx
Set off and carry forward of losses and assessment of individuals.pptxSet off and carry forward of losses and assessment of individuals.pptx
Set off and carry forward of losses and assessment of individuals.pptx
 
RMD24 | Retail media: hoe zet je dit in als je geen AH of Unilever bent? Heid...
RMD24 | Retail media: hoe zet je dit in als je geen AH of Unilever bent? Heid...RMD24 | Retail media: hoe zet je dit in als je geen AH of Unilever bent? Heid...
RMD24 | Retail media: hoe zet je dit in als je geen AH of Unilever bent? Heid...
 
Cree_Rey_BrandIdentityKit.PDF_PersonalBd
Cree_Rey_BrandIdentityKit.PDF_PersonalBdCree_Rey_BrandIdentityKit.PDF_PersonalBd
Cree_Rey_BrandIdentityKit.PDF_PersonalBd
 
Putting the SPARK into Virtual Training.pptx
Putting the SPARK into Virtual Training.pptxPutting the SPARK into Virtual Training.pptx
Putting the SPARK into Virtual Training.pptx
 
Brand Analysis for an artist named Struan
Brand Analysis for an artist named StruanBrand Analysis for an artist named Struan
Brand Analysis for an artist named Struan
 

culture of “operational excellence”.
SYSTEMS ENGINEERING

Human Factors as a discipline has been most effective when practiced as an integral part of the systems engineering design process. From the systems engineering perspective, the system (Figure 1) includes not only equipment or hardware, but also software, people, administrative controls, procedures, job aids, facilities, and support organizations - all immersed in and influenced by the organizational culture and the social/technical environment surrounding the system. The internal and external customers drive the system mission and performance specifications. The essence of the systems view is that all of the parts of the system must be designed and utilized in a way that leads to successful performance of the total system.

The systems engineering process starts in the early conceptual stages of design by specifying the system mission, goals, and overall performance requirements. It continues with the definition and allocation of functions, trade-off studies, and design analysis, or synthesis, to obtain and document specific performance requirements for each element of the system, including the human element. It also specifies the design of interfaces and interrelationships among the different elements (e.g., the human-machine interface). Systems engineering is a process of optimization against the top-level goals and performance requirements. These top-level requirements include safety, reliability, operability, maintainability, environmental responsibility, and other dimensions or system characteristics.

[Figure 1. The “Socio-Technical” System. The diagram shows hardware, software, the personnel subsystem (job/organization design, selection and staffing, training and qualification, supervision, motivation and attitudes, individual/team performance), administrative controls, procedures, and job aids, and facilities and support organizations, all embedded in the corporate/organizational culture and the social and physical environment.]
It is our belief that applications of systems engineering concepts and approaches in the future design of petrochemical facilities could dramatically improve the safety and effectiveness of operations. However, we recognize that very few totally new systems are being designed and built in the U.S. at this time. The immediate concern is what we can do to improve the performance of existing systems. We believe we can apply systems engineering principles to engineer (re-engineer) the performance of existing systems. In our company we refer to this as systems performance engineering (Figure 2).

[Figure 2. Systems Performance Engineering. The flow diagram cycles through: define goals, objectives, and requirements; develop and validate measures; measure performance; if performance is not acceptable, develop and implement modifications; then measure the effectiveness of the modifications.]

HUMAN PERFORMANCE ENGINEERING

Further, we believe that the most fruitful area for improving the performance of existing systems is human performance. We refer to the application of systems performance engineering to the human element of the system as human performance engineering. The basic technical discipline in human performance engineering is, of course, human factors, or ergonomics, a specialty focused on design of the “human-machine interface”. However, the term “human performance engineering” is intended to convey a considerably broader perspective and includes areas such as training system design and administrative controls (conduct of operations) that typically are viewed as human resource or management issues. Human performance engineering addresses all factors that influence human performance. The term also is intended to emphasize the notion that we can engineer – apply scientific principles to plan, design, construct, and manage – human performance. Human performance engineering offers a systematic and comprehensive framework for improving human performance and thereby improving overall system performance.
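The measure-and-modify cycle of Figure 2 can be sketched in code. This is an illustrative sketch only: the function names, the “procedure_compliance” measure, and the fixed gain per modification are invented assumptions, not part of the paper.

```python
# Illustrative sketch of the systems performance engineering cycle in
# Figure 2. The function names, the "procedure_compliance" measure, and
# the fixed gain per modification are invented for illustration.

def performance_engineering_cycle(goals, measure, modify, max_iterations=5):
    """Define goals -> measure performance -> (if not OK) modify -> re-measure."""
    for _ in range(max_iterations):
        observed = measure()                      # measure performance
        gaps = {name: target - observed.get(name, 0.0)
                for name, target in goals.items()
                if observed.get(name, 0.0) < target}
        if not gaps:                              # performance OK? -> done
            return observed
        modify(gaps)                              # develop and implement modifications
    return measure()                              # measure effectiveness of final mods

# Toy example: a single human-performance measure improved stepwise.
state = {"procedure_compliance": 0.80}

def measure_performance():
    return dict(state)

def implement_modifications(gaps):
    for name in gaps:
        state[name] += 0.10   # assume each modification closes part of the gap

result = performance_engineering_cycle({"procedure_compliance": 0.95},
                                       measure_performance,
                                       implement_modifications)
```

The point of the sketch is the loop structure itself: goals are explicit up front, measures are defined before modifications are made, and every modification is re-measured against the same goals.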
While the activities comprising human performance engineering can be organized in different ways, a comprehensive program
will address at least the following six areas, which are discussed in the subsequent sections:

1. Human Error Assessment – “prospective” analysis for process hazards analysis and “retrospective” analysis for incident investigation
2. Conduct of Operations – disciplined professionalism
3. Personnel Subsystems – selection, training, and qualification
4. Procedures – design for effective use
5. Ergonomics – human-machine interface design
6. Measurement and Feedback – continuous measurement of human, organizational, and management system performance; learning from successes and failures; and feedback to all personnel.

AREA 1: HUMAN ERROR ASSESSMENT

Human error assessment is used by safety analysts and engineers in two ways: 1) prospectively, to identify the likelihood of human errors initiating, exacerbating, or mitigating accident events; and 2) retrospectively, to identify human-error-related causes of events. In the past, human error assessment in either mode often has been rather superficial. Many PHAs claim to have included “operator action” but in fact treat operator action primarily as a mitigating factor, often with a 100% likelihood of success. Many root cause analyses have claimed to address human error, but in effect dig no deeper into causal factors than merely identifying “human error” as the root cause.

Quantitative vs. Qualitative HRA

One of the reasons for past reluctance to employ human reliability analysis (HRA) techniques may be that most of the techniques have been developed and presented in the context of quantitative risk assessment (QRA). Table 1 lists some of the better-known HRA techniques that have been used in the U.S.

Table 1. Human Reliability Analysis Techniques

Technique | Primary Applications
OHPRA (Concord Associates, Inc.) | Petrochemical industry
THERP (Sandia Lab) | Nuclear power, weapons assembly
SLIM, SLIM-MAUD (Brookhaven Lab) | Nuclear power operations
Simulation Models (e.g., SAINT, Siegel, HOS, MicroSaint) | Military systems design
OAT, OAET (NRC, INEL) | Nuclear power operations
HEART (British Gov’t) | Industrial, nuclear
MAPPS (NRC, Oak Ridge Nat’l Lab) | Nuclear power maintenance
ATHEANA (NRC) | Nuclear power operations

Originating in the military and
aerospace community and most widely and most recently used by the nuclear power community, QRA methods are often perceived as being very detailed, intensive, and expensive. Quantitative HRA methods associated with QRA may be viewed to some extent in the same vein – too detailed, too expensive, and lacking a sufficient data base to warrant the complexity and cost.

In fact, qualitative HRA can be very powerful, practical, and cost-effective for both prospective analyses, such as PHA, and retrospective analyses, such as root cause investigation. Subjective, “comparative” estimates of the likelihood of human error are relatively straightforward to obtain. Experts can make judgements that are meaningful and precise enough to identify problems and make decisions regarding necessary improvements, without performing a detailed quantitative analysis. There are a number of techniques, including the qualitative portions of some of the quantitative approaches in Table 1 above, that can be used very effectively to identify important “error-likely situations” without extensive quantitative analysis. One technique, developed by Concord as a qualitative tool specifically for the process industries, is the Operational Human Performance Reliability Analysis (OHPRA) method1.

OHPRA – A Practical Human Performance Model for Petrochemical Operations

The conceptual model for the OHPRA operations module2 is illustrated in Figure 3 (there is a similar one for maintenance activities). It assumes that operator tasks can be treated as consisting of a “cognitive” portion, in which the operator correctly or incorrectly diagnoses the situation and chooses the goal(s) and action(s) to be taken, and an “action” portion, in which the chosen actions are executed. Success on a task requires success on both portions.
Likelihood of success on the cognitive portion considers the cues and information available to the operator for diagnosis and decision making, the operator’s level of knowledge and abilities, and stresses such as time demands. The model assumes that given: 1) an adequate cue, 2) appropriate operator knowledge and abilities, and 3) a manageable level of stress, the operator will choose the correct action. If the operator does not choose the correct action, the entire task is failed. The four factors considered under “opportunity to perform” are as follows:

Access to operating area
• Covers elements such as habitability, people resources, actions that place the operator at risk, etc.
• The ability of the operator to gain access to the equipment, controls, information, etc. needed to perform the manual actions called for, including any personnel safety hazards that may be involved.

Equipment availability/operability
• Deals with systems’ and components’ ability to function or be utilized by the operator.
• The operability of the necessary equipment, controls, information systems, special tools, etc. needed to support the operator’s manual actions.
[Figure 3. The OHPRA Operations Model. The diagram traces a unit operator task from its cognitive inputs (cue to action, knowledge and abilities, system information available, time to act, stress) through the choice of correct action, then through the “opportunity to perform” factors (access to operating area, equipment operability/availability, system information available, time to act) to task success or failure.]

Time to act
• Deals with measured or predicted time envelopes to complete specific actions, external influences that modify performance times, etc.
• The window of time that bounds the duration available for the operator to perform actions.

System information available
• Deals with instrumentation, status monitoring, information reporting, etc. that provide the information the operator needs to make decisions and confirm the status of actions taken.
• Operator inputs, including cues to action and system information feedback, which enable the operator to process what, when, and how actions are to be taken. Information includes process instrumentation, procedural guidance, visual observations, shift crew communications, etc.

Note that there is a feedback loop from the action phase back to the cognitive phase that depends on information feedback from the system to the operator when actions are taken. Part of the assessment is to evaluate the quality of operator feedback from alarm and display systems, etc. A structured subjective evaluation process, with detailed questions, guides the analyst through consideration of each of the factors and “sub-factors” to arrive at an estimate of “zero”, “low”, “medium”, or “high” as the likelihood of success. Management can then set its own “criterion level” for the level of success likelihood that warrants action.
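As a rough illustration of how such a screening evaluation might be organized, the sketch below rates each factor on the zero/low/medium/high scale and screens tasks against a criterion level. The factor names and rating scale come from the model described above; combining ratings by taking the weakest factor is an assumption made here for illustration, not the published OHPRA procedure.

```python
# Rough sketch of an OHPRA-style qualitative screening. The factor names
# and the zero/low/medium/high scale are from the model described in the
# text; combining ratings by taking the weakest (minimum) factor is an
# illustrative assumption, not the published OHPRA procedure.

RATING_ORDER = ["zero", "low", "medium", "high"]

COGNITIVE_FACTORS = ["cue_to_action", "knowledge_and_abilities", "stress"]
ACTION_FACTORS = ["access_to_operating_area", "equipment_operability",
                  "time_to_act", "system_information_available"]

def weakest(ratings):
    """Lowest rating on the zero/low/medium/high scale."""
    return min(ratings, key=RATING_ORDER.index)

def task_success_likelihood(ratings):
    """Task success requires success on both the cognitive and action portions."""
    cognitive = weakest(ratings[f] for f in COGNITIVE_FACTORS)
    action = weakest(ratings[f] for f in ACTION_FACTORS)
    return weakest([cognitive, action])

def flag_for_action(ratings, criterion="medium"):
    """Flag the task if its likelihood falls below management's criterion level."""
    return RATING_ORDER.index(task_success_likelihood(ratings)) < RATING_ORDER.index(criterion)

# Invented example: equipment operability is the limiting factor.
example_ratings = {
    "cue_to_action": "high", "knowledge_and_abilities": "medium", "stress": "high",
    "access_to_operating_area": "high", "equipment_operability": "low",
    "time_to_act": "high", "system_information_available": "medium",
}
```

Even this crude combination rule captures the structure of the model: a single weak factor on either the cognitive or the action side limits the likelihood of success for the whole task.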
The OHPRA approach makes use of the judgement and experience of subject matter experts (SMEs), in particular the operations and maintenance personnel in the facility being assessed, who have in-depth knowledge of that facility’s processes and systems. The OHPRA analyst is a facilitator who has both operations/maintenance experience and an understanding of human reliability analysis. The conceptual model described above and the associated detailed guides are used not as a rigid “checklist”, but as an aid for both the facilitator and the SMEs. Interviews and in-plant walk-downs are conducted both to provide context and to validate expert input. Additional supporting information is obtained from multiple sources, e.g., results from process hazards analysis; unit operating, startup, shutdown, and safing procedures; maintenance procedures; incident reports; unit-specific and general employee training materials; unit process diagrams; operations and maintenance departmental directives; and federal and state regulations (OSHA, EPA, etc.).

The OHPRA analysis is focused on those individuals who can come in direct physical contact with the unit processes, namely operators and maintenance personnel. Other departments or services, such as management, engineering, planning and scheduling, warehouse, industrial hygiene, safety, quality assurance, etc., are not excluded from consideration; their contribution to risk is assessed as an influence on the operations or maintenance worker.

OHPRA and the qualitative portions of similar HRA methods can be used effectively by knowledgeable subject matter experts with a modest amount of training. Such approaches provide a systematic framework for assessment of human error and the factors influencing human performance. They can provide a meaningful assessment of the likelihood of error and can suggest areas of improvement for reducing that likelihood without the time and expense of detailed quantitative analysis.
Human Error Assessment in Root Cause Analysis

Most companies now routinely employ a structured root cause analysis process to identify the causes of significant incidents. The methods may be as “simple” as “why trees”, but typically involve some sort of logic tree structure combined with well-established techniques such as events and causal factors analysis, barrier analysis, and change analysis. Many companies also are incorporating techniques or additional modules in their root cause analysis protocols to dig deeper and ferret out the underlying systemic factors that lead to significant human error (i.e., factors that create conditions in which human error is more likely and/or more consequential). The OHPRA model is one such human error assessment technique that can be very powerful, particularly when the event and the human actions involved are complex and causal factors are difficult to identify. The choice of approach is, in our view, not as important as the skills and experience of the analysts, nor as important as the management commitment and culture that supports root cause analysis, critical self-evaluation and peer evaluation, and open communication in a non-punitive environment.
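The “why tree” idea can be illustrated with a minimal data structure: each node records a finding, its children record the answers to “why did this happen?”, and the leaves are the candidate systemic root causes. The example event and findings below are invented for illustration.

```python
# Minimal "why tree" sketch: each node is a finding, its children the
# answers to "why did this happen?", and the leaves are candidate
# systemic root causes. The example event is invented for illustration.

class WhyNode:
    def __init__(self, finding, causes=()):
        self.finding = finding
        self.causes = list(causes)

def root_causes(node):
    """Collect leaf findings -- the deepest 'why' answers in the tree."""
    if not node.causes:
        return [node.finding]
    found = []
    for cause in node.causes:
        found.extend(root_causes(cause))
    return found

# Digging past "operator error" to the underlying systemic factors:
tree = WhyNode("valve left open after maintenance", [
    WhyNode("procedure step skipped", [
        WhyNode("procedure layout made the step easy to miss")]),
    WhyNode("status not caught at shift turnover", [
        WhyNode("no requirement to review equipment status at turnover")]),
])
```

Note that neither leaf in the example is “human error”; both are conditions the organization can engineer, which is the point of digging deeper than the immediate error.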
AREA 2: CONDUCT OF OPERATIONS

Intelligence, flexibility, and adaptability are strengths of human beings. They allow us to respond to new situations, evaluate alternatives, adjust to adverse conditions, make judgments with less-than-complete information, and perform tasks that machines, even computers, cannot do very well. However, these same qualities can lead to a high degree of variability in human performance in process systems, and that variability can be a significant cause of inefficiencies and errors. Inconsistency in performance from facility to facility, day to day, shift to shift, and person to person tends to increase the likelihood of error. Thus there is a tradeoff between establishing formal, highly structured controls on human performance and allowing humans the flexibility to do what they do best – think. We want intelligent and qualified operators to run the plant responsibly and responsively. We don’t want robots for operators, but we also don’t want completely “seat-of-the-pants” flying.

A good Conduct of Operations (ConOps) program can be viewed (Figure 4) as a control system that appropriately tightens the boundaries on allowed human performance. It permits appropriate variability for practical operations, while maintaining performance within a desired safety envelope. A good ConOps program also encourages and supports a culture of self-discipline and professionalism, which is the core of a safety culture.

[Figure 4. Conduct of Operations Policies Act to Control Human Variability. The diagram shows nested performance bands narrowing from the safety boundary through the design limit and minimum performance (compliance) to the operational excellence optimum.]

Formal documentation of a ConOps policy, and training on ConOps requirements, is a powerful management tool. Documentation should include requirements and expectations for safe and effective performance in both routine, day-to-day operations and emergencies. Examples of items to be included are:
  • 9. • • • • • • • • • • • • • compliance with written procedures use of “in-hand” or “reference” procedures modifying procedures exchange of information at shift turnover use of “closed-loop” communications required reading on-shift training control room/area activities control of equipment and system status inhibiting safety systems completing operating logs emergency response drills equipment and piping labeling. AREA 3: PERSONNEL SUBSYSTEMS The foundation for excellence in human performance is the underlying knowledge, skills, abilities and attitudes of the work force. In order to apply systems performance engineering to the area of personnel qualifications we first must recognize that “personnel subsystems” are an integral part of the total system. Further, we need to understand that requirements for the human performance necessary to meet system performance requirements can be identified, made explicit, engineered, measured, and demonstrated, just as can hardware or software requirements. Performance-based design of the personnel subsystem is a key to excellence in operations. The ISD Process The general framework that has proven most effective for performance-based design of training and qualifications programs is the Instructional Systems Design (ISD) process first publicized in this country by the U.S. Air Force3. It has been referred to as the Systems (or Systematic) Approach to Training (SAT) and other similar names, but all of these approaches fundamentally are representations of the ISD process. The process typically is described at the highest level as consisting of five interrelated steps or phases (Figure 5). In practice, the activities in these phases are continuous and iterative. The analysis phase identifies and documents the performance required and the underlying human capabilities – knowledge, skills and abilities (KSAs) – necessary to accomplish the functions that have been allocated to humans. 
The primary activity in the analysis phase, and a critical activity for the entire human performance engineering program, is Job/Task Analysis (JTA). JTA (in varying forms) provides the fundamental information base that supports job design, training and qualification, procedures, human-machine interface, human error assessment, and human performance measurement. JTA basically identifies who does what and how well they need to do it. It identifies the specific tasks that individuals must perform to accomplish assigned functions under the complete range of system states (startup, normal operations, abnormal/emergency operations, shutdown). JTA also provides information on the key factors that influence human performance – the system information required by the operator, the environment
and conditions of operation, necessary tools/aids, etc. JTA provides the information necessary to determine the necessary KSAs and the criteria, or measures, of performance. Training is then designed to assure that the necessary KSAs – the necessary performance capability – have been attained and are maintained. Thus training is focused on the performance required to do the job as specified, and only on that performance.

Figure 5. The Instructional Systems Design (ISD) Process: Analysis, Design, Development, Implementation, and Evaluation, supported by continuing systems analysis.

Major activities in the design phase of ISD include: (1) assessing the “entry-level” capabilities of the trainee population, identifying the gap between the entry level and the desired level, and identifying where training is the preferred solution for closing that gap (rather than, say, increasing entry-level requirements); (2) determining the best “setting” for training (e.g., simulator, classroom, or on-the-job); (3) developing terminal learning objectives; (4) developing the training/evaluation standards that assure linkage of the training to the objectives (this involves determining the enabling objectives that support each terminal objective, performance tests or measures, the KSAs being tested in each task element, conditions and standards for the test, scoring methods, etc.); and (5) documenting the training plan for the specific training program being designed.

The development phase is where the training content is developed or assembled and the training materials are produced. Development involves refining the media selection, developing student and instructor lesson plans, developing lesson materials and training support materials, developing test items, and conducting training tryouts, or “pilot courses”. The implementation phase is the actual delivery of the training, including testing and in-training evaluation.
Activities include pre-testing trainees, instructor preparation, delivering the lessons, evaluating trainee performance, collecting training effectiveness evaluation data, and documenting completed training.
Evaluation is actually a continuous and iterative process throughout the entire training effort. The goal is to continually measure and improve the effectiveness of the entire training process. It involves identifying appropriate measures, collecting information, analyzing that information to identify systemic problems, and implementing corrective actions. Indicators of training effectiveness come from multiple sources at different levels: trainee test performance, test statistics and trends, trainee evaluations, instructor evaluations, post-training surveys, supervisor feedback, on-the-job performance observations, incident reports, and other sources of information.

AREA 4: PROCEDURES

Why do we write procedures? How do we expect procedures to be used? Procedures are an operator aid. They are designed to help qualified, well-trained operators perform required tasks correctly and efficiently. Procedures are intended to be used on the job.

NOTE: Procedures are an operator aid, NOT a lesson plan.

The purpose of training is to bring the individual’s knowledge, skills, abilities and attitudes from their “entry-level” state to the level required to meet overall system performance requirements. Training should provide the “why” of the procedure: the process knowledge, understanding of system response, principles of equipment design and operation, safety implications of operator actions, and the many other basic elements of knowledge that are essential to an effective operator. Training certainly should also provide the “how” of procedures – practice on exactly what steps are to be carried out to perform specific tasks in the optimum manner. Thus training on procedures is absolutely essential, and using the actual procedure as part of training is important. However, procedures are NOT primarily training material.
Indeed, properly designed procedures usually make poor training materials, and well-designed training materials do not make good procedures.

All operations need to conform with the approved procedure. In some cases – where tasks involve many steps or high complexity, where errors have significant consequences, etc. – every operator should be required to have the procedure “in-hand”, and in some cases step-by-step check-off or sign-off should be required. In other cases, where experienced operators perform the task regularly and complexity and consequences are not as high, “reference” procedures are appropriate; these can be used as required, or assigned as required or optional reading prior to performing an infrequently performed task.

We recommend a “risk-based” assessment to categorize procedures according to their required “level of usage”. Such an evaluation process should include at least the usual “Criticality/Frequency/Difficulty” ratings of tasks that are commonly performed as part of the training needs assessment: What are the consequences of an error, how frequently is the task performed, and how complex is the task? Figure 6 illustrates a spreadsheet tool we have used to classify procedures. In this example there are only three “levels” of procedures: (1) “critical” procedures, which are to be used “in-hand” at all times, perhaps with some or all portions requiring check-off; (2) “reference” procedures, to be reviewed prior to performing the task and used in hand as necessary; or (3) no written procedure required.

Figure 6. Illustration of a Risk-Based Assessment Tool to Categorize Procedure Usage

FREQUENCY OF USE: Daily = 3, Weekly = 2, Monthly = 1, Semi-Annually = 2, Annually = 3. Frequency Score = Point Value x Weighting Factor (0.8).

TASK COMPLEXITY (each factor rated 1–3): Information Access (how difficult is it to get the information needed to perform the task?); Mental Loading (how great is the “mental workload” to complete the task?); Physical Loading (how complex are the physical actions required?); Communication Loading (how complex and demanding are the communication requirements?); Stress (how great is the psychological stress due to time demands, abnormal conditions, or exposure to hazardous conditions?). Complexity Score = total ÷ 5.

CONSEQUENCE: None (no safety, health, quality or environmental concerns) = 0; Minor (first aid, minor environmental impact, decrease in product quality, etc.) = 2; Moderate (loggable injury, some environmental damage, etc.) = 4; Major (lost-time injury, moderate equipment damage, major drop in quality, etc.) = 6; Extreme (fatality, significant equipment damage, severe environmental damage, etc.) = 8.

PROCEDURE CLASSIFICATION: if the total score is 0 to 5, no written procedure; greater than 5 to 8, reference; greater than 8, critical.

Human Factors Guidelines for Procedure Writing

If we demand procedural compliance, we must design procedures that give clear, precise directions and that are easy to understand and easy to use. Human factors guidelines for procedure writing capture knowledge from cognitive science, information technology, learning, and related fields, as well as actual experience with errors committed in following procedures. They provide practical guidance on how to present operators with the necessary information in the most usable format. Guidelines for formatting, place-keeping, caution and warning statements, action steps, etc., are available [4] to help write procedures that reduce the likelihood of errors in executing tasks under even the most severe conditions. Application of these guidelines can dramatically reduce the frequency of errors of omission and errors of commission in executing procedures.

We encourage clients to involve a broad, representative spectrum of operators with human factors specialists to develop a procedure writers’ guide that standardizes procedure-writing practice throughout the facility, and even throughout the corporation. Standardization helps make procedure writing and the management-of-change process more cost-effective, and helps minimize problems when people are transferred from one unit or site to another.

Verification and Validation

Procedure verification has to do with the technical accuracy of procedures. It requires input from personnel in engineering, safety management, or other areas who understand process design, control systems, plant response to off-normal conditions and accidents, etc. Most facilities provide some sort of engineering or technical review of procedures for purposes of verification. Fewer facilities have systematic programs to validate procedures. Validation has to do with clarity, understandability and usability, which depend on the user and on the context and conditions under which procedures are used. It is risky to have procedures designed by engineers and used by operators. In general, operators and engineers understand and visualize systems differently; they have different purposes in mind, different levels of formal education, different affinities for abstract reasoning, different cultures, different language, etc. And operators under actual plant conditions, especially abnormal conditions, are under a much higher level of stress and system demands than is an engineer at a desk.
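Returning to the classification tool of Figure 6, its scoring arithmetic is simple enough to sketch in a few lines of code. The point values, 0.8 weighting factor, and thresholds below are taken from the figure; the function and variable names are our own:

```python
# Sketch of the Figure 6 risk-based scoring logic. Point values, the 0.8
# weighting factor, and the classification thresholds come from the figure;
# names and structure are illustrative.

FREQUENCY_POINTS = {"daily": 3, "weekly": 2, "monthly": 1,
                    "semi-annually": 2, "annually": 3}
FREQUENCY_WEIGHT = 0.8

CONSEQUENCE_POINTS = {"none": 0, "minor": 2, "moderate": 4,
                      "major": 6, "extreme": 8}

def classify_procedure(frequency, complexity_ratings, consequence):
    """Return the required level of procedure usage for a task.

    complexity_ratings: five 1-3 ratings (information access, mental
    loading, physical loading, communication loading, stress).
    """
    freq_score = FREQUENCY_POINTS[frequency] * FREQUENCY_WEIGHT
    complexity_score = sum(complexity_ratings) / len(complexity_ratings)
    consequence_score = CONSEQUENCE_POINTS[consequence]
    total = freq_score + complexity_score + consequence_score
    if total > 8:
        return "critical"             # in-hand at all times, perhaps with check-off
    if total > 5:
        return "reference"            # reviewed before the task, in hand as needed
    return "no written procedure"

# Example: an infrequent, complex, high-consequence task
# (2.4 + 2.6 + 6 = 11.0, which is > 8)
print(classify_procedure("annually", [3, 3, 2, 2, 3], "major"))  # -> critical
```

A spreadsheet version, as in Figure 6, does the same arithmetic; encoding it this way simply makes the weights and thresholds explicit and auditable.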
The process of validation by “simulation” (walking the procedure down in the plant, with the least-qualified operator authorized to use the procedure independently, and with conditions, time constraints, etc. as near to actual as possible) can dramatically reduce the likelihood of errors in procedures. Plants that routinely conduct thorough validations find that a procedure is rarely “perfect” as written “at the desk”; almost every procedure is improved by validation. Implementation of a systematic, user-centered process for design, development, implementation, and continual evaluation will produce procedures that are consistent in structure, easily understood, and easily followed by knowledgeable operators. Technically accurate, up-to-date procedures that are human-engineered for effective use are one of the primary defenses against operational errors. Well-designed procedures that are used, and do not simply sit on the shelf, WILL reduce human errors and enhance process safety.

AREA 5: ERGONOMICS

Human factors specialists promote the viewpoint that machines are designed to help humans accomplish certain goals and therefore should be designed to match human capabilities and limitations, behavioral tendencies, and preferences. Figure 7 illustrates the issues addressed by human factors engineers in the systems engineering design
process, from top-level system/mission goals, through function allocation and analysis, job/organizational design, and personnel subsystem design, to human interface design.

Practical Applications of Human Factors for Process Safety Improvement

The field of Human Factors draws from a broad spectrum of disciplines and scientific and engineering specialty areas. Perception, cognition, learning, memory, motivation, anthropometry, and speech communication are some of the required subspecialties in psychology and physiology. Engineering expertise in the design of visual, auditory, and tactile displays and controls, and in biomechanics, noise, illumination, vibration, and other areas, also is required. Very often the expertise in psychology and engineering is shared among team members, and human factors specialists usually work best in a team environment, providing input and feedback to engineering designers.

Figure 7. Human Factors Activities in Systems Engineering Design
• System: mission success, system performance requirements, user needs/satisfaction
• Functions: requirements analysis; allocation to human, hardware, software, facilities
• Tasks/jobs: task requirements, conditions, demands on humans – time, accuracy, precision, perception, vigilance
• Personnel: capabilities and limitations – knowledge, skills, abilities, attitudes; physiological, psychological, cognitive, social
• Interface: facilities, workspace, controls, displays, habitability, accessibility, protective equipment

The plant manager or process safety manager committed to improving safety and plant performance by improving human performance rarely has this kind of specialized expertise readily available at the plant site, and only in relatively few corporations is it accessible from corporate headquarters.
Fortunately, much of the accumulated knowledge and experience from application of human factors (or failure to apply human factors) in design has been codified by human factors specialists in design guidelines, and further synthesized into checklists and evaluation guides [5,6,7] that address each of the basic areas of human factors design, such as:
• Workspace layout
• Controls and display design
• Alarms and annunciators
• Environmental factors such as noise, illumination, temperature
• Communication systems
• Equipment and piping labeling and color coding
• Protective clothing
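With checklists like these in hand, even a simple structured record of walkthrough findings is useful for tracking and follow-up. The seven areas below come from the list above; the record structure, function, and example finding are purely illustrative:

```python
# Minimal sketch of recording findings from a human factors checklist
# walkthrough. The seven areas come from the list in the text; the record
# structure and the example finding are purely illustrative.
CHECKLIST_AREAS = [
    "workspace layout",
    "controls and display design",
    "alarms and annunciators",
    "environmental factors",
    "communication systems",
    "labeling and color coding",
    "protective clothing",
]

def record_finding(findings, area, item, adequate, note=""):
    """File one walkthrough observation under its checklist area."""
    if area not in CHECKLIST_AREAS:
        raise ValueError(f"unknown checklist area: {area}")
    findings.setdefault(area, []).append(
        {"item": item, "adequate": adequate, "note": note})

findings = {}
record_finding(findings, "alarms and annunciators",
               "alarm audible at field stations", False,
               "masked by compressor noise")

# Items judged less than adequate become candidates for modification
# or for review by a human factors specialist.
open_items = [(area, f["item"])
              for area, fs in findings.items()
              for f in fs if not f["adequate"]]
```

The value is less in the code than in the discipline: every checklist item gets an explicit adequate/inadequate judgment and a note, so nothing identified in the walkdown is lost.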
Since most human-equipment interfaces in modern technology involve a computer, a major area of human factors engineering has evolved into its own specialty field, Human-Computer Interaction, with its own university courses, professional organizations, cross-disciplinary specialists, and specific design guidelines [8,9]. These human factors and human-computer interaction guidelines and checklists do not provide the expertise of an experienced human factors design engineer, but they do provide a practical way to compare an existing design with good design practice. With a modest amount of human factors training and practice, individuals knowledgeable of the system can use them effectively to at least identify less-than-adequate human-machine design and potential error-inducing conditions. Often, practical, cost-effective modifications present themselves simply through identifying the problem. In other cases it will be necessary to bring in the deep expertise of a skilled human factors engineer. But even in those cases, solutions will be facilitated by having in-plant personnel familiar with human factors principles and guidelines who can serve as an interface, providing process systems knowledge to the human factors expert and practical feedback to plant personnel. Small investments in human factors training and in-plant practice can pay great dividends in error reduction, process safety, personnel safety, and overall operational performance.

AREA 6: MEASUREMENT AND FEEDBACK

In the early to mid-1990s, Peter Senge and other authors [10,11,12] began to emphasize the critical need for “systems thinking” in management and organizations, and the ideal of a “learning organization”. A key characteristic of a learning organization is a culture that supports and thrives on learning from experience – from successes and from failures. Certainly such a culture is key to employing a systems approach to reducing human error and improving process safety.
Incident investigation and root-cause analysis, discussed earlier, are the central pillars of a system for experiential learning. However, we encourage clients to develop a comprehensive program of performance measurement, causal analysis, and retrospective analysis of multiple sources of operational information. Some examples of additional sources of operational experience information are:
• Near-miss reports, self-reporting of corrected errors, successful interventions
• Narrative and operating log reviews
• Maintenance requests, trends, equipment histories, equipment failure reports
• Process hazards analyses, risk assessments, safety reviews
• Internal and external evaluations and audits
• Training evaluations and feedback
• Emergency drills
• Review of applicable events at other sites, other companies, other industries
• “Behavioral” safety programs in which safety behaviors are systematically sampled
• Safety meetings, quality meetings, and other open discussion forums
• Systematic involvement of senior operations staff to capture experience
• Performance measures.
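One way to make such heterogeneous sources trendable together is to normalize them into a common record format before analysis. This sketch counts recurring causal factors across sources; all field names and example data are ours, purely for illustration:

```python
# Sketch of a common record schema so heterogeneous experience sources
# (near-misses, audits, drills, ...) can be trended together. Field names
# and example data are illustrative, not from the paper.
from dataclasses import dataclass
from collections import Counter

@dataclass
class ExperienceRecord:
    source: str               # e.g. "near-miss", "audit", "drill"
    date: str                 # ISO date string
    unit: str
    causal_factors: list      # factors identified by causal analysis

records = [
    ExperienceRecord("near-miss", "1999-01-12", "crude unit",
                     ["procedure unclear"]),
    ExperienceRecord("audit", "1999-02-03", "crude unit",
                     ["labeling", "procedure unclear"]),
]

# Simple cross-source trend: which causal factors recur, regardless of
# which reporting channel surfaced them?
trend = Counter(f for r in records for f in r.causal_factors)
print(trend.most_common(1))  # [('procedure unclear', 2)]
```

A factor that recurs across independent channels (here, a near-miss report and an audit) is a stronger signal of a systemic problem than either report alone.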
The quality of information available from these sources, of course, depends on available time, training, and management support. A good information management system to aid not only collection, but also analysis, synthesis, and dissemination of information is essential. Above all, success in a lessons-learned program depends on maintaining a culture that values learning, that rewards self-critical evaluation, and that fosters open, punishment-free communication. Figure 8 illustrates the information flow in such a comprehensive operational experience feedback program. Performance measurement is a special area that bears further discussion.

Figure 8. Operational Performance Feedback System: operational experience sources (audits, self-assessments, performance measures, incident reports, near-miss reports, industry events, government reports, ER drills, training evaluations, self-reports, PHA findings) feed an information management system supporting root-cause analysis, performance trends, reliability analysis, human error analysis, equipment failure analysis, training needs analysis, systems analysis, and lessons-learned analysis.

Human Performance Measurement

No engineer or CEO would think of constructing a process plant without clear performance specifications and comprehensive measurement of process variables. Yet management systems and human performance systems in the past routinely have been “engineered” with virtually no clear performance requirements or measures. The total quality management movement of the 1980s dramatically changed viewpoints on the need for measures of management systems, and most companies have invested substantial resources in developing performance measures for organizations and processes. Managers today recognize the power of measures not only to assess performance, but to lead people to the desired levels of performance.
Most organizations recognize that a “family of measures” [13] covering multiple levels and involving multiple types of measures – input measures, process measures, output measures, outcome measures – is required to provide a complete picture of performance. Rummler and Brache [14] discuss three levels of performance measurement and three areas of performance for each level (see Figure 9). A comprehensive performance measurement program will address goals, design, and management for organizations, for processes, and for jobs/individuals. The American Institute of Chemical Engineers Center for Chemical Process Safety (CCPS) program for development of measures for Process Safety Management (PSM) [15] is an example of a state-of-the-art performance measurement system that addresses multiple levels in the area of process safety management.

Figure 9. Multiple Levels of Measures
• Organization level: Organization Goals, Organization Design, Organization Management
• Process level: Process Goals, Process Design, Process Management
• Job/Individual level: Job/Individual Goals, Job/Individual Design, Job/Individual Management

The CCPS measures are being constructed using the Authority Referenced Measures System (ARMS) approach originally developed by Connelly [16] to construct an integrated set of “real-time” measures based on capturing and quantifying the judgment of CCPS authorities in process safety management. There are three basic steps to building a measure using ARMS:
1. Extracting the authority’s knowledge and judgment to define a structure for the measure (the factors, subfactors and indicators of importance to the performance);
2. Capturing the authority’s judgment and preferences by having the authority rate example performances;
3. Synthesizing a mathematical measure that is capable of reproducing performance ratings like those the authority would make, given the same examples of performance.

Usually, a single authority is used to build a “strawman” measure that can then be reviewed and modified by other authorities. The authorities’ judgment is compared to existing structures and available data to further assure completeness and consistency. For example, in the case of a job/individual measure, the JTA performed as part of a formal training analysis would provide the basic data for job performance measurement, and the authority’s subjective measure should be consistent and compatible with the JTA data base. The review of the strawman measure usually is conducted first in a small-group environment and then with a broader audience, to obtain completeness and identify critical differences in rating strategies. For example, a performance measure built for the Senior Operator position used one Senior Operator to build the strawman, three
Senior Operators to provide peer review and “validation”, review by supervisors and operators to provide a perspective from individuals above and below the Senior Operator in the organization, and review by Senior Operators at other facilities in the corporation to assure the measure was applicable to all facilities. After initial development, a trial field use is recommended to test both the effectiveness and the practicality of the measure, and to further obtain “buy-in” from the population being measured. After any necessary adjustment, the measure is fully implemented, with continual review and evaluation to adjust the measure for changing conditions.

The same basic ARMS process is used to build measures of organizational effectiveness, program/process effectiveness, and job/individual effectiveness. The result is an integrated set of measures that provides continual feedback on human and organizational effectiveness. The measurement process provides early feedback on developing problems and serves as an ongoing means of communication about performance throughout the organization. The routine, continuous feedback from the performance measurement process, combined with causal analysis of events and retrospective analysis of other sources, provides a comprehensive base of information for learning and continuous improvement in safety and operational effectiveness.

IMPLEMENTATION OF THE HPE PROCESS

Figure 10 illustrates a typical sequence we have used to implement a human performance engineering program in an operating facility. The initial step is an assessment of the existing state of the “human systems”. A team with expertise in each of the six areas summarized above works intensively with facility personnel at all levels to perform a diagnostic evaluation of current human performance systems.
The evaluation includes review of program documentation, review of incidents, interviews with management and staff at multiple levels, evaluation “checklists” and similar tools, in-plant walk-downs, and independent observation of on-the-job performance. The assessment focuses on the effectiveness of the underlying management systems and processes that support human performance, but also captures information on specific problems of implementation. The evaluation typically requires a week to ten days at a process plant. Strengths and weaknesses are assessed, and a summary report of the issues identified, with recommended solutions, is prepared.

Some of the specific “implementation” problems can be addressed immediately, often at very limited cost. Often, there are structural or “systemic” problems that need to be resolved before specific improvement actions can be expected to succeed. Training development, procedures design and use, and other areas of human performance typically have not been developed under requirements and formal structure as rigorous as those for the hardware and process side of the system, though there has been substantial progress in the past five years as PSM programs have matured. Even at facilities with rather sophisticated and mature PSM programs, we often find two areas that have not been adequately addressed – ConOps and Measures. These two areas address the most basic
of management requirements: (1) to tell people exactly what you expect of them, and (2) to follow up, find out whether they are performing as expected, and provide them feedback. Improvements in these areas can produce substantial performance improvement without extensive rework or cost.

Figure 10. Implementing a HPE Program at an Operating Facility: assessment of existing systems produces an evaluation report; systemic problems feed system design against program requirements in the six areas (human error, ConOps, training, procedures, HMI, measures/feedback), while specific problems feed near-term fixes through the MOC program; both paths support a continuous improvement program.

The final result of the human performance engineering implementation is a continuous performance improvement effort following the general model presented earlier in Figure 2. This “systems engineering performance” model is used to continually assess performance requirements, measure performance, and make improvements as necessary. It is a framework for maintaining the highest levels of human performance, safety, and operations excellence.

CONCLUSION

Long-term, fundamental improvement in human performance requires identification and resolution of the systemic problems that lead to human error. This paper has presented a comprehensive framework for identifying weaknesses and developing improvements to the underlying structures that influence human performance. We refer to this framework as Human Performance Engineering – the application of the systems engineering viewpoint to human performance in an operating system. The paper discussed six areas of integrated technical activity that comprise a practical HPE program. Many of the human performance issues in these six areas have been addressed by chemical process and petroleum facilities, in some cases in considerable depth, using
various industry process safety management (PSM) frameworks [17,18,19,20]. However, they have not always been integrated and presented in a way that provides a comprehensive solution to the problem of human error and its impact on safety and operational performance. Many managers are expressing dismay that they have made substantial investments in PSM initiatives and continue to have accidents. As they reflect, they are realizing that their efforts focused heavily on equipment and process issues and far less on the human side of the system. Many are becoming keenly aware that most significant incidents involve human error as the root cause or as a significant contributing causal factor. Yet few have the technical background or guidance to determine exactly what to do to reduce human error. We believe that implementation of a comprehensive human performance engineering program with the elements outlined in this paper will minimize the frequency and consequences of human error and thereby enhance process safety. Further, it will help establish a culture of continual improvement in human performance, a culture of “operational excellence”, which can result in dramatic improvements in overall system effectiveness and productivity.
REFERENCES

1. Haas, P.M., P.J. Swanson, and E.M. Connelly, “Operational Human Performance Reliability Assessment,” Proceedings of the Sixteenth Reactor Operations International Topical Meeting, Long Island, NY, 1993.
2. Swanson, P.J. and P.M. Haas, “Operational Human Performance Reliability Analysis (OHPRA), A Tool for the Assessment of Human Factors Contribution to Safe and Reliable Process Plant Operation,” CA/TR-93003, Concord Associates, Inc., 1993.
3. Handbook for Designers of Instructional Systems, AFP 50-58, U.S. Department of the Air Force, 1974.
4. Guidelines for Writing Effective Operating and Maintenance Procedures, Center for Chemical Process Safety, AIChE, New York, NY, 1996.
5. Human-System Interface Design Review Guideline, NUREG-0700, U.S. Nuclear Regulatory Commission, 1996.
6. American National Standard for Human Factors Engineering of Visual Display Terminal Workstations, ANSI/HFS 100-1988, The Human Factors Society, Santa Monica, CA, 1988.
7. Human Engineering Design Criteria for Military Systems, Equipment, and Facilities, MIL-STD-1472D, U.S. Department of Defense.
8. Brown, C.M., Human-Computer Interface Design Guidelines, Ablex Publishing Corp., Norwood, NJ, 1988.
9. Gilmore, W.E., D.I. Gertman, and H.S. Blackman, User-Computer Interface in Process Control, Academic Press, San Diego, CA, 1989.
10. Senge, Peter M., The Fifth Discipline: The Art and Practice of the Learning Organization, Currency Doubleday, New York, NY, 1990.
11. Senge, Peter M., Art Kleiner, Charlotte Roberts, Richard B. Ross, and Bryan J. Smith, The Fifth Discipline Fieldbook: Strategies and Tools for Building a Learning Organization, Doubleday, New York, NY, 1994.
12. Chawla, Sarita, and John Renesch, Editors, Learning Organizations: Developing Cultures for Tomorrow’s Workplace, Productivity Press, Portland, OR, 1995.
13. Thor, Carl G., The Measure of Success: Creating a High Performance Organization, Oliver Wight Publications, Inc., Essex Junction, VT, 1994.
14. Rummler, Geary A., and Alan P. Brache, Improving Performance: How to Manage the White Space on the Organization Chart, Second Edition, Jossey-Bass Publishers, San Francisco, CA, 1995.
15. Campbell, D.J., E.M. Connelly, J.S. Arendt, B.G. Perry, and S. Schreiber, “Performance Measurement of Process Safety Management Systems,” International Conference and Workshop on Reliability and Risk Management, Center for Chemical Process Safety, AIChE, 1998.
16. Connelly, E.M., “A Theory of Human Performance Assessment,” Proceedings of the Human Factors Society 31st Annual Meeting, 1987.
17. Guidelines for Technical Management of Chemical Process Safety, Center for Chemical Process Safety, AIChE, New York, NY, 1989.
18. Process Safety Management, Chemical Manufacturers Association, Washington, DC, 1985.
19. 29 CFR 1910.119, Process Safety Management of Highly Hazardous Chemicals, Occupational Safety and Health Administration, Washington, DC, 1992.
20. American Petroleum Institute Recommended Practice 750, Management of Process Hazards, 1st ed., Washington, DC, 1990.