SYLLOGISTICS, INC. & THE HAY GROUP
FINAL REPORT
AIR FORCE OFFICER EVALUATION SYSTEM PROJECT
MANAGEMENT * PLANNING * ANALYSIS
TABLE OF CONTENTS

SECTION  TITLE                                                          PAGE

         PREFACE ..................................................... iv
         EXECUTIVE SUMMARY ........................................... v
I        INTRODUCTION ................................................ I-1
         Historical Background ....................................... I-1
         Project Objectives and Tasking .............................. I-9
II       STUDY METHOD ................................................ II-1
         Phase 1: Background Study ................................... II-1
         Phase 2: Data Gathering ..................................... II-2
         Phase 3: Literature Review .................................. II-4
         Phase 4: Data Analysis ...................................... II-5
         Phase 5: Synthesis of Recommendations ....................... II-5
III      FINDINGS ON PERFORMANCE APPRAISAL IN NON-AIR FORCE
         ORGANIZATIONS ............................................... III-1
         Performance Appraisal: Findings from the Literature ......... III-1
         Performance Appraisal: Findings from the Private Sector ..... III-23
         Performance Appraisal: Findings from the Other Services ..... III-33
IV       FINDINGS: AIR FORCE OFFICER EVALUATION SYSTEM ............... IV-1
         Major Features of the Current OER System .................... IV-1
         Issues Affecting Officer Evaluations ........................ IV-8
         Summary ..................................................... IV-21
V        CONCEPTUAL DESIGNS FOR THE AIR FORCE OER .................... V-1
         Formulation of Conceptual Design ............................ V-1
         Testing and Redesign of Concepts ............................ V-5
         Conceptual Designs for Officer Evaluation ................... V-6
         Uniform Elements of the Conceptual Designs .................. V-7
         Conceptual Design 1: Differentiation through Command
         Persuasion .................................................. V-17
         Conceptual Design 2: Differentiation through Rater
         Persuasion .................................................. V-22
         Conceptual Design 3: Differentiation through Top Block
         Constraint .................................................. V-29
         Evaluation of Conceptual Designs ............................ V-37
VI       IMPLEMENTATION PLAN ......................................... VI-1
         Feasibility Assessment and Final Decision ................... VI-2
         Design ...................................................... VI-3
         Development ................................................. VI-5
         Test ........................................................ VI-6
         Full-Scale Training ......................................... VI-8
         Full-Scale Operation ........................................ VI-9
         Evaluation .................................................. VI-11
         Refinement and Maintenance .................................. VI-12
VII      CONCLUDING COMMENTS AND RECOMMENDATIONS ..................... VII-1
         Recommended Initial Steps ................................... VII-2
         Recommended Changes to OER Process .......................... VII-3
         Recommended Implementation Actions .......................... VII-5
         Other Issues ................................................ VII-7

APPENDICES

A        REFERENCES .................................................. A-1
B        SUMMARY OF PERFORMANCE APPRAISAL METHODS .................... B-1
C        PRIVATE SECTOR PERFORMANCE APPRAISAL INTERVIEWS ............. C-1
D        INITIAL AIR FORCE INTERVIEWS ................................ D-1
E        FEEDBACK INTERVIEW SUMMARY .................................. E-1
F        OER FORMS USED IN THE SERVICES .............................. F-1
LIST OF TABLES

TABLE   TITLE                                                           PAGE

I-1     Highlights of the Air Force OER ............................... I-6
II-1    Focus Groups Identification ................................... II-3
III-1   Comparison of Performance Appraisal Methods by Purpose
        and Costs ..................................................... III-20
III-2   Other U.S. Services OER Comparison ............................ III-64
V-1     Comparison of Conceptual Designs to Design Criteria ........... V-38
VI-1    Implementation Milestone Schedule ............................. VI-13

LIST OF FIGURES

FIGURE  TITLE                                                           PAGE

IV-1    Air Force Form 707 ............................................ IV-4
V-1     Sample Job Description ........................................ V-10
V-2     OER Worksheet and Counseling Form ............................. V-12
V-3     Conceptual Design 1 ........................................... V-19
V-4     Conceptual Design 2 ........................................... V-25
V-5     Conceptual Design 3 ........................................... V-33
PREFACE
Syllogistics, Inc., and The Hay Group have prepared this final report of the Air
Force Officer Evaluation System Project sponsored by the Deputy Chief of
Staff/Personnel, under Air Force Contract No. F49642-84-D0038, Delivery Order No.
5025. Lieutenant Colonel James Hoskins, Personnel Analysis Center, Office of the
Deputy Chief of Staff, Personnel, and Lieutenant Colonel Jerry Wyngaard, Air Force
Military Personnel Center, monitored this effort and provided helpful comments on the
draft final report. The study was executed by a combined project team of Syllogistics,
Inc., and The Hay Group.
The views and opinions expressed in this report are those of the authors and
should in no way be interpreted as an official position, policy, or decision of any
Government agency, unless so designated by other official documentation.
SYLLOGISTICS STUDY PERSONNEL
Mr. Frank M. Alley, Jr., Project Director and Principal Author
Ms. Forrest Bachner, Analyst and Co-Author
Ms. Donna Lessner, Analyst
Mr. Stuart H. Sherman, Jr., Senior Vice President, Corporate Oversight
Dr. Susan Van Hemel, Analyst and Co-Author
Mr. David Weeks, Consultant
HAY GROUP STUDY PERSONNEL
Dr. George G. Gordon, Technical Director and Co-Author
Mr. Jesse Cantrill, Analyst
Lt. General (USAF, Ret.) Edgar Chavarrie, Consultant
Mr. Gregori Lebedev, Partner and General Manager, Corporate Oversight
Mr. Rene Morales-Brignac, Analyst and Co-Author
EXECUTIVE SUMMARY
From June through September 1987, Syllogistics, Inc., and the Hay Group
conducted a study to examine the strengths and weaknesses of the current United States
Air Force Officer Effectiveness Report (OER) system and to recommend alternative
designs which could improve its usefulness. Two other groups conducted separate but
concurrent efforts with the same study objective. These were active duty and retired
senior Air Force officers at Randolph AFB and students at the Air Force Command and
Staff College. Specific Air Force guidance for the project was that any alternative
conceptual design to the OER should: 1) focus on the officer's current job performance;
2) provide good differentiation among officers on potential for promotion and for
successfully executing higher responsibility; and 3) provide some vehicle for giving
officers feedback on their performance to support career development and counseling.
The study was carried out in five major phases:
o A study of the background of the officer evaluation process in the Air
Force, including review of documentation and briefings by Air Force
personnel;

o The field data gathering phase, which included interviews and focus group
discussions with Air Force officers and functional managers (interviews
and focus groups were conducted at Andrews, Charleston, Langley,
Offutt, Randolph, Scott, and Wright-Patterson Air Force Bases);

o A review of performance appraisal in non-Air Force organizations
(literature review, industry, other military services and government
entities);

o The analysis of the data; and

o Synthesis of options and recommendations.
KEY FINDINGS
Key findings from the study are described below, by source.
LITERATURE
o While a wide variety of performance appraisal methods have been
studied, most are unacceptable because they are either inappropriate to
Air Force needs or totally impractical to implement. The combination of
graphic rating scales and verbal descriptions remains, in our judgment,
the only feasible path to pursue.
o A performance appraisal system should focus on a single purpose, e.g.,
promotion. Other purposes should be addressed through alternate means.

o Performance evaluations can be improved by training the evaluators. This
applies to both rating techniques and the need to rate accurately.
o Counseling (performance or career) is best done separately from the
formal evaluation.
OTHER SERVICES
o Each of the other services recognizes the special relationship between an
officer and his/her immediate supervisor and has tried to reduce the
conflict between maintaining this relationship and providing an honest
evaluation.

o Each of the services has some mechanism for minimizing inflation in
ratings, including peer rankings (Navy and Marine Corps), rate-the-rater
(Army), and intensive headquarters review (U.S. Coast Guard).
INDUSTRY
o Since the principal purpose of performance appraisal in the private sector
is to support relatively short-term compensation decisions, much of what
is done there would not meet Air Force needs.
o Some type of rating control is prevalent in the private sector, but it is
usually driven by the compensation or merit increase budgets.
o Performance feedback is encouraged and emphasized as an important
component in supervisor-subordinate relationships, and most private
sector organizations train supervisors to give such feedback.
AIR FORCE CULTURE
o There exists the perception that the Air Force officer corps is an elite
group who are all above average.
o The "controlled system" had a very negative effect on morale.
o There is an unwillingness to openly make fine distinctions among officers.
o Career advancement is often viewed as more important than job
performance, especially by junior officers.
DEVELOPMENT OF CONCEPTUAL DESIGNS
Building upon the foregoing rich and diverse baseline of information, the
Syllogistics/Hay study team developed three alternative approaches to enhance the OER
process. These alternatives were developed in accordance with several design criteria
and guiding considerations. The design criteria stated that an improved OER should:
o Focus on job performance, not peripherals;
o Provide differentiation in potential for promotion;
o Be acceptable to the officer corps;
o Provide a means for developing subordinate officers; and
o Minimize the administrative burden.
In addition to these criteria the project team worked with a number of
considerations, including:
o Alternative OER designs should reflect and sustain the larger Air Force
culture;

o Within the Air Force, the alternative OER designs should encourage
change in attitudes and habits concerning the OER;
o Promotion board judgment, not mere statistics, should be the ultimate
method of making career decisions; and
o Alternative OER designs should be practical to implement.
RECOMMENDED OER DESIGNS
The study-developed alternatives share a number of common elements but
represent three levels of departure from current practices. Common elements in the
designs include a parallel, "off-line" feedback system between the rater and ratee; ratings
on fewer performance factors; a single verbal description of performance which focuses
on specific accomplishment, not adjectives; computer basing of ratings; an improved
method for producing job descriptions; and having potential rating done only by officers
above the level of the rater. The principal distinguishing factor among the three
alternatives resides in the methods used to assure that differentiation among officers is
built into the system.
CONCEPTUAL DESIGN 1
The first alternative accomplishes differentiation in the same way as does the
current Air Force system. That is, differentiation is represented by the level of the final
indorser. Discipline is maintained by persuasion from the Chief of Staff to the
MAJCOM commanders and by providing promotion boards with information on the
distribution of indorsements produced by each command.
CONCEPTUAL DESIGN 2
The second alternative calls for ratings of performance by the rater on a number
of scales and rating of potential by the indorser on a separate series of scales. This
method attempts to obtain a fair degree of dispersion through the "rate-the-rater"
concept. Specifically, rating and indorsing histories become part of every OER
submitted to a promotion board and also become part of the rating and indorsing
officers' records (and selection board folders) to be considered in their own evaluations.
This alternative would provide a powerful stimulus to differential ratings. However,
given the Air Force history and culture favoring "firewalling", there is substantial risk
that this approach would meet considerable resistance from the officer corps, since
with a changed system, many officers would be rated significantly lower than they
are currently.
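For illustration only, a rating history attached to an OER might summarize the
distribution of ratings a rater has previously assigned. The report does not specify
a format for these histories; the fields, the 1-to-5 scale, and the sample data in the
following sketch are our assumptions:

    # Hypothetical sketch of a "rating history" summary for one rater.
    # The 1-5 scale and the sample data are assumptions; the report does
    # not define the attachment's format.
    from collections import Counter

    ratings_given = [5, 5, 4, 5, 3, 5, 4, 5, 5, 4]  # this rater's past ratings

    history = Counter(ratings_given)
    total = len(ratings_given)
    for block in sorted(history, reverse=True):
        share = 100 * history[block] / total
        print(f"block {block}: {history[block]} of {total} ratings ({share:.0f}%)")

A promotion board reading such a summary could see at a glance whether a rater's top
ratings are scarce or routine.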
CONCEPTUAL DESIGN 3
The third and preferred alternative, differentiation through top block constraint,
is designed to reduce any stigma of "negative" ratings, while simultaneously placing
greater emphasis behind recommendations for early promotion by limiting them to ten
percent of each grade at the wing level or equivalent. This ten percent target would
allow for the overt identification of the truly outstanding performers. At the same time,
it is a small enough minority of the population so as not to threaten officers who are not
included in the ten percent stratum. By this approach, the rater would evaluate the
overwhelming majority of officers as "meeting and sometimes exceeding" job
requirements. The rater is encouraged to limit the number of officers rated "consistently
exceeds the job requirements" through the rate-the-rater concept. The wing
commander, on the other hand, would be compelled by regulation to comply with the
ten percent early promotion recommendation limit.
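To make the quota arithmetic concrete, it can be sketched in a few lines of code.
The ten percent limit per grade at wing level is the design's; the function name and
the round-down rule below are our assumptions for illustration:

    # Hypothetical sketch of the Conceptual Design 3 quota arithmetic.
    # The ten percent limit comes from the design; rounding down is an
    # assumption made for illustration only.
    def early_promotion_quota(officers_in_grade: int, limit: float = 0.10) -> int:
        """Maximum early promotion recommendations allowed at wing level."""
        return int(officers_in_grade * limit)

    for n in (20, 47, 100):
        print(n, "officers in grade -> at most",
              early_promotion_quota(n), "early promotion recommendations")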
Based on the study findings and analysis, the consulting team believes that the
third alternative is most likely to meet the Air Force's needs in both the short and long
term. In the short term, the amount of differentiation is very modest, but the possibility
of acceptance without major upheaval is reasonable. In the long run, as the ten percent
ratings and indorsements are distributed, promotion boards will be comparing individuals
with variable and qualitatively different records (since an individual may receive
different top block ratings on different factors from different raters and indorsers).
OTHER RECOMMENDATIONS
Some changes are also recommended in the information supplied to promotion
boards. In addition to supplying rating and indorsing histories, it is recommended that
only OERs in the current grade or the previous five OERs (whichever is greater) be
provided, the board be given a list of Special Category Units (SPECAT) that are likely
to have a high proportion of outstanding officers, and a thorough exposition of the
rating tendencies either of the command or of the raters/indorsers be provided to the
boards along with the selection folders.
The final recommendation focuses on the importance of a carefully planned and
deliberate implementation of any modification to the OER process. This is indeed a
critical consideration, since the implementation phase involves a number of complex
stages and sets the stage for the acceptance (or non-acceptance) of a modified officer
evaluation system.
The report provides the necessary rationale and backup information for each of
the conclusions and recommendations. We believe that the recommendations are
workable and, if implemented, will contribute significantly toward assuring the
continuation of a quality officer force.
SECTION I
INTRODUCTION
From June through September 1987, Syllogistics, Inc., in conjunction with the
Hay Group, conducted a study to examine the strengths and weaknesses of the current
United States Air Force Officer Evaluation Report (OER) and to recommend alternative
designs which could improve its usefulness. This report documents the findings and
recommendations from that study, and is organized in the following way.
Section I gives the historical background of the OER and explains the project's
objectives and tasking. Section II sets out the procedures which were followed in the
study. Section III presents the findings of the data collection and analysis phases of the
study from non-Air Force sources, while Section IV gives the Air Force specific
findings. Our rationale in formulating alternative OER designs is given in Section V,
followed by in-depth descriptions of these alternatives for improvement of the OER
system. Section VI outlines a proposed implementation plan and Section VII concludes
with summary observations of the study group.
The assessment of officer performance is an important function for the United
States Air Force and makes a significant contribution to the maintenance of the
consistent high quality of its officer force. The Air Force uses the OER for several
purposes, including: selection for promotion and school assignment; job assignment
decisions; and augmentation and separation decisions.
HISTORICAL BACKGROUND
The Air Force, like many large organizations, has experienced inflated evaluation
ratings and/or evaluation systems which were incompatible with their overall purposes.
There have been six distinct phases in the Air Force OER system since the establishment
of the Air Force as a separate service in 1947. These are: 1) the forced choice method
adopted from the Army in 1947-49; 2) the critical incident method used from 1949-52;
3) rating of performance factors with narrative commentary, 1952-1960; 4) the "9-4"
system, 1960-1974; 5) the "controlled era", 1974-1978; and finally, 6) a return to a
mechanism similar to 3) from 1978 to the present. Although these phases will be
discussed in greater detail in the following pages, two characteristics have recurred
throughout this history.
The first characteristic is that throughout all the OER changes, major and minor,
the Air Force has availed itself of extremely high-level expertise, from academia,
industry, and in-house, in its deliberations. The Air Force has over the years been
willing to consider many state-of-the-art approaches to performance appraisal.
The second characteristic is the fundamental conflict between administrative need
for differentiation, as institutionalized through the "up or out" system, versus an
institutional reluctance to identify less than outstanding performance.
PHASE 1: 1947-1949
Initially the Air Force adopted the Army system for its OER program. This
system included narrative comment, but the primary rating tool was the forced choice
method, which had been developed during World War II by industrial psychologists as a
means of reducing bias in the ratings of Army officers. In this method the rater is
asked to choose from sets of phrases those which are most and least descriptive of the
ratee. Raters did not know how the overall rating would come out, as the OER forms
were machine read and scored according to a "secret" formula. The forced choice system
was discontinued due to the lack of rater acceptance. The raters wanted to know how
they were "grading" their subordinates.
PHASE 2: 1949-1952
In 1949 a new evaluation system was implemented which incorporated the critical
incident approach as well as mandatory comments by the rater. The front side of the
form showed the rater's comments about certain ratee traits and aspects of performance
along with the indorsement. The reverse side covered proficiency and responsibility
factors on which the rater evaluated the ratee. The scores were then multiplied by a
weighting factor, totaled, and divided by the number of factors to derive a total score.
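In symbols (ours, since the report describes the scoring rule only verbally and does
not reproduce the form's actual weights), the rule amounts to:

    \text{Total score} = \frac{1}{n} \sum_{i=1}^{n} w_i s_i

where s_i is the rating on factor i, w_i is the weight assigned to that factor, and n
is the number of factors rated.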
This system was terminated in 1952 due to inflation of ratings and problems with
the scoring of the forms. Total score became the predominant concern, outweighing
individual factor scores. In addition there was some indication that inappropriate
weights had been assigned to certain factors. Finally, the ratings on the front and
reverse sides of the form often showed an illogical relationship and the form was very
time-consuming to complete.
PHASES 3 AND 4: 1952-1974
In 1952 a third OER system was implemented. This system was derived from a
study of private organizations, the other U. S. military services, and the Royal Canadian
Air Force.
The basic form of the 1952 system incorporated six performance factors which
were rated against graduated standards. The reverse side of the form called for an
overall rating as well as providing space for the indorsement.
Although there have been many forms as well as policy changes since the 1952
system was implemented, the basic form and aim of the system have remained
consistent, with the exception of the 1974-1978 period, through the present.
The changes which have occurred to the 1952 system include the timing of OER
preparation. This has alternated between a prescribed date and occurrence of an event,
e.g., a permanent change of station move. The period of supervision in which a
supervisor must have observed the work of a subordinate for rater qualification purposes
has gone from 60 to 120 days, to 90 days, and back to 120 days. The relationship of the
rater to the ratee has shifted from the officer in charge of career development in 1952
to the immediate supervisor in 1954. In addition, at various points the rank of the rater
and of the indorser relative to the ratee has been variously controlled and uncontrolled.
The number of top blocks which could constitute an outstanding overall rating has, for
psychological reasons, alternated between one block and three. One top block supposedly sent
the message that most officers should fall in the "middle of the pack." Three top blocks
were thought to encourage greater differentiation.
In 1960 the "9-4" system began. The 9-4 system continued to use the overall
9-point scale evaluation from previous systems but added to it a requirement to rate
promotion potential on a scale from 1 to 4. Initially, the 9-4 system did bring some
discipline to the ratings, but eventually the ratings became "firewalled" at the top score
of 9-4. This inflation occurred even with an extensive educational program to warn
evaluators against rating inflation.
By 1968 ratings inflation had once again rendered the OER system ineffective.
Nine out of ten officers received the highest rating, 9-4.
Development work on a new system began in 1968 and continued through 1974
when the controlled OER came into being. During this six year period four major
designs were put forth as collaborative efforts of the Air Force Human Resources
Laboratory, industry, universities, government laboratories, foreign military services, the
other Armed Services, the Air University, and the Air Staff.
PHASE 5: 1974-1978
In 1974 the controlled OER era began. The basic form of the previous OER was
retained but raters were instructed to distribute their ratings as follows: 50% in the 1st
and 2nd blocks (two highest) with a limit of 22% in the highest block. Although the
system had been extensively discussed and pretested prior to implementation, it
encountered almost immediate resistance.
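Expressed as constraints (the symbols are ours; the report states only the
percentages), a rater responsible for N reports was limited to

    n_1 \le 0.22\,N \qquad \text{and} \qquad n_1 + n_2 \le 0.50\,N

where n_1 and n_2 are the numbers of reports placed in the highest and second-highest
blocks, respectively.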
The basic problem with the controlled OER was that officers who were
experienced in a system that gave top marks on just about all evaluations understandably
resisted a system where top marks became the exception. Perceptions centered on the
notion that a "3" rating was the end of an upward career track in the Air Force.
Although educational efforts were made to overcome such misgivings and
ultimately only the top block was controlled, the initial anxiety about the system was
never overcome. In 1978 the controlled OER era ended when the Air Force leadership
decided that individual need for a less stressful OER system was more important than
the management benefits of differentiation.
PHASE 6: 1978-PRESENT
Since 1978, the OER has retained performance factors, narrative comment, and
promotion potential ratings. The majority of ratings are again "firewalled" to the top
blocks and the discriminating factor has become the rank of the indorsing official and
the words in his/her narrative remarks. Table I-1 shows various characteristics of the
OER since 1947.
TABLE I-1
HIGHLIGHTS OF THE AIR FORCE OER
PROJECT OBJECTIVES & TASKING
The Air Force leadership is concerned that the OER has again become less than
effective for its intended purposes. Some of the features which have been observed to
be deficient and which an acceptable revision should possess are: 1) focuses on the
officer's current job performance, 2) provides good differentiation among officers on
potential for promotion and for successfully executing higher responsibility, and 3)
provides some vehicle for giving officers feedback on their performance to support
career development and counseling. In order to achieve these goals, the Deputy Chief of
Staff for Personnel directed that a study of the OER be performed, to result in
recommendations for an improved Air Force OER system and for its implementation.
Three groups were tasked to perform this study. The first of these groups is
composed of active duty and retired senior Air Force officers and is based at Randolph
AFB, Texas. The second group is composed of twelve students at the Air Force
Command and Staff College at Maxwell AFB, Alabama. They conducted their study as
a class project. The Syllogistics/Hay team is the final study group. This team was
chosen to provide an independent, outside view of the officer evaluation issue and to
apply the expertise of the private sector to the solution of the problems. This study is
the basis of this report.
The Syllogistics-Hay team was specifically tasked to study the current Air Force
Officer Evaluation Report process to determine its strengths and weaknesses, to apply
their knowledge of available methods for performance appraisal, and to develop one or
more conceptual designs for an improved OER process and recommendations for the
implementation of the design(s).
SECTION II
METHOD
The study was carried out in five major phases: 1) a study of the background of
the officer evaluation process in the Air Force, including review of documentation and
briefings by Air Force personnel; 2) the field data gathering phase, which included
interviews and focus group discussions; 3) a review of performance appraisal from non-
Air Force sources; 4) the analysis of the data; and 5) synthesis of options and
recommendations. Each of these phases will be described in some detail in the following
sections.
PHASE 1: BACKGROUND STUDY
At the outset of the study, the Air Force provided a briefing to contractor
personnel, covering several aspects of the OER, its purposes and the process by which it
is completed. The briefing described the current officer evaluation report form and its
evolution through the history of the Air Force, with information on the lessons learned
as each change was implemented. It described the philosophy of officer evaluation, as it
has evolved, and the difficulties which have recurred through time, especially inflation
of ratings and "gaming" of the evaluation system.
At the contractor's request, an additional briefing was provided, covering the Air
Force promotion system and its interaction with officer evaluation. This briefing
provided valuable background on the operation of promotion boards, on the use of the
OER in promotion decisions, and on the officer force structure and factors affecting
promotion opportunities.
Copies of briefing materials, as well as pertinent reports, Air Force regulations
and other publications were provided to the contractors. Contractor personnel carefully
reviewed these materials. This was an essential step in the preparation for the next
study phase, the gathering of data from Air Force personnel and others.
PHASE 2: DATA GATHERING
The data gathering phase of the study had four components. The first was
personal interviews with individual Air Force officers who are highly knowledgeable of
the personnel policies and procedures relating to officer evaluation. These officers
ranged from general officers in command and policy-making positions to mid-level
officers responsible for administration of the OER system. In each case, an interview
guide (see Appendix D) was used to direct the discussion and to ensure coverage of
points which the contractors had determined to be of major importance to this study.
Notes were taken in all interviews for later analysis by the study team. All interviews
were conducted by senior team members with extensive experience and expertise in
interview techniques. The interviews ranged in length from one to three hours. A list
of the officers interviewed is displayed at page D-2.
The second data gathering component was the convening of focus groups of six
to eight Air Force officers each to discuss the OER process. The nine groups included
ranks from lieutenant to major general, but each group was composed of officers of
similar rank (e.g., lieutenants and junior captains, lieutenant colonels and colonels). Some
groups included only rated officers or only support officers, while others were mixed.
A list of the groups, their location, and composition is given in Table II-I.
TABLE II-1
FOCUS GROUPS IDENTIFICATION

Group No.  Location        Ranks             Other Information
1          Randolph AFB    General Officers  Promotion Board Members
2          Pentagon        Colonel           All Air Staff; mixed rated/non-rated
3          Randolph AFB    Lt/Junior Capt    Non-rated; support
4          Charleston AFB  Lt/Junior Capt    Rated; operations
5          Randolph AFB    Sr Capt/Maj       Rated; operations
6          Randolph AFB    Sr Capt/Maj       Non-rated; support
7          Randolph AFB    Maj/LtCol         Rated; operations
8          Charleston AFB  Maj/LtCol         Non-rated; support
9          Randolph AFB    LtCol             Mixed rated/non-rated; ops/support
Each focus group was conducted by two contractor personnel, with additional
personnel present as recorders at most sessions. One of the two served as chief
facilitator and led the group discussion with the aid of a discussion guide (see Appendix
D). The second facilitator was less active, entering the discussion only infrequently, and
assisting in maintaining the focus of the session. The Air Force personnel in the groups
were informed of the purposes and method of the study at the beginning of each session
and were encouraged to be honest and open. The contractor's goal in these groups was
to elicit information, not only on the operation of the OER system, but more
importantly on how officers feel about the process and how it affects their careers.
Each focus group met for approximately one and one-half to two hours.
The third component of the data gathering effort was a series of interviews with
persons responsible for administering officer evaluation systems of the U.S. military
services other than the Air Force and of the U.S. Department of State and the Canadian
Armed Forces. These interviews were conducted to learn about details of the officer
performance evaluation systems of these services. The interviews focused upon
identifying the ways in which these systems differ from the Air Force OER system and
the significance of such differences. Each respondent was asked about specific strengths
and weaknesses of the system which he/she administered, and most respondents provided
documentation on their systems.
The fourth data gathering component was a series of telephone interviews with
representatives of major corporations which have active management performance
appraisal programs. These interviews were conducted to obtain information on current
private sector performance evaluation practices. Fourteen interviews were completed,
using an interview guide (see Appendix C) to ensure that all major points were covered.
The interviews were performed by persons with expertise in private sector performance
evaluation issues.
PHASE 3: LITERATURE REVIEW
In addition to the study of the background materials provided by the Air Force,
the contractors searched and reviewed a large sample of historical and current literature
on performance appraisal. Textbooks and review articles were used for an overview of
"traditional" performance appraisal methods, and for information on the salient features
of each of these methods.
Special attention was given to current research literature, with the goal of
identifying and evaluating currently popular appraisal methods and systems. This
literature was reviewed selectively, with emphasis on issues and methods which appeared
especially relevant to the needs of the Air Force.
PHASE 4: DATA ANALYSIS
The data analysis effort included several elements, some of them performed
concurrently. Since the literature review analysis produced a conceptual framework
within which other information was analyzed, it will be discussed first.
The literature review findings were analyzed and organized in several ways.
First, the information was searched to determine major features which are common to
all or most performance appraisal systems. These features were listed and used in the
analysis of data from other sources (see below). The study team also developed a
taxonomy of performance appraisal systems, based on what is evaluated, what measures
are used, and the techniques by which the measures are applied. The next step was to
identify in the literature a consensus on the relationship between organizational
characteristics and performance appraisal methods. This resulted in a number of
principles relating organizational characteristics to the categories of appraisal methods
which have been found to be appropriate to them.
The material from the briefings and documents provided by the Air Force was
reviewed to extract major recurring themes or issues. These issues were listed and
classified for use when evaluating alternative proposals for changes to the OER process.
Those issues which emerged as most important were also compared with the data
gathered in interviews and focus groups (i.e., are the historically important issues still
seen as important by current officers?).
The notes from interviews with Air Force personnel and from the Air Force
focus groups were analyzed to determine major issues. A capsule description of each
issue was prepared, and where specific issues could be identified with particular
population groups, this was done. Certain issues, for example, were of concern more to
rated than to non-rated officers; others were more salient to junior officers than to
senior officers.
The issues were categorized into groups according to their content or area of
reference, for example, issues relating to the OER form, to the OER process, and to the
matter of control of rating distributions. The study team was careful to document the
perceived strengths of the present system as well as its perceived weaknesses. The study
team also noted its impressions of Air Force cultural and organizational characteristics
which interact with the OER process, since these are of great importance in determining
the acceptability and feasibility of any proposed changes to the OER process.
The data from interviews with the other services and departments were reviewed
and analyzed to extract major features of each performance appraisal system. A
comparison matrix was prepared to facilitate understanding of these systems and of their
similarities and differences. These systems were also examined to determine how each
deals with the issues which had been found to be of greatest importance to the Air
Force.
The information gathered by telephone interview from large corporations was
subjected to an analysis similar to that used for the other military services. Major
features of each corporation's performance appraisal system were extracted, and a matrix
was prepared comparing the features across companies.
PHASE 5: SYNTHESIS OF RECOMMENDATIONS
Upon completion of the data analysis, the study team began developing
conceptual designs for improving the Air Force OER process. This involved careful
consideration of the criteria which had been developed for a successful OER, the
practical considerations which had emerged in the analysis phase, and the knowledge
gained from the literature and from other organizations concerning the feasibility and
effectiveness of various potential solutions to the problems we had identified.
Several preliminary OER designs were outlined, and their salient features were
listed. These features were then discussed during interviews with 20 Air Force officers
of various ranks, many of whom administer OER processing for their commands or
activities, to obtain feedback on the value and feasibility of each feature. The feedback
interview results were tabulated and analyzed, and decisions were made by the study
team about features to be retained and those to be discarded or revised. The
preliminary alternative conceptual designs were then revised into final recommended
conceptual designs for presentation at the final briefing and in this final report.
"1-7
SECTION III
FINDINGS ON PERFORMANCE APPRAISAL IN NON-AIR FORCE
ORGANIZATIONS
This section gives the findings about performance appraisal in non-Air Force
organizations. These were collected from a review of the performance appraisal
literature, interviews with fourteen private sector organizations, and interviews with
officials from the other armed services as well as the Department of State.
PERFORMANCE APPRAISAL: FINDINGS FROM THE LITERATURE
A literature search was conducted during the project with two purposes in mind.
First, we wanted to determine recent trends and developments in the field of
performance appraisal. Second, we hoped to cull from the literature an indication of
standard elements for a performance appraisal system which could be used in our
analysis of, and deliberations over, alternative OER designs.
In addressing these two purposes, this section is organized into four parts. The
first part, Survey and Background, discusses the available literature and gives the
historical development and current position of performance appraisal. The second part,
Standards, offers a set of standards for all performance appraisal systems and discusses
typical errors in appraisal. This part also includes a discussion of the components of any
performance appraisal system. The third part, Methods, describes the primary forms of
performance appraisal with the emphasis on subjective methods and compares these
methods. The fourth part, Implications, offers some conclusions from the literature
search and their implications for the Air Force's inquiry into alternative OER designs.
SURVEY AND BACKGROUND
The literature on performance appraisal is both extensive and diverse, and
touches on many side issues such as motivation, job satisfaction, equity, etc. The bulk
of the literature focuses on different aspects of documentable performance measures, a
focus which is understandable due to the legal requirements of Equal Employment
Opportunity legislation.
At the same time, an area that is somewhat lacking in treatment is that which
pertains to such broad organizational issues as the practical and meaningful
implementation of performance appraisal within an organization and the matching of
performance appraisal techniques with performance appraisal purposes.
Rating scales, as a performance appraisal technique, have been in use at least
since the 1920s. Although several newer techniques have been introduced, rating scales
still predominate. Much has been written about Behaviorally Anchored Rating Scales
(BARS), but the developmental costs appear to outweigh the advantages associated with
the technique. The use of outcome-oriented techniques, such as management-by-
objectives, as a performance appraisal method is increasing in popularity as a
management tool, but there is some indication that its popularity for appraisal purposes
may be fading.
The thrust of the literature search was on current literature, which for our
purposes was 1985 to the present. Certain standard texts were also used, primarily for
the Methods section. These were Organizational Behavior and Personnel Psychology by
Wexley and Yukl (1977); Personnel: A Diagnostic Approach by Glueck (1978); and,
finally, Applied Psychology in Personnel Management by Cascio (1982).
Performance appraisal, evaluation, or, as it is alternatively called, employee
proficiency measurement, is generally defined as "the assessment of how well an
employee is doing in his/her job" (Eichel and Bender, 1984). The activity of assessing
job performance is certainly widespread in the United States. A Bureau of National
Affairs (BNA) study in 1974, for example, found that three-fourths of supervisors,
office workers, and middle managers have their performance evaluated annually. A
second BNA study (BNA 1975) showed that 54% of blue collar workers participate in
performance appraisal. How these assessments are used by organizations, however,
varies widely and has shifted noticeably over time.
Before 1960, performance appraisals were used by most organizations to justify
administrative decisions concerning salary levels, retention, discharges, or promotions.
In the 1960s, the purpose of performance appraisal grew to include employee
development and organizational planning (Brinkerhoff and Kanter, 1980). In the 1970s,
requirements of the Equal Employment Opportunity laws caused organizations to
formalize performance appraisal requirements in order to justify salary, promotion, and
retention decisions (Beacham, 1979).
Currently, performance appraisal is used primarily for compensation decisions
and often for counseling and training development. Performance appraisal is used less
frequently as a basis for promotion, manpower planning, retention/discharge, and
validation of selection techniques. (Eichel and Bender, 1984; Hay Associates, 1975;
Locker and Teel, 1977).
Although performance appraisal is widely practiced, the activity is still usually
regarded "as a nuisance at best and a necessary evil at worst' (Lazer and Wikstrom,
1977). This attitude towards performance appraisal seems to be held often by both
evaluator and evaluatee. Schneier, Beatty, and Baird (1986) note that the requirements
of performance appraisal systems often clash with the realities of organizational culture
and of managerial work. For example, a manager often has an interest in taking
decisive action whereas the performance appraisal may have ambiguous, indirect results.
Employee attitudes toward organizational promotional systems have also been found to
be negative. In one study of such attitudes it was found that respondents believed that
personality was the most significant factor in career advancement and that promotion
decisions were usually made subjectively and arbitrarily by superiors (Tarnowieski,
1973).
Regardless of the perceptions, performance appraisal is a necessary organizational
activity. The following sections describe the current state of this activity.
STANDARDS OF PERFORMANCE APPRAISAL
Whatever performance appraisal system is used, there are certain standards which
the system should meet. The literature identifies five such categories of criteria, namely:
legality, validity, reliability, acceptability, and practicality (i.e., cost and time). These
categories are closely related and must be defined in relation to one another.
Legality refers to the legal requirements for performance appraisal systems,
which are the same as for any selection test in that they stipulate that the performance
appraisal system be valid and reliable. Validity, in turn, refers to the extent to which an
instrument or method measures what it purports to measure. For example, an
organization decides to evaluate an employee's performance. If the goal of the
performance appraisal is selection for promotion then the performance factors to be
evaluated must be selected based on an idea of what will be successful performance
indicators for the next level position. This evaluation would not be valid unless it could
be demonstrated that success in the selected factors was a predictor of success in the job
to which the employee was being promoted.
Apart from legal implications, it must be noted that the idea of validity is
important at the more elementary level of organizational planning as well. If the
organization were to evaluate job performance for developmental purposes then the
evaluation must be designed to identify individual strengths and weaknesses and must
incorporate a vehicle for communicating this information between the rater and ratee.
The third criterion, reliability, is the extent to which a personnel measurement
instrument provides a consistent measure of some phenomenon. For example, given the
assumption that a person's skills do not change, an instrument which measures skills
repeatedly would be reliable only if it repeatedly produced approximately the same
scores.
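For illustration, test-retest reliability of this kind is conventionally quantified
as the correlation between the repeated scores. The sketch below is ours, with invented
data, and is not drawn from the report:

    # Minimal sketch: test-retest reliability as the Pearson correlation
    # between two administrations of the same instrument to the same people.
    # Scores are hypothetical; requires Python 3.10+ for statistics.correlation.
    from statistics import correlation

    first_scores = [72, 85, 90, 64, 78, 88]    # first administration
    second_scores = [70, 87, 91, 66, 75, 89]   # second administration

    r = correlation(first_scores, second_scores)
    print(f"test-retest reliability (Pearson r) = {r:.2f}")  # near 1.0 => reliable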
The fourth criterion, acceptability, refers to a system's having to be acceptable to
both evaluators and evaluatees. By acceptable, we mean that the system be perceived as
fair and supportable within the organizational culture. Findings from one study of
middle-level managers indicate that the procedures by which appraisals were made
seemed to affect the perception of fairness to the same degree as the ratings themselves
(Greenberg, 1986). This study also found that procedures that give employees input to
the performance appraisal system are seen as being fairer than those that do not.
The issue of acceptability must be considered whenever there is an attempt to
introduce a new appraisal system into an established organization. No matter how well-
designed an appraisal system is from a technical standpoint, it is not likely to be
effective if it requires behaviors which are incompatible with the customs and
expectations of the organization's members. A well-designed and well-implemented
program of education and training may improve the acceptability of any appraisal
system, but it will not overcome a fundamental mismatch between the appraisal method
and the corporate values or culture.
Finally, the criterion of practicality refers to the requirement that the
performance appraisal system should be fairly simple to administer and reasonable in
terms of time required and cost of development.
Problems of Performance Appraisals
Although these standards could go a long way in promoting the integrity of
performance appraisal systems, there are still typical, almost unavoidable errors made in
the performance appraisal process due to the subjective nature of most measurement
techniques combined with the proclivities of the raters. Among these are central
tendency errors, "halo" effects, contrast effects, similarity-to-self errors and opportunity
bias.
Central tendency error is the propensity to grade performance at an average point
on a scale rather than rate at the very high or very low end. Leniency and strictness are
different manifestations of the same theme -- leniency being defined as the tendency to
constantly rate at the higher end of the scale and strictness the reverse.
A second common difficulty is referred to as the "halo" effect. The halo effect
occurs when an evaluator assesses all factors based on the evaluator's own feelings about
one or more factors of performance, rather than assessing each factor objectively. Halo
effect can be reduced either by changing the sequence in which the evaluator rates
performance factors or by making the performance factors more specific.
Contrast effects occur when a person is evaluated against other people rather
than against the requirements of a job. For example, three people are up for a
promotion, one average and two less than average performers. The evaluator promotes
the average performer because he or she looks better in contrast to the other two
candidates, not because he/she is necessarily qualified for the promotion.
Similarity-to-self error occurs when an evaluator rates a person based on the
evaluator's (often unconscious) perception of how similar that person is to him- or
herself. This similarity could be in terms of job experience, educational background,
personal preferences, etc. Once again, the evaluator is not using a job related criterion
to make his/her rating decision.
Opportunity bias is a rating error which can manifest itself in two ways. The
first is when objective data which may or may not be job related are used in an
evaluation. Such objective data could be absenteeism, tardiness, sick leave, etc. These
data are objective and readily available, but may be over-emphasized relative to other
aspects of the job which cannot be measured objectively.
The second way in which opportunity bias occurs is often associated with
evaluations for employees of field offices, remote sites, etc., by headquarters personnel.
In this manifestation, the evaluator tends to downgrade the field personnel because their
work is not visible to the evaluator.
Components of Performance Appraisal
Prior to discussing specific methods of performance appraisal, the actual
components of the performance appraisal system need to be identified. These include
goals, methods of performance appraisal, indicators of performance, schedule of
appraisals, and evaluators.
Goals. The goal or purpose of performance appraisal is usually either to support
the administrative needs of the organization or to facilitate individual employee
development. The goal of the performance appraisal should drive the type of
performance appraisal system used and the type of performance information collected.
For example, the primary administrative uses of performance appraisal are for
compensation and promotion decisions. One would assume, then, that an organization
would make these decisions based on assessment of current performance and would
choose a performance appraisal method which would provide that information. The
same idea would hold for the organization whose performance appraisal goal is employee
development. The method chosen in this case should give an indication of employee
strengths and weaknesses.
There is indication in the literature that performance appraisal for multiple
purposes which include development tends to fail on the development side. One important
study showed that employees became defensive about performance counseling when a
compensation decision was dependent on a favorable rating (Meyer, Kay & French,
1965). For this reason some authors argue for separate performance appraisal systems
for different purposes or at least for separating the counseling session in time from the
formal evaluation.
Methods. Methods of performance appraisal can be categorized as objective and
subjective methods for purposes of broad differentiation.
Subjective methods, on the one hand, rely on the opinion of an individual or
several individuals regarding an employee's performance. Most often subjective methods
use some sort of scaling device to record these opinions concerning specified
performance factors. There is tremendous variation in these techniques, mainly in the
degree of accuracy attempted by the scale.
Objective methods, on the other hand, use direct measures to rate employees.
Such direct measures can be either rates of production, personnel statistics (e.g., absence
rates, sick days), accomplishment or non-accomplishment of specified performance
objectives, or test scores.
Objective methods are generally used with employees whose jobs are repetitive or
production-oriented. Objective measures carry the obvious advantage of not being
dependent on evaluator judgment. However, they may not be as useful to many
organizations as subjective measures because they often reflect outcomes which may not
provide the total, or most important, picture of an individual's performance. In
addition, they frequently fail to provide a means for comparison of performance among
employees. Finally, it is occasionally the case that plausible objective performance
measures simply cannot be devised for a particular job. Practical considerations usually
limit the use of objective techniques, although it is important to note that objective
information can be helpful in supporting subjective ratings, even when correlations
between subjective and objective ratings are low (Cascio & Valenzi, 1978).
Taylor and Zawacki (1984) categorized methods as traditional (i.e., use of
quantitative or statistical tools along with judgment by an evaluator to evaluate
performance) or collaborative (i.e., use of some form of joint, evaluator-evaluatee, goal-
setting technique related to performance.) In a study of Fortune 500 companies, these
authors found that collaborative designs brought about improvements in employee
attitudes more often than traditional designs. They also found that, although more
companies were satisfied with collaborative than with traditional designs, there was a
general shift in usage to traditional designs, perhaps due to legal requirements for
precise measurement.
In another study of the effects of goal-setting on the performance of scientists
and engineers, nine groups were formed which varied goal setting strategies (assigned
goals; participatively set goals; and "do your best") and recognition vehicles (i.e., praise,
public recognition, bonus) (Latham & Wexley, 1982). Those in the groups which set
goals, either assigned or participatively, had higher performance than those in the "do
your best" group. In addition, it was found that those in the participative group set
harder goals and had performance increases which were significantly higher than the
other two goal-setting categories.
Indicators. Indicators of performance can be behaviors displayed by employees,
tangible results of employees' performance, and/or ratings on employee traits or qualities
(e.g., leadership, initiative).
There is consensus in the literature that traits are not the preferred performance
indicators. Traits are difficult to define and therefore can lead to ambiguity and poor
inter-rater reliability. Trait rating may also not be helpful from a developmental
position as it is hard to counsel employees, for example, on "drive". Finally, a trait-
oriented appraisal is likely to be rejected by the courts (Latham & Wexley, 1982). It is
difficult to show, first, that a trait has been validly and objectively measured, and
second, that a particular trait is a valid indicator of job performance level. Behavioral
indicators can be shown through job analysis to be valid measures of performance.
Research on these indicators suggests that rating both behaviors and results is the best
course of action (Porter, Lawler & Hackman, 1975).
Schedule of the Appraisal. Most organizations appraise performance annually,
usually for administrative convenience. Schedules are often based on employee
anniversary dates with the organization, seasonal business cycles, etc.
Appraisals scheduled once a year solely for administrative convenience are
difficult to defend from a motivational viewpoint, since feedback is more effective if it
immediately follows performance (Cook, 1968). In addition, if all appraisals are
conducted at one time then managers have an enormous workload, although the annual
dates for all employees need not coincide. Variable schedules for appraisals can be used
when there are significant variations in an employee's behavior, although problems with
this idea can include inconvenience and lack of consensus over what should constitute
"significant variation."
Evaluator. An evaluator can be the employee's immediate supervisor, several
supervisors, subordinates, peers, outside specialists or the employee him/herself.
In a study by Lazer & Wikstrom (1977), the employee's immediate supervisor was
found to be the evaluator for lower and middle management in 95% and for top
management in 86% of companies surveyed. Use of the immediate supervisor as the
evaluator is generally based on the belief that the supervisor is the most familiar with an
individual's performance and therefore the best able to make the assessment.
Several supervisors can be used to make the appraisal, a method which has the
possibility of balancing any individual bias. Eichel and Bender's study (1984) shows that
in 63% of the responding companies another supervisor would join in the appraisal in
some way. Another study (Cummings and Schwab, 1973) showed, however, that an
evaluation by a trained supervisor was as effective as by a typical rating committee. In
any event, the research on the effectiveness of joint appraisal by several supervisors is
sparse and inconclusive.
Peer evaluation, although rarely used, consistently meets acceptable standards of
reliability and is among the best predictors of performance in subsequent jobs. Also,
peer appraisals made after a short period of acquaintance are as reliable as those made
after a longer period (Gordon & Medland, 1965; Korman, 1968; Hollander, 1965). Peer
evaluations may not be used extensively because peers are often reluctant to act as
evaluators or to be evaluated by their peers, supervisors may not want to relinquish their
managerial input to evaluation, and it may be difficult to identify an appropriate peer
group.
Outside specialists can be brought in to conduct appraisals, but this is rare. The
assessment center technique incorporates outside personnel but this technique is often
expensive in terms of time and manpower. Use of outside specialists was so infrequent
that it was not even reported in the 1975 BNA study.
Self evaluation in the form of either formal or informal input to the appraisal
process was reported in three out of four responding companies in Eichel and Bender's
survey (Eichel & Bender, 1984). Several studies which compared self and supervisory
assessments showed low agreement between the two techniques (Meyer, 1980). Self
assessment appears to be used primarily for employee development purposes, while
supervisory assessment is used mainly for evaluative purposes.
The role of the evaluator is key, because most performance appraisal systems
rely on the judgment of the evaluator. On this
point the literature supports the idea that evaluator training can be effective in reducing
evaluator error, such as "halo", especially if the training includes practice (Landy & Farr,
1980).
Within the context of these components of any performance appraisal, specific
methods of appraisal are described next.
METHODS
As discussed in the previous section, methods for performance appraisal can be
divided into objective and subjective. An overview of the methods is presented below,
with the subjective methods first. Appendix B offers a more complete discussion of each
technique along with sample forms.
Subjective Methods
Nine subjective performance appraisal methods are identified in the literature,
including:
Rating Scales. These have been and continue to be the most popular forms of
performance appraisal. In this method, the evaluator is asked to score an employee on
some characteristic(s) on a graphic scale. Characteristics can be personal traits such as
drive, loyalty, enthusiasm, etc., or they can be performance factors such as application
of job knowledge, time management, and decision-making. Scoring is sometimes left
completely to the judgment of the evaluator; alternatively, standards can be developed
which give examples of what should constitute a particular score on the trait or
performance factor.
The scale on which the factor is scored may be a continuous line, or, in the
multiple-step variation, the evaluator may be required to score in discrete boxes.
The widespread use of rating scales is probably attributable to administrative
convenience and applicability across jobs. In their simplest forms, however, rating scales
are prone to many types of evaluator bias.
Behaviorally Anchored Rating Scales, or BARS, were developed to address this
problem. BARS provide specific behavioral examples of "good" performance or "poor"
performance developed and validated by supervisors for a particular job. The use of
behavioral examples precludes much of the ambiguity of such descriptors as
"exceptional". BARS, once developed, are fairly easy to use and can provide the
employee with rather specific feedback. BARS are very expensive to develop and
usually are constructed for each specific job. There seems to be some consensus that, on
a job-by-job basis, the expense may outweigh the value. Their most appropriate
application is for very high-density jobs such as telephone operator.
Checklists. In this method the evaluator is given a list of behavioral statements
and asked to indicate or check whether he/she has observed the evaluated employee
exhibiting these behaviors. A rating score is obtained by totaling the checks. Weighted
checklists also use behavioral statements, but weights have been developed for each
statement which correspond to some numerical point on a scale from poor to excellent.
Evaluators indicate presence or absence of each behavior without knowledge of
associated scores. The evaluatee's final score is obtained by averaging the weights of all
items checked.
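The scoring arithmetic is simple. As an illustration only — the statements and weights below are invented, not drawn from any study cited in this report — a weighted checklist score might be computed as follows:

```python
# Hypothetical weighted-checklist scoring sketch. In practice, the
# weights are developed in advance for each behavioral statement and
# are not shown to the evaluator.
ITEM_WEIGHTS = {
    "completes assignments on schedule": 3.5,
    "volunteers for additional duties": 4.2,
    "requires repeated instructions": 1.4,
    "assists coworkers without prompting": 4.6,
}

def weighted_checklist_score(checked_items):
    """Average the weights of all items the evaluator checked."""
    weights = [ITEM_WEIGHTS[item] for item in checked_items]
    return sum(weights) / len(weights) if weights else None

# Example: an evaluator checks two of the four statements.
print(weighted_checklist_score([
    "completes assignments on schedule",
    "assists coworkers without prompting",
]))  # -> 4.05
```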
Forced Choice. The forced choice method was developed during World War II
by industrial psychologists as a means of reducing bias in the ratings of Army officers.
In this technique, statements are developed and arranged in groups of four, two
favorable and two unfavorable per group. The evaluator is asked to pick from each
group the statements which are most and least descriptive of the employee being rated. One
statement in each group is actually a discriminator of effective and ineffective behavior.
The other statements are not. The rater does not know which statements are the
discriminators and which are not. Scoring is done separately, usually by the personnel
department.
The obvious advantage of this technique is that the system, properly constructed,
should reduce subjectivity. However, evaluators are often reluctant to use the method
because they don't know how they are rating employees. In addition, considerable time
is required to develop the discriminating statements properly. Finally, the system does
not effectively support employee development needs.
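To make the scoring logic concrete, the following minimal sketch uses invented statement groups and discriminators; in a real system the discriminating statements are developed empirically and are withheld from the evaluator:

```python
# Hypothetical forced-choice scoring sketch. Each group holds two
# favorable and two unfavorable statements; exactly one statement per
# group actually discriminates effective from ineffective performance.
GROUPS = [
    {"statements": ["plans ahead", "is well liked",
                    "misses deadlines", "avoids paperwork"],
     "discriminator": "plans ahead"},
    {"statements": ["speaks confidently", "follows up on tasks",
                    "blames others", "resists feedback"],
     "discriminator": "follows up on tasks"},
]

def forced_choice_score(picks):
    """Count how often the rater's 'most descriptive' pick hits the
    hidden discriminator. `picks` maps group index -> chosen statement.
    Scoring is done away from the rater, e.g. by the personnel office."""
    return sum(1 for i, group in enumerate(GROUPS)
               if picks.get(i) == group["discriminator"])

# Example: the rater's "most descriptive" choices for the two groups.
print(forced_choice_score({0: "plans ahead", 1: "speaks confidently"}))  # -> 1
```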
Critical Incident. Like checklists, the critical incident technique involves
preparing statements which describe employee behaviors. These statements, however,
describe very effective or successful behaviors. Supervisors then keep a record during
the rating period indicating if and when the employee exhibits these behaviors. This
record can be used during the appraisal interview to discuss specific events with
employees. The critical incident technique can be very effective for development
purposes, but is not as useful for compensation or promotion decisions.
Forced Distribution. The forced distribution method asks the evaluator to rate
employees in some fixed distribution of categories, such as 20 percent poor, 50 percent
average, and so forth. This distribution can be done in sequence for different purposes,
i.e., job performance and promotion potential. This technique is administratively simple,
but there are several disadvantages to the use of a forced distribution. It is not useful in
providing feedback to the ratee on his/her performance for use in developmental
counseling. It often encounters resistance from the raters, who are uncomfortable
assigning large numbers of subordinates to categories which are less than favorable. The
use of forced distributions where the ratings of multiple groups must be combined may
also lead to problems, because the groups may not all be seen as of equal "quality" by
raters and ratees. For example, is an average performance in a highly selected work
group the same as an average performance in a less elite group? If not, how can the
difference be equitably dealt with in the system? Forced distribution is usually done to
control ratings and to limit inflation.
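The mechanics can be sketched as follows; the category quotas and raw scores are hypothetical, and real systems typically specify their own categories and a minimum group size before the distribution applies:

```python
# Minimal forced-distribution sketch with invented quotas and scores.
def forced_distribution(ratings, quotas):
    """Assign employees to categories in fixed proportions.

    `ratings` maps employee -> raw score; `quotas` is an ordered list of
    (category, fraction) pairs from highest to lowest. Employees are
    sorted by score and sliced into quota-sized bins."""
    ordered = sorted(ratings, key=ratings.get, reverse=True)
    result, start = {}, 0
    for category, fraction in quotas:
        count = round(fraction * len(ordered))
        for name in ordered[start:start + count]:
            result[name] = category
        start += count
    # Any remainder from rounding falls into the last category.
    for name in ordered[start:]:
        result[name] = quotas[-1][0]
    return result

scores = {"A": 92, "B": 85, "C": 78, "D": 70, "E": 66}
print(forced_distribution(scores,
                          [("top", 0.2), ("middle", 0.6), ("bottom", 0.2)]))
# -> {'A': 'top', 'B': 'middle', 'C': 'middle', 'D': 'middle', 'E': 'bottom'}
```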
Ranking. Ranking involves simply ordering employees from highest to lowest
against some criterion. The method carries about the same advantages and disadvantages
as forced distribution but is harder to do as the group size increases. Ranking also does
not allow valid comparison across groups unless the groups share some of the individuals
in common.
Paired Comparison. The paired comparison is a more structured ranking
technique. Each employee is systematically compared one-on-one against each other
employee in a defined group on some global criterion, such as ability to do the job.
When all employees in the group have been scored, the number of times an employee is
preferred becomes, in effect, his/her score. This method gives a straightforward
ordering of employees; however, it does not yield information which might be helpful
for employee development. Paired comparison, like ranking, does not allow comparison
across groups.
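The counting procedure can be sketched as follows; the preference function here is a stand-in for the evaluator's one-on-one judgment, which of course cannot be reduced to code:

```python
# Paired-comparison sketch with a hypothetical preference function.
from itertools import combinations

def paired_comparison(employees, prefer):
    """Compare every pair once; an employee's score is the number of
    times he/she is preferred over another group member."""
    wins = {name: 0 for name in employees}
    for a, b in combinations(employees, 2):
        wins[prefer(a, b)] += 1
    return sorted(wins.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative stand-in for the evaluator's judgment: alphabetical order.
print(paired_comparison(["Cole", "Adams", "Baker"],
                        prefer=lambda a, b: min(a, b)))
# -> [('Adams', 2), ('Baker', 1), ('Cole', 0)]
```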
Field Review. The field review approach uses an outside specialist, often
someone from the personnel department, to conduct the evaluation. Both the manager
and the subordinate are questioned about the subordinate's performance, then the
specialist prepares the appraisal with managerial concurrence. The major advantage of
the field review technique is that it reduces managerial time in the appraisal system and
may provide more standardization in the appraisals. Managers may, however, delegate
all the appraisal function to the personnel office when in practice the technique is
designed to be a collaborative effort.
Essay Evaluation. In this technique the evaluator writes an essay about the
employee's performance. The essay is usually directed, that is, certain aspects of the
employee's behavior must be discussed. Essays are often used in conjunction with
graphic rating scales to explain a score. One disadvantage of this approach is that the
writing ability of the rater can influence the employee's final rating if the evaluation is
passed through the organizational hierarchy.
Objective Methods
Objective methods do not rely on the judgment of an evaluator and usually
involve capturing direct information about an employee's proficiency or personal work
statistics such as tardiness, etc. Objective methods are usually restricted to production
oriented and repetitive jobs although they are also applied to jobs which are responsible
for sales, profit or other objective outcomes. Even though objective methods may not
rely on subjective judgments, they are still not a panacea for performance appraisal for
the jobs where they are applicable. This is because the objective data is most relevant
to the assessment of current performance, but probably could not stand alone as a
performance appraisal technique for promotion or development purposes. Judgment as
to the relevance of the data still adds a level of subjectivity which is impossible to
avoid.
Two objective methods, proficiency testing and measurement against production
standards, are discussed below.
Proficiency Tests. Proficiency tests measure the proficiency of employees at
doing work and are basically simulations of the work a job entails. Typing tests and
assessment center simulation are examples of this technique. Written tests can also be
used to measure the employee's job related knowledge. One disadvantage of the testing
technique, in addition to those given generally above, is that some people are more
anxious during a testing situation than in an actual work situation, and these people will
be at a disadvantage if their anxiety affects their performance. A second disadvantage is
that proficiency tests tend to measure what can be done as opposed to what is done daily
on the job. For example, lack of motivation on the job may not be reflected in the test
scores.
Measurement Against Production Standards. Production standards are levels of
output which reasonably can be expected from an employee within a given amount of
time. Standards can be set through sophisticated industrial engineering techniques or
they can be as simple as the average output of all employees in the given time. In any
event, an employee's actual performance can then be measured against the standard
rather than against other employees.
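A brief numerical illustration (the figures are hypothetical):

```python
# Minimal sketch: measure actual output against a production standard
# rather than against other employees.
standard_units_per_day = 40   # e.g., set by engineering study or group average
actual_units_per_day = 46

performance_ratio = actual_units_per_day / standard_units_per_day
print(f"{performance_ratio:.0%} of standard")  # -> 115% of standard
```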
Other Methods
Management By Objectives (MBO). MBO is a goal-oriented management tool
which can also serve, either separately or simultaneously, as a performance
appraisal technique. When MBO is used as a performance appraisal technique, the
supervisor and subordinate usually establish performance objectives, often in quantitative
terms, for the rating period. At the end of the rating period, actual performance is
compared to the objectives and scored. In an intuitive sense MBO is very appealing as a
technique for performance appraisal as it appears straightforward, can be used to convey
broad organizational goals, and usually has a quantitative orientation. Many
organizations have adopted MBO or some form of goal setting for appraisal purposes,
possibly for these reasons (Kane & Freeman, 1986; Eichel & Bender, 1984).
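For illustration, here is a minimal sketch of MBO-style scoring, under the assumption that attainment is expressed as the ratio of actual results to agreed targets; the objective names and figures are invented:

```python
# Hypothetical MBO-style scoring sketch: compare actual results to the
# objectives agreed at the start of the rating period.
def mbo_score(objectives, actuals):
    """Return each objective's attainment as actual/target, plus a
    simple average across objectives."""
    attainment = {name: actuals[name] / target
                  for name, target in objectives.items()}
    overall = sum(attainment.values()) / len(attainment)
    return attainment, overall

objectives = {"units_shipped": 1200, "cost_reduction_pct": 5.0}
actuals = {"units_shipped": 1260, "cost_reduction_pct": 4.0}
per_goal, overall = mbo_score(objectives, actuals)
print(per_goal)  # {'units_shipped': 1.05, 'cost_reduction_pct': 0.8}
print(overall)   # 0.925
```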
MBO as a performance appraisal technique is relatively new and therefore has
not been studied extensively (for that purpose). The literature does indicate, however,
some areas where MBO can be troublesome. MBO can be difficult as an appraisal
technique if the appraisal is for promotion purposes, because MBO does not provide
relative performance indicators (French, 1984). A second possible problem is that MBO
tends to focus on goals which can be quantified: production rate, return on investment,
etc. Such quantitative goals often do not include or address causal issues such as
leadership, judgment, etc. In addition, quantitative organizational goals are rarely the
result of the performance of an individual. Thus, the appraisal may incorporate factors
beyond the control of the individual. For whatever reason, the literature indicates that
MBO and, to some extent, goal setting as a performance appraisal technique may be
decreasing in popularity (Schuster & Kindall, 1974; Kane & Freeman, 1986; Taylor &
Zawacki, 1984).
Comparison of Methods
Table III-1 compares the various performance appraisal methods by purpose or
goal of the performance appraisal and by cost in terms of development and usage.
Examination of this table shows that there is no one method which would satisfy
all three purposes: development, compensation allocation, and promotion. It also shows
that costs associated with various systems vary primarily as a function of the amount of
information which must be collected or developed. Finally, the three employee
comparison methods (ranking, paired comparison, and forced distribution) have the
particular advantage/disadvantage of being useful for employee comparison within a
group, but presenting a considerable barrier to comparing employees across groups.
In the next part we will discuss conclusions from the literature and some possible
implications for the Air Force.
IMPLICATIONS FOR THE AIR FORCE
The performance appraisal literature is frustrating in that it tends to dwell on
specific details of certain methods rather than on larger organizational issues. There
are, however, some themes which appear relevant to the current OER considerations.
The Air Force is a huge and diverse organization which must recruit, train,
develop, and retain its desired work force. In addition, through the up or out system,
the Air Force must constantly pare away at each class of officers. With these thoughts
in mind, the performance appraisal system and the information it can yield to the
individual and the organization take on extraordinary importance. It is also clear,
however, that attempts to increase accuracy in measurement, fairness in procedure, and
information for developmental purposes must be assessed against the administrative
realities and practicalities of a very large and somewhat decentralized organization.
The idea has been offered that the purpose of the performance appraisal system
should drive the type of technique chosen or at least the information collected. The Air
Force, like most organizations, uses performance appraisal now for multiple purposes but
primarily for promotion. If the OER system is to be effective for the purpose of
selection for promotion, then it should focus on that purpose and achieve its other, current
purposes through alternative means.
A variety of performance appraisal methods was described, classified according
to how performance is measured. Examination of these methods suggests that some
methods may be more realistic for the Air Force than others. For example, the
employee comparison techniques of forced distribution, ranking, and paired comparison
could not be used easily for promotion purposes, because once the rankings within a
particular group have been established, there is no information to support comparisons
across the ranked groups. The problem of equating rankings or distributions across work
groups or commands does not have a simple solution and is one of the issues which
contributed to the lack of acceptability of the 1974-1978 controlled distribution system.
Critical incident, BARS, and MBO are, or can be, extremely good techniques for
employee development purposes. Each technique, however, carries some feature(s)
which would seem to conflict with the administrative realities of such a huge
organization as the Air Force. For example, BARS involves extensive development
resources, and a single OER form could not be used across jobs. Critical incident
requires the superior to keep a log on each subordinate throughout the rating period.
MBO tends to focus on short term quantitative effects and, like ranking, does not
provide relative information across people, much less groups.
The forced choice method appears to actually distinguish performance but is also
associated with user resistance and high developmental costs.
Surprisingly, the method which may be the most feasible, given administrative
workload and organizational culture, is the traditional graphic rating scale, which, in
fact, the Air Force uses now.
Rating scales provide relative information, and can be made more or less specific
through anchors or standards (such as the Air Force has now). Also, the performance
factors can be used to transmit the emphasis which the Air Force believes its officer
corps should exhibit. The need may be not so much for a new technique to improve the
OER system but rather control of the present technique to reduce inflation and improve
the quality of performance information evaluated. Currently, the system works with
informal controls (such as the indorsement process) or with no controls (the tendency to
firewall on the front side of the OER form).
One means of controlling the technique is to influence the rater. This could be
done by including "evaluation of subordinates" as a performance factor on the OER, by
maintaining a history of the ratings given by the rater, or some combination of these.
Evaluations can also be improved through rater training. This idea is very
important if the Air Force wants to move away from the writing style and content habits
currently in use. Raters can be given instruction on the type of behaviors (depending on
technique) to be observed as well as on the organizational desire to have some accurate
means of distinguishing performance. Thus, the training would be two-pronged,
focusing on 1) what and how to rate and 2) the need to rate accurately.
The Air Force currently does not include counseling as part of its overall
performance appraisal system but has indicated a desire to do so. The literature seems to
indicate that counseling is best done separately from the formal evaluation. Also, related
to counseling, the literature points to participative goal setting as the most useful
technique in actually changing employee performance and/or attitudes.
Peer evaluation is a promising source of information concerning leadership
identification. Peer evaluation seems to be especially applicable in a military setting
where groups of people enter together and attend training schools, etc. where such
evaluations could be conducted. Peer evaluations should only be used as a
supplementary leadership indicator, however, as there is substantial opportunity for
personal change over a 12-20 year career.
The most fundamental implication appears to be the need for organizational
responsibility toward a performance appraisal system. In order to be useful, a
performance appraisal system cannot be an independent managerial tool but rather must
be a process which is an organic part of the organization in which it is operating.
Organizational responsibility toward a performance appraisal system encompasses:
o stating the specific purposes of the performance appraisal;
o defining those behaviors or performance factors which the organization
has established as being necessary to its mission and culture; and,
o supporting the performance appraisal system through education of the
workforce and consistent enforcement of performance appraisal guidelines
from the highest levels of the organization to the lowest.
PERFORMANCE APPRAISAL: FINDINGS FROM THE PRIVATE SECTOR
This section discusses the findings of a series of telephone interviews with
representatives of large, well known industrial organizations. The purpose of the
interviews, which were conducted during the months of June and July 1987, was to
obtain data about current performance appraisal practices and methodology in the
private sector.
Individuals from fourteen organizations were interviewed using a semi-structured
interview approach. The interviews were designed to acquire information about the
following:
1. The purpose(s) of the performance evaluation system;
2. Process issues (who rates, ratings review, timing, etc.);
3. Rater training;
4. Type of system;
5. Feedback; and
6. Control mechanisms.
SAMPLE
Of the fourteen corporations covered, ten belong to the Fortune 100 list and the
remaining four are in the Fortune 500 group. A special effort was made to contact
organizations which were comparable to the United States Air Force in terms of budget
and personnel dimensions, and this was successfully accomplished. The fourteen
organizations are located in the eastern (9) and midwestern (5) regions of the country.
Following is a breakdown of the organizations by industry sector:
Aerospace - 4
Electric/Electronics - 6
Chemicals - 3
Pharmaceutical - 1
The interviews were conducted with individuals who represented the human
resource management function of their organizations, and were knowledgeable of and/or
responsible for the performance appraisal system for exempt employees.
FINDINGS
All the organizations had operational performance appraisal systems in place, and
with one exception, all were quite systematic in their approach to evaluating job
performance. The findings about these performance appraisal systems will be discussed
in aggregate and by the following categories:
1. Purpose(s);
2. Type;
3. Process (who, what, when);
4. Feedback;
5. Rater training;
6. Review; and
7. Controls.
In general, all performance appraisal systems were clearly compensation focused,
i.e., the primary purpose of performance appraisals was for short-term compensation and
salary administration issues (merit increases, incentives, etc.).
The purposes of the appraisal systems in these private sector organizations were
few (the maximum number of purposes reported was three) and clearly defined.
Specific purposes were mentioned (all of which were secondary in importance compared
to the short-term compensation purpose), among them the following:
promotion/succession planning, development, monitoring of performance, and feedback.
Ten of the fourteen corporations reported the use of goal setting/MBO-type
performance appraisal systems, with varying degrees of flexibility. For example, some
organizations described their systems as "straight" MBO procedures, while others reported
that they employed a "loose" version of MBO.
This section will discuss who conducts the rating, the things being rated, and the
timing and frequency of the performance evaluations.
In nine of the fourteen organizations the immediate supervisor was responsible
for conducting the performance appraisal. In three organizations, the evaluation was
performed by the direct supervisor and the rater's supervisor. In one organization the
appraisal had two parts: one was completed by the ratee and the other by the direct
supervisor. In the remaining organization, the rating was prepared by a group of
directors.
All fourteen participants in the interview process reported that employees are
rated against performance standards, rather than on a comparison with peers. This is an
important distinction because, as shall be discussed later in the "Implications" section,
comparison against peers is used for the most part for promotion/succession planning
purposes, while ratings against performance standards are used almost exclusively for
compensation related activities.
The findings also yield a very interesting dichotomy of performance standards:
1. Results-oriented standards, which measure the results or output of the
employee being rated. Examples would be sales or profit figures for the
rating period.
2. Behavioral standards, which rate the employee's work behavior rather
than results. The rating factors on the Air Force OER are examples of
behavioral standards.
Again, there are important implications in terms of the purpose for which each
set of standards is used, since results-oriented standards tend to be used for the
immediate purpose of determining short term compensation matters, while behavioral
standards are instrumental in promotion/succession planning decisions.
Performance appraisals are conducted annually in thirteen organizations (every
six months in one organization). More than 50% of the interviewees reported that the
performance appraisal cycle is driven by the merit increase/salary administration
schedule. (This reinforces the notion that performance appraisals in the private sector
are primarily applied to compensation determinations.)
The timing of the performance appraisals is also a critical issue. Over 50% of
the interviewed organizations execute the appraisals for all their employees during the
same time period (usually at the end of the fiscal year). This is not an unexpected
finding given the prevalence of MBO-type systems. In an MBO system - at least
conceptually - individual goals are derived from the unit's yearly goals, and the unit's
goals are themselves derived from the division's yearly goals, and so forth. The goals at
all the different levels of an organization are ultimately derived from the organization's
overall goals; logic and efficiency dictate that accomplishment of goals at all levels be
assessed simultaneously.
A related process issue refers to the length of time that appraisal forms are kept
in the individual employee's record. For the present sample, the performance appraisal
forms remain in the employee's record for an average of approximately 3 years. In one
case, only the current appraisal form is part of the record, but the form includes a
section on performance history.
Feedback
With the exception of one participant, who indicated that this was a problem
area, all of the organizations encourage and emphasize feedback as an important
component of the supervisor-subordinate relationship. In most of the organizations,
rater and ratee meet at the beginning of the yearly cycle for a goal-setting exercise.
The ratee usually signs off on a list of potential goals or accomplishments.
Two organizations have an "areas for improvement" section in the appraisal form,
as well as a self assessment section. In one instance, it was reported that
feedback/coaching was one of the main performance factors on which supervisors were
rated.
Twelve of the fourteen organizations require and provide formal rater training
for their supervisors. One person interviewed indicated that rater training was a
problem area, and another reported that informal training was provided to their
supervisors. The majority of the organizations place a strong emphasis on rater training,
including the distribution of written materials on the subject. In one instance, outside
consultants were hired to provide formal training to supervisors. Several of the
organizations emphasize the goal-setting and feedback aspects of performance appraisal.
In eight of the fourteen organizations the performance appraisal is reviewed by
the rater's supervisor. In four cases, the appraisal is reviewed by a group (i.e., a group of
supervisors, a central office, or an employee relations department). One organization did not
provide information on this issue. One participant reported that there are three levels of
review for performance appraisals when it comes to making promotion decisions.
Eight of the fourteen participants are currently employing a forced distribution
scheme with varying degrees of flexibility, in order to control the rating process,
especially the problem of inflation. Two corporations are considering the
implementation of a forced distribution process, while the remaining four do not have a
control process at this time. In almost all cases, there is a very strong tendency to
carefully monitor performance ratings. (One of the four organizations without controls,
interestingly enough, has encountered a central tendency rather than an inflation
problem.)
Several of the organizations with forced distribution schemes have defined a
minimum number at which the forced distribution shall be implemented (e.g., 100
employees). In addition, the distributions conform to various shapes, although the
tendency is to have small groups at the higher and lower extremes, plus a large group in
the middle.
Whether there is a forced distribution process in operation or not, performance
ratings in general are very carefully monitored at levels several times removed from the
rater, for promotion/succession planning purposes.
IMPLICATIONS
The purpose of this section is to discuss the implications of the private sector
findings for the Air Force's OER system. The potential impact and applicability of the
key features of performance appraisal systems in the private sector will be examined.
This will be accomplished following the format of the previous section, i.e., by findings
category.
Perhaps the single most important finding in the entire interview process was the
fundamental difference between the primary purpose of performance appraisal in the
private sector and in the United States Air Force. The primary purpose of performance
appraisals in the private sector is to make short-term compensation-focused decisions.
An OER in the Air Force has far-reaching promotion and career implications for the
individual officer. This fundamental difference represents a major obstacle to the
application of private sector practices in the Air Force. However, several key features
of appraisal systems in the industrial world can be successfully incorporated into the Air
Force setting.
A second issue relates to the number of purposes for which performance ratings
are used. Air Force regulations cite no fewer than six purposes for the current OER. It
will be recalled that three was the maximum number of purposes reported by the private
sector interview participants. A useful suggestion would be to reduce the number of
purposes for which the OER is used in the Air Force, or at least to specify its primary
purpose(s).
The prevalence of goal-setting/MBO systems in the private sector was not
surprising, given the compensation focus of the systems. Several features of an MBO-
type system -- clear performance objectives, increased communications between rater
and ratee, continuity, goal orientation -- could be considered for possible
implementation by the Air Force.
However, it should be kept in mind that, without an organization-wide
commitment to MBO, isolated features of the system should be adopted only after
careful consideration.
Process
In all fourteen corporations the immediate supervisor was directly involved in the
performance ratings. Significantly, the rater was removed from the potential-for-
promotion decision. The practice of having the rater provide only performance ratings
(without getting directly involved in the promotion decision) is an issue for consideration
by the Air Force.
Regarding the criteria against which individuals are evaluated, the usual practice
in the private sector companies is to rate the employee against a series of performance
standards. Comparison with peers, on the other hand, is used for succession
planning/promotion purposes and the rater is usually not directly involved in this
process.
As already mentioned, the private sector sample tended to use two sets of
performance standards -- results-oriented and behavioral. The Air Force can consider
adopting two sets of performance standards, with the results-oriented standards applied
to duty performance ratings and the behavioral standards used for future
potential/promotion determinations at a higher level.
The timing of the appraisal is another process issue which was explored in the
interviews. Most organizations conduct all of their appraisals at the same time. This is
a good practice, but it probably cannot be easily implemented in the Air Force.
However, the Air Force could consider the option of incorporating all OER's into the
permanent record at the end of the year.
A final process issue refers to retaining the appraisal forms in the individual's
record. The Air Force should consider whether all OER's should remain in the officer's
selection record (as current practice dictates) or whether some limit should be imposed.
Feedback is an important aspect of performance appraisal systems in the private
sector. Formal feedback mechanisms could be established in the Air Force, with an
"areas for improvement" section. This feedback/coaching exercise should probably be
established as a parallel process, rather than forming part of the OER form. Informal
and interim feedback/coaching can also be actively encouraged by evaluating the raters
on this managerial aspect of their officer duties.
Rater training is a key feature of appraisal systems in the private sector. Formal
and specific courses on performance appraisal are available, and in most cases required
in private sector organizations. Training programs emphasize different things (e.g.,
providing feedback, goal-setting, use of rating scales) depending on the kind of system
being used. A stronger emphasis on training officers in performance appraisal matters -
- as an integral function of their duties and responsibilities -- is recommended.
In virtually all the corporations that were interviewed, performance ratings are
reviewed at a higher level (usually the rater's supervisor). This review is conducted with
the purpose of examining the correctness of the performance ratings per se. In some
cases, higher level reviews are conducted but with different objectives, i.e., promotion
and succession planning. A similar process, for example, could be established at the
Wing Commander level of the Air Force.
Controls
This is a particularly interesting topic given the evolution and history of the
United States Air Force officer performance evaluation process. A similar evolutionary
insight was gained from the present set of interviews, as virtually all participating
organizations had either abandoned, implemented, or considered the implementation of a
control mechanism. In addition, the controls issue in these large corporations as well as
in the Air Force goes to the heart of the most pressing and evident performance
appraisal problem of the OER system -- the inflation of ratings.
Ten of the fourteen private sector organizations either had implemented or were
considering the implementation of a control mechanism for performance ratings. Even
though the four remaining organizations were not currently using formal control
mechanisms, strong monitoring and training programs in these companies were making a
significant contribution to a healthy variance in performance ratings.
From a more technical perspective, it was interesting to note that in the
interview sample, it was common practice to configure the forced distribution with small
groups at the extremes and a large group in the middle (which in some cases consisted of
2 or 3 sub-groups). In hindsight, it seems that the "22-28-50" configuration which was
implemented in the United States Air Force in 1974 was counter to the way in which most
programs are designed.
An additional technical issue regarding forced distribution schemes refers to a
minimum number of individuals on which the distribution is imposed. In the current
interview sample, this minimum number ranged from 50-100. This issue calls to mind
the often cited example of the Thunderbird pilots. Applying a forced distribution to the
six (eight if you count the two alternates) most accomplished pilots in the Air Force is
not a reasonable proposition. Having a minimum group size of 50-100, for example,
would allow for more equitable and meaningful distinctions between higher and lower
performers.
PERFORMANCE APPRAISAL: FINDINGS FROM THE OTHER SERVICES
Early in this study, data were collected from other uniformed services to learn
how these organizations have responded to the challenges of conducting performance
appraisals of their officers.
The data were gathered in a series of interviews with representatives of the
Army, Navy, Marine Corps, and Coast Guard. In addition to these uniformed services,
an interview was held with representatives of the Department of State concerning
performance appraisal of foreign service officers. (The study team judged that the
conditions of employment for foreign service officers are sufficiently like those for Air
Force officers to warrant inclusion of this information in the analysis.)
In each service, these interviews were held with representatives of the office in
the service headquarters having proponency for policy toward, monitoring of, and
quality control of the officer evaluation process. In each case, the person interviewed
was the officer in charge, generally in the grade of colonel/GM-14, except for the
Department of State where the interviewee was the Deputy Director. (It is interesting to
note that in two services, the Army and the Navy, the individual in charge of officer
evaluation reporting is a civilian employee.)
Each service furnished copies of its basic instructions for OER preparation, the
forms used, and supporting pamphlets and materials. In the course of each interview,
questions were asked to learn the issues each service has faced in developing a
meaningful evaluation system. Each service was cooperative and without exception
provided candid responses to our questions.
In addition to United States Government entities, data were collected from the
Embassy of Canada on the evaluation of Canadian Armed Forces officers. It was not
feasible to interview the Canadian officials having responsibility for operation of the
OER system. For that reason, because there is nothing uniquely different in the
Canadian OER system, and because the Canadians use a closed system, this information
will not be included in the subsequent portions of this section of the report.
The remainder of this subsection will consist of brief discussions of the systems
for officer evaluation used in each service, followed by a summary showing the central
tendencies among these systems compared and contrasted to the Air Force OER system.
United States Army
The Army OER system uses a form and a procedure that were substantially
revised in 1979 in response to unacceptable inflation in ratings. The preceding form had
been in use for six years, and had also been introduced in response to inflation.
Research had suggested that the strongest pressures to inflate ratings were placed on the
immediate supervisor of the ratee. Therefore, the essence of the current system is to
shift the responsibility for applying meaningful discrimination from the rater to the
senior rater (the final indorser), who is typically the rater's supervisor.
Purpose
The purposes served by the Army OER system include the following:
1. Influence the selection of future leaders through maximum input from the
field.
2. Improve the linkage between individual and corporate performance
(modified Management By Objective).
3. Strengthen the chain of command by bonding the ratee to the rater and
encouraging continual, two-way communications between senior and
subordinate.
4. Enhance professionalism by displaying the standards of professional
competence and ethical behavior which Army officers are expected to
display (teach through use of the form).
Process
The ratee must have been under the supervision of the rater for not less than 90
days and the senior rater for not less than 60 days. The OER is submitted under the
following general conditions:
1. Annually, based on date of last report;
2. When there is a change in the ratee's principal duty (to include PCS);
3. When departing on extended temporary duty or long term schooling;
4. When there is a change of rater;
5. To complete the record when the ratee is scheduled to meet a promotion
board (in or above the zone) and has not had a report in the current job.
The process begins at the beginning of the rating period when ratee and rater are
required to hold a face-to-face meeting to develop a duty description and set major
performance objectives to be accomplished during the rating period. This information is
recorded on the OER support form (see Appendix F). The rater is the ratee's
supervisor.
Throughout the rating period the ratee and rater are expected to meet
periodically to assess whether the duty description and performance objectives are
adequate. The rater is expected to coach the ratee on his/her personal and professional
development.
At the end of the rating period the personnel support center initiates the OER
preparation by forwarding the OER form to the ratee, who validates the rating chain
and the administrative information thereon. The ratee then writes a description on the
support form of the significant contributions he/she has made in the job during this
period and forwards the OER form and the support form to the rater.
The rater and intermediate rater (if any) evaluate the performance and potential
of the ratee on the OER form. They also provide comments on the OER support form
and forward both to the senior rater. (An intermediate rater is used only when there is
an officer in the chain of supervision between the rater and senior rater. This occurs
most often when the rater's supervisor does not meet the grade test to qualify as senior
rater.)
The senior rater provides an independent evaluation of the ratee's potential and,
in most cases, the final chain-of-command review of the OER. When the senior rater
has completed the OER, the support form is returned to the ratee. The OER is
dispatched to the Military Personnel Center. A copy of the OER is given to the ratee at
this time.
At the Military Personnel Center, the senior rater's potential evaluation is entered
into the automated personnel record and his/her rating history for that grade is
recomputed. A profile of this rating history is pasted onto the OER next to the senior
rater's potential evaluation of the ratee. The OER is then entered into the official
military personnel file.
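The profile computation described above can be sketched as follows, assuming the profile is simply a running tally of the senior rater's past potential scores across the nine blocks (the history shown is hypothetical):

```python
# Sketch of a senior rater profile: a distribution of the potential
# scores (1-9 scale) a given senior rater has assigned to officers of
# a grade. A top-heavy tally reveals inflation at a glance.
from collections import Counter

def senior_rater_profile(history):
    """Tally how many ratees this senior rater has placed in each block
    of the nine-point potential scale, listed from block 9 down to 1."""
    counts = Counter(history)
    return [counts.get(block, 0) for block in range(9, 0, -1)]

# Example history of potential scores given by one senior rater.
scores_given = [9, 8, 8, 7, 8, 9, 7, 8, 6, 8]
print(senior_rater_profile(scores_given))
# -> [2, 5, 2, 1, 0, 0, 0, 0, 0]  (blocks 9 down to 1)
```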
Form
One form is used for all officer evaluations, warrant officer through major
general. An example of the current Army OER form is displayed at Appendix F. The
rater prepares the duty description, using the OER support form. He/she rates fourteen
performance factors on a scale of 1 to 5 and may write optional comments on
professional ethics. The rater also rates overall performance (scale of 1 to 5) and
potential for promotion (scale of 1 to 3). Finally, the rater provides separate narratives
on performance and on potential.
on performance and on potential.
The intermediate rater provides comments on performance and potential, but
does not evaluate on any numeric scale.
The senior rater evaluates the potential of the ratee for promotion, considering
all other officers of that grade in the Army, on a scale of 1 to 9. The senior rater also
completes a narrative section that focuses mainly on potential but which may refer to
performance by the ratee or to the comments or ratings of the rater or intermediate
rater.
Discriminating Factors
The results of surveys of Army selection board members show that the most
useful discriminator on the OER is the senior rater's evaluation, taken as a whole (that
is, the combination of the potential rating, the senior rater's rating profile, and the
narrative). Other factors from the OER which the selection boards find useful in
discriminating among officers are (in descending order of importance): the rater's
narrative on potential, the rater's narrative on performance, and the duty description.
Feedback
In the Army system, the sources of feedback to the ratee are the OER support
form and the face-to-face discussions which are mandated by Army regulation.
Compliance with the system was not as good as was desired, and in 1985 a provision was
added which requires ratee and rater to certify, by initialing the form, that the
discussion required at the start of the rating period had occurred. Written feedback at
the end of the rating period (using the support form) is optional. The ratee receives a
copy of the completed OER but the feedback is diluted by the fact that the senior
rater's profile is not attached and by the widespread inflation in rater evaluations. The
ratee can review the official file which includes senior rater profiles on his/her OER, by
application to the Army Military Personnel Center.
Quality Control
The essence of the Army's quality control system is an attempt to influence the
behavior of the approximately 10,000 senior raters through interventions initiated by the
Military Personnel Center. To date, these interventions appear to be successful, as the
rate of compliance by senior raters with the guidance is above 85 percent.
The most stringent control over senior rater behavior involves placing a form in
his/her official military personnel file which displays that senior rater's rating history.
This history reveals at a glance whether the senior rater is complying with the spirit of
the system -- that is, creating a distribution of scores, over time, along the scale of
potential for promotion. This information is available for promotion board review, thus
placing those senior raters who inflate ratings in jeopardy of their own future
promotions. Second, the Army Military Personnel Center has a senior rater contact
program by which they hope to provide continuing education and training in the system.
One of the themes of this education program is the concept of center of mass. Senior
raters are urged to select one or two blocks on the nine point scale (other than the top
one) where they will place typical, high-performing officers, leaving room to rate
exceptional officers on each side of this center of mass. The rationale provided to
convince senior raters to use this approach is that they should want to:
1. Leave space to identify the very best;
2. Not ruin the careers of the others; and
3. Not de-motivate the officer corps.
Even the most conscientious senior raters are prone to inflation in scores
(however, it is the Army experience that few senior raters are attempting to game the
system). A feature of the senior rater contact program is to offer a senior rater the
opportunity to restart the profile if he/she decides that it has become so inflated as to
obscure meaningful evaluations. The Army is also experimenting with an Army-wide
restart (in warrant officer grades) and will observe the effect on inflation control.
Promotion boards are given a briefing by the OER Evaluation Office. The
response of the boards to the senior rater profile technique, as measured by a
confidential survey procedure, is quite positive. In fact, the boards have asked for rater
profiles in addition; however, the evaluation staff doubt that rater compliance would be
high enough to make this step meaningful.
United States Coast Guard
The Coast Guard OER system was substantially revised in 1984, and the resulting
process and form are in many respects like those of the Army. The Coast Guard system
protects the ratee-supervisor relationship by shifting the burden of discrimination to the
next higher level (reporting officer). Also, the most useful discriminator is the overall
potential evaluation for which the reporting officer's profile is maintained and added to
the report at Coast Guard Headquarters.
A distinguishing feature of the Coast Guard OER system is the degree of
responsibility placed on the ratee. He/she is specifically tasked to clarify the duty
requirements, to obtain feedback and counseling, and to manage his/her performance to
meet or exceed the standards.
Purpose
The purposes served by the Coast Guard OER system include the following:
1. To provide information for central personnel management decisions,
especially promotions and assignments.
2. To set the standards for officer character and performance.
3. To prescribe a common set of values by which Coast Guard aspirations
for its officer corps can be described.
4. To teach each officer what is expected of him/her.
5. To provide a means by which officers can receive feedback about how
well they are measuring up to the standards.
Process
The OER is submitted under the following general conditions:
1. Annually, batched by grade, for officers in grades lieutenant commander
(0-4) through captain (0-6); semi-annually, also batched by grade, for
officers in grades lieutenant (0-3) and below.
2. Transfer of ratee;
3. Transfer of reporting officer (Note: not the supervisor, but the
supervisor's supervisor);
4. Promotion of the ratee (Note: there are different forms for each grade
with different performance standards).
The process is initiated by the ratee who is required to verify the administrative
information on the OER form and forward it to the supervisor 14 days before the end
of the rating period. The ratee may also record the duty description and a list of
accomplishments during the rating period on the optional OER support form and
forward it along with the OER. (This OER support form is mandatory in the case of
ensigns and lieutenants (junior grade). For these officers there are mandatory face-to-
face meetings with their supervisors at the beginning and end of each rating period at
which times the OER support form is used.) Copies of these forms are displayed in
Appendix F.
The supervisor evaluates the ratee's performance of duties, interpersonal
relations, leadership, and communication skills using graphic scales and narrative. He
also prepares the duty description. The supervisor completes the optional OER support
form and forwards the OER and support form to the reporting officer. The reporting
officer is normally the supervisor's supervisor. He/she may be in the same grade as the
ratee provided they are separated by two year groups. The reporting officer evaluates
the ratee on a set of personal traits and a set of factors under the title - "Representing
the Coast Guard" using graphic rating scales and narrative. The reporting officer
comments on overall leadership and potential for promotion and rates on an overall
potential scale (range of 1 to 7).
The report is reviewed by a third officer, normally the reporting officer's
supervisor. Only Coast Guard officers may act as reviewing officers. The reviewer's
responsibility is to ensure that the report is consistent and that it reflects the Coast
Guard standards for officer evaluation.
At the Coast Guard Headquarters, the OER is reviewed for administrative
accuracy and internal consistency. Unsatisfactory reports are returned for
correction/revision. The reporting officer's potential rating is entered into the
automated personnel record and his/her rating history for that grade is recomputed. A
profile of that rating history is pasted onto the record copy of the OER, just below the
reporting officer's evaluation for potential.
When accepted as correct at Headquarters, a copy of the report, without the
rating profile, is returned to the ratee.
Form
A separate OER form is used for each officer grade. (Appendix F displays the
form used for lieutenant commanders.) A distinguishing feature of the Coast Guard OER
is that the evaluation standards for each rated factor are printed on the form; thus the
need for a separate form for each grade. For each factor there is a brief description of
what is to be rated and a scale of 1 to 7. For values 2, 4, and 6, a description is
provided of the behaviors corresponding to those values on the scale. This is a variant
of the behaviorally anchored rating scale described in Appendix B. The scales are so
constructed (and the instructions emphasize) that a value of 4 describes the "typical, high
performing Coast Guard Officer" of that grade. It is expected (and, to date,
experienced) that 70 percent of officers will be found in the range 3 to 5 on the scale
for most factors. Raters are encouraged to use the "not observed" block, if appropriate.
(It should be noted that the instruction does not mandate minimum periods of
observation for either supervisors or reporting officers.)
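A minimal sketch of how such an anchored scale might be represented follows; the anchor texts are paraphrased placeholders, not the Coast Guard's actual printed standards:

```python
# Sketch of the anchored 1-7 scale described above: only values 2, 4,
# and 6 carry printed behavioral descriptions, with 4 representing the
# typical, high performing officer of the grade.
ANCHORS = {
    2: "Performance falls short of the standard for the grade.",
    4: "Typical, high performing officer of this grade.",
    6: "Performance clearly exceeds the standard for the grade.",
}

def describe_rating(value):
    """Return the printed anchor for 2/4/6; other values are judged
    relative to the nearest anchors; None means 'not observed'."""
    if value is None:
        return "not observed"
    if value in ANCHORS:
        return ANCHORS[value]
    return f"between anchors (rated {value} on the 1-7 scale)"

print(describe_rating(4))  # the mid-scale anchor
print(describe_rating(5))  # -> between anchors (rated 5 on the 1-7 scale)
```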
The supervisor is responsible for describing the duties performed. He/she also
evaluates the ratee in four sections:
1. Performance of Duties Section. Consists of a narrative and five
performance factors rated on the scale described above.
2. Interpersonal Relations Section. Consists of a narrative and two factors
measuring how an officer affects or is affected by others.
3. Leadership Skills Section. Consists of a narrative and four factors. One
of these factors is entitled Evaluating Subordinates. This factor is
described as follows:
"The extent to which an officer conducts, or requires
others to conduct, accurate, uninflated, and timely
evaluations for enlisted, civilian and officer personnel."
The behavior identified with the midpoint on this scale is described as follows:
"Prepares evaluations which are timely, fair, accurate, and
consistent with system standards. Required narratives are
concise, descriptive, and contribute to understanding
subordinates' performance and qualities. Seldom gets
reports returned for correction/adjustment. Provides
constructive counselling where needed. Does not accept
inaccurate, inflated, or poorly prepared reports from
others."
4. Communication Skills Section. Consists of a narrative and three factors
which measure the officer's ability to communicate in a positive, clear,
and convincing manner.
The reporting officer may comment on the supervisor's evaluation. He/she then
rates the officer in two sections:
1. Personal Qualities Section. Consists of a narrative and five personal traits
related to the officer's character.
2. Representing the Coast Guard Section. Consists of a narrative and four
factors which measure an officer's ability to bring credit to the Coast
Guard through appearance and actions.
The reporting officer writes a narrative section which describes the ratee's
demonstrated leadership ability and overall potential for promotion and command.
He/she then rates the overall potential on a scale of 1 to 7. There is a space on the
form for a label (added at Coast Guard Headquarters) showing the reporting officer's
rating history for officers of this grade.
Discriminating Factors
The Coast Guard Evaluation Office reports that the current system is not
experiencing substantial inflation. Therefore, the selection boards can review the reports
on their face value without the need to search for hidden discriminators. However, the
promotion board procedures are informal and are kept confidential. The Evaluation
Office does not have data showing what sections of the OER are most important to these
boards. The majority of the OER is oriented toward performance description rather
than evaluation. However, it is prudent to assume that the reporting officer's potential
rating, when reviewed in the light of his/her rating profile, is a significant factor.
Feedback
The Coast Guard places responsibility on each reported-on officer to seek
feedback and counselling. The OER support form is but one means of gaining such
feedback, and use of this form is optional for grades above lieutenant (junior grade)
(0-2). The OER form provides substantial information to the ratee; and, since inflation is
not widespread, the majority of reports provide useful information to the ratees on their
job performance. The OER copy furnished to the ratees does not contain the reporting
officer's rating profile, but the system is open, and ratees can view this profile at
Headquarters or write for a copy.
Quality Control
The central themes in the Coast Guard quality control process for the OER
system are extensive review of reports at all levels and involvement of the chain of
command in supervising the rating chain.
The review process starts at the local level where reports are reviewed first for
administrative accuracy and then for excessive inflation. (Note that periodic reports on
Coast Guard officers are batched and that all reports on officers of a certain grade are
reviewed at one time.)
At Coast Guard Headquarters, reports are routed through the assignment officers
who screen the reports for administrative accuracy and for internal consistency. In
particular, the reports are checked to ensure that the narrative comments support the
numeric ratings in each section. Reports containing administrative errors or inconsistent
ratings are referred to the Evaluation Office. Many of these reports are returned to the
rating chain for correction with an analysis of the errors or inconsistencies.
Returned reports with inconsistent ratings are usually referred to the reviewing
officer for resolution. Compliance with this quality control program has been high. In
recent months, 90 percent of rejected reports have been returned to Headquarters with
additional narrative and, surprisingly, 50 percent with changed numeric ratings.
It has not yet been necessary to adopt any special interventions focused on the
reporting officers. The strong support of the chain of command has been adequate to
control inflation. A strength of the Coast Guard OER system is that the officer corps
accepts it. This acceptance has been developed by and is maintained through a strong
education program.
United States Navy
The current fitness reporting system was instituted in 1974 and has not changed
substantially since then. The system is well accepted by Navy officers, particularly
reporting seniors who think they understand the system and believe that they are
communicating well with selection boards.
A distinguishing feature of the Navy fitness report (FITREP) is that there is only
one evaluator and only one signature appears on the form. This evaluator, the reporting
senior, is normally the officer designated in law as the commander. Thus, for most
Navy officers the FITREP is prepared not by the immediate supervisor but at a higher level.
Another distinction evolving from this procedure is that the preparation of FITREPs is
an important function of command and, at least in theory, more responsive to direction
from the Navy leadership.
Purpose
The prime use of the FITREP is to support the decisionmaking process of
promotion selection boards, and reporting seniors view it as such. A secondary purpose that
the Navy views as valuable is to support judgments about future assignments. The
instruction on preparation of the FITREP cites ten purposes, among which is counseling
of junior officers. These other purposes are not viewed as particularly useful; and
counseling, especially, is not done well in conjunction with the FITREP.
The FITREP is prepared annually for all officers except lieutenants (junior grade),
who are evaluated twice a year. FITREPs are prepared in batches by grade so that all
FITREPs for any particular grade are submitted at the same time. The FITREP is also
submitted upon the transfer of the reported-on officer or the reporting senior.
The process begins thirty days prior to the end of the reporting period when the
ratee has the opportunity to provide information to the reporting senior about the
performance of his/her duties during the reporting period. There is no specified format
for this information and the reporting senior is not required to include any of it in the
FITREP. Also during this period, the ratee's supervisory chain provides information to
the reporting senior. This also is an informal procedure, not specified in the instruction.
At the end of the rating period the reporting senior completes the FITREP.
He/she enters a duty description and a narrative describing the job performance and
potential for promotion. The reporting senior evaluates the ratee on twelve performance
factors and six personal traits using a scale of 1 to 9. He/she also indicates whether or
not the ratee would be desired as a subordinate in each of five types of possible future
duties, using the same scale. Finally, the reporting senior makes a promotion
recommendation. The reporting senior indicates the rank of the ratee (1 of 3, 3 of 3,
etc.) among those officers of any particular grade recommended for early promotion.
There is an appraisal worksheet for use by reporting seniors in preparing the
FITREP. In contrast to the procedures of the other services, the worksheet is not used
by the ratee and remains in the reporting senior's possession when the FITREP has been
completed.
The completed FITREP is forwarded to the Navy Military Personnel Command
without further review. A signed copy of the FITREP is given to the ratee. In the case
of junior officers (0-3 and below), the copy is given at the time of completion. For
other officers the copy may be given to the ratee at the time the relationship is severed.
Form
An example of the Navy FITREP form is displayed in Appendix F. The
FITREP form requires the use of an optical character reader font. All but the narrative
portions are entered into the automated personnel system. Subsequently, this system
produces numeric summaries of each officer's performance record for use by selection
boards.
Following the administrative information, there is space for a description of
duties assigned. There is then space for the reporting senior to rate on twelve
performance factors and six personal traits. The reporting senior also indicates the
desirability of having the ratee assigned under his supervision in five types of jobs
(command, operational, staff, joint/OSD, or foreign shore). Finally, there is space for
an overall performance evaluation. All of these are rated on a scale of A to I (1 to 9),
"A" being the highest. In the use of the overall performance evaluation (labeled "mission
contribution"), the reporting senior is required to show the distribution of ratings for all
officers of that grade being evaluated at that time.
Finally, the form provides space for the reporting senior to comment on the
promotion potential of the ratee. The scale is 1 to 3 (promote early, promote, do not
promote). The reporting senior is required to show the peer distribution among all
officers of the grade given a rating of "promote early" (1 of 3, 3 of 3, 3 of 6, etc.).
However, this peer distribution is used only for officers in grades lieutenant commander
through captain (0-4 through 0-6).
Discriminating Factors
Navy promotion board procedures have a bearing on the relative usefulness of
various ratings on the form and deserve a brief summary. In contrast to the Air Force
and Army, where every panel member reads every file and records a vote, in Navy and
Marine Corps boards, selection is by iterative voting by the panel based on briefings
given by one of the panel members. In each iteration, each panel member is given a
small number of files (about five) for detailed review. After this review, the panel
assembles in a briefing room where each panel member briefs his files to the other
panelists using visual aids consisting of numeric summaries of all previous FITREPs and
qualitative summaries of previous experience and qualifications. The panel members
vote on each officer simultaneously and secretly at the conclusion of that briefing.
After voting on all officers in the zone, the clear winners and losers are removed, the
files are redistributed, and another cycle occurs. This process is followed until the
number of selectees allowed is attained.
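The iterative procedure can be summarized in rough code form. The sketch below is our
reading of the outline above; the services do not publish batch sizes, vote thresholds,
or tie-breaking rules, so all numeric parameters here are assumptions (Python).

    import random

    def run_board(zone, vote, quota, win=0.8, lose=0.2):
        """zone: officer files under consideration; vote(file): share of panel
        members voting 'yes' in one simultaneous, secret ballot."""
        selectees = []
        remaining = list(zone)
        while remaining and len(selectees) < quota:
            # One cycle: small batches are reviewed and briefed, then the
            # whole panel votes on every officer still in the zone.
            scores = {f: vote(f) for f in remaining}
            winners = [f for f, s in scores.items() if s >= win]
            losers = [f for f, s in scores.items() if s <= lose]
            selectees.extend(winners[:quota - len(selectees)])
            # Clear winners and losers leave the zone; the rest are
            # redistributed for another cycle of review and briefing.
            remaining = [f for f in remaining
                         if f not in winners and f not in losers]
            if not winners and not losers:
                win -= 0.05   # assumed relaxation so the process converges
                lose += 0.05
        return selectees

    # Example: 30 files, a panel's secret ballots simulated at random.
    picks = run_board([f"officer {i}" for i in range(30)],
                      vote=lambda f: random.randint(0, 10) / 10,
                      quota=8)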
An advantage of this procedure is that the briefer can spend much more time
reviewing each file he is given than if he were required to look at the entire zone. This
suggests that a better job can be done in integrating all aspects of the FITREP to arrive
at a judgment and that any one factor has less importance in discriminating among
officers than is the case in other systems such as the Air Force and Army. This
explanation also supports the statement made to the study team by the Department of the
Navy representative that the narrative is the most important discriminator on the form.
The briefer has time to read the narratives on all the FITREPs and relate them to other
rating sections.
Other factors cited as being important discriminators are the promotion
recommendations (including the peer ranking) and the job description. Members of
promotion boards have observed that promotion recommendations are evaluated in the
perspective of the importance of the billet. For example, a promotion ranking of "3rd
of 20" in a training command billet is recognized as weaker than a "4th of 8" in a
deployed squadron for the fighter pilot community.
Feedback
Although providing performance and career counseling is an objective of the
officer evaluation system, the Navy believes that the feedback mechanism is not very
effective. The FITREP, in particular, is perceived to be an unacceptable counseling
tool. This situation derives from the fact that commanders tend to inflate the ratings of
less than excellent officers. Therefore, the FITREP does not communicate an officer's
strengths and weaknesses. Reporting seniors are encouraged to show reports to ratees
(and are required to do so for junior officers). However, for officers in grades
lieutenant commander and above, reporting seniors are not required to conduct
counseling nor to show reports. There is no alternative mechanism, such as the Army
OER support form, to foster counseling.
Quality Control
There is a substantial amount of inflation in the Navy evaluation system. For
example, reporting seniors recognize that ratings of less than "A" for performance factors
and traits are regarded as derogatory by promotion boards, so there are few ratings of
"B" or less. Similarly, narratives are puffed up, although the feedback from promotion
boards shows that most reporting seniors are communicating effectively on performance
and potential through the narrative. The ranking among peers remains an effective
discriminator for many reported-on officers although some reporting seniors are known
to game the system by artificially subdividing the population of officers rated in order
to generate more "1" and "2" promotion rankings. However, the ranking system does not
apply to officers in the grades of lieutenant (0-3) and below.
The Department of the Navy has not chosen to intervene in the fitness reporting
system. Consequently, there is no central management of a quality control system for
officer fitness reports.
United States Marine Corps
The Marine Corps has also revised its officer evaluation system recently in
response to inflation in ratings. The current Performance Evaluation System (PES)
was installed in 1985 in response to a study which indicated that the degree of inflation
posed a threat to the credibility of the promotion system.
Distinguishing features of the PES are that counseling has been removed from
the PES and that those marines rated as outstanding in "general value to the service" are
ranked against one another.
Like the Army, the Marine Corps has recognized the pressures on immediate
supervisors to inflate evaluation reports and has installed measures to counter this
tendency. Some of these measures include:
1. A policy which forbids the rating chain from showing completed reports
to the ratee;
2. Strict requirements for accelerated promotions; and
3. Requirement to rank the outstanding against one another.
Purpose
The primary purpose of the PES is to support the central selection, promotion,
and retention of the best qualified marines. A secondary purpose is to aid in the
assignment process and other personnel management actions.
The recent study of the Marine Corps evaluation system concluded that
counseling is antithetical to the purposes of an evaluation system and a major source of
inflationary pressure. Therefore, while effective counseling is encouraged, a substantial
effort has been made to separate the counseling process from the PES.
Process
A report is not submitted on a marine unless he/she has been under the
supervision of the reporting senior, who is the marine's immediate supervisor, for 90
days. The FITREP is submitted under the following general conditions:
1. Annually, batched by grade;
2. When the ratee's duty changes or he/she departs the unit;
3. When departing for extended temporary duty or long term schooling;
4. When there is a change in the reporting senior; or
5. Upon promotion.
At the end of the reporting period, the reporting senior prepares the FITREP,
assisted in administrative processing by the supporting personnel office. He/she rates
seven duty performance factors, fourteen personal quality factors, and estimates the
ratee's "general value to the service." The reporting senior also completes a narrative
describing duty requirements, performance, and general value to the service.
The reporting senior forwards the report to the reviewing officer who is
normally the reporting senior's supervisor. The reviewing officer is responsible to
ensure that the reporting senior has complied with the spirit and instructions of the
Marine Corps order governing the PES. The reviewing officer may add comments,
especially if he/she disagrees with the evaluation performed by the reporting senior.
The completed FITREP is transmitted to Headquarters U.S. Marine Corps where
it is reviewed and entered into the official personnel record of the marine reported-on.
Administratively incorrect or inconsistent reports are returned to the rating chain for
correction. Copies are not maintained in unit files nor routinely furnished to the ratee.
Ratees are annually furnished a copy of the Master Record Brief, a report containing the
numerical ratings from all FITREPs in his/her record. On entering the zone for
promotion, each marine is furnished a complete copy of the microfiche containing all
previous FITREPs. Additionally, marines can view their FITREPs at Headquarters, U.S.
Marine Corps.
Form
One form is used to evaluate all marines in grades sergeant (E-5) through colonel
(0-6). An example of this form is displayed in Appendix F. The administrative data is
entered with an optical character reader font. Note that there is no space to enter a
duty description, only a title. Additional duty requirements must be placed in the
general narrative section.
The reporting senior evaluates seven performance factors and fourteen personal
qualities on a six point scale. He/she then estimates the ratee's "general value to the
service" on a ten point scale. The reporting senior is required to show how he/she has
distributed ratings in this section ("general value to the service") for all other marines of
the same grade during this rating period. The reporting senior then completes a
narrative section.
On the reverse of the form, the reporting senior is required to show the rank of
the ratee, if he/she is rated as outstanding (10) in "general value to the service," among
other marines of that grade also rated as outstanding. Finally, the reporting senior is
required to list the names of all marines of that grade for whom he/she is the reporting senior.
The reviewing officer is provided a space to make comments. These comments
are mandatory if he/she does not agree with the evaluations or comments by the
reporting senior. Reviewing officers are encouraged to add a comment showing the
ranking of the ratee among all marines of that grade whom the reviewing officer is
responsible to review. The intended purpose is to evaluate the marine reported-on
across a wider segment of his/her peers. This technique is especially encouraged when
the reporting senior only rates one or two marines of a particular grade.
Discriminating Factors
Marine Corps promotion boards are conducted in about the same way as are the
Navy boards. Therefore, the comments on discriminating factors in the previous section
apply. Beyond this, the Marine Corps representatives informed the study team that the
most important discriminators for promotion boards are:
1. The trend in the numeric ratings;
2. The rank among peers rated as outstanding in "general value to the
service"; and
3. The narrative.
Feedback
Feedback to the ratees on performance of duties or career development is not a
part of the PES. Reporting seniors and reviewing officers are specifically forbidden
from using the FITREP as a part of counseling. Reinforcing this practice is a
prohibition against even showing the FITREP to the marine reported-on. Although the
Marine Corps encourages counseling of subordinate officers, such counseling is not
related to the evaluation process, and there are no forms or other aids in the PES to
assist marine officers in this task.
Quality Control
Improving quality control of the PES was one of the initiatives resulting from the
1985 study. The goal of the quality control program is to limit the impact of inflation
on the effectiveness of the PES. At Marine Corps Headquarters, the Promotion
Evaluation Branch is responsible for quality control. This branch screens approximately
205,000 reports a year, of which about 6,000 are returned for corrections. A review of
a list of the most common reasons for rejecting reports reveals that the Marine Corps is
not able to audit for internal consistency to the extent the Coast Guard does, and most of
the errors are failures to follow the instructions. However, these screenings, and the
knowledge that they are done at Headquarters, are reported to positively affect the
quality of the FITREPs accepted. Other elements, previously mentioned, that act to limit
the inflation of reports include:
1. Requirement to rank those rated as outstanding;
2. No show policy;
3. Strict limits on accelerated promotions; and
4. Enhancement of the reviewing officer's responsibility to include
certification of the accuracy of the report and the requirement to
comment on reports that do not accurately reflect an officer's
performance and potential.
Foreign Service
Foreign Service officers of the Department of State are evaluated annually
through a process similar to those used by the armed services. The assignment and
personnel management policies of the Foreign Service are similar to those used in the
Air Force. Specifically, Foreign Service officers are subject to:
1. Frequent reassignments to overseas locations on an involuntary basis;
2. Competitive promotions based on a grade pyramid;
3. An up or out policy. Foreign Service officers not keeping up with their
peers in promotions are selected for release by promotion boards (if they
do not self-select by resigning).
4. Central management of the personnel function to include centralized
promotions.
For these reasons, a review of performance appraisal in the Foreign Service is
appropriate in the context of lessons that could be applied to the Air Force officer
evaluation issues.
Purpose
The primary goal for personnel evaluation is to provide a just basis for career
tenure, promotions, and separations. Other goals include:
1. The allocation of within-class salary increases and performance pay;
2. Support to the assignment process;
3. Planning for training; and
4. Improvements in efficiency through feedback on performance and
collaborative goal setting.
Process
An annual report is submitted on each Foreign Service officer as of April 15th of
each year, provided the ratee has been under the supervision of the rater for 120 days.
Other reports are submitted covering any period of at least 120 days culminating in a
change of duty or a change in rating officer (including transfer).
The Foreign Affairs Manual requires that the rater and ratee agree in writing on
the duty requirements and performance standards within 45 days after the beginning of
the rating period. This understanding is recorded on the evaluation report. The rater is
required, in addition, to review performance at least twice during the year.
(Representatives of the Office of Performance Evaluation indicated that these
requirements are honored more often in the breach than in observance.)
At the end of the rating period, the rater prepares the evaluation report and rates
the employee on overall performance as well as potential. The rater is expected to show
the evaluation to the ratee and discuss it. The rater is the ratee's supervisor.
The rating officer's supervisor is designated as reviewing officer. The reviewer
checks the report and prepares a narrative assessing the ratee's performance and
potential.
The report is then forwarded to the ratee for comment. Space is provided for
the rated officer to comment on the period of performance to include specific
accomplishments, areas not otherwise addressed in the report, and aspects which may
need clarification or correction. The employee is also encouraged to remark on career
goals, including training and future assignments.
Every bureau within the Department of State and every post abroad with more
than ten Foreign Service members establishes a review panel which reviews all
evaluation reports. The functions of these review panels include:
1. Checking reports for accuracy, consistency, inadmissible comments, and
conformity with rules and policy;
2. Referring poorly prepared reports to the reporting chain for correction;
and
3. Identifying on each report the officers responsible for any late
submissions.
Reports are then forwarded to the Office of Performance Evaluation where they
are maintained in manual form only. Typical procedure for Foreign Service officers
who are dissatisfied with their evaluation reports is for the officer to file a union
grievance (most Foreign Service officers are union members). The 8,000 to 10,000
evaluation reports submitted each year typically generate about 100 grievances (a rate
of roughly one percent).
Form
One form is used in evaluating all Foreign Service officers. This form is
displayed in Appendix F. The form is almost entirely narrative (which suits the
writing culture of the Department of State). Despite the ample amount of white space
on the form, the typical report has addendum sheets attached.
Part one of the report is a narrative description of the work requirements of the
position, which is to be prepared at the beginning of the rating period. There is a
section in which the ratee may explain, at the end of the period, special circumstances
influencing his/her ability to meet the work requirements.
Part two is a narrative evaluation of the overall accomplishments in the job
during the period, prepared by the rater. Part three is a narrative evaluation of potential
together with a five point rating scale, also prepared by the rater. The Office of
Performance Evaluation has observed that both parts two and three are greatly inflated.
Most Foreign Service officers expect a top block rating for potential and a narrative that
complements this rating.
There is a subsection in part three in which the rater is to cite areas in which the
ratee should concentrate his/her efforts to improve performance. This section is widely
gamed so as to show innocuous or frivolous faults. Rarely does a rater put candid
remarks about employee weaknesses in this section.
In part four, the rater is required to indicate the dates on which counseling
sessions were held. Foreign Service officers generally do very little counseling (as
reported by the Office of Performance Evaluation representatives) and this compliance
section does not help in improving performance.
Part five is a narrative covering both performance and potential which is
completed by the reviewer. He/she is asked to certify that the report is adequately
documented. The reviewer's comments are also subject to inflation.
In part six, the rated employee provides his/her views on the period of
performance. This is completed after the rater and reviewer have completed parts one
through five. Therefore, it is an opportunity to rebut any negative comments.
Finally, there is a section in which the review panel may certify their review of
the report.
Discriminating Factors
There is little on the form to review apart from the narratives, the work
requirement statement, and the overall potential scale. Yet the inflation in ratings of
overall potential makes that factor useless as a discriminator. Nevertheless, the promotion
boards report that they are able to discriminate among officers being considered through
close reading of the evaluation report files.
Feedback
Feedback is an integral part of the Foreign Service evaluation reporting process.
The mechanisms for feedback are mandatory counseling sessions and the referral of the
reports to ratees for comment. Yet inflation in the reports renders the reports
themselves less than useful for counseling purposes. Perhaps this influences the general
reluctance to perform counseling which was reported to the study team.
Quality Control
The system design provides for quality control through a reviewing officer and a
review panel. However, the system is not now working to control inflation, nor does it
result in uniform compliance with such administrative requirements as timely submission
of reports.
The Office of Performance Evaluation does not have adequate staff to perform
substantial amounts of quality control. However, its staff of sixteen does read each
report (8,000 to 10,000 arrive each year, mostly in May). Most of the reports which are
returned for correction contain inadmissible comments in the report or administrative
errors that cannot be corrected in the Office of Performance Evaluation.
A revision of the evaluation system is in progress at the Department of State to
deal with rating inflation and the excessive amounts of narrative. The proposed
solutions being considered include a system of rating the rater (similar to U.S. Army or
U.S. Coast Guard) and computerization of the evaluation process.
IMPLICATIONS FOR THE AIR FORCE
This subsection addresses some of the central tendencies observed among the
other services discussed above. There are some features, for example, that reflect
lessons previously learned by other services that have application to the issues facing the
Air Force. Table III-2, at the end of this section, summarizes the major features of each
service's OER system.
Purpose
While each of the services has a different list of objectives for the OER system,
the central theme of each is that it provides evaluation to support a central promotion
system. Most also state that the OER supports the centralized officer assignment system,
but as a secondary objective. The further the stated objectives depart from these two,
the less effectively the systems accomplish those additional objectives.
One purpose which appears contradictory to the central purpose is that of
feedback on performance. It is generally observed that raters, recognizing the
importance of the OER to the long-range career aspirations of the ratee, will not be
truly candid about current job performance in the OER. Also, the necessity to brief the
OER to the ratee as part of the feedback process results in inflated ratings. Two of the
services have recognized that contradiction by removing feedback on performance from
their list of objectives (USA, USMC) and the others acknowledge that the feedback link
is not working.
Protect the Ratee-Rater Relationship
The uniformed services also recognize that there is a special relationship between
an officer and his supervisor that is unique to military service. A part of this
relationship is rooted in the dictates of military discipline and obedience to authority.
Second, there is a military concept of loyalty between the two that works in two
directions among officers. Finally, there is a sense of responsibility for the junior's
career development which is fostered in all the services. The requirement to evaluate
subordinates, and particularly to evaluate potential, is threatening to this relationship.
Therefore, the services have taken steps to reduce the conflict. In two (USA, USCG),
the requirement to perform meaningful discrimination has been placed on the second
writer of the OER, the supervisor's supervisor. In the Navy, the supervisor doesn't even
write on the OER (except for those officers directly supervised by commanding
officers). Finally, in the Marine Corps, this relationship is protected by a no-show
policy and the complete separation of evaluation and feedback.
Inflation
All the services have suffered from unacceptable levels of inflation and all have
developed mechanisms to influence a distribution of potential ratings among officers of
a cohort along some scale. Two services rely on a forced, auditable peer ranking (Navy
and Marine Corps), and two use persuasion and a rate-the-rater system that has an
indirect threat for those officers who don't comply (Army and Coast Guard). The
Foreign Service has also begun to consider adopting such a rate-the-rater system.
Quality Control
There is an evident movement toward managing the quality of OERs from the
service headquarters level. Three services (Army, Coast Guard, and Marine Corps) have
substantially increased their level of interventions in the system in recent years and
another has stated the intention to do so (Department of State).
[Table III-2, summarizing the major features of each service's OER system, spans
several pages in the source and is not legible in this copy.]
SECTION IV
FINDINGS: AIR FORCE OFFICER EVALUATION SYSTEM
This section discusses the current Air Force Officer Evaluation System, beginning
with a review of the major features of the OER, as determined in our information
gathering efforts. This part includes the purpose of the OER and a description of the
OER preparation process as well as the form itself. It also discusses the discriminating
factors operating in the current Air Force system, the provision of feedback to the
officer being evaluated, and the provisions for quality control of OERs.
The second part of the section discusses the issues identified by the study team
in our interviews and focus groups, including those which are cultural as well as those
dealing with the OER form and process directly. The third part briefly summarizes
these findings.
MAJOR FEATURES OF THE CURRENT OER SYSTEM
PURPOSE OF THE AIR FORCE OER
According to Air Force Regulation 36-10:
"The purpose of the officer evaluation system is to provide
the Air Force with information on the performance and
potential of officers for use in making personnel
management decisions, such as promotions, assignments,
augmentations, school selections, and separations. It is also
intended to provide individual officers information on
their performance and potential as viewed by their
evaluators."
Our guidance from Air Force leadership has reinforced this statement, but has
placed emphasis on the objectives of accurately assessing current job performance,
differentiating among officers in potential for promotion, and facilitating the provision
of feedback to officers which will help them to improve their performance and thus to
increase their value to the Air Force. We have kept these purposes in mind throughout
the study, and our assessment of the Air Force OER has been performed with these
objectives as its criteria.
THE AIR FORCE OER PROCESS
The Air Force OER process begins when the Consolidated Base Personnel Office
(CBPO) determines that an OER is required for a given officer. AFR 36-10 lists all of
the events which require completion of an OER, but the most common are a PCS move
by the rater or ratee, or a change of assignment. As a minimum, an OER must be
completed at least every six months for lieutenants with less than three years of service,
and annually for all other officers through colonel. The rating officer receives two
copies of the computer-generated notice that the OER is required. This notice includes
the Ratee Identification Data for the OER, and it is recommended that it be verified by
the ratee. The rater then is responsible for collecting all the additional information
he/she needs to complete the OER. Typically, the rater may ask the ratee to provide an
update on his/her accomplishments during the rating period, and may solicit information
on the ratee's performance from other supervisors who have observed the ratee's work.
The rater completes the rater portions of the OER and then submits it to the
additional rater for completion of the next portion. The additional rater adds comments,
signs the form, and forwards it to the indorser for final comments and signature. The
indorser retu;'ns it to the CBPO for further processing and quality control in most cases.
The above is the idealized route of an OER. Our interview and focus group
subjects indicated that the actual routing is more complex, with extensive
communications passing up and down the rating chain, and within the indorser's
organization, to determine the level of indorsement for any given officer's OER and to
provide the additional rater and the indorser with information to use in generating their
comments and recommendations. We were also informed by many officers that it is
common for the rater to ask the officer being evaluated to provide a rough draft of his
or her own OER, a questionable extension of the practice of providing the rater with an
update on activities and accomplishments during the rating period.
THE AIR FORCE OER FORM
The current Air Force Officer Effectiveness Report, AF Form 707, has been in
use since the end of the control era in 1978, although the current form is dated 1982. A
copy of the form is shown in Figure IV-1. The form consists of eight parts. Part
I contains ratee identification data, which is provided to the rater by the CBPO, and
verified by the rater and ratee. Part II is the job description, which calls for duty title,
key duties, tasks and responsibilities. Part III is the rating of specific performance
factors. As shown in Figure IV-1, the form provides for the rating of 10 specific
factors on a five-point scale, and requires narrative comments with specific examples of
each factor. The OER regulation, AFR 36-10, provides specific standards for use in
rating these factors, although our respondents report that this guidance is seldom
consulted.
Part IV is the first section of the reverse side of the OER, and provides space for
the rater to make recommendations for the ratee's next assignment. Part V is the overall
evaluation of potential, with a six point scale to be used by the rater, additional rater,
and indorser. Part VI, the rater comments section, is the last portion of the form
completed by the rater, and provides space for comments on the promotion
recommendation, as well as for any other information the rater wishes to provide. Parts
VII and VIII are for additional rater and indorser comments, respectively.
[Figure IV-1. Sample AF Form 707, Officer Effectiveness Report (AFR 36-10, Attachment 1,
effective 1 November 1982). The facsimile is not legible in this copy. The form's eight
parts are: I. Ratee Identification Data; II. Job Description; III. Performance Factors
(job knowledge; judgment and decisions; plan and organize work; management of resources;
leadership; adaptability to stress; oral communication; written communication;
professional qualities; human relations), each rated on a five-point scale with
supporting narrative; IV. Assignment Recommendation; V. Evaluation of Potential;
VI. Rater Comments; VII. Additional Rater Comments (concur/nonconcur); VIII. Indorser
Comments (concur/nonconcur).]
DISCRIMINATING FACTORS IN THE AIR FORCE OER
Our respondents indicated that the indorser comments, especially regarding
promotion, and the indorser's rating of potential, as well as the rank and position of the
indorser, have become the most important factors in differentiating between officers for
selection purposes. The explicit ratings of performance factors have become so inflated
that they differentiate only the most deficient officers, with virtually all others
"firewalled" in the highest block. Thus the words used by the indorser to communicate
his or her enthusiasm for the ratee and to justify the promotion recommendation have
taken on great importance.
The rank and position of the indorser, considered with his/her narrative
comments, are perhaps the most important differentiators for promotion. Because of
this, indorsement inflation has occurred, and it has become necessary to place
considerable pressure upon the major commands to limit the highest level indorsements
they provide. In fact, the Headquarters, US Air Force, provides guidelines to the major
commands on the upper limit of reports for each grade which should be indorsed by
senior general officers. The pressure of these guidelines and other informal
communications has led to the establishment of elaborate but largely invisible procedures
within each command to determine which officers receive which levels of indorsement.
De facto quotas of high level indorsements are thus apportioned among the officers in a
manner quite similar in effect to the apportionment of "one" and "two" ratings during the
control era, although different in application and method. Officers in the field perceive
the similarity to the controlled era. In addition, it was widely reported to the study
team that indorsements are often managed so as to "peak" when an officer is about to
meet a selection board, just as there was management of controlled ratings for this
purpose.
FEEDBACK TO THE RATEE ON PERFORMANCE
The Air Force regulation on Officer Evaluation, AFR 36-10, specifically states
that the OER is not to be used as a "counseling device", but it does instruct the
supervisor to counsel ratees "as the need arises" and suggests that periodic counseling is
advisable as well. The Air Force provides no formal counseling or feedback form,
however, to facilitate such a process. The ratee has access to his OER as soon as it has
become a part of the permanent record, although he/she is not given a copy as part of
the normal OER preparation and routing process.
Our focus group respondents were mostly in agreement that supervisors should
provide job performance feedback to their subordinates, although the term "counseling"
was not comfortable for some of them. Few officers reported receiving sufficient job
performance feedback at any time in their careers, and many admitted that as
supervisors they did not give as much feedback as they should. Some officers expressed
the feeling that, although they gave little formal counseling, their subordinates "know
where they stand", and nearly all said that they were quick to inform a subordinate
when his performance was seriously deficient. Many officers appeared uncomfortable
with the idea of compulsory periodic counseling, and they agreed that considerable
training would be required to prepare most Air Force officers to counsel effectively.
Some were familiar with the Army OER Support Form, but we found no consensus on
whether a similar counseling and feedback form would be effective in the Air Force.
Most officers who were asked felt that the Air Force was not currently in a position to
implement management by objectives (MBO) performance management techniques.
AIR FORCE OER QUALITY CONTROL AND RATINGS CONTROL
The current Air Force system relies on the CBPOs to perform quality control
checks on OER forms, with the Headquarters USAF level retaining the responsibility to
"administer rating policy and to determine qualitative adequacy, rating trends, and
adequacy of command management" (AFR 36-10). Guidelines for quality control,
including statements on what subjects are appropriate and inappropriate for discussion
on the OER, are given in AFR 36-10.
The Headquarters, USAF quality control capability is resident at the Military
Personnel Center. There are approximately three manpower spaces devoted to OER
policy development and interpretation. Quality control of Air Force OER ratings
distributions is the responsibility of the major commands and agencies.
There is currently no published system of ratings control or distribution in the
Air Force, and no control is imposed on the numerical ratings of performance factors or
of potential for promotion. However, our briefings and interviews revealed that there is
an unpublished mechanism in use to limit the number of three and four star level
indorsements given within the major commands. As discussed above, this pressure to
limit the number of high level indorsements has given rise to fairly elaborate unwritten
guidelines within the commands, which serve as an implicit control mechanism. In our
interviews and focus groups, officers indicated that they were aware that such a system
exists, though few were able to describe its operation in their own commands. Some
officers expressed dissatisfaction with the "invisibility" of this system, and clearly wished
it were more open, but many were quite accepting of the status quo.
ISSUES AFFECTING OFFICER EVALUATION
Our information gathering activities yielded much data on the Air Force OER
system, and in our analysis of this data it became clear that several major issues could be
identified. These issues chiefly are the outcomes of interactions between the people
(Air Force officers) and the OER system. These interactions produce reactions: values,
opinions and beliefs which must be taken into account if modifications are to be
successful. We have organized these issues into four categories:
1. Air Force Culture
2. OER Process
3. OER Content
4. Non-OER Promotion Issues
AIR FORCE CULTURE
Over the past few years a great deal has been written about the topic of culture
as applied to corporate environments. Through our information gathering in the Air
Force we observed a number of cultural characteristics and beliefs which have a very
important bearing on the question of how likely it would be for a new OER process to
be successful. The following is a description of these characteristics and beliefs.
All officers are above average
The focus group discussion revealed a strong belief that because of the successive
screening processes an individual must go through to become an Air Force Officer, the
resulting group is an elite corps well above an "average" population in many ways. From
a statistical standpoint it seems quite likely that the selection process would indeed
produce an above average population in terms of intelligence, education, persistence, and
energy level. The consulting team members strongly concurred that the group of Air
Force officers with whom we had come in contact were comparable or superior to most
professional and managerial groups we had worked with in other client settings.
The implication of this very strongly held Air Force belief is that for an officer
to be labelled as "below average" is a very severe blow to his/her ego and perceived
career potential. Our respondents indicated that this factor was a major cause of the
very strong negative reaction which the "controlled" system elicited. Thus, any newly
designed system should avoid the need to label as "below average" any officers who are
viable candidates for future promotion. In today's Air Force culture any rating of
"below average" is a strong signal to the individual to seek his/her future career
elsewhere.
Unwillingness to differentiate openly
Two major reasons were given for the unwillingness of most officers to
differentiate openly among the officers they must rate. The first goes back to the
previous discussion. Since there is a strong feeling that all officers are above average,
rating officers strongly resist any system whereby they must identify those officers who
are below average. In our interviews, however, there was some willingness to identify
the truly outstanding individuals, and the individuals whose performance or potential is
so poor that they should be released from the Air Force.
A second factor concerns the closeness of the superior/subordinate relationship.
Here, officers feel that to advise an individual that he/she isn't meeting performance
expectations is demotivating and may have negative effects on the individual's job
performance. In the absence of potential merit increases or bonuses for short-term
performance, rating officers feel they have to give "pats on the back" through the OER
system, even to those whose performance is acceptable but not outstanding. The
superior/subordinate relationship, along with the group cohesiveness encouraged by the
Air Force culture, also leads to officers' feeling an obligation to "promote their people".
It is a matter of pride for an officer to have his or her subordinates receive promotions,
and reflects adversely upon his/her ability to develop subordinates if they are passed
over. The importance of this value sometimes appears to override the need to select the
best possible leaders for the Air Force. However, most officers expressed the belief that
there are many more good officers than there are promotion opportunities at the higher
grades. They consequently believe that there seldom is a conflict between promoting "one's
own" and promoting the best leaders for the Air Force.
Up or out system
Because of budget requirements, legislative controls and a number of other
factors, the Air Force system requires an officer either to be promoted at each
opportunity or to leave the service at some point prior to completion of a full career. It
is this fact that places so much of a burden on the OER system. There is no parallel in
private industry whereby one performance appraisal can, in effect, dictate a decision to
lay off a person many years in the future. While we did not take a random sample, the
bulk of officers we questioned believed that the "up or out" system was good for the Air
Force insofar as it assured that officers would continue to be motivated to perform well
throughout their careers.
The controlled OER system
Our interviews and focus groups indicated that the controlled system has left
deep scars within the officer ranks. It has an almost uniformly negative image and
people are quick to relate instances of "good" officers leaving or being forced out of the
service because of a "three" rating. There is thus a negative feeling toward any type of
statistically-based controls on ratings. However, as our interview and focus group
discussions of the problems of inflation unfolded, many participants offered suggestions
which amounted to some type of control. Thus, the desire to curb rating inflation is
expressed as a willingness to see some type of "controls" implemented at an appropriate
level. Most frequently mentioned in such discussions is the Wing level. It is also clear
that if a system that limited ratings in some way were to be installed, a terminology
avoiding the word "control" might avoid the worst of the negative reactions.
Distrust of promotion board sensitivity
There appears to be a feeling, among junior officers in particular, that
individuals on promotion boards may look at surface data only, and therefore miss many
of the more important aspects of an officer's record. For instance, some officers were
concerned that if the level of indorsement declines from one OER to the next, the board
will automatically treat this as a very negative factor without looking any further, when
in fact the person had changed assignments to a position much further removed
organizationally from an indorser of the same rank. One source of this belief is the
common knowledge that boards cover so many candidates in so little time. A simple
division of time by candidates yields only a few minutes per candidate, so the general
feeling among many junior officers is that no in-depth reading or understanding can be
achieved. Promotion board members report, however, that they need spend little time
on those records that clearly go in either the "yes" or the "no" piles. They then report
spending much more time with those on whom there is more doubt (the records in the
"gray" zone). Also, as one might expect, promotion board members report that they do
look behind the surface facts when inconsistencies appear in a record.
Careerism/focus on peripherals
Because of the lack of differentiation in OER ratings a cultural phenomenon of
"focusing on peripherals" has developed. That is, many officers feel that since they
cannot stand out on the basis of their ratings they must pursue certain types of
education and assignments, which may have nothing to do with preparing them to
assume greater responsibility, in order to provide the promotion board with the proper
"image". A corollary to this phenomenon is the feeling of unfairness caused by the fact
that certain primary assignments make it much more difficult to accomplish these
peripheral activities. For instance, certain aircrew members may find it impossible to
attend evening classes regularly to improve their educational attainments if much of
the time they are away on temporary duty (TDY).
These then are some of the cultural issues we discovered which surround the
OER and promotion process. The next sections deal with some of the issues concerning
the process and form itself.
OER PROCESS ISSUES
Nomination process for determining indorsements
An extensive system currently exists for differentiating among officers on OERs
for the purpose of promotion recommendations. Because the ratings have become so
inflated, the differentiation no longer appears in the ratings themselves, but rather is
found in the level of the final indorsing official and the words which that individual
uses or does not use to recommend the officer for promotion. Clearly, higher level
indorsements indicate more favorable OERs. The choice of who will receive the highest
indorsements is made with great care. This choice is the result of considerable dialogue,
both verbally and in writing, between levels of command to determine who are the best
performers and those most worthy to "push" for promotion. Thus, the overt rating
process for which the OER form was designed has really been replaced with one which
is not visible to the ratee. While most officers we interviewed were well aware of the
fact that the level of the indorsing official was the primary differentiator, there was
little spontaneous conversation in the focus groups on how the decision of who will
indorse the OER is made. It may be that officers do not wish to offset the positive
feelings they receive from inflated OERs with a more critical examination of how they
will or will not be differentiated from others in the promotion decision.
"Creative" use of laneauze
Because officers feel they must "firewall" the ratings, and because the form
requires a description of performance to justify each rating, the result is that much
description of meritorious behavior is exaggerated. This results in an ethical and an
administrative issue.
Many officers report that they are disturbed about having to say things which
they do not truly believe, but they feel forced to do so to avoid destroying the career of
an acceptable officer. In general, the level of ethical discomfort expressed was not
severe, but in a few cases it was quite intense. In addition, there is some feeling that by
encouraging such behavior in the writing of OERs the Air Force is setting the wrong
example for what might be expected in other areas of behavior, especially for junior
officers.
The need to provide verbal descriptions for superlative ratings also creates an
administrative burden. That is, since the rating officer must back up any rating with
"facts" about the person's performance that justify the ratings, rating officers spend a
good deal of time marshalling their facts. The process becomes a maximization game.
The rating officer knows he/she must fill ten spaces for the performance ratings and a
larger space for the rater comments. The rater also knows that promotion board
members normally will not read the comments on the front of the OER. Therefore,
his/her "best" facts are saved for the rater comment section on the back. However,
given this number of spaces to fill, many separate facts must be described, and a good
deal of time is spent collecting and documenting them. In addition, some rating
categories are more easily observed in peripheral activities than in the major assignment
(such as oral communication for a fighter pilot). Such ratings are often made on the
basis of a performance as peripheral as conducting a tour of an airplane for a grammar
school class, rather than on flying performance.
Administrative burden
Some of the sources of the administrative burden of OER preparation were
discussed in the section above. In addition, the need for absolute correctness and
neatness with no erasures, and the unwritten ground rule that all spaces must be filled
with verbiage, have led to a situation where OERs often are retyped and proofread
many times at the originating unit, and read and reviewed for
correctness at higher level units as well. Although word processing equipment is used in
some cases, it is estimated from survey data that Air Force officers may spend an
aggregate 650,000 hours a year in the writing process alone. When the repeated
proofreadings, the typing time, and the successive reviews and indorsements are added,
the total time involved in the OER process is enormous. Most importantly, this time is
all spent documenting performance; it is not the far more productive time that might be
spent by rating officer and subordinate in a performance planning or review session to
actually improve performance.
Control of inflation
While reactions to the control program that was instituted in the 70's are still
very negative, many officers expressed the belief to the project team that there was a
need for some way to remedy the current inflated ratings situation. Most often the Wing
level was mentioned as a logical place for a review and differentiation process to take
place, and for controlling influences to be applied.
Frequency of OERs
The yearly time cycle of an OER is not an issue with the officer corps, but
certain aspects of timing were mentioned as problems. The six-month interval for lieutenants'
OERs is felt to be overly burdensome and not very useful, since a lieutenant typically
shows little change in his/her level of performance in six months. The other problem
mentioned was the requirement to produce a report on an individual because of a change
of assignment in either the rater or ratee, when the period of the report was only a few
months. The same problem of lack of sufficient time for observation of significant
performance changes applies in this case.
Implementation of change
The Air Force is a relatively conservative institution with a strong staff
orientation. In such organizations, except under crisis conditions, change must be
evolutionary rather than revolutionary. Thus, new systems must be tied to old and must
flow out of established values and practices. Given the strong concentration of authority
in the major commands, it is imperative that the command staffs be part of developing
and implementing any change to the OER system. Our respondents felt strongly that any
change would need reinforcement through as many channels as possible.
Need for training
The officers we spoke to all agreed upon the need for training raters, reviewers,
indorsers, personnel staff, promotion board members and anyone else involved with the
OER so that they will be prepared for their changed roles in any new system. While the
requirements to accomplish such training may be very substantial, it will be necessary if
any significant cultural change is to take place. Training and information distribution
deficiencies were seen by many officers as having contributed to the failure of the
controlled OER.
OER accessibility
There are two issues here, one concerning the availability of past OERs to the
rating and indorsing officers during the preparation of an OER and the other having to
do with the number of past OERs which are made available to the promotion board. On
the first issue there was some concern that raters and/or indorsers referred to previous
OERs in preparing the current one. Some officers interviewed believe that this is unfair
in the case of someone who may have had a bad experience (such as a personality
conflict with his rater) in the past, but who has performed differently over the period of
the current report. By referring to past reports for making current ratings, a rater
would, in effect, be usurping the function of the promotion board which is charged with
reviewing the entire record.
The second issue is the question of how long OERs should be kept in the
personnel and selection record. Presently, the record consists of all OERs from the time
the officer was commissioned, but there are reasons why this may be inappropriate. For
example, many senior officers, who had been in the Air Force during the controlled
OER period, felt that they or their peers were still feeling the ill effects of that period,
since many still had "3" ratings from that time in their selection folders. They were
certain that if a selection board had to decide between two folders which were otherwise
equivalent, the one with a "3" from 1977 would be at a disadvantage. The expression "a
one-mistake Air Force" was another phrase we heard referring to the perception that one
poor OER, even when followed by years of fine performance, could jeopardize an
officer's career. This was seen by most officers as unfortunate, if not unjust.
Feedback to officers being rated
For the most part, the officers we interviewed expressed strong interest in
obtaining feedback on their performance from their immediate superiors. They agreed,
however, that the OER was not an effective vehicle for accomplishing this. This desire
for feedback was keenest among younger officers--a phenomenon that is not unlike that
found in private industry. The current generation of professionals coming out of our
colleges is much more attuned to an "open" environment where performance feedback,
career planning, and the use of individual initiative are an expected part of the job
environment.
CONTENT OF THE OER FORM
Job description
It was unanimously stated that the job description was an important part of the
OER and definitely should be retained. There was, however, a feeling that the
description could be improved by greater concentration on what the officer actually does
and on the scope of his or her responsibility and authority (e.g., number of people,
budgets, etc.).
Greater focus on job performance
Many officers believe that the OER as it is now constituted encourages excessive
attention to peripheral activities at the expense of the primary job and performance in
that job. The performance rating factors were seen to engender this problem especially
for rated officers in flying jobs. These jobs provide little opportunity to demonstrate
performance factors such as "oral communication" or "management of resources", but
since a rating of "Not Observed" is culturally unacceptable, the rater must find
something to justify his ratings. It is in these cases that peripheral duties, such as
management of a coffee fund, or presentations to community groups, may be assigned as
opportunities for the officer to perform on these factors. Not surprisingly, many rated
officers feel that this is not a productive use of their time, nor is it seen to promote the
best long-term interests of the Air Force. The general feeling was expressed that too
many factors were being rated that were not directly related to job performance in many
jobs. There was a strong desire to rate factors that were directly pertinent to
performance in the primary position together with significant additional duties.
Performance ratings
There was general agreement that because of inflation the performance ratings no
longer perform the function for which they are designed. There were, however, few
suggestions for improvement of these ratings. In those instances where differentiated
ratings were discussed, respondents talked about identifying the extremes rather than
finding differences at all levels of performance. Also, where differentiation was
discussed, the suggestion was made that such differentiation could best be introduced at
the Wing level.
There was almost universal agreement that the required comments on the
performance ratings should be eliminated since they are not useful. Promotion board
members acknowledged that they did not read these descriptions of performance except
in very, very rare cases. While the suggestion was made that perhaps these comments
are useful for assignments, our discussions with those responsible for assignments
indicated that they were not read for that purpose either.
Format of narrative portions
Air Force Regulation 36-10 suggests that narratives be written in straight prose
style and discourages the use of headings, underlining, or capitalization to add emphasis.
Many officers felt that bulleting and similar techniques should be used to shorten the
required prose and to highlight the points that are most important. Such techniques are
used currently by some of the other services on their OERs.
Statement of promotability
Promotion boards indicated that they put considerable weight on what the
indorsing officer writes about promoting the individual. Thus, an indorsing officer can
inadvertently hold a person back from being promoted by not making an overt statement
about "promotion now" even though he/she has described the officer's performance and
potential in glowing terms. It appears that a more structured process for obtaining a
statement of promotability from indorsing officials would avoid potential
misunderstandings.
NON-OER PROMOTION ISSUES
Role of augmentation
Today, nearly all officers are augmented to the regular Air Force by their
seventh year. It is possible that some greater degree of selectivity in augmentation may
serve to eliminate people with lesser chances for a long and successful career at a time
when they are more employable on the outside and to assure an almost universal
promotion to major for all who are interested in an Air Force career and pass through
the augmentation screen. This is, however, a subject which has implications far beyond
our ability to generate the appropriate facts and we merely raise it as an issue that might
be pursued more aggressively by the Air Force staff.
Picture in the folder
A good deal of hostility is expressed over the inflated importance of details
which have become associated with the photograph of the officer in the selection folder.
Variables such as the skill of the photographer, how photogenic the officer is, or
individual likes and dislikes of those serving on promotion boards are all factors which
are seen as unnecessarily biasing in relation to the picture. Many officers would prefer
removal of the photograph from the folder.
Instruction to boards
It appeared to us that selection boards receive a good deal of instruction on
techniques for making their selections and coming to agreement but only very general
guidance on the criteria for selection. It seems that if the Chief of Staff were trying to
emphasize certain criteria, then specific instructions about such factors should go to
promotion boards. This could relate to such policy issues as the Chief's desire to view a
record of good performance in cockpit jobs as sufficient reason for promotion through
lieutenant colonel. The instruction mechanism could also be used to assure that boards
pay particular attention to the needs of the service at any particular time for particular
types of skills or backgrounds. In general, more pointed instructions about the
philosophy the Chief of Staff is trying to reinforce can be given to promotion boards as
one of the major factors in the reinforcement system.
SUMMARY
This section has identified many issues and problems relating to the Air Force
OER system. Some of these are vitally important to the functioning of the system while
others are minor or peripheral issues which will not be given high priority in the search
for ways to improve the OER.
The issues and problems which the study team considers most important are those
relating to:
1. The honesty and integrity of the OER system;
2. The adequacy of the OER's focus on job performance;
3. Means for differentiating and identifying promotion potential;
4. The provision of performance feedback to the officer being evaluated;
5. Discipline or control of OER ratings and indorsements;
6. The administrative burden associated with the OER process.
Of all the issues we identified, these are the ones which relate most directly to the
fundamental objectives of the OER system, as stated in AFR 36-10 and as expressed in
the guidance we received from Air Force leadership. Thus these are the ones which
must be addressed by any conceptual designs for an improved OER system. The next
section will discuss the process by which the study team developed its proposed
conceptual designs to deal with these issues and will present the three designs in detail.
SECTION V
CONCEPTUAL DESIGNS FOR THE AIR FORCE OER
This section describes the process by which the conceptual designs were initially
formulated and refined. The specific designs are then explained in detail.
FORMULATION OF CONCEPTUAL DESIGNS
The first step taken by the project team in developing conceptual designs for Air
Force officer evaluation was to establish the tests that would be applied to each design
to determine whether it had potential use to the Air Force. Given all of the
previous input, the project team developed the following set of design criteria as the
most pertinent tests for any recommended design:
An improved OER system should:
1) focus on job performance, not peripherals;
2) provide differentiation in potential for promotion;
3) be acceptable to the Officer Corps;
4) provide means for developing subordinate officers; and
5) minimize administrative burden.
GUIDING CONSIDERATIONS
In addition to the design criteria outlined above, the project team worked with a
number of considerations which had emerged from interviews and discussions with
members of the Air Force officer corps as well as from corporate knowledge and
experience of human resources management. These guiding considerations are discussed
below.
Alternative OER Designs Should Reflect the Larger Air Force Culture.
This consideration takes into account that the Air Force officer corps is a group
of highly trained professionals which perceives itself to be above average in ability and
performance. Along with this perception is the historical inclination of the Officer
Corps to place great emphasis on rewarding subordinates and assisting their promotion
prospects by rating them very highly on their OERs.
In conflict with these realities is the fact that the Air Force, like all other
services, must work within the constraints of the "up or out system" which mandates
selection of an ever smaller population at each officer grade. This conflict breeds an
unwillingness to differentiate openly for appraisal purposes. In consequence, the Air
Force OER process, like many other performance appraisal systems, has been
characterized by high inflation in overall ratings.
The controlled OER (1974-1978) struck directly at the inflation problem by
requiring a forced distribution of ratings. Initially, the top 2 blocks were controlled
such that no more than 50% of the officer corps could be in these two blocks. The
perception at that time was that a 3 rating or below was akin to the end of an upward
Air Force career track. Terminated in 1978, the controlled OER generated a great deal
of anxiety and loss of morale which are well remembered today.
A lesson to be learned from this era is that the requirement to rate a subordinate
in an "unpromotable" category, real or perceived, is at odds with the culture and probably
will not be accepted. A second lesson is that avoidance of design features which resemble
the controlled system should ease implementation and acceptance of a new system.
Alternative OER Designs Should Encourage Change in Cultural Attitudes and
Habits Concerning the OER.
This consideration recognizes that over time and many changes to the OER,
certain cultural habits surrounding the OER have become ingrained within the Officer
Corps. These habits include not only the inclination to give high ratings on potential
across the board, but also puffery in narrative comments. In addition, there is the
understandable tradition of seeking the highest level indorsement possible.
To encourage change in these habits the project team decided that alternative
OER forms and indorsement patterns should be sufficiently different to require raters,
indorsers, and promotion boards to adopt new modes of behavior and not merely apply
old habits to substantively different report forms.
Judgment, not Statistics, Should be the Ultimate Method of Making Career
Decisions.
While numerous interviewees mused about the possibility of being able to "score"
OERs to make a promotion decision, it is the project team's firm belief that this is the
wrong direction in which to head. The Air Force created promotion boards for good
and sufficient reason. The human brain is far more powerful than any computer even
envisioned at the present time. Also, the field of psychophysical measurement (the
physical measurement of psychological phenomena, e.g., a rating of "leadership traits") is
worlds behind computer technology. To suggest that these technologies replace the
judgment of a small group of experienced and mature officers in the interest of
"fairness" is folly. We have therefore directed our efforts not toward mathematical
exactitude, but to produce the richest collection of information practically obtainable for
promotion boards to use in their deliberations.
Alternative OER Designs Should be Practical to Implement.
Apart from the criterion of minimizing administrative burden, the project team
felt that any alternative OER design should be formulated to take advantage of available
technology to the extent possible. This would apply to storage as well as processing of
OER information for both individual rater and promotion board purposes.
Practicality as a consideration also extended to implementation of an alternative
OER system. Again, drawing from lessons of the controlled OER, the project team
believed that gradual and perhaps evolutionary implementation might be more acceptable
to the officer corps than an abrupt full scale implementation. For example, an
alternative OER design might assume voluntary conformance with rating procedures; if
sufficient conformance did not occur, then stronger review techniques could be added to
the system as needed.
RANGE OF FEASIBLE ALTERNATIVES
Given the criteria established for an improved OER system together with our
guiding considerations, a range of feasible alternatives was determined to exist.
Although the initial alternatives formulated by the project team varied according to
certain individual features of form, process and content, this range can best be expressed
in terms of degree of change -- from alternatives causing the current OER system to
change very little to alternatives causing rather radical change to the OER process.
The preliminary designs shared some common components. All of the preliminary
designs assume greater usage of computer technology than currently exists. In addition,
all of the designs retain job performance factors, although the number of factors has
been reduced. In each design, however, the requirement for supporting narrative for the
rating on each performance factor has been eliminated. In addition, each design has
incorporated a space for the rater to define job accomplishments for the rating period.
Finally, each design assumes use of an off-line OER worksheet for job counselling
purposes.
The designs varied one from another primarily in the way discipline would be
introduced. This variance ranged from no change in the current, covert indorsement
system to overt control of the top block.
Once the preliminary design ideas had been formulated, the project team entered
into a second stage interview process to test major elements of the designs by gathering
the views of selected members of the Air Force officer corps.
TESTING AND REDESIGN OF CONCEPTS
The interview guide used in the second stage interview cycle is given in
Appendix E. These interviews, held with 20 Air Force officers ranging from O-3 to
O-6, were fairly informal discussions to determine respondents' reactions to the various
design features and to obtain their opinions on issues surrounding implementation of
these features. A summary of the results from the interviews is given below while a
complete tabulation of the results of these interviews, broken out for junior and senior
officers, is shown in Appendix E.
The overall impression from these interviews is that there is a desire for a
streamlined and discriminating OER process.
Computerization of OER processing was strongly supported as was the proposal
to use pre-developed job descriptions which could be revised or amended at the time of
OER preparation. The idea of having a separate OER for company and field grade
received fairly strong support but was accompanied with concerns over increasing the
administrative burden. Retention of the twice-a-year OER for lieutenants received very
little support (only 27% of the respondents were positive overall).
A proposal to institute an off-line OER work sheet for use in setting goals and
reviewing past performance received very favorable reaction from the respondents. By
contrast, proposals to show a developmental goal for an individual officer on the formal
OER form or to show the officer's strongest performance area were not well received.
A number of officers believed such additions would simply be gamed and that raters
would have a difficult time in forming such opinions.
Officers did want to retain the graphic scale for potential but did not have strong
feelings about omitting numeric scales on performance factors.
Elements which would introduce greater discipline in ratings also received strong
support. Such elements included limiting the Wing Commander to giving top potential
scores to only 10% of ratees; providing rater histories to supervisors; and showing rater
and indorser tendencies to the selection boards.
The preliminary designs were reviewed in the light of these findings and
appropriate revisions were made. The final forms of the conceptual designs are
explained next.
CONCEPTUAL DESIGNS FOR OFFICER EVALUATION
This section presents three conceptual designs for Air Force officer evaluation.
Presentation of these conceptual designs will be in three main parts. First, a set of
features will be discussed which will be uniform across all of the designs. These are
features which the study recommends for adoption, no matter what specific design for
evaluation may be chosen. Second, the variable features of the three designs will be
presented. Finally, each of the conceptual designs will be compared to the design
criteria which were presented at the beginning of this section.
UNIFORM ELEMENTS OF THE CONCEPTUAL DESIGNS
There is a set of features which the study team believes should be adopted by
the Air Force and incorporated into any evaluation system which may be selected.
These features are:
1. Use computer technology to reduce the administrative burden and provide
reports and summaries not now available to the evaluation system;
2. Improve job descriptions incorporating computer technology wherever
feasible;
3. Provide a separate OER worksheet to assist in the evaluation process and
to enable off-line counseling and feedback;
4. Enhance the information given to the promotion boards bearing on the
discrimination among officers;
5. Provide additional training to the participants in the OER process.
Use Computer Technology
Currently, OERs are largely hand-processed, although many activities employ
word processing equipment to generate OERs. Our recommendation is that the Air
Force take greater advantage of available data processing capability, to include: using
ADP equipment to store OER data, tracking the schedule of OERs (in coordination with
other personnel actions), and providing some review and quality control functions. In
addition, statistical analysis of OERs can and should be performed by computer. A
centralized database for OERs (probably at MPC) could provide information as needed
to be distributed to (command, wing, or base level) data bases, and in turn, receive input
from them for storage, tracking, and analysis. The evolving "PC3" system would be one
potential host for such a database and its software.
The increased use of computer technology is envisioned in each of the three
conceptual designs that form the core of this section. A computer would be useful in
generating reports on rater and indorser tendencies, in tracking the distribution of top
block ratings and in analyzing the pattern of senior levels of indorsement.
Computer technology offers the promise of a major reduction in administrative
costs in the preparation of OERs. By linking the computer to an advanced printer, the
need to procure, distribute, and store forms can be eliminated. A related, indirect cost
savings that could be realized is in the elimination of the many iterations in producing
OERs to conform to the current notion that exceptionally high standards of typing, word
and line spacing are required. We also suggest that software be developed which will
provide user-friendly, menu-driven data entry screens for use by either rater/indorser
or clerks.
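
To make these recommendations concrete, the following is a minimal sketch, in Python, of how a centralized OER data base might accumulate reports and generate the rater-tendency summaries described above. The record fields, the flat in-memory store, and all names are our own illustrative assumptions, not a specification of any existing Air Force system.

from collections import Counter
from dataclasses import dataclass

@dataclass
class OERRecord:
    """One evaluation report (illustrative fields only)."""
    ratee_ssan: str
    rater_ssan: str
    indorser_ssan: str
    grade: str        # e.g., "CAPT"
    close_out: str    # e.g., "1988-06-30"
    potential: int    # rating on the potential scale

class OERDatabase:
    """Toy stand-in for a centralized OER data base (e.g., at MPC)."""
    def __init__(self):
        self.records = []

    def store(self, record):
        self.records.append(record)

    def rater_tendency(self, rater_ssan):
        """Distribution of potential ratings given by one official --
        the kind of tendency summary a promotion board could use to
        judge whether a particular rating is inflated."""
        given = [r.potential for r in self.records
                 if r.rater_ssan == rater_ssan]
        counts = Counter(given)
        total = len(given)
        return {score: count / total for score, count in counts.items()}

# After each rating cycle, tendency reports would be regenerated.
db = OERDatabase()
db.store(OERRecord("111-11-1111", "222-22-2222", "333-33-3333",
                   "CAPT", "1988-06-30", 6))
print(db.rater_tendency("222-22-2222"))   # {6: 1.0}
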
Improve Job Descriptions
Nearly all of our Air Force sources, in interviews and focus groups, expressed the
opinion that the job description is an important part of the OER and that it should be
strengthened and made more informative. The job description can provide important
information to selection boards, especially for officers whose jobs are not well-known
"standard" operational positions.
Our recommendation is that standard "shell" job descriptions be prepared for as
many officer jobs as possible and stored in a central database. The rater will update the
"shell" description as needed, add specifics where applicable, and ensure that the final
job description provides a clear, complete picture of the officer's duties and
responsibilities. (We envision participation by the ratee in this process, through the
medium of the OER worksheet, at the beginning of the rating period.) This product
should provide promotion boards and other OER users with accurate, up-to-date
information to aid their decision-making, while the process of defining the job should
facilitate job counseling and communication between the rater and his/her subordinates.
An illustration of what such a shell might look like and how the rater might modify it
is displayed at Figure V-1.
It should be clear that this recommendation is not offered as a means to inhibit
the freedom of the rater to describe/establish job requirements, but rather as a job aid
with the potential to make job descriptions more useful both for promotion boards and
for job incumbents.
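
As a further illustration of how the "shell" concept might work in software, the sketch below retrieves a generic shell and lets a rater tailor it to one job site, in the spirit of Figure V-1. The class, its field names, and the update mechanics are assumptions made for illustration only.

from dataclasses import dataclass, field

@dataclass
class JobDescriptionShell:
    """A standard 'shell' job description held in a central data base.
    All field names and the dimensions layout are illustrative."""
    title: str
    narrative: str                                   # generic duties text
    dimensions: dict = field(default_factory=dict)   # accounts, people, etc.

    def localize(self, unit, extra_duties="", **dims):
        """Return the shell as tailored by a rater to one job site."""
        text = self.title + ", " + unit + "\n\n" + self.narrative
        if extra_duties:
            text += " " + extra_duties
        merged = {**self.dimensions, **dims}
        lines = ["  %s: %s" % (k, v) for k, v in merged.items()]
        return text + "\n\nImportant dimensions:\n" + "\n".join(lines)

# The rater pulls the shell, adds local specifics, and files the result
# as the job description block of the OER (compare Figure V-1, A and B).
shell = JobDescriptionShell(
    "MATERIEL MANAGEMENT OFFICER",
    "Directs and supervises the administration, maintenance and "
    "availability of supplies and equipment.",
    {"Enlisted supervised": "____"},
)
print(shell.localize("1776TH SUPPLY SQUADRON, ANDREWS AFB",
                     "Acts as Chief of Supply in the Commander's absence.",
                     **{"Enlisted supervised": 3}))
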
Provide Separate OER Worksheet
Again, through the first round interviews, we found that many young officers
want the opportunity for job counseling from their superior officers. This need for
institutionalized counseling was also part of the overall guidance for the project
objectives.
After evaluating the findings about other organizations and some of the opinions
expressed by officers, the study team decided to recommend that a separate OER
worksheet and counseling form be used to support communications between the rater
and ratee. This worksheet would be used at the beginning of the rating period to
document the rating chain and to clarify the job requirements. At the end of the rating
period, the worksheet would be used by the ratee to cite accomplishments during the
period and by the rater to counsel the ratee on performance and career development. A
model of such a worksheet is displayed at Figure V-2.
FIGURE V-1
SAMPLE JOB DESCRIPTION
A. Computerized Shell. (This model job description would be provided to
the rater from the computerized OER data base).
MATERIEL MANAGEMENT OFFICER, ____ SUPPLY SQUADRON
The Materiel Management Officer (MMO) directs and supervises the
administration, maintenance and availability of supplies and equipment in the Materiel
Management Branch of the ____ Supply Squadron. The MMO is responsible to the
Supply Squadron Commander/Chief of Supply for the efficient management of all items
in the supply accounts. The Materiel Branch monitors stock levels, projects future
supply needs, responds to requests covering a wide variety of items, and protects against
shrinkage or theft of supplies.
Principal challenges include responding promptly and effectively to normal and
emergency supply requests, supervising subordinates, and assuring adherence to very
stringent and detailed administrative controls. Additional challenges include determining
priorities for responding to conflicting requests and using ingenuity when normal
channels do not suffice.
Important dimensions include:
Account class: ____
Number of subaccounts: ____
Value of equipment accounts: ____
Personnel supervised:    Direct    Indirect
  Officers               ____      ____
  Enlisted               ____      ____
  U.S. civilians         ____      ____
  Foreign nationals      ____      ____
FIGURE V-1 (Continued)
SAMPLE JOB DESCRIPTION
B. Modified Job Description. (This is an example of how a rater might
revise the shell job description to fit the particular circumstances at that
job site).
MATERIEL MANAGEMENT OFFICER, 1776TH SUPPLY SQUADRON, ANDREWS
AFB
The Materiel Management Officer (MMO) directs and supervises the
administration, maintenance and availability of supplies and equipment in the Materiel
Management Branch of the 1776th Supply Squadron. The MMO is responsible to the
Supply Squadron Commander/Chief of Supply for the efficient management of all items
in the supply accounts. The Materiel Branch monitors stock levels, projects future
supply needs, responds to requests covering a wide variety of items, and protects against
shrinkage or theft of supplies.
Principal challenges include responding promptly and effectively to normal and
emergency supply requests, supervising subordinates, and assuring adherence to very
stringent and detailed administrative controls. Additional challenges include determining
priorities for responding to conflicting requests and using ingenuity when normal
channels do not suffice. MMO services and balances the needs of several organizations
located at Andrews AFB, such as the Reserve and Systems Command HQ. Acts as Chief
of Supply in the absence of the Squadron Commander.
Important dimensions include:
Account class: I, II, III, IV
Number of subaccounts: Note
Value of equipment accounts: Note
Personnel supervised:    Direct    Indirect
  Officers               ____      ____
  Enlisted               3         ____
  U.S. civilians         3         ____
  Foreign nationals      ____      ____
Note: This sample job description was prepared by interviewing an incumbent materiel
management officer. The missing data was not available at the time of the interview but
should be available to the rater if sufficient advance notice were given.
FIGURE V-2
OER WORKSHEET AND COUNSELLING FORM
PART I RATEE IDENTIFICATION DATA
1. NAME 2. SSAN 3. Grade 4. DAFSC
5. ORGANIZATION, COMMAND, LOCATION 6. PAS CODE
PART II RATEE - YOUR RATING CHAIN FOR THE EVALUATION PERIOD IS:
NAME GRADE POSITION TITLE
RATER
ADDITIONAL RATER NAME GRADE POSITION TITLE
(if any)
"INDORSER NAME GRADE POSITION TITLE
PART III RATEE - YOUR UNDERSTANDING OF THE JOB REQUIREMENTS IS:
JOB TITLE:
Significant duties and responsibilities:
PART IV RATEE - LIST YOUR SIGNIFICANT ACCOMPLISHMENTS DURING THE PERIOD
REPORT PERIOD ____ TO ____
signature                                      date
FIGURE V-2 (Continued)
PART V RATER IDENTIFICATION DATA
1. NAME 2. SSAN 3. Grade 4. DAFSC
5. ORGANIZATION, COMMAND, LOCATION 6. PAS CODE
PART VI DESCRIPTION OF RATEE'S JOB
7. PERIOD OF REPORT 8. NO. DAYS OF SUPERVISION 9. REASON FOR REPORT
From: Thru:
10. JOB TITLE:
11. JOB DESCRIPTION
PART VII COMMENTS ON JOB PERFORMANCE
PART VIII AREAS OF CONCENTRATION FOR IMPROVEMENT OF PERFORMANCE
PART IX AREAS OF CONCENTRATION FOR CAREER DEVELOPMENT
signature date
The OER worksheet provides a means for a ratee to influence his/her report by
providing specific information on the manner of performance of duties to the rater.
This merely provides structure and a specific form to what has been an informal
procedure. However, adding the requirement for the ratee and rater to agree on the job
description and job requirements at the beginning of the rating period provides a means
to positively influence job performance.
The other feature of the worksheet which is proposed as a means of improving
job performance is the comment of the rater on job performance at the end of the
rating period. The subsection labeled "areas for . . . improvement" was included
specifically to encourage the rater to identify negatives if they exist and to influence
changes in the direction of desired performance. The Air Force culture is such that it is
not likely that rating officers would be led to include such comments in the OER itself.
This concept proposes that the worksheet, not the OER, will be the principal
mechanism providing feedback to the officer corps on performance. The decision not to
rely on the OER for feedback on performance recognizes that the primary purpose for
the OER is to discriminate among officers for the purpose of making selections
(primarily for promotion). The use of one form for both counseling and discrimination
would create conflicting demands on the author (the rater is asked, on the one hand, to
provide documentary evidence which will help get a good officer promoted and, on the
other hand, to list that officer's weaknesses needing improvement). Resolving this
conflict has been the most difficult challenge to revisers of the OER for decades. The
solution proposed here is to divorce the OER from the counseling process.
"ReviseInformation Provided to Promotion Boards
This element addresses the file information provided to the selection boards on
each officer under consideration for promotion. First, it is recommended that the
number of OERs in the promotion folder be limited. Current practice dictates that all
the evaluation reports generated during an individual's career be included in the
promotion folder. We are proposing to limit the number of evaluation reports to all
reports generated in the present grade, or five evaluation reports (whichever number is
higher). For example, if an individual has received four evaluation reports as a captain,
then these four reports, and the last OER as a first lieutenant, would be included in the
promotion folder. Similarly, if a lieutenant colonel has received six evaluation reports,
all six would be part of the promotion folder.
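
Stated as a procedure, the proposed folder rule is simple. The following sketch (with an assumed list-of-pairs representation for an officer's reports, not an actual personnel-file layout) selects the reports a board would see under the rule just described.

def promotion_folder(reports, present_grade, minimum=5):
    """Return the OERs forwarded to a promotion board: all reports
    rendered in the officer's present grade, or the `minimum` most
    recent reports, whichever set is larger. `reports` is a
    chronological list of (grade, report_id) pairs -- an assumed
    representation."""
    in_grade = [r for g, r in reports if g == present_grade]
    if len(in_grade) >= minimum:
        return in_grade
    return [r for _, r in reports[-minimum:]]

# The captain of the text's example: four reports in grade, so the
# folder holds those four plus the last report as a first lieutenant.
reports = [("2LT", "R1"), ("1LT", "R2"), ("1LT", "R3"),
           ("CAPT", "R4"), ("CAPT", "R5"), ("CAPT", "R6"), ("CAPT", "R7")]
print(promotion_folder(reports, "CAPT"))  # ['R3', 'R4', 'R5', 'R6', 'R7']

A lieutenant colonel with six reports in grade would, under the same function, receive all six, matching the second example above.
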
This measure would have considerable impact upon the Air Force officer corps.
First, it would reinforce the message that the performance evaluation system has been
re-focused to accentuate current or recent performance. In addition, it would take some
pressure off both the rater and ratee, since the OER would not have the long-term
impact that it has today. This should result in more candid and accurate evaluations.
Finally, it would focus promotion board members' limited time on those reports which
should have the greatest impact on the promotion decision.
Second, there is a group of special category organizations (SPECAT) which,
according to Air Force regulations, receive preferential manning considerations as a
matter of policy. In a study of major, lieutenant colonel, and colonel temporary
promotion boards for fiscal years 1972-1974, 25 agencies identified as SPECAT were
recognized as having "higher quality" officers than did the highest MAJCOM. It is
recommended that such a study be updated and those units identified which, by
regulation, receive special consideration in terms of the quality of officers assigned and
are shown to have significantly higher promotion board scores than the MAJCOMs. It is
further recommended that the list of such organizations and a summary of recent
promotion selection rates be provided to each promotion board with instructions that the
board is to recognize that the proportion of outstanding officers who are assigned to
such organizations is probably significantly higher than most other units.
Finally, it is proposed as a part of each of the conceptual designs that pertinent
rating tendencies be furnished to selection boards. Through the use of the computer
technologies recommended earlier in this section, the rating/indorsing history of the
persons or commands (depending on what level is chosen to provide the discrimination
on individual OERs) can be displayed to the promotion boards. Through such reports,
individual OERs can be interpreted accurately to differentiate those reports which are
inflated from those which represent the candid judgment of the writer about the rated
officer's potential.
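
As a simple illustration of how such a display would let a board deflate an individual report, the sketch below compares one rating with the rating official's history. The history format and the 95-percent threshold are invented for illustration; the reports themselves would of course be weighed by board members, not scored automatically.

def interpret_rating(rating, rater_history, scale_max=7):
    """Weigh one potential rating against the rating official's
    history. `rater_history` is the list of ratings this official
    has given (an assumed input); the 0.95 cutoff is illustrative."""
    mean = sum(rater_history) / len(rater_history)
    if mean >= 0.95 * scale_max:
        # A top score from an official who "firewalls" nearly every
        # report carries little information for the board.
        return "rating %d: uninformative (rater mean %.1f)" % (rating, mean)
    return "rating %d: candid (rater mean %.1f)" % (rating, mean)

print(interpret_rating(7, [7, 7, 7, 7, 7]))  # uninformative
print(interpret_rating(7, [5, 6, 4, 7, 5]))  # candid -- a real distinction
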
Train All Participants
Any change in administrative procedures would require additional training for
those responsible for executing it. However, any substantial change in the
officer evaluation system will require training and educating the entire officer corps.
This is true because the OER process affects every Air Force officer as a participant. It
is even more significant in light of the study finding that successful implementation of
any major changes in the system will require changes in Air Force culture that go far
beyond procedure. Thus, training is a major activity addressed in the implementation
plan presented in Section VI. To ensure continued success in any officer evaluation
process, training must be on-going and continuous.
CONCEPTUAL DESIGN 1: DIFFERENTIATION THROUGH COMMAND
PERSUASION
This alternative OER design, recognizing the strong culture surrounding the
current OER process and the potential stress that will be associated with any change,
seeks to improve the process while retaining the method of providing discrimination
among officers that, to date, has widespread acceptance, i.e., level of final indorsement.
Distinguishing features of this design are:
1. The list of performance factors has been reduced in number and the
requirement to comment on each has been eliminated.
2. The rater is no longer required to evaluate potential.
3. The discriminating factor will continue to be level of indorsement.
Process
The OER will be prepared annually and batched so that all reports for officers of
the same grade are closed out on the same date. Since the discrimination for potential is
to be the level of indorsement, and since there is a closed process following command
lines to determine which officers receive the higher level indorsements, it appears
prudent to rate all officers in a peer group together to provide a fair assessment of each
officer in the command. The argument supporting this statement is that if the major
commands are going to discipline the system, then competition among officers must be
within the command. Otherwise, the commands will be competing with each other for
promotion opportunity, an anarchical situation that would work to defeat the system of
discrimination proposed.
The identity of the rater, additional rater, and indorser would remain the same as
under the current system. The allocation of indorsements at each level of command
would be determined in accordance with major command policy.
At the completion of each rating cycle, the military personnel center would
produce a report which displays the indorsement tendencies of each major command and
separate activity. This report, together with the analysis of the distribution of quality
officers to SPECAT units, would give promotion boards the tools needed to interpret
OERs and to select the best Air Force officers for promotion.
OER Form
A model form that could be used in this design is displayed at Figure V-3. In
this scenario, the rater will provide numerical ratings for each of a list of six job
performance factors on a five point scale. The performance standards will be displayed
in the OER regulation. The rater will also provide comments on duty performance. The
regulation will emphasize that the narrative should focus on the performance factors and
that it should emphasize accomplishments, not adjectives.
There is space for a career development recommendation. This is a narrative in
which the rater may make any comments about the future development of the ratee as a
career Air Force officer. Appropriate comments would include future assignment
patterns, training and education, and self-improvement. In this section, the rater will
make a recommendation on whether or not to augment a reserve officer.
On the reverse side of the form the additional rater and indorser will add
narrative comments on performance of duties and potential and evaluate potential on a
six point scale. The rater will not evaluate potential.
FIGURE V-3
CONCEPTUAL DESIGN 1

OFFICER IDENTIFICATION DATA
1. NAME   2. SSAN   3. GRADE   4. DAFSC
5. DUTY TITLE   6. PAS CODE
7. ORGANIZATION, COMMAND, LOCATION
8. PERIOD OF REPORT (FROM/THRU)   9. DAYS OF SUPERVISION BY RATING OFFICIAL   10. REASON FOR REPORT
11. JOB DESCRIPTION
OTHER ASSIGNED DUTIES

ASSESSMENT OF PERFORMANCE BY RATING OFFICIAL
JOB PERFORMANCE FACTORS (each rated on a five-point scale):
  APPLICATION OF TECHNICAL KNOWLEDGE AND SKILLS
  PLANNING AND ORGANIZATION OF WORK
  THE EXERCISE OF LEADERSHIP
  MANAGEMENT OF RESOURCES
  IDENTIFICATION AND RESOLUTION OF PROBLEMS
  COMMUNICATIONS
COMMENTS ON PERFORMANCE (Describe accomplishments for rating period)
NAME, GRADE, BR OF SVC, COMD, LOCATION   DUTY TITLE   DATE
SSAN   SIGNATURE OF RATING OFFICIAL

(Reverse)
CAREER DEVELOPMENT RECOMMENDATION
EVALUATION OF POTENTIAL
  Compare the ratee's capacity to assume responsibility with that of other
  officers whom you know in the same grade. Indicate your rating by placing
  an X in the designated portion of the most suitable block.
  (Six-block graphic scale, lowest to highest, marked by the additional
  rater and the indorser.)
COMMENTS BY ADDITIONAL RATER
NAME, GRADE, BR OF SVC, COMD, LOCATION   DUTY TITLE   DATE
SSAN   SIGNATURE OF RATING OFFICIAL
COMMENTS BY INDORSER
NAME, GRADE, BR OF SVC, ORGN, COMD, LOCATION   DUTY TITLE   DATE
SSAN   SIGNATURE OF INDORSING OFFICIAL
CERTIFICATION OF REPORT BY COMMAND/AGENCY
NAME, GRADE, BR OF SVC, ORGN, COMD, LOCATION   DUTY TITLE   DATE
SSAN   SIGNATURE OF CERTIFYING OFFICIAL
This design enhances the evaluation of job performance by reducing the number
of performance factors to those which are demonstrably pertinent to all jobs. Then by
tying the rater's narrative to these factors it can be expected that a more meaningful
description of job performance can be attained. This expectation is heightened by the
fact that the rater is directed to focus on the performance, not the potential. There also
is an expectation that the narrative will focus more on accomplishments and less on
puffery, although this may be an unreasonable expectation. The rater-ratee relationship
is protected by retaining the discrimination at the level of the indorsement.
The results of the study team's interviews suggest that, absent meaningful
numeric ratings, promotion boards can discriminate among officers based on narratives
and level of indorsement. The thrust of this design is to enhance the discipline which
the major commands are already providing the system. The effect would be to increase
the level of discrimination specificity on each report and to give Air Force leadership
more visibility of (and influence over) the process of differentiation being performed by
the major commands. This result is achieved by generating more detailed reports on the
indorsement patterns in each command and by requiring that annual reports be batched.
Feedback from Air Force officers of all grades suggests that the enhancements to
morale offered by inflated reports are important to the culture. The effects of the
changes offered in this design are to retain a morale-enhancing report that discriminates
for promotion purposes and that substantially reduces the administrative burden now
experienced throughout the Air Force in preparing OERs. What this method does not
accomplish is to eliminate grossly inflated ratings and their concomitant dangers.
CONCEPTUAL DESIGN 2: DIFFERENTIATION THROUGH RATER
PERSUASION
This alternative OER design concept would alter the existing Air Force OER
system substantially. Therefore there is a risk that the culture would not adapt to the
change and the design would not be accepted by the officer corps. The major
features, however, are now being used in other uniformed service OER systems. As
such, they have been demonstrated to be feasible, and there is an existing set of
information concerning the effectiveness of each feature used. (This does not suggest
that, removed from the parent services' cultures and their integrated OER systems, each
feature will work in the same way in an Air Force environment and context).
The distinguishing features of this design are as follows:
1. The rater is required to focus on duty performance only.
2. The indorser provides the principal information used in discriminating
among officers.
3. Raters/indorsers would be persuaded to distribute their rating scores along
the available scales by publication of their rating tendencies for use both
in interpreting their ratings and in evaluating their own leadership
abilities. This concept is sometimes referred to as the "rate the rater"
technique.
Process
The OER will be prepared annually, and batched so that all reports for officers
of the same grade are closed out on the same date. The purpose of this procedure is
to reinforce the guidance to indorsers to consider all officers of a grade when
preparing the promotion recommendation so as to achieve a realistic distribution of
scores.
The rater should be the ratee's immediate supervisor. This is the person who
determines what the duty requirements will be and who is best situated to evaluate how
well the ratee accomplishes the duties.
Criteria will be established for the selection of indorsing officers to ensure that
responsible, mature officers perform this duty, but unnecessary inflation of level of
indorsing official will not be permitted. For example, the indorsing officer might be
designated as the rater's supervisor with the additional requirement that he/she be at
least a field grade officer and be at least one grade senior to the officer being rated.
There would be provision for an additional rater if there were a level of
supervision between the rater and the indorser. This might happen most often when the
additional rater was not at least a field grade officer or when he/she was not one grade
higher than the ratee. There would not be a space on the OER form for an additional
rater's narrative. Rather, that narrative would be attached on an additional sheet. This
is predicated on the belief that additional raters would only be needed on a small
minority of the reports.
The report will be prepared on a computer so that, when completed and reviewed
at the installation, the administrative information and quantitative ratings will be a part
of the data base at the base level. This data base can be shipped electronically to the
Air Force Military Personnel Center. At the base level the ratings would be used to re-
compute the ratings histories of both rater and indorser. These historical summaries
would then be available for review by their supervisors when subsequent evaluations are
prepared. Thus, when officer "A" is evaluating officer "B", "A" should consider "B's"
evaluation history and whether "B" complies with Air Force policy. The operative policy
here is that the ability to make candid, realistic evaluations of subordinates is a measure
of good leadership.
At the Military Personnel Center, the updated data base would be used to
electronically generate a label showing the rating history of each rater and indorser.
This label would be affixed to the record copy of each official OER. Thus the selection
boards and assignment officers would be able to evaluate ratings for performance and
potential in respect to the rater's and indorser's long term tendencies, isolating and
discounting those ratings which are inflated. The concept envisions that a three
year running average would constitute the rating history for each officer with evaluation
responsibilities.
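
A minimal sketch of that computation follows, assuming each evaluation carries a close-out date and a numeric potential score. The three-year window is from the text; the data layout and label wording are our own.

from datetime import date

def rating_history_label(evaluations, as_of, years=3):
    """Produce the rating-history 'label' for one rater or indorser:
    a summary of the potential scores given over the trailing window.
    `evaluations` is a list of (close_out_date, score) pairs -- an
    assumed representation of what the base-level data base holds."""
    cutoff = date(as_of.year - years, as_of.month, as_of.day)
    window = [score for d, score in evaluations if d >= cutoff]
    if not window:
        return "no reports rendered in window"
    mean = sum(window) / len(window)
    return "%d reports since %s; mean potential rating %.2f" % (
        len(window), cutoff.isoformat(), mean)

evals = [(date(1986, 5, 1), 7), (date(1987, 5, 1), 6), (date(1983, 5, 1), 7)]
print(rating_history_label(evals, as_of=date(1988, 6, 30)))
# -> 2 reports since 1985-06-30; mean potential rating 6.50
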
Finally, it is proposed that a report showing each officer's rating history be
prepared and placed in the selection folder when he/she is being considered for
promotion.
OER Form
A model of the form that could be used in this design is displayed at Figure
V-4. The rater will provide numerical ratings for each of a list of six job performance
factors on a seven point scale. The performance standards will be displayed in the OER
regulation. The rater will also provide comments on duty performance. The instructions
will emphasize that the rater is to structure his/her narrative around the job performance
factors as an outline and that the narrative should focus on deeds, not adjectives.
The indorser prepares the reverse of the form beginning with a career development
recommendation. This is a narrative section in which the indorser may make any
comments about the future development of the ratee as a career Air Force officer.
Appropriate comments would include future assignment patterns, training and education,
and self-improvement.
FIGURE V-4
CONCEPTUAL DESIGN 2

RATEE IDENTIFICATION DATA
1. NAME   2. SSAN   3. GRADE   4. DAFSC
5. DUTY TITLE   6. PAS CODE
7. ORGANIZATION, COMMAND, LOCATION
8. PERIOD OF REPORT (FROM/THRU)   9. DAYS OF SUPERVISION BY RATING OFFICIAL   10. REASON FOR REPORT
11. JOB DESCRIPTION
OTHER ASSIGNED DUTIES

ASSESSMENT OF PERFORMANCE BY RATING OFFICIAL
  DNM - DOES NOT consistently MEET the performance standards
  MSE - MEETS and SOMETIMES EXCEEDS the performance standards
  CE  - CONSISTENTLY EXCEEDS the performance standards
JOB PERFORMANCE FACTORS (seven-point scale, DNM to CE):
  APPLICATION OF TECHNICAL KNOWLEDGE AND SKILLS
  PLANNING AND ORGANIZATION OF WORK
  THE EXERCISE OF LEADERSHIP
  MANAGEMENT OF RESOURCES
  IDENTIFICATION AND RESOLUTION OF PROBLEMS
  COMMUNICATIONS
SPACE RESERVED FOR MPC USE
  [This rater's grading for all (grade) ____: __% __% __% __% __% __% __%
  for period (date) ____ to ____]
COMMENTS ON PERFORMANCE (Describe accomplishments for rating period)
NAME, GRADE, BR OF SVC, COMD, LOCATION   DUTY TITLE   DATE
SSAN   SIGNATURE OF RATING OFFICIAL

FIGURE V-4 (Continued)
CAREER DEVELOPMENT RECOMMENDATION
OFFICERSHIP FACTORS (seven-point scale; DNM/MSE/CE as above):
  INITIATIVE
  RESPONSIBILITY
  DECISIVENESS
  ADAPTABILITY TO STRESS
  PROFESSIONALISM
SPACE RESERVED FOR MPC USE
  [This rater's grading for all (grade) ____ for period (date) ____ to ____]
INDORSER COMMENTS
PROMOTION POTENTIAL
  Do Not Promote -- Promote With Peers -- Promote Ahead of Peers
SPACE RESERVED FOR MPC USE
  [This indorser's ratings for (number) of (grade) ____ during the period (date) ____ to ____]
NAME, GRADE, BR OF SVC, COMD, LOCATION   DUTY TITLE   DATE
SSAN   SIGNATURE OF INDORSING OFFICIAL
REVIEWING OFFICER
  CONCUR [ ]   NONCONCUR [ ]
  Comments (only if nonconcur)
REVIEWER'S NAME, GRADE, BR OF SVC, ORGN, COMD, LOCATION   DUTY TITLE   DATE
SSAN   SIGNATURE
Next the indorser would evaluate five officership factors on the same seven point
scale. Again, the standards would be displayed in the regulation. These traits are
assigned to the indorser under the philosophy that traits are more closely related to
potential than to current performance and the burden of estimating potential should be
placed on the indorser rather than the rater. Finally, the indorser would evaluate the
promotion potential of the ratee, on a scale of 1 to 7, reflecting the potential of the ratee
to perform the duties associated with the next higher grade, in comparison with all other
Air Force officers of the ratee's grade. The indorser will also provide a narrative that
justifies the officership ratings and the estimate of potential.
The report should be reviewed by the indorser's supervisor unless the indorser is
in the grade of colonel or higher. Under most circumstances, when a reviewer is used
he/she should be in the grade of colonel or higher. The purpose of the review is to
ensure that a senior Air Force officer has viewed the report. In interviews conducted by
the study team, colonel was the lowest grade at which officers
consistently expressed concern about the relationship between a credible OER system and
the future well-being of the Air Force officer corps.
The focus of quality control measures will be on the behavior of indorsing
officers. This behavior can be influenced by publishing the indorser's rating history in
two forms. First, on each OER a computer generated indorser rating history reveals to
selection boards whether the indorser is complying with the spirit of the regulation. An
indorser who inflates all reports degrades the value of those OERs which he/she
prepares. Second, a computer generated rating history will be placed in the selection
folder of each officer being considered for promotion showing how that officer has
performed the responsibilities incumbent on indorsing officers. These computer
generated reports will create stress for those indorsing officers who do not comply with
the spirit of this OER concept. In addition, inflation of scores can be influenced by a
thorough education program for indorsing officers. This program should provide
periodic updates of information about statistical trends in OER inflation, a means of
reassuring indorsers who comply and pressuring those who do not.
The OER process protects the relationship between an officer and his immediate
supervisor by not requiring the supervisor to furnish the most obvious promotion
discriminators in the OER. The indorser, who is forced to provide quantitative
discriminators, is separated from the ratee by one level of supervision; and the indorser
is thus presumed to be more impartial to the conflict between the needs of the
individual (recognition through promotion to a higher grade) and that of the organization
(select the best qualified through Air Force-wide competition).
Even with the computation of rater histories, the rater cannot be expected to
contribute much discrimination using job performance and officership factors on the
front side of the form. The culture would not permit this much of a change in behavior
from the current traditions. However, these factors should be included -- somewhat for
the discrimination (a chance to separate the sub-marginal) but more for the purpose of
educating the officer corps on the Air Force expectations about performance of duty and
the qualities of officership.
The principal discrimination on the OER will be the indorser's rating for
potential. This rating would not be specifically controlled; however, by requiring that annual reports be batched by grade, and through persuasion, it is reasonable to suppose that the majority of indorsers can be influenced to distribute their ratings along the potential scale. The value of a maximum rating will be degraded in cases where an indorser gives everyone a maximum score. This distribution of scores, observed over a period long enough to provide a number of reports on each officer, will be the basis for discrimination among levels of potential for promotion.
CONCEPTUAL DESIGN 3: DIFFERENTIATION THROUGH TOP BLOCK
CONSTRAINT
The third alternative OER design also alters the existing Air Force OER
substantially. In this third alternative, discipline is introduced overtly through a 10%
limitation on the number of top block ratings allowed. This alternative runs the risk of being negatively compared to the controlled system, although specific identification of a small percentage of high achievers is now being done through the covert indorsement allocation process.
The distinguishing features of this design are:
1. This entire system is envisioned as a computer-based process. That is, all
information on an OER is entered directly into a remote terminal/PC,
where it is stored for future access while certain decisions are made about
its viability. It is not released to the official record until it has been
validated.
2. Rating officers make differentiations between officers, but only at the extremes.
3. The indorsing officer is limited to rating only ten percent of the officers
in each grade in the top block for potential.
OER Process
This design does not incorporate a change in the current timing of OERs. That
is, they would continue to be based on anniversary dates, change of assignments, etc.
The major change in this system is that OERs would not enter into the official record
until the end of each year. Using current computer technology, OERs would be written
or entered on a personal computer or computer terminal so that the ratings are
immediately "banked." In addition, a printout of the form (which is printed entirely by
the computer) is signed and sent through the chain of command to any intermediate
commanders, who enter their indorsements on the form, and into the computer data
bank. The form is ultimately forwarded to the wing commander. The wing
commander's promotion rating is entered into the computer, but not on the physical
form which is maintained at wing headquarters until the end of the year. At that time,
the wing commander's ratings are validated against the ten percent limitation (see the
following section).
As will be explained later, the primary promotion recommendations will be made
by the wing commander or equivalent level. The wing commander will be limited to
recommending no more than 10% of each grade for below the zone promotions. The
form will allow intermediate supervisors to make a recommendation on promotion, but
these recommendations will not have to meet the 10% test. These intermediate
recommendations are vehicles for supervising officers to encourage the promotion of
their best people, those with the greatest potential for greater responsibility in the Air
Force. Clearly, it is in the interest of intermediate raters to be selective in their ratings
since if they rate all officers as "promote early," they would in effect be leaving the
decision entirely to the wing commander, with no real input from themselves.
This identification of highest potential together with some amount of variation in
performance ratings provides the promotion board with more overt and factual input
than is now available. It is anticipated that this input will be most useful initially in
making decisions on below the zone promotions. However, with the passage of time, as
the number of OERs in a file builds, individuals will:
1. Be rated as outstanding on some performance factors and not others;
2. Receive different ratings on the same factors for different time periods;
and
3. Receive different indorsements at different times.
Given this type of variation, boards will be able to reliably differentiate between officers in a much wider spectrum than just identifying the "top" ten percent.
"Wing commander" is used here as the most typical command level at which
rating distributions would be tested. For commands which are not organized into wings,
an equivalent level would have to be determined. Also, for levels above the wing level,
the indorsing officer would be at least a step removed from the individual, at a rank of
0-6 or higher. In any case, the final indorser must have at least ten officers of the rank
to be indorsed reporting through the chain of command to him/her or the OER would
be forwarded to the next level for indorsement.
This concept also envisions that an additional rater will evaluate the ratee. This
additional rater will be the rater's supervisor, unless the rater's supervisor is a wing
commander or the equivalent in which case there will be no additional rater. Space will
be provided for a narrative where the additional rater can comment on both performance
of duties and potential. There will also be a space for a promotion recommendation.
As each OER is indorsed, and the promotion recommendation entered into the
computer, the computer will "bank" these ratings against the indorser's "account". This
bank will be available for examination by the indorsing officer and/or his designated
staff members (through use of an access code) at any time during the year. Thus, the indorsing officer (and his/her staff) will be able to verify from his/her own records whether the indorsements are staying within the 10% top block limitation. At the end of the year,
the total pattern can be reviewed and changes made. This is intended to give the
indorsing officer a chance to review his/her recommendations in light of all officers
rated. This is done simply by changing the recommendation in the computer. When the
indorsing officer is satisfied with his/her final ratings, the recommendations are entered
on the hardcopy OERs, which are then signed and forwarded to the appropriate
MAJCOM and ultimately to MPC. The process is then begun again for the new year.
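A minimal sketch of how the year-end check against the indorser's "account" might work is given below. The ten percent rule and the idea of banked recommendations come from the design described above; the data layout, function name, and the use of a floor when computing the allowance are illustrative assumptions.

```python
import math

# Hypothetical bank of the wing commander's promotion recommendations,
# keyed by ratee grade. "TOP" marks a top block ("promote ahead of
# peers") recommendation entered into the computer during the year.
bank = {
    "CPT": ["TOP", "PEERS", "PEERS", "TOP", "PEERS", "PEERS",
            "PEERS", "PEERS", "PEERS", "PEERS", "PEERS", "PEERS"],
    "MAJ": ["TOP", "TOP", "PEERS", "PEERS", "PEERS", "PEERS",
            "PEERS", "PEERS", "PEERS", "PEERS"],
}

def validate_top_block(bank, limit=0.10):
    """Year-end check: no more than `limit` of each grade may carry a
    top block recommendation before the hardcopy OERs are released."""
    violations = {}
    for grade, recs in bank.items():
        allowed = math.floor(limit * len(recs))
        given = recs.count("TOP")
        if given > allowed:
            violations[grade] = (given, allowed)
    return violations  # empty dict -> certification can proceed

print(validate_top_block(bank))
# {'CPT': (2, 1), 'MAJ': (2, 1)} -- both grades exceed the 10% limit
```

Under this scheme the indorsing officer would adjust recommendations in the computer until the check passes, then sign and forward the hardcopy forms.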
As the performance ratings are entered by the original rater (or staff person),
they are also "banked" against the rater's "account." It is envisioned that this account
will contain a running three-year average of performance ratings given by each rater
for each officer grade. This account can be maintained in the exportable OER data
base. Each rating officer will be supplied with a computer report at the end of the year
on the distribution of ratings he/she has given. This distribution will go to the rating
officer and his/her immediate superior. Space has been provided in the job performance
factors section of each OER to display the rater's rating distribution history. This
distribution will be produced by the computer at the end of the year and before indorsing
officers make their final review. This information will also be on the OER when it is
considered by the selection board.
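The rater-history report described above might be produced along the following lines; the record layout is again a hypothetical assumption, and the three-year window follows the running average mentioned in the text.

```python
from collections import Counter

# Hypothetical performance-rating records: (rater, year, ratee grade,
# factor rating on the three-category scale DNM / MSE / CE).
records = [
    ("R1", 1987, "CPT", "CE"), ("R1", 1987, "CPT", "CE"),
    ("R1", 1988, "CPT", "MSE"), ("R1", 1989, "CPT", "CE"),
    ("R1", 1989, "MAJ", "DNM"),
]

def rater_distribution(records, rater, end_year, window=3):
    """Percent distribution of ratings the rater gave to each grade
    over the trailing `window` years, as displayed on the OER."""
    years = range(end_year - window + 1, end_year + 1)
    out = {}
    for grade in sorted({g for r, y, g, _ in records if r == rater}):
        counts = Counter(v for r, y, g, v in records
                         if r == rater and g == grade and y in years)
        total = sum(counts.values())
        out[grade] = {k: round(100 * counts[k] / total)
                      for k in ("DNM", "MSE", "CE")} if total else {}
    return out

print(rater_distribution(records, "R1", 1989))
# {'CPT': {'DNM': 0, 'MSE': 25, 'CE': 75}, 'MAJ': {'DNM': 100, 'MSE': 0, 'CE': 0}}
```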
It is recommended that the FY 72-74 study of Special Category Units (SPECAT)
be updated to identify those units which, by regulation, receive special consideration in
terms of the quality of officers assigned and are shown to have significantly higher
promotion board scores than the MAJCOMs. It is further recommended that the list of
such organizations be provided to each promotion board with instructions that the board
is to recognize that the proportion of outstanding officers who are assigned to such
organizations is probably significantly higher than 10%. This design does not
recommend having indorsing officers rate promotion potential within such organizations against a standard different from the 10% applied to the entire Air Force.
FIGURE V-5
CONCEPTUAL DESIGN 3

RATEE IDENTIFICATION DATA: 1. NAME; 3. GRADE; 4. DAFSC; 6. PAS CODE; 7. ORGANIZATION, COMMAND, LOCATION; 8. PERIOD OF REPORT (FROM/THRU); 9. DAYS OF SUPERVISION BY RATING OFFICIAL; 10. REASON FOR REPORT
11. JOB DESCRIPTION; OTHER ASSIGNED DUTIES
JOB PERFORMANCE FACTORS (each rated Not Observed, DNM, MSE, or CE):
  APPLICATION OF TECHNICAL KNOWLEDGE AND SKILLS
  PLANNING AND ORGANIZATION OF WORK
  THE EXERCISE OF LEADERSHIP
  MANAGEMENT OF RESOURCES
  IDENTIFICATION AND RESOLUTION OF PROBLEMS
  COMMUNICATIONS
SPACE RESERVED FOR MPC USE (%): [This rater's ratings for all (grade) in (year)]
DNM - DOES NOT Consistently MEET the performance standards; MSE - MEETS and SOMETIMES EXCEEDS the performance standards; CE - CONSISTENTLY EXCEEDS the performance standards
COMMENTS ON PERFORMANCE (Describe accomplishments for the rating period)
NAME, GRADE, BR SVC, ORGN, COMD, LOCATION; DATE; SSAN; SIGNATURE OF RATING OFFICIAL
FIGURE V-5 (continued)
CAREER DEVELOPMENT RECOMMENDATION

ADDITIONAL RATER COMMENTS
PROMOTION RECOMMENDATION: DO NOT PROMOTE / PROMOTE WITH PEERS / PROMOTE AHEAD OF PEERS
NAME, GRADE, BR SVC, ORGN, COMD, LOCATION; DUTY TITLE; DATE; SSAN; SIGNATURE OF RATING OFFICIAL
INDORSER COMMENTS
PROMOTION RECOMMENDATION: DO NOT PROMOTE / PROMOTE WITH PEERS / PROMOTE AHEAD OF PEERS
NAME, GRADE, BR SVC, ORGN, COMD, LOCATION; DUTY TITLE; DATE; SSAN; SIGNATURE OF INDORSING OFFICIAL
CERTIFICATION OF REPORT BY COMMAND/AGENCY
NAME, GRADE, BR SVC, ORGN, COMD, LOCATION; DUTY TITLE; DATE; SSAN; SIGNATURE OF CERTIFYING OFFICIAL
The proposed OER form for this design is displayed as Figure V-5. This design reduces the number of performance factors to six, on the basis that the more overall performance is fractionated, the less the rater is able to distinguish between the individual aspects, which are frequently interdependent, and the more the overall attitude toward the individual (the "halo effect") will operate. Also, this list isolates those aspects
which are separate and critical to the widest variety of jobs. Narratives for each factor
will not be required. These performance factors are:
1. Application of Technical Knowledge and Skills;
2. Planning and Organization of Work;
3. The Exercise of Leadership;
4. Management of Resources;
5. Identification and Resolution of Problems; and
6. Communication.
This design also provides for only the rating officer to fill out the performance
factor ratings. Each factor will be rated in three categories:
1. Does not consistently meet the requirements of the job.
2. Consistently meets and may sometimes exceed the requirements of the
job.
3. Consistently exceeds the requirements of the job in significant and
substantial ways.
In the Comments on Performance section, the rater makes narrative comments on
what the individual has accomplished during the rating period. Orienting the comments
in this manner clearly directs the rater toward talking about things that have to do with
the primary job. This should be as factual as possible, with the use of descriptive
adjectives kept to a minimum. Key points should be bulleted or highlighted to draw the
attention of those reading the OER.
The Career Development Recommendation is a narrative section in which the
rater may make any comments about the future development of the ratee as a career Air
Force officer. Appropriate comments would include future assignment patterns, training
and education, and self-improvement. In this section, the rater makes a recommendation
on whether or not to augment a reserve officer. This section ends the portion of the
OER prepared by the rater.
Space is provided on the form for a unit administrator to certify that the report
is correct. It is envisioned that this will be completed at the end of the reporting year
by the administrative office having visibility of the wing commander's evaluations
during the past year. This section would be completed when the administrator had
certified that the number of top block promotion recommendations during the year had
not exceeded the 10% limit.
Rationale
Given the history of "firewalled" ratings, it is the intention of this system to have
rating officers make some differentiations between officers but only at the extremes.
While this is certainly far from an ideal system, it is one which may be workable, given
the recent OER history and the Air Force culture. Furthermore, because different
people will be considered outstanding on different performance factors at different
times, it will, over time, be possible to make much broader distinctions between records
than just the extremes.
Specifically, the system was built to recognize that
1. Air Force officers are not a random selection from the general
population, but rather an elite group of highly achieving individuals.
2. In any elite group, there is still a range of talent, including those
individuals who stand noticeably above their peers, having an unusually
high level of skill and energy for recognizing problems or opportunities
and applying the leadership to deal with them. The opposite is just as
true, that no matter how select the group, there are always some
individuals who fail to live up to the standards.
3. Since most officers are well qualified to perform any assignment for
which they have the technical skills, it is not necessary to make fine
differentiations in either performance or potential for most of the officer
force. There are, however, certain highly challenging and vital positions
for which it is necessary to identify that small percentage of our officers
who perform best in particular aspects of their current positions and are the natural leaders among their peers.
EVALUATION OF CONCEPTUAL DESIGNS
Section IV presented several critical design criteria which the study team derived
from our data analysis. These criteria are not all equally well satisfied by all three of
our conceptual designs for the OER. We realized that it was probably not feasible to
satisfy all of these criteria in any one design, so each design concentrated on particular
criteria, and often failed to completely satisfy some of the others. Table V-1 presents a
summary of our evaluation of the extent to which each of the three designs is likely to
satisfy each criterion, if it is implemented as we suggest. The following paragraphs
evaluate each design, in turn, against the five criteria.
TABLE V-1
CONCEPTUAL DESIGNS COMPARED TO DESIGN CRITERIA

                                          PROBABILITY OF SATISFYING CRITERION
DESIGN CRITERION                          COMMAND       RATER         TOP BLOCK
                                          PERSUASION    PERSUASION    CONSTRAINT
FOCUS ON JOB PERFORMANCE                  HIGH          HIGH          HIGH
PROVIDE DIFFERENTIATION ON POTENTIAL      MODERATE      HIGH          MODERATE/HIGH
BE ACCEPTABLE TO OFFICER CORPS            HIGH          MODERATE      MODERATE
PROVIDE MEANS FOR DEVELOPING
  SUBORDINATE OFFICERS                    MODERATE      MODERATE      MODERATE
MINIMIZE ADMINISTRATIVE BURDEN
  SHORT-TERM                              LOW           LOW           LOW
  LONG-TERM                               MODERATE      HIGH          MODERATE/HIGH
CONCEPTUAL DESIGN 1 - COMMAND PERSUASION
Focus on Job Performance
Conceptual Design 1, the one which requires the least change from current OER
practices, does provide an improved focus on job performance, with the number of
performance factors being reduced to six and the narrative comments on each
eliminated. The regulation accompanying this form would emphasize that the rater
should focus on job accomplishments in writing his narrative.
Differentiation on Potential
Differentiation of potential would be provided much as it is on the present form,
although the additional information provided to selection boards should give more
insight into the true value of the potential rating. This design is therefore moderately
likely to improve the differentiation of potential.
Acceptability to Officer Corps
This design would probably be quite acceptable to the officer corps because of
its similarity to the current form and process: it requires few painful adjustments. This
is one of the strong points of this design, and one of the reasons for its inclusion.
Developing Subordinates
This design and the other two are virtually identical in the way in which they
provide for the development of subordinate officers; therefore they will not be
separately discussed. All would be accompanied by an off-line counseling form which is
designed to facilitate the provision of performance feedback and career counseling to the
officer being rated. The study team feels that this will constitute an improvement over
the current system, which lacks a formal feedback mechanism, but that its real success
will depend upon the effort devoted to training officers to provide effective counseling
and feedback to subordinates. The effectiveness of the off-line counseling provisions
will also depend upon the Air Force leadership's commitment to and enforcement of the
counseling and feedback requirement.
Administrative Burden
Conceptual Design 1 will have little effect on the administrative burden of the
OER system in the short term, although the removal of some narrative sections and the
use of automation in form preparation will reduce the burden somewhat. The tracking
of indorsement histories will require some administrative investment in the short term to
develop an automated system, but in the long term is likely to reduce the burden on the
commands and the selection boards.
CONCEPTUAL DESIGN 2 - RATER PERSUASION
Focus on Job Performance
Design 2 has a strong focus on job performance, separating the performance
factors, which have been chosen to be applicable to all Air Force officers' jobs, from the "officership" factors. The instruction accompanying this form would give clear examples
of exemplary behaviors for each factor, further emphasizing the focus on how well the
officer performs his primary duties.
Differentiation on Potential
Design 2 provides distinct rating factors for officership or potential, which are
rated by the indorsing officer. These would support the overall potential
recommendation by the indorser. This design, therefore, provides for clear and explicit
rating of potential, separate from job performance, and is likely to yield better
differentiation than the current system, without the current "covert" component.
Acceptability to Officer Corps
Conceptual Design 2 should be moderately acceptable to the officer corps,
although there will be some risk in this respect, since it requires some major changes in
rating behaviors. The major risk with this design is that officers will continue to perceive that any rating or indorsement other than top block will be devastating to their careers, as is now the case. Only time and experience would reduce this fear, and the risk is
that the officer corps would not give it that time. The keys to such acceptability will be
the effectiveness of the training and indoctrination which accompany the introduction of
the design, and the widespread credibility of the Air Force leadership's commitment to
the new system. The mechanisms for controlling rating inflation should be acceptable if
they are applied uniformly across all officer grades and commands.
Administrative Burden
This design, like the first, will require administrative effort to be invested in
startup procedures, such as development of software to produce statistical summaries and
rater/indorser histories. However, once the system is in place and operating it should be
simpler and less burdensome for the officers and the MPC than the current system, since
it will be highly automated and it decreases the amount of narrative material to be
written and edited.
CONCEPTUAL DESIGN 3 - TOP BLOCK CONSTRAINT
Focus on Job Performance
Conceptual Design 3 has a strong focus on job performance, with an improved
job description and simplified performance factor ratings. The performance factors
have been chosen to be applicable to the widest possible variety of Air Force officer
jobs, and to represent truly critical behaviors. Narrative comments on performance will
be required to deal with accomplishments on the job.
Differentiation on Potential
Design 3 provides for the differentiation of potential for promotion by the
indorser's explicit promotion recommendation. Indorsement level will not be used to
provide this differentiation. The limitation of 10% top block promotion
recommendations by the wing commander will force the selection of the very best
officers for this rating, although there will be no differentiation among the large number
of good but not outstanding officers on this item. However, over time and through a
series of reports, discrimination can be made through a much wider range than 10%.
Therefore, we estimate that this criterion will be quite likely to be satisfied by this
design.
Acceptability to Officer Corps
It is our opinion that this design is moderately likely to be accepted by the
officer corps, after some initial resistance to the idea of explicit constraint on ratings.
As with the other designs and other criteria, much will depend upon the credibility of
the Air Force leadership's commitment to this design, and upon how well this
commitment is communicated to the officer corps.
Administrative Burden
Design 3 will be similar to Design 2 in the requirement for a fairly heavy
administrative investment in the initial implementation phases. A mechanism will be
needed to track wing commander rating distributions and to keep statistics on
performance ratings. However, once the system is up and running, the administrative burden should be reduced from that of the current system. There will be less narrative to write and edit, and much of the work will be computer-aided.
Viewed against the criterion of acceptability to the officer corps, Design 1 is predicted to do the best, since it requires the least change in "business as usual". The other two designs are somewhat more threatening to the status quo, and are likely to meet stronger resistance. They will require carefully developed and intensive training and information programs to ensure acceptance.
All three designs use the same method, an off-line counseling and feedback
form, to provide a means for fostering the career development of subordinate officers.
As mentioned above, the success of this method will depend largely upon the
preparation, training and reinforcement provided to the officers who must work with it.
The criterion of minimizing the administrative burden of the OER system is best
accomplished in the long run by Design 2, with Design 3 nearly as efficient. Design 1,
with the least change from the current system, is not expected to reduce the burden as
much. All would require a front-end investment of resources to develop the requisite
hardware, software, documentation, etc., but Designs 2 and 3 would eventually return
this investment with automation and aiding of some of the more onerous OER functions.
SECTION VI
IMPLEMENTATION PLAN
This section presents the recommendations of the study for implementing a revised OER system in the Air Force. Obviously, in an effort as large as
implementing a new OER system there are literally thousands of details which must be
addressed before the system becomes a reality. Such an effort is clearly beyond the
scope of our contract or our capabilities. What follows are our conclusions about the
major issues and aspects of implementation.
The need for a detailed and well thought-out plan for introducing the new
system can be best appreciated through review of the lessons learned from the controlled
OER era (1974-1978). That OER system is not viewed as successful, and one of the
reasons given for its failure was the way it was introduced into the Air Force. This
recommended implementation plan takes account of the mistakes and successes of that
period, as reported in the Air University study of May 1979 (Phillips, 1979).
This plan is based on an assumption that the Air Force will select a new OER
system concept that is substantially different both in process and form from the current
OER system. Adopting a minor revision to the current system (such as conceptual
design 1) would not require as long to complete, although the case could be made that
all of the steps described below would be necessary.
A conclusion presented elsewhere in this study is that the principal flaw of the
current system lies neither in the process nor in the form but in the culture surrounding
the OER and the resulting behaviors which have inflated scores and compromised the
value of the ratings placed on the OER forms. Consequently a strong emphasis should
be placed on actions necessary to influence a change in officer attitudes about the OER
process. A substantial number of such recommended actions are grouped below under
the topic of training. However, the scope of actions needed is broader than training,
and an effort has been made to integrate this indoctrination program throughout all
phases of the implementation plan.
The plan is divided into eight phases:
1. feasibility assessment and final decision;
2. design;
3. development;
4. testing;
5. full scale training;
6. full scale operation;
7. evaluation; and
8. refinement and maintenance.
Each of these phases will be discussed below. Table VI-1 at the end of this section is an
implementation schedule. This schedule suggests that, in an orderly transition, the first
rating periods under a new OER system could begin about twenty-four months after a
decision is made to proceed.
FEASIBILITY ASSESSMENT AND FINAL DECISION
The plan assumes that the Air Force, at the staff level, will select one of the
OER concepts under consideration. The first phase of this implementation plan is to
prepare the concept for scrutiny by the top leadership and to make a decision to commit
significant Air Force resources to implementation. A second assumption is that, rather
than entering the Planning, Programming, Budgeting System to compete for resources,
the implementation will receive sufficient priority to be funded by diversion of
resources from other missions.
In this phase the Air Staff and the Military Personnel Center will test the
feasibility of adopting the changed OER system and estimate the resources in terms of
dollars, manpower and time needed to successfully adopt the new system -- in other
words, conduct a feasibility analysis. An important aspect of feasibility is the
assessment of how the proposed change in the OER system will affect other systems in
the larger human resource management function.
A part of this feasibility analysis should be to present the recommendation to the
major commands and staff agencies for comments. These comments should be
incorporated into a decision briefing for Air Force senior leadership.
The outcome of this phase will be the decision to implement the change and an
allocation of the resources necessary to execute the change.
DESIGN
So far the change to the OER system has been worked out in terms of outcomes
and process. In the design phase of implementation the specifications of the system will
be written as well as the specifications for each subsequent phase of the implementation
plan.
It would be of great future benefit to the success of the revised OER system to
integrate the major commands into the planning process so that they share ownership of
the outcome. For this reason, and to provide a staff knowledgeable of a wide spectrum
of Air Force issues, it would be beneficial to assemble a multi-command task force to
complete the detailed implementation plan.
In this phase the detailed plan will be developed to implement the change. Some
aspects requiring particularly fine detail include:
1. systems requirements and specifications;
VI-3
2. identification of implementing agencies (Air Staff, MPC, Air University,
contractor, etc.);
3. test plan;
4. training;
5. publicity;
6. time-phased start-up; and
7. evaluation.
The outcome of the design phase will be a detailed plan encompassing each phase
of the implementation program. A particularly significant element of this plan is that of
evaluation. In the evaluation plan the design team will write the standards by which the
success of the implementation will be measured. The importance of designing the
evaluation plan early is that evaluation can begin early and the developers and
implementers have an on-going evaluation as a control to assist them in maintaining
standards of quality throughout the implementation cycle. A second significant aspect of the design phase is the designation of the lead agency and supporting activities to accomplish the implementation.
Public relations activities should begin immediately after the decision is made to
proceed with a revision to the OER system. This activity should be integrated with each
phase of the implementation and, therefore, is not appropriately a separate phase.
During the design phase the Air Force officer corps should be informed that the
decision has been made to revise the OER, that design of the revised system is
underway, and of the reasons militating for a change. Thorough planning for publicity
in the design phase will be highly supportive of success in shaping officer attitudes
about the OER change.
DEVELOPMENT
In the development phase the materials, programs and systems envisioned in the
design will be created. These are the tangible assets of the revised OER system which
must be in place before the changeover to a new process and form can be made. The
development phase will also produce those training and education materials that will be
used to influence officer attitudes and behaviors toward the cultural changes needed if
the revised OER system is to be a success.
Development need not be deferred until all design work has been completed.
The proposed milestone schedule at Figure VI-1 suggests that design and development can proceed to some extent in parallel, with a phase lag in development to preclude the double effort that could result when a design change is made in a sub-system for which products have already been developed.
Some of the activities during the development phase include the following:
1. Validate the information management system requirements and write the
detailed systems specifications.
2. Procure or identify existing information processing equipment which will
be used to support the revised OER system.
3. Write, test, and debug the software which will be needed to enter,
process, store, and retrieve the OER data to be developed in the new
system. (This may be a step on the critical path toward completion of a
successful implementation.)
4. Write and validate the OER and related forms to be used in the new
system.
5. Prepare revised regulations, instructions and supporting information that
will be used by administrators, raters, and indorsers under the new
system. An important subset of this information would be that
documentation of the automated information system needed by users.
These materials should be prepared, coordinated, and published prior to
the next phase.
6. Develop training materials to be used in training of users and
administrators of the new system.
7. Prepare additional publicity and promotion materials.
TESTING
A test of the new OER system should be conducted prior to proliferating the
system Air Force-wide. This test should be constructed to simulate as closely as possible
its projected use when fully in place. For that reason, the test should not be conducted
until the completion of the development phase.
The test should be conducted in representative smaller units of each of the major
commands and several of the more significant separate activities (Air Staff, MPC, Air
University, etc.). The size of each test unit should be restricted to the smallest necessary
to exercise the system fully and to yield a statistically significant sample of reports. On
the other hand, as many different commands should be included as resource availability
will allow.
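As a rough, hedged illustration of sizing a test unit to "yield a statistically significant sample of reports," the sketch below applies the standard normal-approximation formula for estimating a proportion (for example, the share of top block ratings) within a chosen margin of error; the confidence level, margin, and baseline proportion are assumptions for illustration only, not figures from the study.

```python
import math

def sample_size(p=0.10, margin=0.05, z=1.96):
    """Reports needed to estimate a proportion p (e.g., the share of
    top block ratings) to within `margin` at ~95% confidence (z=1.96),
    using the normal approximation n = z^2 * p * (1 - p) / margin^2."""
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

# Roughly 139 reports per test cell under these illustrative numbers.
print(sample_size())
```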
Some mechanism should be included in the test which will heighten the realism
of the exercise. (One of the lessons learned from the controlled OER period was that
the test did not reveal the extent of resistance to the change which the officer corps
would express when the new system was fully operational.) An example of a mechanism
which might make the test more realistic would be a requirement for the rater and/or
indorser to brief the report to the ratee and for the Air Force to collect attitude data
from all three by means of a survey conducted in the evaluation of the test.
Some actions which should be conducted in the testing phase include the
following:
1. Select and notify the test units;
2. Train representatives from each test unit to train their units and
administer the test;
3. Train administrators, raters, and indorsers in the test units;
4. Conduct a rating cycle using the new system;
5. Evaluate the results. Some issues to be evaluated would include:
- administrative procedures;
- effectiveness of information systems;
- the distribution of ratings;
- the usefulness of the OER data to selection boards;
- counseling compliance and its effectiveness;
- officer attitudes about the revised system; and
- success of the training programs.
6. Following the test evaluation, consideration should be given to adjusting
the system to account for lessons learned from the test.
The study team believes that the best control group is either an external set of
units or a set of previous reports on the same officers. Doing simultaneous reports
under new and old systems is likely to introduce an auto-correlation error that will
confound the results. Therefore, such a technique would not provide an effective
control.
FULL SCALE TRAINING
Lessons learned from the implementation of the controlled OER in 1974 suggest
that a good training program is essential to the successful conversion to a different OER
system. Therefore, the training phase should be carefully planned and vigorously
executed. The training conducted for the test units as a part of the previous phase
should be carefully evaluated and the results incorporated into the full scale training
programs.
Training is needed in two major areas. First, there is an obvious need to train
officers in the procedural steps they will take in executing the OER system cycle. As a
part of this aspect of the training program, provisions should be made for training that will change officers' attitudes about the OER process. It is an observation of the study
team that it would not be practical to design an OER system which cannot be "gamed"
by officers determined to do so. Therefore, in concert with the persuasion and control
mechanisms built into the system, the training program should seek to create an attitude
in the officer corps in which the majority of officers comply with the spirit of the
revised system.
A second area on which training should be focused is that of the counseling of
subordinates. The experience of the other Services and that of the firms observed in private industry parallels that of the Air Force -- counseling is a task that supervisors are reluctant to perform, which most do poorly absent adequate preparation, and one for
which good training programs can increase the effectiveness of most supervisors. This is
a chronic rather than an acute challenge and thus suitable for a long-range training
perspective. In that regard counseling may be a subject best addressed through a
combination of pre-commissioning and professional military education programs.
Steps which may be included in the training program include:
1. Develop sets of training programs suitable for use in units as well as in the various institutional environments;
2. Train major command and separate activity training teams;
3. Major command and separate activity teams train raters to perform
evaluations and counseling; train indorsers to evaluate and maintain
quality control of OERs;
4. Train the promotion secretariat in the revisions and to prepare materials
for orientations of promotion board members; and
5. Begin revised training/education in the OER system in the Air Force institutional programs.
FULL SCALE OPERATION
Air Force-wide implementation of the revised OER system is dependent on the speed with which the supporting systems can be developed and proliferated. The milestone schedule at Figure VI-1 suggests that evaluations under a revised OER could begin two years after the decision to proceed is made.
The principal question concerning full scale operation is: what schedule should be followed in converting from evaluations using the Air Force Form 707 to the new form and procedure? The operative consideration is that the revised OER system requires
that a cultural change be effected among the officer corps. This change must be such
that evaluators are more candid in their ratings. Therefore, it is desirable that the
conversion be accomplished in a short period of time, and that the Air Force not operate
two OER systems simultaneously which have different perspectives on what honest and
candid evaluations should say about officers who are being evaluated.
The transition should be initiated with a close-out report for all officers using
AF Form 707. This will be the opportunity for all units that are now manipulating the
system to complete whatever distribution of indorsements they are working toward.
Having a close-out report for all officers means that all start under the new system from
the same point and have more or less equal opportunity to receive favorable evaluations
in the future.
It would be desirable to make all the close-out reports effective on the same day,
but such a procedure would create an extraordinary administrative burden. Therefore,
the transition should be planned to occur, by grade, over a period of not more than 90
days.
Following the close-out, reporting would begin on a routine basis for each grade.
The transition will be the smoothest if the sequence is in inverse grade order (begin with
Colonels). Thus, in the transition to the new system, each evaluator (rater and indorser)
is already being evaluated under the system before he/she is required to complete a
report. It is also prudent to schedule the close-out report for lieutenant colonels
immediately prior to a primary zone promotion board for selection for colonel.
Therefore, lieutenant colonels, who have relatively low promotion opportunity, will be
the last grade group to meet a promotion board under the new system. Similarly, the
promotion boards for selection to captain and major, where the promotion opportunity is
relatively high, should be scheduled so that many officers meet the board with an
evaluation under the new system in their file. The high selection rate of these officers
should be publicized to demonstrate that the new system will operate fairly and that the
right officers (high performing) will be promoted.
Steps in the full scale operation phase include:
1. Expand the information program;
2. Disseminate regulations and instructions;
3. Install and test hardware and software;
4. Phase out AF Form 707 with close-out reports by grade;
5. Begin reporting under the revised system, also by grade; and
6. Continue training.
EVALUATION
There is a need for continuing evaluation from the outset of the implementation
period, but a well thought-out and energetic evaluation phase should begin with full
scale operation under the revised OER system. The evaluation program should be
centralized in the Air Force rather than being delegated to the major commands, as it is
today. Also, there should be provision to continue the evaluation phase indefinitely into
the future as an Air Force headquarters function. (In this regard there is a separate
recommendation, elsewhere in this report, that the Military Personnel Center OER
quality control capability be augmented.)
Some of the items which should be evaluated include:
1. Operation of the developed technology;
2. Compliance of raters and indorsers with the instructions and the spirit of
the new system. This should include an evaluation of the distribution of
ratings;
3. Quality of OER related information furnished to promotion boards;
4. Promotion board results using the new OER input;
5. Compliance with the counseling provisions of the system; and
6. Officer corps attitudes concerning the changes.
REFINEMENTS AND MAINTENANCE
An effective evaluation program will provide the basis for making changes to
improve the operation of the OER system. In this regard it is the view of the study
team that future changes would be feasible and desirable if they could be accomplished
by an evolutionary rather than a revolutionary process. Such changes could be viewed as
necessary maintenance to the system.
The concept designs proposed in Section V are thought to be feasible but may
not accomplish all that the ideal evaluation program would do. Some future refinements
which might be necessary or desirable include:
1. More stringent discipline applied to the distribution of ratings may be necessary if inflation is excessive;
2. If counseling does not prove to be adequately performed, compliance measures may be added to the system; and
3. The Air Force may wish to institute performance improvement measures that more closely resemble management by objectives, such as participative goal setting.
TABLE VI-1
IMPLEMENTATION SCHEDULE
SECTION VII
CONCLUDING COMMENTS AND RECOMMENDATIONS
In the course of this project we have studied performance appraisal from a
historical perspective, as it is practiced in the private sector, as it is conducted in the
military services, and, of course, as it is conducted in the Air Force. While each
organization has some distinguishing needs or cultural characteristics, it may be said
overall that performance appraisal is at best an inexact science as well as a highly
emotional issue. Inflated ratings are typical and recurring in almost all organizations. In
short, performance appraisal is a very onerous but necessary human resource
management function.
Performance appraisal in the United States Armed Forces differs from that in almost all other organizations because of the up-or-out system. Most organizations use
performance appraisal for short-term compensation decisions, e.g., annual merit
increases, bonuses, etc. Performance appraisal in the Armed Services, however, is the
basic tool for shaping the officer workforce; the ultimate function of the process is to select an ever smaller population at each successive officer grade. With this thought in
mind, the case could be made that the military services have a greater responsibility
towards achieving accuracy in performance appraisal than most organizations. This need
for accuracy in leadership identification is extremely important for each service, in part
because of the training and development costs invested in each officer, but more
importantly, to assure that the best possible leaders reach the higher grades. In addition,
this consideration extends to the need to provide individual officers with the information
necessary to make career and career development transition decisions.
The current Air Force performance appraisal instrument, the OER Form 707, is
probably as sound as most performance appraisal instruments used in large organizations.
The process surrounding this instrument, however, as well as the culture, do not support efficient or accurate use of it, precisely because of the possible negative implication of such accuracy, i.e., a terminated Air Force career.
During most of the history of the Air Force OER, this cultural orientation
toward inaccuracy, seen in inflated OER ratings and gaming of the system in a
multitude of ways, has become ingrained as basically acceptable, indeed as an almost obligatory responsibility of principal raters.
A primary observation of this study is that it is not so much the OER form
which must be changed to introduce control, nor is it the process. The ingrained
cultural attitude of the officer corps must be reoriented from acceptance of inaccuracy
in OER preparation to a requirement for accuracy. We realize that such an
attitudinal/cultural change would have to occur gradually and would have to be
reinforced from several different sources.
RECOMMENDED INITIAL STEPS
DEFINE THE PURPOSE(S) OF OER
Air Force regulations cite no fewer than six purposes for the OER, substantially
more than the number of purposes for evaluation systems reported by other
organizations. The Air Force should focus the purposes for which the OER is to be
used on those for which it is most effective, and communicate those purposes to users of
the system.
PROVIDE STRONG LEADERSHIP SUPPORT
First, the Air Force leadership should clearly define and publish the exact
purpose(s) of the OER as it is intended to be used on a day to day basis. Along with
this definition should come criteria for the selection boards for promotion decisions,
which would again be public knowledge. (For example, the Chief of Staff's desire to
view a record of good performance in cockpit jobs as sufficient basis for promotion
through lieutenant colonel.) Different criteria are relevant for different grades, and
these differences should be articulated and published so that junior officers become
familiar with and internalize the fact that their perspectives and leadership abilities must
grow if they are to continue to be promoted to higher grades throughout their career.
In addition, it is essential that the Air Force leadership give a strong signal that
it is committed to a candid, accurate OER process. This could include such actions as
advising MPC to return OER's from raters, indorsers, or commands with inflated
distributions or advising the selection boards to give less credibility to the ratings of
such raters, indorsers, or commands. "Accuracy in OER preparation" could also be included as a performance factor on the OER.
RECOMMENDED CHANGES TO OER PROCESS
INSTITUTE NEW RATING PROCEDURES
Although we believe that an attitude change toward the OER process is more important than a "fix" of the current form, we do not want to discount the assistance that procedural change could lend in achieving cultural change.
As described previously in this report, there are many habits in OER writing and rating which have become institutionalized. Adoption of one of the conceptual designs given in Section V would, at the very least, appear different from the current process and would require changes in how an OER is prepared.
In addition, adoption of the second or third conceptual designs should mandate
substantive change in the ratings officers receive. Of these two alternatives, we believe
that the alternative of having the wing commander select 10% for top block ratings would be the more acceptable alternative to the officer corps. This is recommended because the results of the data collection showed that Air Force officers are willing to differentiate the top and bottom extremes of performance but are uncomfortable making finer distinctions or differentiating among the majority of competent officers, as would be required to a greater degree in the second alternative.
PROVIDE FEEDBACK ON PERFORMANCE
Each of the three conceptual designs described in Section V includes provisions
for off-line job/career counseling. In addition to the valuable advice a subordinate
could receive from his/her supervisor, we see such counseling as another opportunity for
institutionalizing a commitment to accuracy in evaluation.
This institutionalization could occur if the rest of the overall scenario was
functioning as recommended. For example, we have recommended that criteria for selection be better defined to the boards and that these criteria be made public knowledge. In turn, through PME and other training, raters would learn these criteria,
receive instruction on how to counsel subordinates relative to these criteria, and finally,
receive guidance as to the importance of giving advice as well as accurate assessments of
performance during the off-line counseling sessions.
Over time it would become apparent to the population at large that OER
assessments and promotion results were congruent with each other, and the system would
develop the required credibility.
REDUCE THE FREQUENCY OF OERS FOR LIEUTENANTS
The current Air Force policy is for lieutenants to be formally evaluated every six months. The study conclusions are that lieutenants should be evaluated on the same
basis as all other officer grades (yearly). There are two reasons supporting this
recommendation. First, not enough additional information accumulates in a six month
period for a rater to add significantly to the previous report of performance. We
recognize the need for added feedback at this early stage, but feedback could be
provided through non-OER channels. Second, reducing the number of evaluation
reports would significantly decrease the administrative burden of performance
evaluations upon the units.
RECOMMENDED IMPLEMENTATION ACTIONS
Implementation of a new OER form will, of course, be the first opportunity to publicize the change in policy. We assume that this will be done through promotional literature, PME, OER-specific training, and guidance through the chain of command. We would also expect that a rather high percentage of the officer corps will view the new form as simply another drill in procedural change.
For this reason we recommend that heavy emphasis be placed on advertising the other steps recommended above. No matter how thorough the implementation phase is, these other steps are required to form the foundation as well as the maintenance structure for a real and continued commitment to accuracy in OER preparation.
PROVIDE TRAINING AND INDOCTRINATION SUPPORT
A commitment to accuracy in OER preparation should be supported by appropriate instruction being included in pre-commission training, transition training, and Professional Military Education (PME) schools and courses throughout an officer's career. The idea here is to bring about and continually support a code of accuracy -- akin to an honor system -- toward the OER.
This training, as well as the other actions recommended, could also assist in
removing some of the discomfort which some officers, particularly younger ones, feel
toward the current system. Apparently there is a heavy emphasis in the current training
and indoctrination materials concerning the honesty and integrity of the Air Force
officer corps and systems. Some officers see the current and conflicting system of
allocating indorsements covertly and firewalling ratings publicly as being in contradiction
to "honesty and integrity.*
CHANGE INFORMATION PROVIDED TO SELECTION BOARDS
Limit the Number of OERs in the Promotion Folder
Current practice dictates that all the evaluation reports generated during an
individual's career be included in the promotion folder. The Air Force should consider
limiting the number of evaluation reports to all reports generated in the present grade,
or five evaluation reports (whichever number is higher). For example, if an individual
has received four evaluation reports as a captain, then these four reports, and the last
OER as a first lieutenant, would be included in the promotion folder. Similarly, if a
lieutenant colonel has received six evaluation reports, all six would be part of the promotion folder.
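A small sketch of the proposed selection rule is shown below, assuming reports are stored oldest-first with the grade held when each was rendered; the data layout and function name are hypothetical.

```python
# Hypothetical report list, oldest first: (grade held, year rendered).
reports = [
    ("1LT", 1983), ("1LT", 1984),
    ("CPT", 1985), ("CPT", 1986), ("CPT", 1987), ("CPT", 1988),
]

def folder_reports(reports, present_grade, minimum=5):
    """All reports rendered in the present grade, or the last
    `minimum` reports, whichever set is larger."""
    in_grade = [r for r in reports if r[0] == present_grade]
    if len(in_grade) >= minimum:
        return in_grade
    return reports[-minimum:]  # pad backward with most recent prior reports

# A captain with four reports in grade gets those four plus the
# last report as a first lieutenant -- five in all.
print(folder_reports(reports, "CPT"))
```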
This measure would have considerable impact upon the Air Force officer corps.
First, it would reinforce the message that the performance evaluation system has been
re-focused to accentuate current or recent performance. In addition, it would take a
fair amount of pressure off both the rater and ratee, since the OER would not have the
long-term impact that it has today. This should result in more candid and accurate
evaluations.
Identify Special Category Organizations (SPECAT)
According to Air Force regulations, certain organizations receive, as a matter of policy, preferential manning considerations. In a study of FY72-74 major, lieutenant colonel, and colonel temporary promotion boards, 25 agencies identified as SPECAT were found to have "higher quality" officers than did the highest MAJCOM. It is
recommended that such a study be updated and identify those units which, by
regulation, receive special consideration in terms of the quality of officers assigned and
are shown to have significantly higher promotion board scores than the MAJCOMs. It is
further recommended that the list of such organizations be provided to each promotion
board with instructions that the board is to recognize that the proportion of outstanding
officers who are assigned to such organizations is probably significantly higher than ten
percent.
Reduce Importance of Photo in the Promotion Folder
A considerable degree of hostility was expressed to the study team over the
inflated importance of details which have become associated with the picture in the
folder. Variations such as how good the photographer is, how photogenic the officer is, or the individual likes and dislikes of those serving on promotion boards are all factors
which are seen as unnecessarily biasing in relation to the picture. It is recommended not
to eliminate the picture from the promotion folder, but to reduce its size (e.g., to 3" X
5"), in order to decrease the amount of attention given to potentially biasing minute
details.
OTHER ISSUES
Several issues not directly associated with officer evaluation were identified
during the data collection and analysis stages of the project. The scope of the study did
not allow for development of each of these issues into a well substantiated conclusion
and recommendation, but the project team was motivated to mention several of these
issues because of the breadth of concern observed among Air Force officers interviewed.
CAREER DEVELOPMENT ISSUES
First, the team observed widespread uncertainty over the fundamental question of
what the desired or expected career paths for Air Force officers are. It is suggested that
a more precise concept of professional development should be articulated by the Air
Force to the officer corps. For example, in today's Air Force, is it valid for an
individual to say that he/she just wants to be a pilot? The answers to these and other
career-related questions should be pursued, along with an assessment of their impact on
the performance evaluation system.
Second, it was observed that many junior and mid-grade officers are reluctant to
admit or are ignorant of their reasonable promotion expectations. The existence of the
grade pyramid is a fact bearing heavily on attitudes about the OER system, yet the
observations accumulated by the project team suggest that the Air Force has not clearly
articulated the implications of this grade pyramid for the career planning of officers.
Finally, there is a group of career development issues that center around the
phase points for promotion. Among these are:
1. The large opportunity for below the zone promotion selection has a profound impact on the OER system. Among other implications, it encourages widespread "gaming" of the distribution of top indorsements.
2. The selection for promotion to major has a profound psychological effect on officer attitudes, as this is the first point where significant numbers of competent officers are selected out of the Air Force. The phase point occurs at a time when it may be difficult for the officer selected out to transition back to a civilian career, because of his/her age and lack of recent civilian experience. Under the current OER system, many of
these officers have not been prepared for the prospect that they might be released. The anxiety extends far beyond the cohorts who might be affected.
It is the conclusion of this study that these issues are not readily addressed by
changes to the OER system. Rather, it is recommended that the Air Force look to other
career development solutions to these challenges.
AIRMAN PERFORMANCE REPORT
Senior non-commissioned officers are evaluated using the Airman Performance
Report (APR). This report is allowed to escalate above the level of immediate
supervision for final indorsement, in a manner similar to the OER. It is recommended
that, if the Air Force chooses to change the OER process, an evaluation of the APR be
immediately undertaken with a view toward coordinating the two systems and the
policies which underlie them.
VII-9
APPENDIX A
REFERENCES
Beacham, S. (1979). "Managing Compensation and Performance Appraisal under the Age
Act." Management Review. January.
Brinkerhoff, D. W. and Kanter. R. M. (1980). "Appraising the Performance of
Performance Appraisal." Sloan Management Review, Spring, pp. 3-16.
Bureau of National Affairs (1974). Labor Policy and Practice -- Personnel Management.
Washington, D.C.
Bureau of National Affairs (1975). "Employee Performance: Evaluation and Control,"
Personnel Policies Forum, No. 108, February 1975.
Cascio, W. F. (1982). Applied Psychology in Personnel Management. Reston, Virginia:
Reston Publishing Co., Inc.
Cascio, W. F. and Valenzi, E. R. (1978). "Relations Among Criteria of Police
Performance," Journal of Applied Psychology, 63, pp. 22-28.
Cook, D. (1968). "The Impact on Managers of Frequency of Feedback." Academy of
Management Journal, 11 (2), pp. 263-77.
Cummings, L. and Schwab, D. (1973). Performance in Organizations. Glenview,
Illinois: Scott Foresman & Co.
Eichel, C. and Bender, H. (1984). Performance Appraisal: A Study of Current
Techniques. New York: American Management Association.
French, W. L. (1982). The Personnel Management Process: Human Resources
Administration and Development. Boston, Massachusetts: Houghton Mifflin Co.
Glueck, W. F. (1978). Personnel: A Diagnostic Approach. Dallas, Texas: Business
Publications, Inc.
Gordon, L. V. and Medland, F. F. (1965). "The Cross-Group Stability of Peer Ratings
of Leadership Potential," Personnel Psychology, 18, pp. 173-77.
Greenberg, J. (1986). "Determinants of Perceived Fairness of Performance Evaluations."
Journal of Applied Psychology, 71 (2), pp. 340-342.
Hay Associates (1975). Survey of Human Resources Practices. New York: Hay
Associates.
Hollander, E. P. (1965). "Validity of Peer Nominations in Predicting a Distant
Performance Criterion." Journal of Applied Psychology, 49, pp. 434-438.
Kane, J. S. and Freeman, K. A. (1986). "MBO and Performance Appraisal: A Mixture
That's Not a Solution, Part 1." Personnel, December, pp. 26-36.
A-1
Korman, A. K. (1968). "The Prediction of Managerial Performance: A Review."
Personnel Psychology, 21, pp. 295-322.
Landy, F. J. and Farr, J. L. (1980). "Performance Rating." Psychological Bulletin, 87 (1),
pp. 72-107.
Latham, G. P. and Wexley, K. N. (1980). Increasing Productivity Through Performance
Appraisal. Massachusetts: Addison-Wesley.
Lazer, R. I. and Wikstrom, W. S. (1977). Appraising Managerial Performance: Current
Practices and Future Directions. New York: The Conference Board.
Locher, A. H. and Teel, K. S. (1975). "Performance Appraisal - A Survey of Current
Practices." Personnel Practices, pp. 245-247.
Meyer, H. H. (1980). "Self-Appraisal of Job Performance." Personnel Psychology, 33,
pp. 291-295.
Meyer, H. H., Kay, E., and French, J. R. P., Jr. (1965). "Split Roles in Performance
Appraisal." Harvard Business Review, 43 (1), pp. 123-129.
Phillip, Thomas D., Major (1979). Evolution of the Air Force Officer Evaluation
System: 1968-1978. The Air University, Maxwell Air Force Base, Alabama.
Porter, L. W., Lawler, E. E. and Hackman, J. R. (1975). Behavior In Organizations.
New York: McGraw-Hill.
Schneier, C. E., Beatty, R. W., and Baird, L. S. (1986). "How to Construct a Successful
Performance Appraisal System." Training and Development Journal, April, pp.
38-42.
Tarnowieski, D. (1973). The Changing Success Ethic (An AMA Survey Report). New
York: American Management Association.
Taylor, R. L. and Zawacki, R. A. (1984). "Trends in Performance Appraisal: Guidelines
for Managers." Personnel Administrator, March, pp. 71-80.
Wexley, K. N. and Yukl, G. A. (1977). Organizational Behavior and Personnel
Psychology. Illinois: Richard D. Irwin, Inc.
OTHER READINGS
Bjerke, David G., Cleveland, Jeanette N., Morrison, Robert P., and Wilson, William C.
(1986). "Officer Fitness Report Evaluation Study." Unpublished report. Navy
Personnel Research and Development Center, San Diego, California.
Davis, B. L. and Mount, M. K. (1986). "Design and Use of a Performance Appraisal
Feedback System." Personnel Administrator, pp. 91-97.
Harper, S. C. (1986). "Adding Purpose to Performance Reviews." Training and
Development Journal, pp. 53-55.
A-2
Kelly, C. M. (1986). "Reasonable Performance Appraisals." Training and Development
Journal, January 1986, pp. 79-82.
Lewin, A. Y. and Zwany, A. (1976). "Peer Nominations: A Model, Literature Critique
and a Paradigm for Research." Personnel Psychology, 29, pp. 423-447.
Martin, D. C. (1986). "Performance Appraisal 2: Improving the Rater's Effectiveness."
Personnel Psychology, August 1986, pp. 28-33.
United States Air Force (1982). "Officer Evaluations." Department of the Air Force
Regulation 36-10.
United States Air Force (1983). "You and Your Promotion System." Department of the
Air Force Pamphlet 36-32.
A-3
APPENDIX B
SUMMARY OF PERFORMANCE APPRAISAL METHODS
Numerous techniques or formats have been developed in attempts to evaluate
ratee performance accurately, reduce the judgmental and measurement difficulties
associated with performance appraisal, assist in providing feedback to ratees, and lessen
the administrative burden appraisals place on an organization. Each type of appraisal
method has, of course, both advantages and disadvantages, depending on the specific
objectives intended for it and the organizational setting in which it is to be employed.
The purpose of this appendix is to describe the major performance appraisal
methods in use today. Evaluations of the potential usefulness of these methods to the
Air Force are contained in Section III of the text of the report. The following is a list of
methods to be described:
Method                                          Page
A. Graphic Rating Scale                         B-2
B. Trait Appraisal                              B-2
C. Narrative Essay                              B-3
D. Work Sample Tests                            B-3
E. Critical Incident Technique                  B-4
F. BARS/BES                                     B-5
G. Behavioral Observation Scales                B-8
H. Behavior Discrimination Scales               B-10
I. Weighted Checklist                           B-13
J. Simple Ranking System                        B-15
K. Forced Choice                                B-17
L. Forced Distribution Ranking                  B-20
B-1
M. Paired Comparison                            B-20
N. Mixed Standard Rating Scales                 B-22
O. Management By Objectives                     B-24
A. GRAPHIC RATING SCALE
The graphic rating scale is an appraisal method in common use, particularly for
positions below managerial levels.
All rating scales share the property of calling for the rater's judgment of the
ratee's job performance along an unbroken continuum (e.g., excellent to unacceptable), or
into discrete categories (e.g., superior, satisfactory, unsatisfactory) within a continuum.
In the typical appraisal using graphic rating scales, the rater is given a list of job
dimensions and told to rate the employee on each of the dimensions using the scale. A
major problem with such scales is that words like "superior" and "average" have different
meanings to different raters, which affects the reliability of the instrument.
Contemporary versions are likely to use scales featuring descriptive statements of
different levels of performance for each dimension. Choices along the scale for each
dimension may be assigned points, and total scores may then be computed for each
employee. The Performance Factors section of the Air Force Form 707 is an example of
a graphic rating scale technique.
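To make the scoring arithmetic concrete, here is a minimal sketch in Python; the dimension names, verbal anchors, and point values are invented for illustration and are not drawn from Form 707.

    # Graphic-rating-scale scoring sketch: each job dimension is rated on a
    # discrete point scale and the points are summed into a total score.
    SCALE = {"unacceptable": 1, "marginal": 2, "satisfactory": 3,
             "above average": 4, "excellent": 5, "superior": 6}

    def total_score(ratings: dict[str, str]) -> int:
        """Convert each dimension's verbal rating to points and sum them."""
        return sum(SCALE[choice] for choice in ratings.values())

    ratings = {"job knowledge": "excellent",
               "leadership": "superior",
               "judgment": "satisfactory"}
    print(total_score(ratings))  # 5 + 6 + 3 = 14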
B. TRAIT APPRAISAL
The ratee is understood as an individual composed of various amounts of
initiative, cooperativeness, loyalty, creativity, commitment and the like. The trait
approach is based on such personality characteristics. In this approach the appraiser
B-2
focuses on the personality traits of the employee and uses these to rate the employee's
performance. For instance, employee A shows initiative and is, therefore, assumed to be
committed to the job. The emphasis is on a potential predictor of performance and not
on performance itself.
A typical trait performance appraisal form contains a number of employee
qualities and characteristics to be judged, such as leadership, emotional stability,
attitude, job knowledge, communication skills, ability to adapt, and so on. These traits
are then evaluated on rating scales. The scales may be broken into many parts or points,
and the appraiser is required to mark the point which best describes the employee.
For example, on employee dependability, the points may be a) above average; b) usually
dependable; c) sometimes careless; and d) unreliable. It is also usual to find a question
like, "What traits may help or hinder the employee's advancement?"
The trait approach is more inclined towards the individual as a person, and rates
the individual as such, rather than his or her job performance.
C. NARRATIVE ESSAY
The rater prepares a written subjective report of the performance of the ratee.
Specific issues or performance in given areas can be highlighted by the rater.
Frequently raters are asked by their organizations to indicate the ratees' performance in
certain areas, e.g., equal employment opportunity and affirmative action.
D. WORK SAMPLE TESTS
Individuals being rated are given tests, usually hands-on type exercises, of
specific critical skills of their job. These tests are then scored to determine the
individual's proficiency in the job.
B-3
E. CRITICAL INCIDENT TECHNIQUE
Job incumbents and/or supervisors are asked to develop incidents that
discriminate between successful and unsuccessful performance, or those behaviors which
are crucial to the job.
This method requires the observer (usually the supervisor) to be knowledgeable
of the requirements and goals of a given job. He/she must be a person who sees these
people perform the job on a regular basis, so that they may describe to a job analyst
incidents of effective and ineffective job behavior that they have observed over the past
six to twelve months.
The specific steps in conducting a job analysis based on the critical incident
technique are as follows:
1. Introduction - The job analyst tells the observer to determine what makes
the difference between an effective and an ineffective (position) (e.g., a
secretary, engineer, or technician). The analyst must then explain exactly
what he/she means by effective and ineffective.
2. Interview - The observer is asked to think back over the past six to
twelve months and come up with specific incidents that they themselves
have seen occur, without mentioning any of the specific employees'
names. They are asked to report at least five effective and five
ineffective incidents, and in order to collect a representative sample of
incidents it is recommended that at least 30 people be interviewed for a
total of 300 incidents.
B-4
This method focuses on key dimensions of responsibilities which then help in the
selection and appraisal of personnel for such positions. Examples of critical incidents
are:
POSITION: PERSONNEL OFFICER
In classifying a position, fails to take into account other functions
in the unit or in the larger organization which impact the position
being classified.
In discussions related to filling a difficult position, will explore all
possible mechanisms for filling the position and talk to program
officials to ascertain cause of difficulty in locating applicants
before making a recommendation.
Does not ask employees for additional information which might
help in becoming qualified for a position.
Agrees with supervisor's request that an overgraded employee be
overlooked during the review period.
Identifies potential interpersonal conflicts due to differences in
personality, age, race, etc., between parties to a grievance before
making a decision.
F. BARS/BES PERFORMANCE APPRAISAL SYSTEM
BARS/BES, developed by Smith and Kendall in 1963, is based on job analysis,
notation of critical incidents and a rating scale. The critical incidents of the employee
must be observed by the supervisor. This system deals with expected behavior.
B-5
This system requires the manager to work with the employee to achieve mutually
acceptable goals and desirable behavior. BARS/BES forces the supervisor and the
employee to communicate ideas which promote better understanding as well as ensuring
behavioral changes to improve employee performance.
Critical Incidents
Critical incidents illustrate what the employee has done or failed to do that has resulted in
unusual success or failure. They are NOT opinions or generalizations concerning the
employee.
BARS
Behaviorally Anchored Rating Scale - Uses a rating scale and behavioral
anchors (or critical incidents) related to the criterion being measured.
BES
Behavioral Expectation Scale - Focuses on expected performance.
Development Of BARS/BES System
Group I - Using job analysis, critical incidents are gathered describing
competent, average and incompetent behaviors from categories relevant to the job. Ex:
Math/technical, administrative ability. Each category corresponds to criterion for
evaluating the employee.
Group II - The group allocates each critical incident to a criterion category. If an
incident is not assigned to the same dimension by 80% of the group, it will be
omitted, thus eliminating ambiguous incidents.
B-6
Group III - Members receive a booklet containing the criteria categories plus a list of
incidents defining each criterion. The group rates each incident, typically using a 7-point
system (7 = outstanding, 1 = poor job performance). The numeric value is derived from
the mean of all the members' ratings. These become the ANCHORS on the rating scale.
Anchors aid the supervisor when defining the employee behavior. Items will be worded
as: "could be expected to work overtime" rather than "works overtime".
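A minimal sketch of the Group II screening rule and the Group III anchor computation, assuming the 80% agreement threshold and the 7-point ratings described above (the criterion names and ratings are hypothetical):

    # Group II: keep an incident only if at least 80% of the judges allocate
    # it to the same criterion category.
    def retain_incident(allocations: list[str], threshold: float = 0.80) -> bool:
        """allocations: the criterion category each judge assigned the incident to."""
        top = max(allocations.count(c) for c in set(allocations))
        return top / len(allocations) >= threshold

    # Group III: the anchor value is the mean of the judges' 7-point ratings
    # (7 = outstanding, 1 = poor job performance).
    def anchor_value(ratings: list[int]) -> float:
        return sum(ratings) / len(ratings)

    print(retain_incident(["perseverance"] * 9 + ["job knowledge"]))  # True (90%)
    print(anchor_value([7, 6, 7, 6, 7]))                              # 6.6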
RATING SCALE TO DETERMINE ANCHORS RELATED TO CRITERION OF
"PERSEVERANCE" (COMPUTER PROGRAM)
How perseverant is the employee?
Could be expected to keep working until difficult task is
completed.
Could be expected to continue working on task beyond normal
working hours.
Could be expected to continue on task until an opportunity arises
to stop work.
Could be expected to need frequent reminder to continue on task.
Could be expected to ask for a new assignment rather than face a
difficult task.
Could be expected to stop work on difficult task at first indication
of complexity of the task.
B-7
G. BOS - BEHAVIORAL OBSERVATION SCALES
BOS is a behaviorally based appraisal measure whereby judges rate incidents
obtained in the job analysis in terms of the extent to which each incident represents
effective job behavior.
The specific steps in developing a BOS Appraisal System are as follows:
1. Individuals who are aware of the aims and objectives of a given job, who
frequently observe people performing that function, and who are capable
of determining whether the job requirements are being performed
satisfactorily are interviewed. These individuals are asked to describe
incidents that are examples of effective or ineffective behavior (critical
incidents). Incidents which describe essentially the same behavior are
grouped into a behavioral item.
2. Clusters of behavioral items which are similar are grouped together to
form one overall criterion or behavioral observation scale (BOS). The
grouping can be done by job incumbents or analysts.
3. Incidents are placed in random order and given to a second individual or
group who reclassifies the incidents. Interjudge agreement is assessed by
counting the number of incidents that both groups agree should be placed
in a given criterion divided by the combined number of incidents both
groups placed in that criterion. If the ratio is below a previously agreed
upon number, the items under the criterion are reexamined to see if they
should be reclassified under a different criterion and/or if the criterion
should be rewritten to increase specificity.
B-8
4. The BOS criteria are examined regarding their relevance to content
validity. People who are intimately involved with the job evaluate the
system to see if the criteria include a representative sample of the
behavioral domain of interest as defined by the job analysis.
5. A 5-point Likert scale is attached to each behavioral item. Percentages
are assigned to the five points on the Likert scale, designating the number
of times an employee has been observed engaging in a particular
behavior.
6. A decision must be made as to whether the scales will be weighted. This
is needed because each scale or criterion contains a different number of
behavioral items. An overall performance rating is usually compiled by
averaging across all criteria regardless of the number of items used in
each criterion. The score received on each BOS criterion can be used to
compute the overall performance rating for each incumbent.
Example of one BOS criterion for evaluating managers.
For each behavior a 5 represents almost always or 95% to 100% of the time; a 4
represents frequently or 85% to 94% of the time; a 3 represents sometimes or 75% to
84% of the time; a 2 represents seldom or 65% to 74% of the time; and 1 represents
almost never or 0% to 64% of the time.
B-9
Overcoming Resistance to Change (1)
1. Describes the details of the change to subordinates.
Almost Never 1 2 3 4 5 Almost Always
2. Explains why the change is necessary.
Almost Never 1 2 3 4 5 Almost Always
3. Discusses how the change will affect the employee.
Almost Never 1 2 3 4 5 Almost Always
4. Listens to the employee's concerns.
Almost Never 1 2 3 4 5 Almost Always
5. Asks the employee for help in making the change work.
Almost Never 1 2 3 4 5 Almost Always
6. If necessary, specifies the date for a follow-up meeting to respond to the
employee's concerns.
Almost Never 1 2 3 4 5 Almost Always
Total _
Below Adequate Adequate Full Excellent Superior
6-10 11-15 16-20 21-25 26-30
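A minimal sketch of this scoring step, using the six-item criterion and the category bands printed above:

    # BOS criterion scoring sketch: six items rated 1-5 are summed and the
    # total is mapped to the category bands shown in the example.
    BANDS = [(6, 10, "Below Adequate"), (11, 15, "Adequate"),
             (16, 20, "Full"), (21, 25, "Excellent"), (26, 30, "Superior")]

    def bos_category(item_ratings: list[int]) -> tuple[int, str]:
        total = sum(item_ratings)
        for low, high, label in BANDS:
            if low <= total <= high:
                return total, label
        raise ValueError("total outside the 6-30 range for six 1-5 items")

    print(bos_category([4, 5, 3, 4, 4, 3]))  # (23, 'Excellent')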
H. BEHAVIOR DISCRIMINATION SCALES
In "Behavioral Discrimination Scales: A Distributional Measurement Rating
Method," Kane and Lawler state that the BDS "represents an attempt to achieve the ideal
operationalization of the distributional measurement model."
The steps of BDS:
1. A pool of statements describing the full range of satisfactory and
unsatisfactory job behaviors and/or outcomes is generated. This should
be accomplished by having supervisors and their subordinates list all job
functions. Then the subordinates should list all of the satisfactory and
unsatisfactory ways of carrying out these duties.
(1) Latham, Gary P. and Wexley, Kenneth N., Increasing Productivity Through
Performance Appraisal, 1982, p. 56.
B-10
2. All incidents should be pooled to avoid duplications, and incidents that
are similar should be grouped together. These groups are called
performance specimens; grouping is done so that the number of items rated on
each object is reduced. A general statement is then written to express the
behavior.
3. The performance specimens are then inserted on a questionnaire
administered to at least 20 supervisors and their subordinates. There are
two different forms of the questionnaire. Each questionnaire is given to half
of the sample. One form asks three questions in regard to each specimen:
a. During a normal six-month period, how many times would a
person have the opportunity to exhibit this behavior or outcome?
b. It would be moderately satisfactory performance to exhibit this
behavior or outcome on how many of these occasions?
c. How good or bad is the performance described by this behavior or
outcome? (1 = very bad; 8 = very good.)
The other form is exactly the same except question two refers to
moderately unsatisfactory performance.
4. The results should be analyzed by converting question two responses to
percentages of question one responses for each specimen and then
computing the T-statistic for the difference between the mean
percentages of the two subsamples for each specimen. All specimens for
which the t-value does not reach significance at p = .01 should be eliminated.
B-11
5. Each specimen's median occurrence percentage and mean rating on
question three are computed for the combined sample. With these, the extensity
(occurrence-rate goodness) scale value for each specimen can be derived.
6. Next the appraisal form is constructed by listing each specimen in random
order at the left side of the form. To the right side of each specimen is a
column headed by the following question: To your personal knowledge,
how many times did this person have the opportunity to exhibit this
behavior or outcome during the appraisal period? (Note: If zero, so
indicate and proceed to the next item.) If the response is greater than
zero, the rater is asked to complete the following statement: "This person
actually exhibited this behavior or outcome on ___ of these
occasions."
7. The rating should be scored in the following manner:
a. The frequency assigned to the object on each specimen should be
converted to a percentage of his/her opportunities to exhibit the
specimen.
b. Extensity scale value corresponding to this percentage for each
specimen should then be determined.
c. The value should then be multiplied by its intensity weight, which
can consist of the specimen's t-value.
d. The overall performance score is then formed by summing the
dimension scores.
B-12
Example:
"Kane and Lawler (1980) presented the following items for
grouping: 1) "Had to stop a press run to remove grease from a
roller." 2) "Had to stop a press run to make a paper adjustment
that should have been made before the press run started." 3)
"Failed to check the ink reservoir before a press run started." 4)
"Had to stop a press run to fix a mechanical problem that should
have been discovered in the routine inspection." These items were
grouped, and the following statement was written to reflect the
meaning: "Had to stop a press run because of a problem caused
by the failure to properly make normal checks and adjustments
before the run started." These are known as performance
specimens.
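The arithmetic of steps 7a through 7d might be sketched as follows; the extensity lookup table and the specimen data are invented for illustration:

    # BDS scoring sketch: the rated frequency is converted to a percentage of
    # opportunities (7a), an extensity scale value is looked up for that
    # percentage (7b), weighted by the specimen's intensity weight, e.g. its
    # t-value (7c), and the weighted values are summed (7d).
    def extensity_value(pct: float, table: list[tuple[float, float]]) -> float:
        """table: (minimum occurrence percentage, scale value), ascending."""
        value = table[0][1]
        for cutoff, scale in table:
            if pct >= cutoff:
                value = scale
        return value

    TABLE = [(0.0, -2.0), (0.25, -1.0), (0.50, 0.0), (0.75, 1.0), (0.90, 2.0)]

    def bds_score(specimens: list[tuple[int, int, float]]) -> float:
        """specimens: (times exhibited, opportunities, intensity weight)."""
        return sum(extensity_value(shown / opps, TABLE) * weight
                   for shown, opps, weight in specimens)

    print(bds_score([(9, 10, 3.2), (2, 8, 2.5)]))  # 2.0*3.2 + (-1.0)*2.5, about 3.9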
I. WEIGHTED CHECKLIST
The weighted checklist performance appraisal system was introduced by Knauft
in 1948. It consists of statements, adjectives, or individual attributes that have been
previously scaled for effectiveness in describing worker behavior.
The most common type of item used in the weighted checklist is behavioral in
nature. The first step in constructing a weighted checklist is to generate a large number
of behavioral statements relevant to all aspects of the job. These statements should
represent all levels of effectiveness in that job. A list of rules for writing these
statements was developed:
B-13
1. Express only one thought per statement or scale.
2. Use understandable terminology, and eliminate double negatives.
3. Express thoughts clearly and simply, avoid vague and trait-oriented
statements.
The second step consists of having a panel of job experts judge the extent
to which each statement represents effective or ineffective job behavior. One method
for accomplishing this is called "equal-appearing intervals." This method asks the experts
to classify each statement into one of 11 categories ranging from "highly effective" to
"highly ineffective" job behavior. The ratings are then summarized in order to identify
those statements which are consistently placed at some point on the continuum of
effectiveness. On the basis of this scaling procedure, the most reliably rated items are
selected for use on the checklist. The mean or median rating of effectiveness calculated
by the experts becomes the scale value for each item. Statements are then selected so
that every point on the continuum of effectiveness is represented on the checklist.
Items are usually randomized in terms of their relative levels of effectiveness,
and scale values are unknown to the rater. The rater simply checks those statements
believed to be descriptive of the ratee. The method of scoring is based either on the
sum total of the scale values, or on the median scale value, of the checked statements.
B-14
Ratings by 15 Experts on Four Behavioral Statements
Using a Behavioral Checklist

Categories of Effectiveness: 1 (Highly Ineffective) to 11 (Highly Effective)

Statement 1:  3  5  7
Statement 2:  4  5  4  1  1
Statement 3:  8  7
Statement 4:  1  2  6  2  2  1  1

(Entries are the number of experts placing each statement in successive categories.)
Examples of Items From Weighted Checklist
Performance Rating for Bake Shop Manager

Item                                                          Scale Value
His window display has customer appeal.                       8.5
He encourages his employees to show initiative.               8.1
He seldom forgets what he has once been told.                 7.6
His sales per customer are relatively high.                   7.4
He has originated 1 or more workable new formulas.            6.4
He belongs to a local merchants' association.                 4.9
His weekly and monthly reports are sometimes inaccurate.      4.2
He does not anticipate probable emergencies.                  2.4
He is slow to discipline his employees even when he should.   1.9
He rarely figures the costs of his products.                  1.0
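A minimal sketch of the scoring rule, using a few of the bake shop items above; both the sum and the median variants are shown:

    # Weighted-checklist scoring sketch: the rater checks descriptive
    # statements without seeing their scale values; the score is the sum
    # (or the median) of the checked items' values.
    from statistics import median

    ITEM_VALUES = {
        "window display has customer appeal": 8.5,
        "encourages employees to show initiative": 8.1,
        "reports are sometimes inaccurate": 4.2,
        "rarely figures the costs of his products": 1.0,
    }

    def checklist_score(checked: list[str], use_median: bool = False) -> float:
        values = [ITEM_VALUES[item] for item in checked]
        return median(values) if use_median else sum(values)

    checked = ["window display has customer appeal",
               "reports are sometimes inaccurate"]
    print(checklist_score(checked))                   # 12.7
    print(checklist_score(checked, use_median=True))  # 6.35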
J. SIMPLE (ALTERNATE) RANKING SYSTEM
Description Of The System
The simple ranking system is a comparative approach to the evaluation of
employee performance. Regarded as one of the oldest and simplest methods of
performance appraisal, this system is so popular that it is used, in practice, by many
personnel administrators to make decisions related to merit pay increases, promotions,
and organizational rewards. It aims at providing an overall ranking of a group of
employees.
B-15
Specifically, the simple ranking system involves comparing an employee against
other employees in a work group. It requires an appraiser to arrange employees in rank
order from the best to the poorest (or highest to lowest). Although overall rankings are
commonly made, employees can be ranked on a number of separate factors such as
"ability to work with others" or "ability to grasp new ideas." Two or more
appraisers may be asked to make independent rankings of the same group of employees,
and their lists are averaged to help reduce biases.
Since it is easier in practice to distinguish between the best and worst employees
than to rank the entire group in descending order, an "alternation" ranking method is
commonly used. It is a very elementary variation of the order-of-merit ranking. It
places a group of comparable employees in simple rank order in terms of their overall
work performance, future potential, or other characteristics. This method is illustrated
by the following example.
Example:
Assume that an appraiser wants to rank ten employees: A, B, C, D, E, F, G, H,
I, and J on the basis of their overall work performance. Looking at a list of these
employees' names, the appraiser eliminates those whose work is so different that they
cannot be compared to the other members of the group (e.g., H and J). Then, the
appraiser looks over the remaining names (i.e., A, B, C, D, E, F, G, and I) and decides
which one he thinks is the best on the list (e.g., C). He draws a line through this name
(i.e., C) and writes it in the blank space labeled "1 - Highest" at the top of the page (see
the figure). He then looks over the remaining names (i.e., A, B, D, E, F, G, and I) and
decides which person is not as effective as any of the others on the list (e.g., G). He
draws a line through this name (i.e., G) and writes it in the blank space marked "1 -
Lowest" at the bottom of the page. He then examines the remainder of the names (i.e.,
A, B, D, E, F, and I), selects the best (e.g., A), draws a line through his name, and
B-16
places the name in the box labeled "2 - Next Highest." Thus, the appraiser can
"alternate" between thinking of the best and the poorest employee on an increasingly smaller
list. He continues this procedure until he has drawn a line through each name on the
list. The middle position in the rank order is the last to be filled.
Employees to be Ranked          Ranking
A                               1) Highest ........................ C
B                               2) Next Highest ................... A
C                               3) Next Highest ...................
D                               4) Next Highest ...................
E
F                               4) Next Lowest ....................
G                               3) Next Lowest ....................
I                               2) Next Lowest ....................
                                1) Lowest ......................... G
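The alternation procedure can be sketched as follows; a numeric merit score stands in for the appraiser's judgment of "best" and "poorest" among the remaining names:

    # Alternation ranking sketch: repeatedly pull the best remaining name,
    # then the worst remaining name, until the list is exhausted.
    def alternation_rank(merit: dict[str, float]) -> list[str]:
        remaining = dict(merit)
        top: list[str] = []
        bottom: list[str] = []
        take_best = True
        while remaining:
            pick = (max if take_best else min)(remaining, key=remaining.get)
            (top if take_best else bottom).append(pick)
            del remaining[pick]
            take_best = not take_best
        return top + bottom[::-1]  # highest first, lowest last

    merit = {"A": 90, "B": 60, "C": 95, "D": 75, "E": 70, "F": 65, "G": 50, "I": 80}
    print(alternation_rank(merit))  # ['C', 'A', 'I', 'D', 'E', 'F', 'B', 'G']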
K. THE FORCED CHOICE TECHNIQUE OF PERFORMANCE APPRAISAL
The forced choice technique was developed between 1940 and 1945 in an effort
to improve performance appraisal in the U.S. Army. The forced choice technique is
based on the assumption that any real differences that exist among workers in
competence or efficiency can be described in terms of objective, observable behavior.
The technique was intended to keep the appraiser from indicating how much or how
little of each characteristic an officer possessed. Instead, raters were instructed to
choose, from several sets of tetrads (a set of four adjectives, two of a favorable nature
and two of an unfavorable nature), the items which would best and least describe the
appraisee. This technique was also intended, by its method of construction, to reduce
the appraiser's ability to produce a desired outcome. Thus, favoritism and personal bias
are diminished.
B-17
Construction of the Forced-Choice Tetrads: Forced choice rating elements are
sets of four phrases, or adjectives, pertaining to job performance or personal
qualifications. Generally, a six-step procedure is used in constructing the tetrads: (1)
Instruct a first group of appraisers who are familiar with the appraisees to write brief
essays which describe successful and unsuccessful fellow workers. These essays serve as
the source of the behavioral items relevant to the job (i.e., critical tasks). (2) Behavioral
items are extracted from the essays and put into list form. These items should cover all
important aspects of the job, and the number of items covering each aspect should be
related in some rational way to the importance of that aspect. (3) The list is distributed
to a second group. Each person in this second group is asked to select, from among
his/her peers, one person s/he knows well enough to confidently rate. For each item,
the rater assigns one of the following scores: "This item describes the appraisee (A) to
an exceedingly high or to the highest possible degree; (B) to an unusual or outstanding
degree; (C) to a typical degree; (D) to a limited degree; (E) to a slight degree; or (F) not
at all." The evaluator is then asked to rate the person being appraised on a scale
showing his/her position with respect to overall competence in a representative group of
20 workers of the same grade. (4) Lists are collected, arranged in order of rating of
overall competency, and separated into Upper, Middle, and Lower thirds. An analysis is
conducted to determine, in each of the 3 groups, the frequency with which each of the
rating alternatives was chosen for each item. (5) Based on the above analysis, two values
are statistically computed for each item:
B-18
1. The Preference Value: Indicates the degree to which raters tend to rate
others too high or too low on a particular characteristic.
2. The Discrimination Value: Indicates those items which differentiate
between a good and a poor worker. In other words, these adjectives are
truly indicative of the degree to which the items measure the
characteristic which they are intended to measure.
(6) Each tetrad consists of two pairs of adjectives or phrases; each pair consists of two
items which are equal in preference value, but differ in discrimination value.
Obviously, the rater is not aware which adjective or phrase is the preference word and
which is the discrimination word. Each tetrad consists of a pair of favorable words with
similar preference, but dissimilar discrimination, indices; and a pair of unfavorable
words with similar preference, but dissimilar discrimination, indices (see example
below).
Scoring: The ratee receives a positive score if:
1. The item which is most descriptive of him/her is a discriminating
desirable characteristic.
2. The item which is least descriptive of him/her is the undesirable
discriminating item (i.e., indicates poor job performance).
Read instruction sheet carefully before marking this section.
Section IV. JOB PROFICIENCY
MOST LEAST
A. Cannot assume responsibility
B. Knows how and when to delegate authority
C. Offers suggestions
D. Too easily changes his/her ideas
B-19
Section V. PERSONAL QUALIFICATIONS
MOST LEAST
A. Coolheaded
B. Commands respect by his/her actions
C. Overbearing
D. Indifferent
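A minimal sketch of the scoring rule, using the Job Proficiency tetrad above. The favorable/discriminating classification of each item is invented for illustration, since those indices are deliberately hidden from the rater:

    # Forced-choice scoring sketch: a point is earned when the "most
    # descriptive" choice is the favorable, high-discrimination item, and
    # another when the "least descriptive" choice is the unfavorable,
    # high-discrimination item.
    # item -> (favorable?, discriminating?)  -- hypothetical values
    TETRAD = {
        "Cannot assume responsibility":             (False, True),
        "Knows how and when to delegate authority": (True,  True),
        "Offers suggestions":                       (True,  False),
        "Too easily changes his/her ideas":         (False, False),
    }

    def score_tetrad(most: str, least: str) -> int:
        score = 0
        fav_most, disc_most = TETRAD[most]
        fav_least, disc_least = TETRAD[least]
        if fav_most and disc_most:        # chose the discriminating desirable item
            score += 1
        if not fav_least and disc_least:  # rejected the discriminating undesirable item
            score += 1
        return score

    print(score_tetrad("Knows how and when to delegate authority",
                       "Cannot assume responsibility"))  # 2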
L. FORCED DISTRIBUTION RANKING
Ranking techniques compare ratees' performance to that of others on the job or
in similar positions, as opposed to comparison against an absolute standard of
performance.
Forced distribution ranking is a comparative performance appraisal technique
where the rater places specific portions of the group of ratees into various categories
depicting different degrees of performance. The performance categories may be:
excellent, good, fair, poor and unacceptable. The rater is instructed, for example, to
allocate 10% of the ratees to the excellent category, 20% to good, 40% to fair, 20% to
poor, and 10% to unacceptable. The rankings are the result of the rater's subjective
opinion.
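A minimal sketch of translating the stated percentages into category quotas for a pool of ratees; the rounding rule (any remainder goes to the middle category) is our own assumption:

    # Forced-distribution sketch: fixed shares of the ratee pool are
    # allocated to each category (10/20/40/20/10 as in the text).
    QUOTAS = [("excellent", 0.10), ("good", 0.20), ("fair", 0.40),
              ("poor", 0.20), ("unacceptable", 0.10)]

    def category_counts(n_ratees: int) -> dict[str, int]:
        counts = {label: round(n_ratees * share) for label, share in QUOTAS}
        counts["fair"] += n_ratees - sum(counts.values())  # absorb rounding error
        return counts

    print(category_counts(37))
    # {'excellent': 4, 'good': 7, 'fair': 15, 'poor': 7, 'unacceptable': 4}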
M. PAIRED COMPARISON
Paired comparison is an appraisal technique in which each employee is compared
to every other employee to produce a ranking of employees on a particular trait.
B-20
The steps for developing the paired comparison technique include the following:
1. A chart is made of all possible pairs of employees to be evaluated. The
names of the employees to be evaluated are placed on a chart in a
predetermined order such that each employee is compared with every
other employee in the group.
2. A separate chart is constructed for each trait. The traits include such
things as quality of work, cooperation, creativity, quantity of work, etc.
3. For each comparison of pairs, the evaluator judges one employee as being
better than the other on a particular trait. If an employee is better than
the other a (+) is placed in the appropriate box and if an employee is
worse than the other a (-) is placed in the appropriate box.
4. The number of times an employee is judged as being better than the
other is tallied. For each chart the evaluator totals the number of +'s
in each column to get the highest ranked employee.
5. Then, based on the number of better evaluations (+) received, a ranking
of employees can be formulated. The employee with the greatest number
of +'s is ranked the highest on a particular trait, followed by the
next highest, and so on down to the employee with the fewest +'s, who
is ranked the lowest.
Example of Paired Comparison Rating for Tabulating Machine Operators.
Trait: ACCURACY. Which employee produces more consistently accurate
work? Which do you feel you do not have to check on much?
B-21
AS COMPARED
TO              ADAMS   BAKER   COOPER   DALTON   EMORY
ADAMS                     -       +        -        -
BAKER             +               +        +        -
COOPER            -       -                -        -
DALTON            +       -       +                 -
EMORY             +       +       +        +
The employees on the top row are compared, one by one, to each employee
in the left column. The appropriate mark is placed in each square to indicate the better
employee of the pair. For example, ADAMS is compared to BAKER. ADAMS is
chosen as the better employee, so a (+) is placed in the square. The number of +'s is
added up for each person and the results are as follows:
COOPER 4 (Ranked the highest)
ADAMS 3
DALTON 2
BAKER 1
EMORY 0
According to the ranking, COOPER would be the most accurate employee and
EMORY the least accurate employee.
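A minimal sketch of the tallying and ranking steps, reproducing the ADAMS-EMORY example above:

    # Paired-comparison sketch: for each pair the better employee's column
    # gets a +; employees are then ranked by their column totals.
    from itertools import combinations

    def paired_comparison_rank(better: dict[frozenset, str],
                               employees: list[str]) -> list[tuple[str, int]]:
        wins = {e: 0 for e in employees}
        for a, b in combinations(employees, 2):
            wins[better[frozenset((a, b))]] += 1
        return sorted(wins.items(), key=lambda kv: kv[1], reverse=True)

    employees = ["ADAMS", "BAKER", "COOPER", "DALTON", "EMORY"]
    order = ["COOPER", "ADAMS", "DALTON", "BAKER", "EMORY"]  # underlying merit order
    better = {frozenset((a, b)): min((a, b), key=order.index)
              for a, b in combinations(employees, 2)}
    print(paired_comparison_rank(better, employees))
    # [('COOPER', 4), ('ADAMS', 3), ('DALTON', 2), ('BAKER', 1), ('EMORY', 0)]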
N. MIXED STANDARD RATING SCALES
(Blanz, F. and Ghiselli, E. E. "The Mixed Standard Scale: A New Rating System."
Personnel Psychology, 1972, 25, pp. 185-199.)
B-22
Items representing good, average and poor performance on a given dimension are
mixed randomly with items representing good, average and poor performance on other
dimensions. Each item is rated as follows: + ratee is better than the statement; 0
statement fits the ratee; - ratee is worse than the statement. The rater is not told the
dimension being measured by the statement, nor the level of performance represented.
Dimension         Statement                                          Performance Rating

Job Knowledge     1. The officer could be expected to misinform the
                     public on legal matters through lack of
                     knowledge. (poor)                                        +
Relations         2. Officer carefully answers rookie's questions.
w/Others             (good)                                                   0
Job Knowledge     3. This officer never has to ask others about
                     points of law. (good)                                    -
Job Knowledge     4. This officer follows correct procedures for
                     evidence preservation at the scene of a crime.
                     (average)                                                0
MIXED STANDARD RATING SCALE SCORING

Statements
Good   Average   Poor   Points
 +        +       +       7
 0        +       +       6
 -        +       +       5
 -        0       +       4
 -        -       +       3
 -        -       0       2
 -        -       -       1
The officer in our example received: Good -, Average 0, Poor + for the job knowledge
dimension, or a score of 4.
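A minimal sketch of the scoring lookup, reproducing the points table above (including the implied bottom row):

    # Mixed-standard-scale scoring sketch: the (good, average, poor) pattern
    # of +/0/- marks for one dimension is looked up in the points table.
    POINTS = {("+", "+", "+"): 7, ("0", "+", "+"): 6, ("-", "+", "+"): 5,
              ("-", "0", "+"): 4, ("-", "-", "+"): 3, ("-", "-", "0"): 2,
              ("-", "-", "-"): 1}

    def mss_score(good: str, average: str, poor: str) -> int:
        return POINTS[(good, average, poor)]

    # The job knowledge example: better than the poor statement (+), fits the
    # average statement (0), worse than the good statement (-).
    print(mss_score(good="-", average="0", poor="+"))  # 4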
B-23
0. MANAGEMENT BY OBJECTIVES (MBO)
MBO is a process whereby the superior and subordinate members of an
organization jointly identify its common goals, define each individual's major areas of
responsibility in terms of results expected of him/her, and use these measures as guides
for operating the organization and assessing the contributions of each of its members.
MBO is a human system; a communication vehicle among the people involved in
it.
STRUCTURE
Roles and Mission --> Key Result Areas --> Indicators --> Objectives --> Action Plans --> Controls
Roles and Missions are stated by higher management; subordinates' goals reflect
their contribution toward the role and mission (sometimes stated in the annual plan or 5-
year plan).
Cascade of Goal-Setting Process
Board of Directors, and the Chief Executive
Division Vice-Presidents
Department Managers
Unit Managers
Individuals
The superior and subordinate meet and discuss objectives which, if met, would
contribute to the overall goals of the organization. They jointly establish objectives for the
subordinate.
Key Result Areas are major aspects of the job where there are results significant enough
to warrant specific attention. Examples:
staff development cost control management communication
unit production client contacts contract negotiations
B-24
Indicators are measurable factors within a key result area on which it is worthwhile to
set objectives or performance standards. Examples:
output per workhour        turnover        cost per unit output
actual vs budget           absenteeism     training participation
Objectives are statements of results to be achieved. Four elements make up each
objective:
1. action or accomplishment verb
2. single measurable key result
3. date or time period within which result is to be accomplished
4. maximum investment in money, workhours or both that we are willing to
commit toward accomplishment of the objective
Sample Objective: To reduce by 10% the cost of operation A by
January 1 at an implementation cost not to exceed 50
workhours.
Action Plans are the sequence of actions to be carried out in order to achieve the
objective. Action plans fix accountability.
Controls are the means by which the accountable manager will keep informed of
progress; the way of ensuring their accomplishment. Controls should be visual (charts,
graphs) and should provide for adequate visibility in a timely fashion so that required
action can be taken as soon as it is seen to be required.
SAMPLE
Roles and Mission: To produce competitive products
Key Result Area: Cost control
Indicators: Cost per unit of output
Objective: To reduce by 5% the cost per unit of output
of product A by July 1 at an implementation
cost not to exceed 50 workhours.
Action Plan: 1. Reduce waste 5% per unit output
(Production Manager)
2. Implement pre-production quality checks to
screen out minimum 1% unusable base units.
(Quality Control Supervisor)
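As a minimal sketch, the four elements of an objective can be carried as a simple record; the field names are our own:

    # MBO objective record sketch, populated with the sample objective above.
    from dataclasses import dataclass

    @dataclass
    class Objective:
        action_verb: str      # action or accomplishment verb
        key_result: str       # single measurable key result
        deadline: str         # date or period for accomplishment
        max_investment: str   # money and/or workhours we will commit

    sample = Objective(action_verb="reduce",
                       key_result="cost per unit of output of product A by 5%",
                       deadline="July 1",
                       max_investment="50 workhours")
    print(sample)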
B-25
APPENDIX C
PRIVATE SECTOR PERFORMANCE APPRAISAL INTERVIEWS
A telephonic interview survey was conducted with representatives of a sample of
large, well known industrial organizations. The purpose of these interviews was to
gather information about the performance appraisal systems in use in each of these
firms. Enclosure I is the interview guide used to conduct the interviews.
C-1
ENCLOSURE I TO APPENDIX C
PRIVATE SECTOR INTERVIEW GUIDE
Company:                                Contact:
Date:
1. Type and purposes of performance evaluation system
2. Process - who (rater supervisor, peers, committee)
- what (behaviors, outputs, performance, bottom line)
- when (timing)
3. Instruments/Forms
4. Feedback
5. Rater Training
6. Review Process
7. Controls
8. Additional information
C-2
APPENDIX D
INITIAL AIR FORCE INTERVIEWS
Early in the project, the Air Force OER study team conducted two series of
interviews with Air Force officers. The first of these series was with officers having
major responsibilities for the functioning of the OER system. The purpose of this series
was for the study team to learn more about how the Air Force conducts performance
appraisals and what issues are in the minds of the major players in the system. The
information received during the course of these interviews has been incorporated into
the body of this report in Section IV, Findings: Air Force Officer Evaluation System.
A list of those persons interviewed is at Enclosure 1, page D-2. The interview guide is
displayed at Enclosure 2, beginning on page D-3.
The second series of interviews consisted of nine focus groups conducted with
small groups of officers (6-8) of varying skills and grades. The purpose of these
interviews was to learn what attitudes about the OER systems are characteristic of a
larger spectrum of the Air Force officer corps. The identity of these focus groups is
displayed in the text of this report at Table II-1, page II-3. A summary of the
comments made in the course of these focus groups is at Enclosure 3, beginning at page
D-5. This summary is organized into fourteen topics. These topics were not restricted
to those identified in the interview guide, but rather those topics that developed during
the interactions among the focus group members. A copy of the focus group discussion
guide is at Enclosure 4, beginning on page D-25.
D-1
ENCLOSURE 1 TO APPENDIX D
AIR FORCE OFFICERS INTERVIEWED
Name                                    Organization
Lt. Gen. Thomas J. Hickey Deputy Chief of Staff for Personnel,
HQ, USAF
Lt. Gen. John A. Shaud Commander, Air Force Training Command
Maj. Gen. Ralph Havens Commander, Military Personnel Center
Maj. Gen. Donald D. Lambertson Assistant DCS, Research, Development
and Acquisition, HQ, USAF
Colonel Gary Clark DCS, Personnel, Air Force Training
Command
Colonel Charles Curran Military Executive to Assistant Secretary
of Defense (FM&P)
Colonel Lee Forbes Deputy Director, Secretary of the
Air Force Personnel Counsel
Colonel Vincent J. McDonald DCS, Personnel, Air Force Systems
Command
Colonel Donald Peterson Chief, Operations Officer
Assignments, Military Personnel Center
Colonel Paul E. Stein DCS, Personnel, Tactical Air Command
Colonel Michael Wright Chief, Mission Support Officer
Assignments, Military Personnel Center
Lt. Col. Donald R. Davie Chief, Officer Force Structure,
Office of the DCS, Personnel, HQ, USAF
D-2
ENCLOSURE 2 TO APPENDIX D
AIR FORCE (OER) PROJECT
INTERVIEW GUIDE
A. INTRODUCTION
1. PERSONAL INTRODUCTION
2. OVERVIEW OF HAY/SYLLOGISTICS BACKGROUND AND
CAPABILITIES
3. BRIEF DESCRIPTION OF PROJECT
a. Review and conceptual redesign of officer performance evaluation
system.
b. Three parallel efforts.
4. EXPLAIN FORMAT AND PURPOSE OF INTERVIEW
a. Unstructured, flexible format.
b. This interview has two major purposes:
1. Collect data about problems with and potential
improvements for the officer evaluation system.
2. Obtain information that will assist the project team in
conducting focus groups.
5. OBTAIN PERSONAL INFORMATION FROM INTERVIEWEE
a. Name, rank, pertinent demographics, and other relevant
information.
b. Primary mission/responsibilities.
c. OER-related functions or accountabilities.
B. TARGETED INFORMATION (data we would like to obtain)
1. INTERVIEWEE'S KNOWLEDGE OF OER SYSTEM
a. How long have you been in a position of accountability in relation
to the OER system?
b. What is your overall experience as a rater, additional rater,
indorser, etc.?
D-3
2. EFFECTIVENESS OF CURRENT OER SYSTEM
a. Is the OER system achieving its purposes as stated in Air Force
policy and regulations? If not, why?
3. ADVANTAGES OF CURRENT SYSTEM
a. What are some of the advantages offered by the evaluation system
currently in use?
4. DRAWBACKS
a. What are the main drawbacks of the officer evaluation system?
5. DIFFERENTIAL EFFECTS OF OER SYSTEM
a. Is the OER system more or less effective depending on rank?
b. Can any differences in OER system effectiveness be attributed to
the nature of the "job" within the Air Force? (e.g., pilots, staff
positions, scientific/technical occupations.)
c. Are there any other factors which affect the effectiveness of the
OER system?
6. OER IMPACT ON THE INDIVIDUAL
a. Does the individual receive a "fair shake" from the current
evaluation system?
7. OER IMPACT ON AIR FORCE ORGANIZATION
a. What is the overall impact of the OER system on the Air Force
organization?
8. IMPROVEMENT OF OER SYSTEM/PROCESS
a. What are your suggestions for improving the OER process?
9. IDENTIFICATION OF ISSUES
a. What are the key issues that need to be addressed in a project of
this nature?
b. Are there any other pertinent issues we have not covered in the
interview?
D-4
ENCLOSURE 3 TO APPENDIX D
SUMMARY OF FOCUS GROUP INTERVIEWS, BY TOPIC
TOPIC 1: Focus on Job Performance
GRADE: COMMENTS:
LT/CAPTAIN
(OPERATIONS) The OER should include specific flying-related items,
which directly reflect a pilot's duty performance.
The job description box is important and it should be
expanded. Perhaps the job description could be
written in bullet form reflecting major duties and
responsibilities.
The OER should have two sections: one section would
evaluate specific duty performance (e.g., flying) and
another section would evaluate "other things."
LT/CAPTAIN
(SUPPORT)
SENIOR CAPTAIN/MAJORS
(OPERATIONS) The job description section is one of the more
meaningful items in the OER form.
SENIOR CAPTAIN/MAJORS
(SUPPORT)
MAJOR/LT. COLONEL
(OPERATIONS) The human relations block is useless. Actual performance
of the job - flying, time in the vault - doesn't count on the
OER. People are learning that flying is not important
to the Air Force. Categories (on the OER) are not
appropriate to people in operations, so we look for
additional roles but often exclude primary duties.
MAJOR/LT. COLONEL
(SUPPORT) It is especially difficult to create "facts" for page one
in the case of young rated officers whose job consists
solely of flying-related tasks. Conversely, it is easy
for junior support officers to provide facts to
document performance factor scores. A solution is to
eliminate the narrative on page 1 of the form that
pertains to performance factors.
D-5
TOPIC 1: Focus on Job Performance (Cont.)
GRADE: COMMENTS:
LT. COLONEL
(OPERATIONS AND SUPPORT) Define officership; management vs. technical skills.
Current performance vs. management potential
emphasis should be defined.
Develop better performance standards for rating.
COLONEL
(OPERATIONS & SUPPORT) Most of the front side of the OER is not useful,
although the job description may be somewhat useful
and may be worth retaining.
"Credit for attendance" at PME or Master's program
does not reward what is best for the Air Force; should
rate on performance improvements resulting from the
education. There are difficulties in doing so, however,
including the time required to observe performance
change. PME and Master's are used as discriminators
by boards because they are easy to see, few other
discriminators can be found.
It is difficult to find culturally acceptable ways to
measure job performance; need to measure in terms of
output (performance), rather than input (PME, etc.).
GENERAL
D-6
TOPIC 2: Potential Rating
GRADE: COMMENTS:
LT/CAPTAIN
(OPERATIONS)
LT/CAPTAIN
(SUPPORT)
SENIOR CAPTAIN/MAJORS
(OPERATIONS)
SENIOR CAPTAIN/MAJORS
(SUPPORT)
MAJOR/LT. COLONEL
(OPERATIONS)
MAJOR/LT. COLONEL
(SUPPORT) The traits which should be measured in identifying
future leaders are: Initiative - ability to make things
happen; Situational Awareness; Integrity; Decisiveness,
and Knowledge.
LT. COLONEL
(OPERATIONS AND SUPPORT) Define officership; management vs. technical skills.
Current performance vs. management potential
emphasis should be defined.
COLONEL
(OPERATIONS AND SUPPORT)
GENERAL
D-7
TOPIC 3: Differences Across Grades. Rated/Non-Rated
GRADE: COMMENTS:
LT/CAPTAIN
(OPERATIONS) It is somewhat unfair to be rated with the same form
that is used to evaluate administrative duty officers.
The OER should be de-emphasized at the lieutenant level.
LT/CAPTAIN
(SUPPORT)
SENIOR CAPTAIN/MAJORS
(OPERATIONS) Some things, e.g., PME, Masters Degree, are very
important and this perception is supported by
promotion board statistics. Rated officers do not have
the opportunity to pursue these degrees.
There should be separate OER forms for rated and
non-rated officers. An officer suggested that they
also need separate promotion boards!
SENIOR CAPTAIN/MAJORS
(SUPPORT)
MAJOR/LT. COLONEL
(OPERATIONS) We need different forms for different grades, with more
general language for field grades. Possibly there should
be a form for rated/operations as compared to
support - but maybe not, for that would be tough on a
board.
MAJOR/LT. COLONEL
(SUPPORT) Junior officers do not necessarily need to be evaluated
on the same form as seniors. Also, semi-annual
reports are not necessary.
LT. COLONEL
(OPERATIONS AND SUPPORT) There is an ongoing debate about the performance
evaluation issue for rated vs. support officers.
Raters/supervisors feel that they are forced to create
acceptable additional duties as assignments for rated
subordinates for the sake of the OER when these
people should be devoting all their time to flying.
They do not like a form-driven system.
There should be two forms - rated and non-rated.
D-8
TOPIC 4: Administrative Burden
LT/CAPTAIN
(OPERATIONS)
LT/CAPTAIN
(SUPPORT)
SENIOR CAPTAIN/MAJOR
(OPERATIONS) Inefficiency - the OER requires too much effort for what
you get out of it. A lot of time is wasted writing and
proofing the OER, only to have the promotion boards
look at the bottom line (indorser).
SENIOR CAPTAIN/MAJOR
(SUPPORT)
MAJOR/LT COLONEL
(SUPPORT) The front page of the form is useless apart from the
job descriptions. (However, the numbers can be used
to eliminate sub-marginal officers.) Yet providing the
narratives takes hours of work and some creative
writing to prepare.
Preparation of the OER form is an administrative
burden on units and raters. On average, each form is
retyped more than four times, and raters spend endless
hours preparing narratives, both for substance and for
form. In addition, preparing the supporting
documentation required to secure the proper level of
indorsements adds substantially to the administrative
burden.
LT. COLONEL
(OPERATIONS & SUPPORT) The form takes too many hours to process for the
amount of time it is evaluated.
COLONEL
(OPERATIONS & SUPPORT) The OER requires too much effort and time to
complete for the benefits it provides; the burden is too
great.
The system is probably okay, if only the
administrative burden were reduced.
GENERAL
D-9
TOPIC 5: Contents of Promotion Folder
GRADE: COMMENTS:
LT/CAPTAIN
(OPERATIONS)
LT/CAPTAIN
(SUPPORT) Remove photograph from the file.
SENIOR CAPTAIN/MAJOR
(OPERATIONS)
SENIOR CAPTAIN/MAJOR,
(SUPPORT) Recommendations about the promotion and selection
system include placing a limit on how far back
promotion boards can look through folder. Also
recommend removal of photograph from file.
MAJOR/LT. COLONEL
(OPERATIONS)
MAJOR/LT. COLONEL
(SUPPORT)
LT. COLONEL
(OPERATIONS & SUPPORT) Remove the picture from the folder.
COLONEL
(OPERATIONS & SUPPORT)
GENERAL
D-10
TOPIC 6: Integrity and Honesty
GRADE: COMMENTS:
LT/CAPTAINS
(OPERATIONS) There is a lot of competition between MAJCOMs to
promote their own people. This problem is
compounded by the differences in numbers of grades
in the MAJCOMs.
LT/CAPTAINS
(SUPPORT) There are many questions about the integrity of the
system from a rater's viewpoint. Raters are hesitant to
rate less than 1 at any time; average performance is the
most difficult to rate, and there is concern over gaming
the system. From the rater viewpoint, it is the rater's
personal policy about the system that determines how
an OER is written. If the immediate supervisor
cannot be relied upon to write a good OER or to
obtain good indorsements, then the rater must be
visible to the supervisor's supervisor and get his/her
support.
Five of the eight officers have written their own OER.
SENIOR CAPTAINS/MAJORS
(OPERATIONS)
SENIOR CAPTAINS/MAJORS
(SUPPORT) There is a feeling that personal integrity is not
supported, and neither is the integrity of the promotion
system. There is a need to reward and recognize
leadership and willingness to stand up for convictions.
A simple personality conflict can ruin a career. To
protect the integrity of the system, there is need for
guidance from higher levels, such as a self-policing
system that would include periodic review,
reinforcement, and reemphasis of policies and
procedures.
MAJOR/LT. COLONEL
(OPERATIONS) OER's talk around the issues; one learns the words, but
they are not truthful, none of it is truthful. Inflation
is unreasonable. You are reading lies, almost useless
(as a way to understand an officer's performance
level). Senior leadership doesn't get an accurate word
picture. Nobody reads all the lies which are written.
D-11
TOPIC 6: Integrity and Honesty (Cont.)
GRADE: COMMENTS:
MAJOR/LT. COLONEL
(SUPPORT) Marginal performance is not documented. To get less
than the maximum (in numerical scores) an officer has
had to do something bad. However, the report is
coded so that marginal performance can be indicated
indirectly -- usually by saying "good but not
superlative."
LT. COLONEL
(OPERATIONS & SUPPORT) Some officers have had to write their own OER's,
while others feel that they have had to lie to maintain
careers or avoid hurting others.
Many believe that "the ratee is at the mercy of the
rater's eloquence" and that we're assessing the writing
abilities of the rater, not the person being reviewed.
There is a common knowledge of "the code" and how
to use it.
COLONEL
(OPERATIONS & SUPPORT)
GENERAL There is subtlety and "gaming" on the OER's that are
directed to the board, but these officers feel that they recognize
and see through the word picture to the facts.
D-12
TOPIC 7: Careerism
GRADE: COMMENTS:
LT/CAPTAIN
(OPERATIONS)
LT/CAPTAIN
(SUPPORT) Young officers feel it is necessary to learn the
unwritten guidelines of the OER and promotion
system. They also feel that it is extremely important
to "please your supervisor."
The OER is a vehicle for going up the promotion
ladder, but young officers must guide their own
careers.
SENIOR CAPTAIN/MAJOR
(OPERATIONS) Some things, e.g., PME, Masters Degree, are very
important and this perception is supported by
promotion board statistics. Rated officers do not have
the opportunity to pursue these degrees.
SENIOR CAPTAIN/MAJOR
(SUPPORT)
MAJOR/LT. COLONEL
(OPERATIONS)
MAJOR/LT. COLONEL
(SUPPORT) Can't focus (the OER words) on actual performance.
Front side is hard to use (to describe performance).
Officers write their own, they often don't know their
rater. We make up jobs for junior officers (in order
to have something to say about) communications - oral
and written.
OER has powerful impact on career, it encourages
careerism and I'm concerned about our ability to fight
a war. Everything is careerism, not an effort to do (a
job) well now; it's all related to promotion. Careerism
is not a function of the OER, other things are
promoting that, and it's not all that bad. To get
promoted you need to work hard, have a sponsor, get
a good job. Good personality gets a better rating.
You need PME and a Master's (to get promoted). It's
a discriminator. One needs to continue growing, (but
a) master's diverts from real job. Advanced education
should help you do your job. You can't get a master's
in an operational job. The (master's) programs are
easy because we couldn't otherwise get them (on a
part-time basis). PME in residence is more valuable
for promotion (than by correspondence) but all of
D-13
TOPIC 7: Careerism (Cont.)
GRADE: COMMENTS:
these schools and deployments, alert duty, etc., create
family problems. There is enough time to do these
things -- a few exceptions, but most people can do
these things.
LT. COLONEL
(OPERATIONS & SUPPORT)
COLONEL
(OPERATIONS & SUPPORT) "Credit for attendance" at PME or Master's program
does not reward what is best for the Air Force; should
rate on performance improvements resulting from the
education. There are difficulties in doing, however,
including the time required to observe performance
change. PME and Master's are used as discriminators
by boards because they are easy to see; few other
discriminators can be found.
GENERAL
D-14
TOPIC 8: Indorsement System
GRADE: COMMENTS:
LT/CAPTAIN
(OPERATIONS)
LT/CAPTAIN
(SUPPORT) The word picture and the level of indorsement are the most
important parts of the OER as it is used by promotion
boards. They believe there is a hidden quota system for
indorsements and that the commands control these systems.
SENIOR CAPTAIN/MAJOR
(OPERATIONS) The indorsement process is the controlling system in
the OER/promotion board process.
SENIOR CAPTAIN/MAJOR
(SUPPORT)
MAJOR/LT. COLONEL
(SUPPORT) The level of indorsement and the last sentence are all that is
important. The whole emphasis is potential.
Preparation of the OER form is an administrative
burden on units and raters. On average, each form is
retyped more than four times, and raters spend endless
hours preparing narratives, both for substance and for
form. In addition, preparing the supporting
documentation required to secure the proper level of
indorsements adds substantially to the administrative
burden.
There is a highly developed system for determining
indorsement levels, including printed justification
forms with the discrimination factors used. In the
form we observed, the factors include: PME, civilian
education (attained or in process), promotion
eligibility, and previous OER indorsement history.
Standards are specified for which reports will be
evaluated for higher level indorsement. These
standards are not uniform within a MAJCOM or within
the Air Force.
Wing commanders have a chance to identify higher
performers through indorsement level. However, they
also can "game" the system, inter alia. The problem
with indorsements as discriminators is not that higher
performers don't get tagged but that the system
doesn't discriminate well at the margin.
MAJOR/LT. COLONEL
(OPERATIONS)
D-15
TOPIC 8: Indorsement System (Cont.)
LT. COLONEL
(OPERATIONS & SUPPORT) Since a hidden quota system is used, bring this system
out into the open.
COLONEL
(OPERATIONS & SUPPORT) Major information-bearing sections are indorsements
and promotion recommendation.
Current indorsement system is equivalent to a quota or
control system except ratees don't know the rules.
GENERAL
D-16
TOPIC 9: Feedback to Ratee
GRADE: COMMENTS:
LT/CAPTAIN
(OPERATIONS) More feedback to the ratee is necessary.
LT/CAPTAIN
(SUPPORT)
SENIOR CAPTAIN/MAJOR
(OPERATIONS) More feedback about performance should be provided
to officers.
SENIOR CAPTAIN/MAJOR
(SUPPORT) The OER is not used as a feedback tool. This is
considered a weakness because they feel that there is a
need for some type of feedback and/or counselling
system.
MAJOR/LT. COLONEL
(SUPPORT)
MAJOR/LT. COLONEL
(OPERATIONS) OER is not effective as feedback (to the individual
officer). Can't provide (accurate) feedback because it
will kill him on assignments, promotions. It is a
morale boost (to read how well you are doing) but it
has nothing to do with improvement of performance.
We don't need the OER for counselling; the people we
have are told all the time. Forget the OER, we tell
them. Not much career guidance. The civilian
feedback system (in the Air Force) is not very good
either; it doesn't change performance. Low ratings don't
get rid of (the Air Force) civilians.
LT. COLONEL
(OPERATIONS & SUPPORT)
COLONEL
(OPERATIONS & SUPPORT)
GENERAL There was agreement that the OER is not a good
feedback tool.
D-17
TOPIC 10: Promotion Issues
LT/CAPTAIN
(OPERATIONS)
LT/CAPTAIN
(SUPPORT)
SENIOR CAPTAIN/MAJOR
(OPERATIONS)
SENIOR CAPTAIN/MAJOR
(SUPPORT)
MAJOR/LT. COLONEL
(OPERATIONS)
MAJOR/LT. COLONEL
(SUPPORT)
LT. COLONEL
(OPERATIONS & SUPPORT) There was discussion and consideration that the up
or out system may not be right for everyone in the
Air Force.
COLONEL
(OPERATIONS & SUPPORT) Point made that AF promotion system makes it too
clear to officer whether he is a "success" or a "failure"
each time he meets a board; those passed over feel
they have clearly failed. Canadian system, with
"fuzzy" promotion zones encourages pcople to keep
trying; being passed over does not destroy officer's
morale, because he has several chances for promotion.
Up or out system seen as part of the problem but
group unanimously rejected changing that system.
GENERAL It really doesn't matter how long you look at the file --
60 seconds or 5 minutes -- usually there is no difference
in the final result.
They feel that the "up or out" system should remain in
place because it is a motivating force and drives
competition within the service. The unfortunate side
is that it drives away quality people at the same time
that it polices the system.
D-18
TOPIC 11: Suggested Changes in OER Form
GRADE: COMMENTS:
LT/CAPTAIN
(OPERATIONS) A standard OER should be used for every non-
promotion year; when an officer is in the zone, a
"promotion" OER, which could be more specific and
detailed, would be written.
OER's should be simpler, shorter, and less
burdensome.
The OER should have two sections: one section would
evaluate specific duty performance (e.g., flying) and
another section would evaluate "other things".
LT/CAPTAIN
(SUPPORT) The recommendations for the form were to remove the
block ratings from the front of the form.
SENIOR CAPTAIN/MAJOR
(OPERATIONS) The first part of the OER -- except for demographics
and the job description -- should be eliminated.
(However, higher ranking officers in other focus
groups indicated that rating blocks are necessary
because they allow them to "kill" unfit officers.)
SENIOR CAPTAIN/MAJOR
(SUPPORT)
MAJOR/LT. COLONEL
(OPERATIONS)
MAJOR/LT. COLONEL
(SUPPORT) It is especially difficult to create "facts" for page one
in the case of young rated officers whose job consists
solely of flying-related tasks. Conversely, it is easy
for junior support officers to provide facts to
document performance factor scores. A solution is to
eliminate the narrative on page 1 of the form that
pertains to performance factors.
LT. COLONEL
(OPERATIONS & SUPPORT) Remove the front part of the form (after the job
description section).
COLONEL
(OPERATIONS & SUPPORT) Most of the front side of the OER is not useful,
although the job description may be somewhat useful
and may be worth retaining.
Should use narrative assessment by supervisor only;
difficulties discussed briefly.
D- 19
TOPIC 11: Suggested Changes in OER Form (Cont.)
GRADE: COMMENTS:
GENERAL Rework the front side of the OER forms, but
maintain discriminating factors for the Board.
D-20
TOPIC 12: Purpose of the OER
GRADE: COMMENTS:
LT/CAPTAIN
(OPERATIONS)
LT/CAPTAIN
(SUPPORT) Purpose of the OER -- the OER does not adequately
accomplish the task of school or assignment selection but
does work for evaluation.
SENIOR CAPTAIN/MAJOR
(OPERATIONS) Keep the large organizational picture in mind --
retention, morale, productivity -- when evaluating the
OER system.
SENIOR CAPTAIN/MAJOR
(SUPPORT) The purpose of the OER is questioned. There is a
need to clarify that purpose and then redesign the
OER form to accomplish that task.
Purposes of the OER -- the OER is not fully accomplishing its
objectives, particularly as they relate to identifying
individuals for promotion.
MAJOR/LT. COLONEL
(OPERATIONS)
MAJOR/LT. COLONEL
(SUPPORT)
LT. COLONEL
(OPERATIONS & SUPPORT)
COLONEL
(OPERATIONS & SUPPORT) Two major goals of the OER could be:
1) to provide information helpful for promotion
decisions.
2) to curb careerism by focusing the OER on assessment
of current job performance.
GENERAL
D-21
TOPIC 13: Controlled System
LT/CAPTAIN
(OPERATIONS)
LT/CAPTAIN
(SUPPORT)
SENIOR CAPTAIN/MAJOR
(OPERATIONS)
SENIOR CAPTAIN/MAJOR
(SUPPORT)
MAJOR/LT. COLONEL
(OPERATIONS) If any controls are introduced, they should be for new
lieutenants. Be careful not to shift the dissatisfaction,
making unhappy the people who are good rather than
those who are weak. The rumors about a new OER are
already hurting retention. Everyone is so critical of
the system, but a new system would be worse. We
don't adapt readily to new things.
MAJOR/LT. COLONEL
(SUPPORT) The quota of "potential" scores under the controlled
OER was a disaster; however, that system might have
worked if the percentages had not been so restrictive.
LT. COLONEL
(OPERATIONS & SUPPORT)
COLONEL
(OPERATIONS & SUPPORT) No clear answer to question of whether a new control
or quota system could be workable. Suggestion that
quotas be matched to promotion opportunities at each
grade.
GENERAL
D-22
TOPIC 14: Other Issues
GRADE: COMMENTS:
LT/CAPTAIN
(OPERATIONS) Approximately 90% of all flyers are good, solid pilots,
which makes differentiation even more difficult.
There is a lot of competition between MAJCOM's to
promote their own people. This problem is
compounded by the differences in number of generals
in the MAJCOM's.
LT/CAPTAIN
(SUPPORT)
SENIOR CAPTAIN/MAJOR
(OPERATIONS) The Canadian AF system - in which the ratee cannot
see his/her scores, but can see the comments - is a
good system.
The Army OER is a good system given that senior
officers' indorsements are tracked. (This system can
also be "gamed", however).
Keep the large organizational picture in mind --
retention, morale, productivity -- when evaluating the
OER system.
There seems to be a conflict between what is good for
the individual and what is good for the AF
organization as a whole.
SENIOR CAPTAIN/MAJOR
(SUPPORT)
MAJOR/LT. COLONEL
(OPERATIONS) The system is good but highly inflated. It doesn't
allow for a single mistake or a personality conflict
between rater and rated officer.
MAJOR/LT. COLONEL
(SUPPORT) There is a price to pay in designing a system that
identifies the best people explicitly. That price is
dissatisfaction and attrition among those not so
identified.
D-23
TOPIC 14: Other Issues (Cont.)
GRADE: COMMENTS:
LT. COLONEL
(OPERATIONS & SUPPORT) Though they feel that the OER is a good tool for
promotion to the major level, and that the right people
are being promoted, there is skepticism about the
system because of gaming. The unwritten code has
existed through the last 3 types of OER's.
There is an awareness that corporate culture drives the
promotion process. The Air Force culture and the
possibility of changing that culture is questioned.
LT. COLONEL
(OPERATIONS & SUPPORT) There is a question as to whether the OER itself is not
effective or whether the OER is a product of a system
that is not effective.
Provide training and guidance to raters from
higher level officers, and reinforce it.
COLONEL
(OPERATIONS & SUPPORT) A significant change in the OER system would require
a major cultural change in the Air Force. Current
problems with OER are culture-driven.
GENERAL The total needs of the Air Force are taken into
consideration.
Half of the Generals thought that the OER is a good
tool for communication about the individual.
They recognize that there are many officers who do
not understand the system.
It is the responsibility of supervisors to teach "the
system" to subordinates.
D-24
ENCLOSURE 4 TO APPENDIX D
AIR FORCE (OER) PROJECT
FOCUS GROUP GUIDE
A. INTRODUCTION
1. PERSONAL INTRODUCTION
2. OVERVIEW OF HAY/SYLLOGISTICS BACKGROUND AND
CAPABILITIES
3. BRIEF DESCRIPTION OF PROJECT
a. Review and conceptual redesign of officer performance evaluation
system.
b. Three parallel efforts.
c. HAY's private sector expertise.
4. GROUP MEMBERS INTRODUCTION
a. Allow everybody to briefly introduce themselves.
5. EXPLAIN FORMAT AND PURPOSE OF FOCUS GROUP
a. Format
1. Unstructured, flexible format.
2. Generate and discuss concepts and ideas.
b. Purpose
1. Explore the issues surrounding the OER process, in order
to gain a better understanding of the OER process.
B. GENERAL ISSUES
1. EFFECTIVENESS OF CURRENT OER SYSTEM
a. Is the OER system achieving its purposes as stated in Air Force
policy and regulations? If not, why?
1. Promotion.
2. Assignment.
3. Augmentation.
D-25
4. School selection.
5. Separation.
6. Feedback.
b. What purpose can an OER system legitimately fulfill?
2. STRENGTHS OF CURRENT SYSTEM
a. What are some of the strengths of the evaluation system currently
in use?
3. DRAWBACKS
a. What are the main drawbacks of the officer evaluation system?
4. DIFFERENTIAL EFFECTS OF OER SYSTEM
a. Does the OER system fit some groups more than others?
Probes - rank, job, time in grade?
5. OER IMPACT ON THE INDIVIDUAL
a. Does the individual receive a "fair shake" from the current
evaluation system? Why or why not?
6. WHAT PROBLEMS DO YOU FACE AS A RATER? HOW DO YOU
COPE WITH THEM?
7. IMPROVEMENT OF OER SYSTEM/PROCESS
a. How can the OER process be improved?
Probes - rating/writing, review process, training, roll out.
D-26
8. IDENTIFICATION OF ISSUES
a. What are the key issues that need to be addressed in a project of
this nature?
b. Are there any bases we may not have covered that we should?
Probes - in the focus group, in the project.
D-27
APPENDIX E
FEEDBACK INTERVIEW SUMMARY
Following the completion of the data collection phase of the study, the team
developed a preliminary set of OER conceptual designs. These designs were tested for
feasibility and desirability, in part, through a series of interviews with Air Force
officers of various grades representing the major commands. Enclosure 1 displays the
units of assignment and positions of the individuals interviewed; however, the names of
these officers have not been included in order to preserve the confidential context in
which the interviews were conducted. Enclosure 2, page E-3, shows the interview guide
used.
The results of these interviews were used in refining the preliminary designs into
the recommended conceptual designs discussed in Section V. A summary of the
interview results is displayed at Enclosure 3, page E-5.
E-1
ENCLOSURE 1 TO APPENDIX E
INDIVIDUALS INTERVIEWED
COMMAND/AGENCY POSITION
Air Force Communications Command Deputy DCS/Personnel (0-6)
Staff Division Chief (0-6)
Air Force Logistics Command DCS/Personnel (0-6)
Manpower Staff Officer (0-3)
Air Force Systems Command Deputy DCS/Personnel (0-6)
Logistics Staff Officer (0-5)
Air Force Training Command DCS/Personnel (0-6)
Military Airlift Command DCS/Personnel (0-6)
Squadron Commander (0-5)
Personnel Staff Officer (0-3)
Strategic Air Command DCS/Personnel (0-6)
Vice Wing Commander (0-6)
Squadron Commander (0-5)
Electronic Warfare
Officer (0-3)
Tactical Air Command Wing Commander (0-6)
Executive to Wing
Commander (0-3)
Military Personnel Center Director (0-6)
Director (0-6)
Personnel Staff Officer (0-4)
Personnel Staff Officer (0-3)
E-2
ENCLOSURE 2 TO APPENDIX E
FEEDBACK INTERVIEW GUIDE
I. Explain background of study and the fact that we are considering various
alternatives.
II. For each element presented, determine the respondent's reactions:
A. Positive, neutral or negative
B. If negative, reasons why
C. Whether positive or negative, any problems anticipated in
implementation
III. Elements to be presented
A. Having OER preparation set up as a computer-interactive process
with certain information computer-supplied to cut down on the
administrative burden.
B. Having pre-developed generic job descriptions to which
modifications are made by the rater.
C. Having an OER work sheet that is used to set future goals and
review past performance but does not become part of the OER
record. Its objectives would be to help in coaching a junior
officer and to develop a mutual understanding of performance
expectations.
D. Having a section on the OER form which requires the rater to
indicate one area in which a plan has been developed to enhance
the officer's effectiveness over the coming year. This would
include measurable objectives for the plan.
E. Having the rating officer identify the single strongest area of
performance for an individual.
F. Having an indorsing official indicate the ranking of the officer
against others in the same grade (for those rated at the highest
potential level).
G. Having the wing commanders or equivalent indicate the 10% of
each grade who are judged to be highest in potential.
H. Having performance factors rated for only the extremes.
I. Having a rater's rating history become part of his/her own
personnel file for consideration by his/her own commander in
rating the officer on "The Exercise of Leadership."
E-3
J. Having raters' total distributions of ratings for that grade appear
on all OERs that are part of the selection folder.
K. Having an indorser's rating history become part of his/her own
personnel file for consideration by his/her own commander in
rating the officer on "The Exercise of Leadership."
L. Having indorsers' total distributions of ratings for that grade
appear on all OERs that are part of the selection folder.
M. Eliminating all numerical ratings of performance, requiring
comments to document what the officer has actually done
(accomplished) in his/her job during the rating period.
N. Eliminating all numerical ratings of potential, retaining the current
system to assure that better performers receive higher levels of
indorsement.
O. Retaining a system which produces highly favorable ratings for
almost all officers so as to enhance morale and commitment.
P. Having separate OERs for company and field grade officers which
cover the same general factors, but provide different criteria
against which they are judged.
IV. Any other suggestions the individual might have for improving the OER
process.
E-4
ENCLOSURE 3 TO APPENDIX E
SUMMARY OF FEEDBACK INTERVIEW RESULTS
[Pages E-5 through E-8 contain a tabular summary of interview reactions to the design elements listed in Enclosure 2; the table is not legible in this copy and is not reproduced.]
APPENDIX F
OER FORMS USED IN THE SERVICES
This Appendix displays the forms used by the U.S. armed services, the U.S.
Coast Guard, the Foreign Service of the Department of State, and the Canadian Defense
Forces.
FORM TITLE PAGE
U.S. Air Force
Air Force Form 707, Officer ................................................ F-2
Effectiveness Report
U.S. Army
DA Form 67-8-1 ................................................................... F-4
OER Support Form
DA Form 67-8, Officer Evaluation Report ........................... F-6
U.S. Navy
NAVPERS 1611/1, Report on the.................... F-8
Fitness of Officers
U.S. Marine Corps
NAVMC 10835, USMC Fitness Report ............................... F-10
U.S. Coast Guard
CG-5312, Lieutenant Commander ........................................ F-12
Officer Evaluation Report
Foreign Service
Form DS-1829, U.S. Foreign Service ................................... F-16
Employee Evaluation Report
Canadian Defense Forces
CF 1417, Personnel Evaluation Report: ............................... F-21
Officers
F-1
I~~~ ! I
I I IIIj
AFR 36-10 Attachment 1, 26 October 1982. Effective 1 November 1982
SAMPLE
[Facsimile of AF Form 707, front side; the scanned copy is only partially legible. Recoverable content follows.]
I. RATEE IDENTIFICATION DATA (Read AFR 36-10 carefully before filling in any item): 1. NAME (Last, First, Middle Initial): SMITH, Jack II. 2. SSAN (Include Suffix). 3. GRADE: Captain. 4. DAFSC. 5. ORGANIZATION, COMMAND, LOCATION: 345 Tac Ftr Wg (TAC), Mt Home AFB, ID. 6. PAS CODE: MTOTDKLS. 7. PERIOD OF REPORT: FROM 13 Jul 81 THRU 31 Oct 82. 8. NO. DAYS SUPERVISION. 9. REASON FOR REPORT: Annual.
II. JOB DESCRIPTION. 1. DUTY TITLE: "Enter current and approved duty title as of the closeout date of the report (paragraph 2a, this attachment)." 2. KEY DUTIES, TASKS AND RESPONSIBILITIES: "Describe the type and level of responsibility, the impact, the number of people supervised, the dollar value of projects managed, and any other facts which describe the job of this particular ratee."
III. PERFORMANCE FACTORS (each factor marked from NOT OBSERVED or WELL BELOW STANDARD up through WELL ABOVE STANDARD):
1. JOB KNOWLEDGE (Depth, currency, breadth): "What has the ratee done to actually demonstrate depth, currency or breadth of job knowledge? Consider both quality and quantity of work."
2. JUDGMENT AND DECISIONS (Consistent, accurate, effective): "Does the ratee think clearly and develop correct and logical conclusions? Does the ratee grasp, analyze, and present workable solutions to problems?"
3. PLAN AND ORGANIZE WORK (Timely): "Does the ratee look beyond immediate job requirements? How has the ratee anticipated critical events?"
4. MANAGEMENT OF RESOURCES (Manpower, material): "Does the ratee get maximum return for personnel, material and energy expended? Consider the balance between minimizing cost and mission accomplishment."
5. LEADERSHIP (Initiative, accepts responsibility): "How has the ratee demonstrated initiative, acceptance of responsibility, and ability to direct and motivate group effort towards a goal?"
6. ADAPTABILITY TO STRESS (Stable, flexible, dependable): "How has the ratee handled pressure? Does quality of work drop off? Improve?"
7. ORAL COMMUNICATION (Clear, concise, confident): "How has the ratee demonstrated the ability to present ideas orally?"
8. WRITTEN COMMUNICATION (Clear, concise, organized): "How has the ratee demonstrated the ability to present ideas in writing?"
9. PROFESSIONAL QUALITIES (Attitude, dress, bearing): "How well does the officer meet and enforce Air Force standards of bearing, dress, grooming and courtesy? Is the image projected by the ratee an asset to the Air Force?"
10. HUMAN RELATIONS (Equal opportunity participation, sensitivity): "How has the ratee demonstrated support for the AF Equal Opportunity Program, and sensitivity for the human needs of others? Evaluation of this factor is MANDATORY."
AF Form 707 (previous editions are obsolete): OFFICER EFFECTIVENESS REPORT
F-2
Effective 1 November 1982. AFR 36-10 Attachment 1, 26 October 1982
SAMPLE
[Facsimile of AF Form 707, reverse side; partially legible. Recoverable content follows.]
IV. ASSIGNMENT RECOMMENDATION: 1. STRONGEST QUALIFICATION: Perseverance. 2. SUGGESTED JOB (Include AFSC). 3. ORGANIZATION LEVEL.
V. EVALUATION OF POTENTIAL: "Compare the ratee's capability to assume increased responsibility with that of other officers whom you know in the same grade. Indicate your rating by placing an 'X' in the designated portion of the most appropriate block." (Blocks are marked separately by the rater, additional rater, and indorser.)
VI. RATER COMMENTS: "Organize comments within the standards of good writing. Do not use headings, underline, or capitalize merely to add emphasis. Include those comments required by paragraph 3-15. Add any other comments not covered elsewhere and not excluded by paragraph 3-14 which will increase the value and meaning of the report. Amplify those positive aspects of the ratee's performance deserving special note."
Rater: JACK LAMB, JR., Lt Col, USAF, Operations Officer, 529 Bomb Sq (H) (SAC), Plattsburg AFB NY. 1 Nov 82.
VII. ADDITIONAL RATER COMMENTS (concur/nonconcur): "Review the ratings and comments of the rater for completeness and impartiality. If the additional rater does not concur with any rating in section III or V, or any comments, check the nonconcur block. To reflect disagreement, initial appropriate blocks (section III) and mark additional rater block (section V). Significant disagreement (para 2-26) requires justification."
Additional rater: FRANK HARRIS, Col, USAF, Commander, 529 Bomb Sq (H) (SAC), Plattsburg AFB NY. 2 Nov 82.
VIII. INDORSER COMMENTS (concur/nonconcur): "Review the ratings and comments of the rater and additional rater for completeness and impartiality. If the indorser does not concur with the additional rater's comments or ratings, check the nonconcur block. To reflect disagreement, initial appropriate block (section III) and mark indorser block (section V). Significant disagreement (para 2-26) requires justification."
Indorser: James M. Robinson, Col, USAF, Commander, 380 Bomb Wg (SAC), Plattsburg AFB NY. 4 Nov 82.
AF Form 707 (Reverse side.)
F-3
DA FORM 67-8-1, OFFICER EVALUATION REPORT SUPPORT FORM
(Read Privacy Act Statement and instructions on reverse before completing this form.)
[Facsimile; the scanned copy is only partially legible. Recoverable content follows.]
PART I - RATED OFFICER IDENTIFICATION: NAME OF RATED OFFICER: LANG, LESLIE R. GRADE: CPT. ORGANIZATION: B Btry, 3d Bn, 55th Arty.
PART II - RATING CHAIN - YOUR RATING CHAIN FOR THE EVALUATION PERIOD IS: RATER: REY, THOMAS A., LTC, Bn Commander. INTERMEDIATE RATER: (name, grade, position). SENIOR RATER: FOX, LARRY R., COL, Bde Commander.
PART III - VERIFICATION OF INITIAL FACE-TO-FACE DISCUSSION: An initial face-to-face discussion of duties, responsibilities, and performance objectives for the current rating period took place on ___ (see paragraphs 4-6 and 4-7). Rated officer's initials; rater's initials.
PART IV - RATED OFFICER (complete a, b, and c below for this rating period): a. State your significant duties and responsibilities (duty title; position code). b. Indicate your major performance objectives. c. List your significant contributions. Signature and date.
PART V - RATER AND/OR INTERMEDIATE RATER (review and comment on Part IV a, b, and c above): a. Rater comments; signature and date. b. Intermediate rater comments; signature and date.
DA FORM 67-8-1
F-4
DATA REQUIRED BY THE PRIVACY ACT OF 1974 (5 USC 552a)
1. AUTHORITY: Sec 301, Title 5, USC; Sec 3012, Title 10, USC.
2. PURPOSE: DA Form 67-8, Officer Evaluation Report, serves as the primary source of information for officer personnel management decisions. DA Form 67-8-1, Officer Evaluation Support Form, serves as a guide for the rated officer's performance and development, enhances the accomplishment of the organization mission, and provides additional performance information to the rating chain.
3. ROUTINE USE: DA Form 67-8 will be maintained in the rated officer's Official Military Personnel File (OMPF) and Career Management Individual File (CMIF). A copy will be provided to the rated officer either directly or sent to the forwarding address shown in Part I, DA Form 67-8. DA Form 67-8-1 is for organizational use only and will be returned to the rated officer after review by the rating chain.
4. DISCLOSURE: Disclosure of the rated officer's SSAN (Part I, DA Form 67-8) is voluntary. However, failure to verify the SSAN may result in delayed or erroneous processing of the officer's OER. Disclosure of the information in Part IV, DA Form 67-8-1, is voluntary. However, failure to provide the information requested will result in an evaluation of the rated officer without the benefit of that officer's comments. Should the rated officer use the Privacy Act as a basis not to provide the information requested in Part IV, the Support Form will contain the rated officer's statement to that effect and be forwarded through the rating chain in accordance with AR 623-105.
F-5
* sl I"F.[ F.As, NA."( MIDDLE INITIAL A~SNGRD
OAT( O AE
01 R-1 1, OE0
J. ,N. (0 STACOOK
VWN* %I
NAY(1SATION. ?IP CnDFCAO APO. MAJOR COMMAND REASON FOR suamissioN c.
COMO
COO[
tw~UCOEE
o OF .. MILPO a RATED OFFICER COPY ae ome g.. t
do" FORVWVAROINdGADDRESS
AAONT..S coo(
~,.. F I .. I I
r 2. 1F
ORWA RO CO VO0OF F IC
ER
tXPLTNATION Or NONRATEOI
1
EA.OOS
PART it - AUTI4EWTICAT1ON (Rated offal, lig....i... PART I doI..4 RArIUI; OFFJCJALS ON.LY)
* A.E O; flETjR f..I P,'*. ptipI'S
GRAoir PIIflANCIý %7AANIZATION. OUTY' ASSIGNMENT
. -AE 01INTERMEOIATE RATER (",I,. t-1. MI)uNf~ATPE_
T
GFRADE. ORANCM, ORGANIZATION. OUTY ASSIGNMEN4T
c NA%-E OF SENIOR RATER 11-I7. F.it. 011 SNflTR
GRADE. ORAIICH. ORGAFFIZAT-014 OUT,' ASSIGNMENT JDATE
d SIGNA!UAC OF AATEO OFFICER DATE DATE ENTERED ON 11 RATE D OFFICER 0. R MPO INITIALS ~.NO. OF
OA FORM 2.
2A CooINITIAS INCL
PART Oil DUITY DESCRIPTION fR..,I)
* FFIICIPA-. OVA'ý TITLE lb. STI;MOS
F.REFCR TO VAnT ilia OA FOFIM 67-S-I
a. PRafnESSINAL O PEI to.or~i
I$-
I
atth1 1.4-1,a theRP
IMP. aia- e Ii udapabl
the changinE LOW DEGREE
2. Mfroni.raie..caIIlopriale dcswlepaa anod etppnpau in [ahgge
li SeesIAJ-d
@rov~igtmendat
5. Pctforvns under physical and wai~ntal snt. 11. Pow.t.. malt&Iirlt WaingFFand aipposiane.
6. EncoIu.(ig candor aRd frainkflea in subordinatus 13. Suppolu EO/EV.O
7. Clear and concise in wfitLan communkation 14. CLOWsaw conda. In vaillcommunicaELovI
t' PROFESSIONAL ETHICS fCIRPR @1PT IP"mP late noted 01aWMM pontea4a411 .I.Ia-diti a'. _d. 1D~~..I
1. OFO.CATION
2 AtESONSISWILITv
3. LOYALTrY
A. DISCIPLINE
6 INTEGRITY
a MORAL COURAGE
7. S(I.PLEISNESS
6. MORAL STAND-
A RDS
DA 617 8 019PUACIES
CA, FORM 477 I IANI 1). 1101IERC
IS OSOLIYE.4fI NOV 15. US ARMY OFFICER EVALUATION CIPORT
F-6
[DA Form 67-8, reverse side; partially legible. Recoverable structure follows.]
PART V - PERFORMANCE AND POTENTIAL EVALUATION (rater): a. rated officer's name, and whether the officer is assigned in one of his/her designated specialties (yes/no); b. performance during this rating period (refer to Part III, DA Form 67-8, and Part IV a and b, DA Form 67-8-1); c. comment on specific aspects of the performance (refer to the same parts; do not comment on potential); d. this officer's potential for promotion to the next higher grade (promote ahead of contemporaries / promote with contemporaries / do not promote); e. comment on potential.
PART VI - INTERMEDIATE RATER: comments.
PART VII - SENIOR RATER: a. potential evaluation (senior rater profile; SR use only); a completed DA Form 67-8-1 was received with this report and considered in the evaluation and review (yes/no); b. comments.
F-7
[Facsimile of NAVPERS 1611/1, REPORT ON THE FITNESS OF OFFICERS (U.S. Navy); the scanned copy is largely illegible. Recoverable structure follows.]
Administrative data: name (last, first, middle), grade, SSN, UIC, date reported, occasion for report (detachment of officer, detachment of reporting senior, periodic, special), period of report, and type of report (regular, concurrent).
Narrative and evaluation sections: employment of command (continued on reverse side of record copy); reporting senior (last name, first initial, middle initial, grade, title); duties assigned (continued on reverse side of record copy); goal setting; a graded evaluation of performance traits with an evaluation summary; signature of the officer evaluated, acknowledging that he has seen the report of his performance and been afforded the opportunity to make a statement; signature of the reporting senior.
F-8
Reverse side: continuation of employment of command and duties assigned; reporting senior's comments.
F-9
[Facsimile of the USMC Fitness Report (NAVMC 10835), page 1; the scanned copy is only partially legible. Recoverable structure follows.]
Section A - administrative data: 1. organization; 2. Marine reported on (last name, first name, middle initial, grade, identification number, status); 3. occasion and period covered; 4. duty assignment; 5. special information (directives, qualifications, ID numbers); 6.-7. reserved for future use; 8. organized reserve drills; 9. dependents requiring transportation (number, location, address); 10. duty preference (code and descriptive title); 11. reporting senior; 12. special case (mark if applicable).
Section B - graded marks: 13. duty marks (performance, additional duties, administrative duties, handling officers (W.O., NCO), handling enlisted personnel, training personnel, tactical handling of troops); 14. personal qualities (attention to duty, cooperation, initiative, judgment, presence of mind, force, leadership, loyalty, personal relations, endurance, personal appearance, military presence, growth potential, economy of management); 15a. the reporting senior's estimate of this Marine's "general value to the service," with 15b. the distribution of marks for all Marines of this grade (boxes filled so that the sum of each column corresponds to item 15b); 16. considering the requirements of service in war, the reporting senior's attitude toward having this Marine under his command (not observed / prefer not / willing / glad / particularly desire); 17. whether the Marine has been the subject of commendatory or adverse/disciplinary reports (if yes, referenced in Section C); 18. report based on observation (daily / frequent / infrequent); 19. qualified for promotion; 20. recommendation for next duty; 21. reserved for future use. A concise appraisal of the professional character of the Marine reported on is recorded ("this space must not be left blank").
Certifications: 22. the Marine certifies that the information in Section A is correct to the best of his knowledge; 23. the reporting senior certifies that, to the best of his knowledge and belief, all entries made hereon are true and without prejudice or partiality; 24. the Marine acknowledges having seen the completed report and either makes no statement or attaches a statement; 25. reviewing officer (name, grade, service, duty assignment), initials, and date. Additional pages are stapled to the report.
F-10
USMC FITNESS REPORT, Page 2 (1610)
MARINE REPORTED ON (Last name) (First name) (M.I.) GRADE IDENTIFICATION NO. PERIOD (From) (To) OCCASION
REPORTING SENIOR'S CERTIFICATION
I certify that on the terminal date shown in Item 3 of Section A, I was the Reporting Senior for only those Marines of the same grade as shown in Item 15b of Section B. Those Marines are ALPHABETICALLY LISTED below. I rank this Marine as ___ of ___ (only rank Marines marked Outstanding in 15a and b; mark NA if not applicable).
NAME (Last, First, M.I.) MOS
REVIEWING OFFICER'S CERTIFICATION
1. I have not had sufficient opportunity to observe this Marine, so I have no comment.
2. I have had only limited opportunity to observe this Marine, but from what I have observed I generally concur with the Reporting Senior's marks in Items 15a and b.
3. I have had sufficient opportunity to observe this Marine, and concur with the Reporting Senior's marks in Items 15a and b.
4. I have had sufficient opportunity to observe this Marine, and do not concur with the Reporting Senior's marks in Items 15a and b. I would evaluate this Marine as ___ (Item 15a) and rank this Marine as ___ of ___ (only rank those evaluated as Outstanding (OS)).
REMARKS (mandatory if Item 4, above, is checked):
SIGNATURE ___ DATE ___
NOTE: The information above WILL NOT be entered into any computer program.
F-11
DEPARTMENT OF TRANSPORTATION, U.S. COAST GUARD
CG-5312 Page 1 (Rev. 6-84), OFFICER EVALUATION REPORT (OER) - LIEUTENANT COMMANDER
[Facsimile; the scanned copy is only partially legible. Recoverable structure follows.]
THE REPORTED-ON OFFICER WILL COMPLETE SECTION 1, ADMINISTRATIVE DATA: a. name (last, first, middle initial); b. SSN; c. grade; d. date of rank; e. unit name; f. district; g. OPFAC; h. OBC; i. status indicator; j. date submitted; k. date reported to present unit; l. type of report (regular, special, concurrent); m. occasion for regular report (semiannual, annual, detachment of officer, detachment of reporting officer, promotion); n. period of report; o. days not observed (PCS, TAD, leave, other); p. reported-on officer's signature.
THE SUPERVISOR WILL COMPLETE SECTIONS 2-7. In Section 2, describe the officer's duties, including primary and collateral responsibilities and their relationship to unit and Coast Guard missions. Then, for each of the rating scales in Sections 3-6, compare the officer's performance during the reporting period against the standards given and assign a mark by filling in the appropriate circle. In the area following each section, describe the basis for the marks, citing specifics where possible. Use only the allotted space.
2. DESCRIPTION OF DUTIES.
3. PERFORMANCE OF DUTIES: measures an officer's ability to get things done. Scales (each marked against written standards): a. BEING PREPARED (demonstrated ability to anticipate, to set realistic goals and priorities, and to be prepared for accomplishing tasks); b. USING RESOURCES (demonstrated ability to use people, money, material, and time efficiently, to manage well through others, to delegate, and to provide follow-up control); c. GETTING RESULTS (the quality and quantity of effective work accomplished and its impact); d. a responsiveness scale (title illegible) measuring the degree to which the officer responds to requests, deadlines, and changes in policy or requirements in a timely manner; e. COMMENTS (Performance of Duties).
F-12
[CG-5312 Page 1, continued; partially legible. Recoverable structure follows.]
f. COMMENTS (Performance of Duties, continued).
4. INTERPERSONAL RELATIONS: measures how an officer affects or is affected by others. Scales: a. WORKING WITH OTHERS (demonstrated ability to work with other people: respecting the views and ideas of others, being cooperative, keeping others informed, sharing credit, and resolving conflicts); b. HUMAN RELATIONS (the degree to which the officer treats others fairly and with dignity regardless of religion, sex, age, race, or ethnic background, carries out equal opportunity responsibilities, and supports the spirit of the Commandant's Human Relations Policy); c. COMMENTS (Interpersonal Relations).
5. LEADERSHIP SKILLS: measures an officer's ability to guide, direct, develop, influence, and support others in their performance of work. Scales: a. LOOKING OUT FOR OTHERS (the officer's sensitivity and responsiveness to the needs, problems, goals, and achievements of others); b. DEVELOPING SUBORDINATES (the extent to which an officer uses coaching and counseling and provides opportunities for growth to increase the skills, knowledge, and proficiency of subordinates); c. DIRECTING OTHERS (the officer's effectiveness in influencing or directing others in the accomplishment of tasks or missions); d. EVALUATING SUBORDINATES (the extent to which an officer conducts accurate, uninflated, and timely evaluations for enlisted, civilian, and officer personnel); e. COMMENTS (Leadership Skills).
F-13
CG-312 (Pae 2) (Rev. 6-64)
6. COMMUNICATION SKILLS: Meamire an offiers bwty to commmic in a potive. clear. and comln4V maInner.
a MPEAJfNG AND LUTMNMG: Countoutas as ht •is
kspb ed by fpes ck
l
dearly mnd-o1moUy. Gt- the i
,-,s alan-m aleicdad um-.
in 411 pointaamesar.
Spemaks~
1 ~o gffecivly end With new =
Alwayslislafdt and credible in both
No,.,well. un
ofcr spak AM•
btall i- in- a~~.u
milsssieuses als. rhsc inm-
both•
priate., -- A pbic-• artualas. gilelujmalinfedice
andim oo to, am,
divsduej. group, or public atsaatioaL Di o.
0 prst issibsitt confidence -hen Uses appropriate cramar-~
WW plots r
wpealkurin may be unprepsred. Latem pOW, ts% baa•o dtistraung manneriamr . GiCAs phass•es and pe•suade. L•ncuragee osthsm to
ly; doeasn't give Wenhr a chancie to speak. other a chance to speak. listens well. respond. is an sUsintowe
liesstner.
b. WRrIU•-G Writes material wh)ich ay be bard to Writs., clearly and simply. Mittelial ad. Celauueetly wnites material which isan 1ax.
undersetandi or damnet support conciu•si=n diems, subject. lows wel.,achieves intend atmple in blrevty, clarity. logical flow, and
How well an elar, conmmunicates through raldied. May use jargon or uViephr. ad purpose. Uses short aente4nc•spara- persuason.Tailmr wiucitt to audhietun..
Imroni material. rambling seltence•p•lracrupiN or Imart graphs, pereonal prionouns, eand the active tg approprte rooeerstaooal syle Wit.
gromm. asru••ire, format May oversae rm-. Av-ids buresuatoc. jarlgon Senwork neve _edok I.o
the poenve oice. Ownwot or
•Uhat d - or big words when little ones will do Own uborduisate mmw.s =m high sAtdard
dinata often needs corredjron or rewritef week or that of subordinates rarely needs
coriection or rewrite.
Of___ 0 a) a) 0 0 (D _
i. AR11CULAT•NG IDEAS May hiave valid We but lacks orIJIManiati Expresss idea nd oncepta in sioorgonit. Peadily iebshsim ecdsbility. C•ncm. Par.
or a confident delivery. MAY argu rather ad.undervtAndsibla manoter. Pointsout pro's suasive. and st=ng. Delivers Ides" with
Ability to contribute idess, to diatiseelsusss then diecuss; or may intaryset inelevant and con's. Urns soud resnig cinvinong[ogit anbl comments err"
an expem thoughts clarly. ooherit:ly. camment•. Contributes litle tha. isgs- off subjsect-L
Ieeptiv• toides ooters. C T hins c
iung, thovugh, Clearly states key
and eo1emporniscualy. is -ll or large or use••.il Unrectptive to ideas ao ethm's speak well "-Ofthe cuflf." iue-. ad quenin. BuAids on thtei
up brleeirrr m-ebtmin. O- t "ink's ' •eilon feet" in alI
d. COMMENTS (Communication Skills):
7. SUPERVISOR AUTHENTICATION
a. SIGNATURE c. SSN d. TITLE OF POSITION e. OATE
-7
IP
1 6,1 1iz
j THE REPOR1 ING OFFICER WILL COMPLET6 SECTIONS 8-13. in SectionS . Comment on the Suosritiear &valuation of this offica, lOittlna. Teo
n for ec of d, raig
siicaL-, In Section 9 and 10 cor'ip•te this officer against the etandaildl shovrri and
'osign a mriark by filling in the spPioctiil. C10it
a. In the ieas following each ieatiion desýnbe the bain lot me
marir given citing specifics wfiatt pouoible. tine only eliotled spate Complete SectIonsl 1I, ¶2 end 13
8. REPORTING OFFICER COMMENTS
SPERSONAL QUALITIES: Measures selected qualities wvhich illustrate the character of the individual.
£thr'l.AfIl'E Tend., to poe'.pone needed iction Ira- ! Get thinor, done Alosyt etrn~rs to do the OrrDns~t-a, nlurtures. promotes. or brngs
,I
plementa charie only &then confronted by
v ob better Makes improvenrents. "works shout new ideas, Lethod.,
or priactices, which
iemonstrated iblity to me rofr'ward. make necessity or directed to do eo.Often over smnrter. not harder - Sellstarterr, not afraid result in significa tiprovement to unt
changes. and to week responsibility iiLhout eastn
by eveniss May supprem inuatite of or mokint meitakeh Suppots ner andior Coast Guard Doe ot promote N.O
gtidanicw and supervision subor-dinatei May he non.supportive of ideaa.newL.od.opractice* and efforts of other, change for take of change Malkes woe
chatters directed by higher authonriy. to bnnC tibout constructive chanre Takes thwhile idraiipractires work w.hen osthers
ticely corrective sction toa rvoid.reiolve may have given up Always takes poisiUve
problerru acion well in advrncr:
q G) 0
0 (D
0
b JJDGMENT: May "nt show sound Iogic or common sense Dirmonsurvat. analytical thought and cemn Combintes keen analytical thought aid to.
in maJung diffdcult decisions. Somstimes ats •on sen.sei•i making proper deci-1ioi. Uses eight to make urtilo end omionul dae.
Demonstrhted abilityrto a at 1undde'm. too quickly or tou late. gets hung up in i cisand experience snd oinido, the inn. s
Isioe. lc-seeson th
e ms lnemd st
stuon
anm mkis sound recommedti onsLaO
by dataije. orboy
69WS ker elements
kl toaltis pact o( aluternatives
t
oghs nlib, cost. and relevant iliforUntaion. own in o•oplex ItU"-
using expenasce. common sen0e. end aise wironglI h tilseconaidor-atiogn Mekes sound decisions lon. Always dos the "right" thing at t
anaiy1ca though in the decia-on proce s a Utmely fashion with the olst informs --right tme.
tion evailable.
c-REPONSWCLIT.: Umiallyceaintisdpended upon to dot•s right Posmess high standard of honor and i. Uniesnmemang in midase otbf-- and ka-
thing Normally ocountable for own work tegrity Holds iself nd subordinates tun. tegrity. Placme goals
Oa Coas Guard above
flmoicnitmted mn"'
mitent tatgettingtkejob May awrvt lees thant sittlactlory, work or tabl Kep ommritsinentasteve when un. peierada
aabibtics sadi gels.. "Goss ths as.
os ad to bold oee's self ac"ountable for tolerate ndtftorsom. Teod ai to get involv. cmifortable or ddiffcult to doIn Speaks up Lre mile. and mare.' Always holds esVand
mw
anldsubordinsates' actions. ccrtneeg of ed or speak up Provides miniml support for when necessary. even if postion is un sibordesiatse accontable fopruidm and
umvri.nloL ability to ompt decasons coc- ded&isonsi
men to own kIldas. popular. L•yal to Coast Guard. Suppots tiouniea the eciragei In h Willl
tuary to own vnies andsin
mks thsem worit. ergianhethosial polcsaridedsdocep which may stand up aned
ho ---inAd fi-med -nL.
bs siontor its,ewv Ideas la eenu ps~la -olmeafdedsweerk.
__ __ 01 G
00 0 _D _
d. STAhMIA. Pill'umn
(a csI marginal leader strrni PesfaMA" ifIs
eusafred sundier straes or Pesfarmanos"
seethes an ima
iially high
or during pseudo a
of e Wendd
tr. May dsh'arigis ofextenided work with no lees lee hna4ie rdsa perds of
1'lle
orb'sia's ability to think and ac rt
fs make pooir
decilsosem
irevaook key faIos of prductleuity or msafty. Works amtr houra extsnded work. Can wor haLew" sea
lively sander waodi"iac that are witmlul ficu so0wreng priorities. or wase= ngh bat
when mO to get thne
job dones.
Stays several days and stUl remain very, proedw.
aedtesr enontally or physically fatiguiiing esafety
constidera~tions. Belka at putldung
in mel when the peesmiure
Is see ties and Marc. IThrivess =ar st1"Nsull
neci' r
av .
ti comes rattled In time situstiose
eswsitive stressul situstleews
___ __ _ 0 0 0 G0 0 00
e. SOBRIETY: (The extent to which an officer exercises moderation in the use of alcohol and avoids bringing discredit on the service through its use.)
Below standard: Use of alcohol sets a poor example or results in reduced job performance. May bring discredit through intemperate use and alcohol-related incidents, on or off duty. Does not seek help for people with alcohol-related problems; tolerates intoxication.
Standard: Uses alcohol judiciously and in moderation, or not at all. Job performance is never affected by the use of alcohol, and no discredit is brought on the service by its use. Does not tolerate intoxication by others. Supports alcohol education and seeks help for those with alcohol-related problems.
Above standard: A driving force in preventing alcohol-related problems. Creates a climate that discourages alcohol abuse and encourages those with alcohol-related problems to seek help.
f. COMMENTS (Personal Qualities):

10. REPRESENTING THE COAST GUARD: Measures an officer's ability to bring credit to the Coast Guard through looks and actions.
a. APPEARANCE: (The extent to which an officer meets standards of dress and physical appearance, in self and others.)
Below standard: May not always meet grooming standards in uniform or civilian attire, or present a physically trim appearance. Requires reminders to conform to standards. Does not hold subordinates to standards.
Standard: Neat and well groomed in uniform and civilian attire. Clearly meets grooming standards, conforms to uniform standards, and maintains a physically trim appearance. Holds subordinates to the same standards.
Above standard: Always immaculate in appearance. Demonstrates taste and care in wearing uniform and civilian attire. Has a smart, physically fit military appearance. Sets an example that motivates subordinates and others to do the same.
b. CUSTOMS AND COURTESIES: (The extent to which an officer observes military traditions, customs and courtesies, and requires subordinates to do the same.)
Below standard: Occasionally lax in observing military customs, courtesies, and traditions. May not convey tact when dealing with others. Tolerates lax behavior on the part of subordinates.
Standard: Correct in rendering military customs and courtesies. Conveys tact when dealing with others and requires subordinates to conform. Treats people with courtesy and consideration, and ensures subordinates do the same.
Above standard: Always precise in rendering military courtesies; inspires subordinates to do the same. Exemplifies the finest traditions of military customs, etiquette, and protocol. Goes out of the way to ensure polite, considerate, and genuine treatment is extended to everyone, and insists subordinates do likewise.
c. PROFESSIONALISM: (How an officer applies knowledge and skills in providing service to the public; the manner in which an officer represents the Coast Guard.)
Below standard: May be misinformed about, or unaware of, Coast Guard policies and objectives. May bluff rather than admit ignorance. May be ineffective when working with others. May lead a personal life which infringes on Coast Guard responsibilities or image.
Standard: Well versed in how Coast Guard objectives, policies, and procedures serve the public; uses them effectively to enhance the image of the Coast Guard. Straightforward, cooperative, and evenhanded in dealing with the public and government. Aware of the impact and impression actions may have on others. Supports Coast Guard ideals and leads a personal life which reinforces the Coast Guard's image.
Above standard: Recognized as an expert in Coast Guard affairs. Works creatively and confidently with representatives of the public and government. Inspires confidence and trust, and clearly conveys dedication to Coast Guard ideals in public and private life. Leaves everyone with a positive image of self and Coast Guard.
d. DEALING WITH THE PUBLIC: (How an individual acts when dealing with other services, agencies, business, the media, or the public.)
Below standard: Appears ill at ease with the public or media. Inconsistent in applying Coast Guard programs to the public sector. Falters under pressure. May take an antagonistic or condescending approach, make inappropriate statements, or embarrass the Coast Guard in some social situations.
Standard: Deals fairly and honestly with the public, media, and others at all levels. Responds promptly and shows no favoritism. Doesn't falter when faced with difficult situations. Comfortable in social situations. Is sensitive to concerns expressed by the public.
Above standard: Always self-assured and in control when dealing with the public, media, and others at all levels. Straightforward, impartial, and diplomatic. Applies Coast Guard rules and programs fairly and uniformly. Has unusual social grace; responds with great poise to provocative actions of others.
e. COMMENTS (Representing the Coast Guard):

11. LEADERSHIP AND POTENTIAL: (Describe this officer's demonstrated leadership ability and overall potential for greater responsibility, promotion, special assignments and command. Comments should be related to those areas for which the Reporting Officer has the appropriate background.)

12. COMPARISON SCALE AND DISTRIBUTION: (Considering your comments above in line a., compare this lieutenant commander with others of the same grade whom you have known in your career.)
UNSATISFACTORY - A QUALIFIED OFFICER - ONE OF THE MANY COMPETENT PROFESSIONALS WHO FORM THE MAJORITY OF THIS GRADE - AN EXCEPTIONAL OFFICER - A DISTINGUISHED OFFICER

FOR HEADQUARTERS USE ONLY
13. REPORTING OFFICER AUTHENTICATION
a. SIGNATURE  b. GRADE  c. SSN  d. TITLE OF POSITION  e. DATE

14. REVIEWER AUTHENTICATION  [ ] COMMENTS ATTACHED
a. SIGNATURE  b. GRADE  c. SSN  d. TITLE OF POSITION
F-15
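The Coast Guard form above pairs a forced comparison scale with a block reserved for headquarters use, which makes it possible to monitor each rater's marking distribution centrally. Purely as an illustration of that idea - it is not part of the form - the following minimal Python sketch tallies one rater's marks on the five comparison categories into a percentage profile; the category strings, the function name distribution_profile, and the sample data are all hypothetical.

    from collections import Counter

    # Hypothetical comparison-scale categories, lowest to highest, as
    # they appear on item 12 of the Coast Guard form reproduced above.
    CATEGORIES = [
        "Unsatisfactory",
        "A qualified officer",
        "One of the many competent professionals",
        "An exceptional officer",
        "A distinguished officer",
    ]

    def distribution_profile(marks):
        # Tally one rater's comparison-scale marks into percentages.
        counts = Counter(marks)
        total = len(marks) or 1  # guard against an empty mark list
        return {cat: 100.0 * counts[cat] / total for cat in CATEGORIES}

    # Invented sample: a rater who places nine of ten officers in the
    # two highest blocks, producing a top-heavy profile.
    sample = (["An exceptional officer"] * 6
              + ["A distinguished officer"] * 3
              + ["One of the many competent professionals"])
    for category, pct in distribution_profile(sample).items():
        print(f"{category:42s} {pct:5.1f}%")

Run as written, the sketch prints the share of marks in each category, making a top-heavy marking pattern (here, 90 percent in the two highest blocks) immediately visible to a reviewing headquarters.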
See Instructions Before Completing
(September 1985)
May be reproduced. Two-sided copies must be head-to-foot as original form.
NAME OF EMPLOYEE BEING RATED
(surname first)
U.S. FOREIGN SERVICE
EMPLOYEE EVALUATION REPORT
TYPE OF REPORT GRADE SSN
REGULAR-- CAREER CANDIDATE - VOLUNTARY __ POSITION TITLE
INTERIM Change of rater - duties - assignment
POST OR ORGANIZATION PERIOD COVERED
From To
RATER (type name) REVIEWER (type name)
TITLE: GRADE: TITLE: GRADE:
I. EMPLOYEE'S JOB AND WORK REQUIREMENTS (Established by Rater, Reviewer, and Employee)
A. Describe the position and where it fits in the staffing pattern; indicate the number and kind of employees supervised.
B. Divide work requirements into two categories, continuing responsibilities and specific objectives (including, as appropriate, professional development activities); delineate in descending priority order. Include specific requirements relating to needs of other agencies.
C. Describe any special circumstances influencing the work program.
F-16
FORM DS-1829  When completed on Foreign Service personnel, this is an efficiency report which shall
September 1985  be subject to inspection only by those persons authorized by Sec. 604 of the Foreign Service Act of 1980.
FORM DS-1829 Page 2
II. EVALUATION OF OVERALL PERFORMANCE AND ACCOMPLISHMENT (Completed by Rater)
A. General Appraisal:
SFS Members, Adjustment of Salary Level - Performance was excellent or better  [ ] Yes  [ ] No
All classes - Performance was satisfactory or better (If no, see instructions for documenting unsatisfactory performance.)  [ ] Yes  [ ] No
B. Discussion: Performance - strengths and weaknesses - is evaluated in terms of the five competency groups listed below. (See instructions for definitions.)
All groups must be discussed, with at least one competency from each group. Support assessment with examples of what and how work was done.
1. Substantive Knowledge (degree and level of functional and/or area skills and knowledge, including, where appropriate, technical career skills)
2. Leadership (presence, effectiveness in oral communication, foresight, positiveness, and negotiating skill)
3. Managerial Skills (interest in improving systems, concern for influence, objectivity of purpose, self-control, achievement orientation, and operational effectiveness)
4. Intellectual Skills (conceptual ability, logical thinking, understanding of authority relationships, skill in written communication, language skills, and cultural sensitivity)
5. Interpersonal Skills (EEO leadership and sensitivity, social sensitivity, teaching skill, counseling skill)
F-17
FORM DS-1829 Page 3
III. EVALUATION OF POTENTIAL (Completed by Rater)
A. General Appraisal: (Check block that best describes overall potential)
1. For Career Candidates only: Assessment of career potential as a Foreign Service Officer or Foreign Service Specialist:
[ ] Unable to assess potential from observations to date
[ ] Candidate is unlikely to serve effectively even with additional experience
[ ] Candidate is likely to serve effectively but judgment is contingent on additional evaluated experience
[ ] Candidate is recommended for tenure and can be expected to serve successfully across a normal career span
2. For other Foreign Service employees:
[ ] Shows minimal potential to assume greater responsibilities
[ ] Has performed strongly at current level but is not ready for positions of significantly greater responsibility at this time
[ ] Has demonstrated the potential to perform effectively at the next higher level
[ ] Has demonstrated potential to perform effectively at higher levels
[ ] Has demonstrated exceptional potential for much greater responsibilities now
B. Discussion
1. Potential is evaluated in terms of the competency groups listed in Section II. Cite examples illustrating strengths and weaknesses in competencies most important to your judgment.
2. For career candidates, discuss potential for successful service across a normal career span; for Senior Foreign Service, discuss potential for highest and broadest responsibilities; for all others, discuss potential for advancement.
C. Areas for Improvement: The following must be completed for all employees. Employees should be made aware of areas where they should concentrate their efforts to improve. Based on your observation of the employee in his/her present position, specify at least one area in which he/she might best direct such efforts. Justify your choice. (The response is not to be directed to need for formal training.)
F-18
FORM DS-1829 Page 4
IV. RATING OFFICER'S COMPLIANCE STATEMENT
Requirements were established by rater, reviewer, and employee on ________.
If applicable, requirements were revised on ________.
Employee's performance was discussed (candidate was counseled) on the following dates:
1. ________  2. ________  3. ________  4. ________
In the case of an unsatisfactory performance rating, this is also to certify that the requirements of 3 FAM 521.2e (tenured employees), 3 FAM 557.5b(2) (employees subject to administrative promotion), 3 FAM 577 (FSO Career Candidates) or 3 FAM 587 (Specialist Career Candidates) have been met.
Date Rating Completed ________
(Rater's Signature)
V. REVIEW STATEMENT (Completed by Reviewer)
A. Discussion: Give your assessment of the employee's performance and potential (if a career candidate, overall potential to serve effectively at all levels across a normal career span, including FS-1 if an FSO candidate). If possible, support your evaluation by providing additional examples of performance observed this rating period. Note differences with the rater's appraisal or recommendations. Comment on relations between rater and employee.
B. Reviewing Officer's Compliance Statement:
After reviewing this report carefully, I consider it to be complete, in conformance with the instructions, and adequately documented by specific examples of performance.
Date Section V Completed ________
(Reviewer's Signature)
F-19
FORM DS-1829 Page 5
VI. STATEMENT BY RATED EMPLOYEE
A. Discussion: This section is intended to provide the rated employee's views on the period of performance appraised and on career goals and objectives. You must comment on your most significant achievements during the period. You also may wish to address activities or problems which may not have been adequately covered in the report, or aspects of the appraisal which may need clarification or correction. You are encouraged to state your current career goals, including training and assignments desired over the next 5 years. (Continuation sheets may be used.)
B. I acknowledge receipt of a copy of this report.
Date Section VI Completed
(Employee's Signature)
VII. REVIEW PANEL STATEMENT (Completed by Review Panel)
A. Examples of Performance: Specific examples have been provided to support the ratings given the employee. Yes [ ] (If not, return to rater for rewrite.)
B. Certification: This report has been prepared according to the regulations and contains no inadmissible material.
(Date) (Panel Signature)
C. Comments: (If submitted late, indicate who is responsible for delay.)
VIII. SUBMISSION CONTROL
RECEIVED IN POST/BUREAU  DATE  RECEIVED IN PER/PE  DATE  RELEASED TO DEPARTMENT FILES
F-20
(when any part completed)
National Defence - Défense nationale
PERSONNEL EVALUATION REPORT - RAPPORT D'APPRÉCIATION DU PERSONNEL
Officers - Officiers
Surname - Initials - SIN - Rank - MOC

General

1. The Personnel Evaluation Report (PER) - Officers is designed to provide information for use at NDHQ in selecting officers for promotion, development, training, employment, retention and release. It consists of two parts to be used as follows:
a. CF 1417 for reporting on all officers; and
b. CF 1418 for additional reporting on all officers of Colonel rank and below (see Annex A to CFAO 26-6 for special procedures for officers in a foreign establishment, international staff, or seconded positions).

2. Detailed orders and instructions for completing the PER are contained in the following references:
a. CFAO 26-6, Personnel Evaluation Reports - Regular and Reserve Force Officers, which prescribes the policy and orders with respect to general reporting responsibilities, reporting channels, occasions for completing PERs, and other administrative orders pertaining to the submission of PERs.
b. A-PC-268-000/IS-000, Personnel Evaluating and Reporting - Officers, which provides detailed instructions for completing the PER.

To be a valid career document the PER must be completed accurately. It is imperative, therefore, that reporting and reviewing officers read and understand the detailed instructions in A-PC-268-000/IS-000 before commencing an evaluation.
F-21
SECTION 1 - PERSONAL INFORMATION
a. Marital Status
b. Dependent Children (names/ages/school/language of instruction)
c. Location of Dependents
d. Date Moved
e. Factors Affecting Future Postings
f. Geographical location preferred for next posting
g. Type of employment desired on next posting
[the remaining items in this section are illegible in this copy]
PERSONNEL EVALUATION REPORT - OFFICERS - RAPPORT D'APPRÉCIATION DU PERSONNEL - OFFICIERS

DIRECTIONS FOR MARKING RESPONSE SPACES
a. Use BLACK lead pencil only.
b. Make heavy black marks that fill the circles.
[the remaining directions and the sample marks are illegible in this copy]

SECTION 2 - IDENTIFICATION OF OFFICER REPORTED ON
Surname, initials, SIN and MOC, recorded in machine-readable response circles.

SECTION 3 - DETAILS OF REPORT
Type of report, recorded in response circles.

SECTION 4 - IDENTIFICATION OF REPORTING OFFICER
UIC, SIN and date, recorded in response circles.

SECTION 6 - IDENTIFICATION OF REVIEWING OFFICER
UIC, SIN and date, recorded in response circles.

F-23
a. DESCRIPTIVE TITLE OF PRIMARY JOB
b. SECONDARY DUTIES (by descriptive title only)
c. RANK FOR POSITION (LT - CAPT - MAJ - LCOL - COL: response circles)
d. RATED OFFICER'S RANK (LT - CAPT - MAJ - LCOL - COL: response circles)
e. TIME IN JOB (months: response circles)
f. PERIOD OBSERVED (months: response circles)
SECTION 7 - COMPARATIVE ASSESSMENT
(7-1 Reporting Officer; 7-2 Reviewing Officer. Each factor is marked on a Low - Normal - High scale of response circles.)

PERFORMANCE FACTORS
1. Accepted responsibilities and duties
2. Applied job knowledge and skills
3. Analysed problems or situations
4. Made decisions and took action
5. Made plans and preparations
[the remaining performance factors and their response circles are illegible in this copy]
PERSONAL ATTRIBUTES
1. [illegible]
2. Appearance
3. [illegible]
4. Conduct
5. Intellect
6. Integrity
7. Loyalty
8. Dedication
9. Courage
SECTION 8 - POTENTIAL
(8-1 Reporting Officer; 8-2 Reviewing Officer: response circles on a Low - Normal - High scale)

SECTION 9 - PROMOTION RECOMMENDATION
9-1 Reporting Officer: NO - NOT YET - YES
9-2 Reviewing Officer: NO - NOT YET - YES

FOR NDHQ USE
SECTION 10 - DETAILS OF JOB
a. Unit  b. Official appointment  c. COS date
d. Unusual circumstances (if any)

SECTION 11 - NARRATIVE BY REPORTING OFFICER
(THE NARRATIVE NORMALLY SHOULD BE LIMITED TO THE SPACE ABOVE THE DOTTED LINE)
Signature - Date

F-25
SECTION 12 - RECOMMENDATIONS FOR TRAINING AND EMPLOYMENT
a. Training  b. Employment
Rank, name and appointment - Signature - Date

SECTION 13 - COMMENTS BY REVIEWING OFFICER
I do not know this officer [ ]   I know this officer slightly [ ]   I know this officer well [ ]
Rank, name and appointment - Signature - Date

SECTION 14 - COMMENTS BY NEXT SENIOR OFFICER
I do not know this officer [ ]   I know this officer slightly [ ]   I know this officer well [ ]
Rank, name, appointment and unit - Signature - Date

SECTION 15 - ADDITIONAL REVIEW

F-26
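Sections 7 through 9 of the Canadian PER record parallel assessments from a reporting officer and a reviewing officer. The form itself prescribes no computation, but as a hedged illustration of how such paired assessments could be screened, the minimal Python sketch below flags factors on which the two scores diverge widely. The factor names are taken from Section 7; the paired score values, the 1-to-9 scale, the function name flag_disagreements, and the threshold are all hypothetical.

    # Invented paired scores (reporting officer, reviewing officer) on an
    # assumed 1-9 scale for four Section 7 performance factors.
    factor_scores = {
        "Accepted responsibilities and duties": (7, 6),
        "Applied job knowledge and skills": (8, 8),
        "Analysed problems or situations": (6, 8),
        "Made decisions and took action": (7, 7),
    }

    def flag_disagreements(scores, threshold=2):
        # Return factors whose two assessments differ by at least
        # `threshold` scale points, with the differing score pair.
        return {
            factor: pair
            for factor, pair in scores.items()
            if abs(pair[0] - pair[1]) >= threshold
        }

    for factor, (reporter, reviewer) in flag_disagreements(factor_scores).items():
        print(f"Review suggested: {factor} "
              f"(reporting {reporter}, reviewing {reviewer})")

In this invented example only "Analysed problems or situations" is flagged, because the reporting and reviewing assessments differ by two scale points.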
FI 1 E
I I ITI I
• '' ,,9 0,III I

Air Force Officer Evaluation System Project

  • 1.
    , - •"i FILECOPY SYLLOGISTICS INC. FIUNAL REPORT co AIR FORCE OFFICER EVALUATION SN'STEM PROJECT 04 La N IV ' i• & THE HAY GROUP DTIC ELECTE JUL 11 W B MANAGEMENT * PLANNING * ANALYSIS -N 1 7 r xT' -c Fr A a ~<ii
  • 2.
    TABLE OF CONTENTS SECTIONTITLE PREFACE ............................................. iv EXECUTIVE SUMMARY ........................................................ v INTRODUCTION .................................................................. I-1 Historical Background ........................................................... I-1 Project Objectives and Tasking ............................................ 1-9 II STUDY METHOD................................................................... I-I Phase 1: Background Study .......................... II-I Phase 2: Data Gathering ....................................................... 11-2 Phase 3: Literature Review .................................................. 11-4 Phase 4: Data Analysis ......................................................... 11-5 Phase 5: Synthesis of Recommendations ............................. 11-5 III FINDINGS ON PERFORMANCE APPRAISAL IN NON- AIR FORCE ORGANIZATIONS ........................................... I11-I Performance Appraisal: Findings from the Literature ....................................................................... III-1 Performance Appraisal: Findings from the Private Sector ................................................................. 111-23 Performance Appraisal: Findings from the Other Services ................................................................ 111-33 IV FINDINGS: AIR FORCE OFFICER EVALUATION SYSTEM ................................................................................ IV-I Major Features of the Current OER System ....................... IV-I Issues Affecting Officer Evaluations ................................... IV-8 Summary ................................................................................. IV -21 V CONCEPTUAL DESIGNS FOR THE AIR FORCE OER ............. V-I Formulation of Conceptual Design ....................................... V-I Testing and Redesign of Concepts ....................................... V-5 Conceptual Designs for Officer Evaluation ......................... V-6 Uniform Elements of the Conceptual Designs ............................................................................... V-7 Conceptual Design I: Differentiation through Command Persuasion ......................................... V- 17 Conceptual Design 2: Differentiation through Rater Persuasion .................................................. V-22 Conceptual Design 3: Differentiation through Top Block Constraint .......................................... V-29 Evaluation of Conceptual Designs ........................................ V-37
  • 3.
    SECTION TIT[E PAGE VIIMPLEMENTATION PLAN ..................................................... VI-I Feasibility Assessment and Final Decision ................................................................................. V I-2 Design ..................................................................................... VI-3 Development........................................................................... V I-5 Test ............................................ VI-6 Full-Scale Training ................................................................ VI-8 Full-Scale Operation .............................................................. VI-9 Evaluation ............................................................................... VI- I1 Refinement and Maintenance ............................................... VI-12 VII CONCLUDING COMMENTS AND RECOMMENDATIONS ........ VI-I Recom mended Initial Steps................................................... VII-2 Recommended Changes to OER Process ................ VII-3 Recommended Implementation Actions ............................... VII-5 Other Issues ............................................................................ VII-7 ApPPENDICES A R EFERENCES .................................................................................... A -I B SUMMARY OF PERFORMANCE APPRAISAL METHODS ....... B-I C PRIVATE SECTOR PERFORMANCE APPRAISAL INTERVIEWS .................................................................................... C-1 D INIrIAL AIR FORCE INTERVIEWS .............................................. D-i E FEEDBACK INTERVIEW SUMMARY .......................................... E-I F OER FORMS USED IN THE SERVICES ........................................ F-I Accession For NTIS GRAI DTIC TAB 0l Unaruiounced 0 Just trtoatton D1 button/~ Availability Codes ~veil and/or Dist Speoial ii
  • 4.
    LIST OF TABLES LA.BETITLEPG I-! Highlights of the Air Force OER ..................................................... 1-6 1l-I Focus Groups Identification ............................................................... 11-3 I11-1 Comparison of Performance Appraisal Methods by Purpose and Costs ....................................................... 111-20 111-2 Other U.S. Services OER Comparison ............................................... 111-64 V-I Comparison of Conceptual Designs to Design Criteria .............................................................................................. V-38 VT-I Implementation Milestone Schedule .................................................. VI-13 LIST OF FIGURES FIGURE TITLE PAGE IV-I Air Force Form 707 ........................................................................... IV-4 V-1 Sample Job Description ..................................................................... V-10 V-2 OER Worksheet and Counseling Form ............................................. V-I12 V-3 Conceptual Design I .......................................................................... V-19 V-4 Conceptual Design 2 .......................................................................... V-25 V-5 Conceptual Design 3 .......................................................................... V-33 iii
  • 5.
    PREFACE Syllogistics, Inc., andThe Hay Group have prepared this final report of the Air Force Officer Evaluation System Project sponsored by the Deputy Chief of Staff/Personnel, under Air Force Contract No. F49642-84-D0038, Delivery Order No. 5025. Lieutenant Colonel James Hoskins, Personnel Analysis Center, Office of the Deputy Chief of Staff, Personnel, and Lieutenant Colonel Jerry Wyngaard, Air Force Military Personnel Center, monitored this effort and provided helpful comments on the draft final report. The Study was executed by a combined project team of Syllogistics, Inc., and The Hay Group. The views and opinions expressed in this report are those of the authors and should in no way be interpreted as an official position, policy, or decision of any Government agency, unless so designated by other official documentation. SYLLOGISTICS STUDY PERSONNEL Mr. Frank M. Alley, Jr., Project Director and Principal Author Ms. Forrest Bachner, Analyst and Co-Author Ms. Donna Lessner, Analyst Mr. Stuart H. Sherman, Jr., Senior Vice President, Corporate Oversight Dr. Susan Van Hemel, Analyst and Co-Author Mr. David Weeks, Consultant HAY GROUP STUDY PERSONNEL Dr. George G. Gordon, Technical Director and Co-Author Mr. Jesse Cantrill, Analyst Lt. General (USAF, Ret.) Edgar Chavarrie, Consultant Mr. Gregori Lebedev, Partner and General Manager, Corporate Oversight Mr. Rene Morales-Brignac, Analyst and Co-Author iI,
  • 6.
    EXECUTIVE SUMMARY From Junethrough September 1987, Syllogistics, Inc., and the Hay Group conducted a study to examine the strengths and weaknesses of the current United States Air Force Officer Effectiveness Report (OER) system and to recommend alternative designs which could improve its usefulness. Two other groups conducted separate but concurrent efforts with the same study objective. These were active duty and retired senior Air Force officers at Randolph AFB and students at the Air Force Command And Staff College. Specific Air Force guidance for the project was that any alternative conceptual design to the OER should: I) focus on the officer's current job performance; 2) provide good differentiation among officers on potential for promotion and for successfully executing higher responsibility; and 3) provide some vehicle for giving officers feedback on their performance to support career development and counseling. The study was carried out in five major phases: 0 A study of the background of the officer evaluation process in the Air Force, including review of documentation and briefings by Air Force personnel; 0 The field data gathering phase which included interviews and focus group discussions with Air Force officers and functional managers, (interviews and focus groups were conducted at Andrews, Charleston, Langley, Offutt, Randolph, Scott, and Wright-Patterson Air Force Bases); o A review of performance appraisal in non-Air Force organizations (literature review, industry, other military services and government entities); o The analysis of the data; and v
  • 7.
    o Synthesis ofoptions and recommendations. KEY FINDINGS Key findings from the study are described below, by source. LITERATURE o While a wide variety of performance appraisal methods have been studied, most are unacceptable because they are either inappropriate to Air Force needs or totally impractical to implement. The combination of graphic rating scales and verbal descriptions remains, in our judgment, the only feasible path to pursue. 0 A performance appraisal system should focus on a single purpose, e.g., promotion. Other purposes should be addressed through alternate means. 0 Pbrformance evaluations can be improved by training the evaluators. This applies to both rating techniques and the need to rate accurately. o Counseling (performance or career) is best done separately from the formal evaluation. OTHER SERVICES 0 Each of the other services recognizes the special relationship between an officer and his/her immediate supervisor and has tried to reduce the conflict between maintaining this relationship and providing an honest evaluation. vi
  • 8.
    o Each ofthe services has some mechanism for minimizing inflation in ratings, including peer rankings (Navy and Marine Corps), rate-the-rater (Army), and intensive headquarters review (U.S. Coast Guard). INDUSTRY o Since the principal purpose of performance appraisal in the private sector is to support relatively short-term compensation decisions, much of what is done there would not meet Air Force needs. o Some type of rating control is prevalent in the private sector, but it is usually driven by the compensation or merit increase budgets. o Performance feedback is encouraged and emphasized as an important component in supervisor-subordinate relationships, and most private sector organizations ti"-n supervisors to give such feedback. AIR FORCE CULTURE o There exists the perception that the Air Force officer corps is an elite group who are all above average. o The "controlled system" had a very negative effect on morale. o There is an unwillingness to openly make fine distinctions among officers. o Career advancement is often viewed as more important than job performance, especially by junior officers. DEVELOPMENT OF CONCEPTUAL DESIGNS Building upon the foregoing rich and diverse baseline of information, the Syllogistics/Hay study team developed three alternative approaches to enhance the OER vii
  • 9.
    process. These alternativeswere developed in accordance with several design criteria and guiding considerations. The design criteria stated that an improved OER should: o Focus on job performance, not peripherals; o Provide differentiation in potential for promotion; o Be acceptable to the officer corps; o Provide a means for developing subordinate officers; and o Minimize the administrative burden. In addition to these criteria the project team worked with a number of considerations, including: Alternative OER designs should reflect and sustain the larger Air Force culture; 0 Within the Air Force, the alternative OER designs should encourage change in attitudes and habits concerning the OER; o Promotion board judgment, not mere statistics, should be the ultimate method of making career decisions; and o Alternative OER designs should be practical to implement. RECOMMENDED OER DESIGNS The study-developed alternatives share a number of common elements but represent three levels of departure from current practices. Common elements in the designs include a parallel, "off-line" feedback system between the rater and ratee; ratings on fewer performance factors; a single verbal description of performance which focuses viii
  • 10.
    on specific accomplishment,not adjectives; computer basing of ratings; an improved method for producing job descriptions; and having potential rating done only by officers above the level of the rater. The principal distinguishing factor among the three alternatives resides in the methods used to assure that differentiation among officers is built into the system. CONCEPTUAL DESIGN 1 The first alternative accompi;.z.: differentiation in the same way as does the current Air Force system. That is, differentiation is represented by the level of the final indorser. Discipline is maintained by persuasion from the Chief of Staff to the MAJCOM commanders and by providing promotion boards with information on the distribution of indorsements produced by each command. CONCEPTUAL DESIGN 2 The second alternative calls for ratings of or[gzmanc by the rater on a number of scales and rating of pntial by the indorser on a separate series of scales. "T.is method attempts to obtain a fair degree of dispersion through the "rate-the-rater" concept. Specifically, rating and indorsing histories become part of every OER submitted to a promotion board and also become part of the rating and indorsing officers' records (and selection board folders) to be considered in their own evaluations. This alternative would provide a powerful stimulus to differential ratings. However, given the Air Force history and culture favoring "firewalling*, there is substantial risk that this approach would meet considerable resistance to compliance from the officer corps; since with a changed system, many officers would be rated significantly lower than they are currently. ix
  • 11.
    CONCEPTUAL DESIGN 3 Thethird and preferred alternative, differentiation through top block constraint, is designed to reduce any stigma of "negative" ratings, while simultaneously placing greater emphasis behind recommendations for early promotion by limiting them to ten percent of each grade at the wing level or equivalent. This ten percent target would allow for the overt identification of the truly outstanding performers. At the same time, it is a small enough minority of the population so as not to threaten officers who are not included in the ten percent stratum. By this approach, the rater would evaluate the overwhelming majority of officers as "meeting and sometimes exceeding" job requirements. The rater is encouraged to limit the number of officers rated "consistently exceeds the job requirements,' through the rate-the-rater concept. The wing commander, on the other hand, would be compelled by regulation to comply with the ten percent early promotion recommendation limit. Based on the study findings and analysis, the consulting team believes that the third alternative is most likely to meet the Air Force's needs in both the short and long term. In the short term, the amount of differentiation is very modest, but the possibility of acceptance without major upheaval is reasonable. In the long run, as the ten percent ratings and indorsements are distributed, promotion boards will be compari,,8 individuals with variable and qu:litatively different records (since an individual may receive different top block ratings on different factors from different raters and indorsers). OTPER RECOMMENDATIONS Some changes are also recommended in the information supplied to promotion boards. In addition to supplying rating and indorsing histories, it is recommended that only OERs in the current grade or the previous five OERs (whichever is greater) be provided, the board be given a list of Special Category Units (SPECAT) that are !ikely x
  • 12.
    to have ahigh proportion of outstanding officers, and a thorough exposition of the rating tendencies either of the command or of the raters/indorsers be provided to the boards along with the selection folders. The final recommendation focuses on the importance of a carefully planned and deliberate implementation of any modification to the OER process. This is indeed a critical considerat;on; since the implementation phase involves a number of complex stages and sets the stage for the acceptance (or non-acceptance) of a modified officer evaluation system. The report provides the necessary rationale and backup information for each of the conclusions and recommendations. We believe that the recommendations are workable and, if implemented, will contribute significantly toward assuring the continuation of a quality officer force. xi
  • 13.
    SECTION I INTRODUCTION From Junethrough September 1987, Syllogistics, Inc., in conjunction with the Hay Group, conducted a study to examine the strengths and weaknesses of the current United States Air Force Officer Evaluation Report (OER) and to recommend alternative designs which could improve its usefulness. This report documents the findings and recommendations from that study, and is organized in the following way. Section I gives the historical background of the OER and explains the project's objectives and tasking. Section II sets out the p~rocedures which were followed in the study. Section III presents the findings of the data collection and analysis phases of the study from non-Air Force sources, while Section IV gives the Air Force specific findings. Our rationale in formulating alternative OER designs is given in Section V followed by indepth descriptions of these alternatives for improvement of the OER system. Section VI outlines a proposed implementation plan and Section VII concludes with summary observations of the study group. The assessment of officer performance is an important function for the United States Air Force and makes a significant contribution to the maintenance of the consistent high quality of its officer force. The Air Force uses the OER for several purposes, including: selection for promotion and school assignment; job assignment decisions; and augmentation, and separation decisions. HISTORICAL BACKGROUND The Air Force like many large organizations has experienced inflated evaluation ratings and/or evaluation systems which were incompatible with their overall purposes. There have been six distinct phases in the Air Force OER system since the establishment of the Air Force as a separate service in 1947. These are: I) the forced choice method 1-1
  • 14.
    adopted from theArmy in 1947-49; 2) the critical incident method used from 1949-52; 3) rating of performance factors with narrative commentary, 1952-1960; 4) the "9-4" system, 1960-1974; 5) the "controlled era", 1974-1978; and finally, 6) a return to a mechanism similar to 3) from 1978 to the present. Although these phases will be discussed in greater detail in the following pages, two characteristics have recurred throughout this history. The first characteristic is that throughout all the OER changes, major and minor, the Air Force has availed itself of extremely high-level expertise, from academia, industry, and in-house, in its deliberations. The Air Force has over the years been willing to consider many state-of-the-art approaches to performance appraisal. The second characteristic is the fundamental conflict between administrative need for differentiation, as institutionalized through the *up or out" system, versus an institutional reluctance to identify less than outstanding performance. PHASE 1: 1947-1949 Initially the Air Force adopted the A-my system for its OER program. This system included narrative comment, but the primary rating tool was the forced choice method which had been developed during World War I! by industrial psychologists as a means of reducing bias in the ratings of Army officers. In this method the rater is asked to choose from sets of phrases those which are most and least descriptive of the ratee. Raters did not know how the overall rating would come out, as the OER forms were machine read and scored according to a "secret" formula. The forced choice system was discontinued due to the lack of rater acceptance. The raters wanted to know how they were "grading" their subordinates. 1-2
  • 15.
    PHASE 2: 1949-1952 In1949 a new evaluation system was implemented which incorporated the critical incident approach as well as mandatory comments by the rater. The front side of the form showed the rater's comments about certain ratee traits and aspects of performance along with the indorsement. The reverse side covered proficiency and responsibility factors on which the rater evaluated the ratee. The scores were then multiplied by a weighting factor, totaled, and divided by the number of factors to derive a total score. This system was terminated in 1952 due to inflation of ratings and problems with the scoring of the forms. Total score became the predominant concern, outweighing individual factor scores. In addition there was some indication that inappropriate weights had been assigned to certain factors. Finally, the ratings on the front and reverse sides of the form often showed an illogical relationship and the form was very time-consuming to complete. PHASES 3 AND 4: 1952-1974 In 1952 a third OER system was implemented. This system was derived from a study of private organizations, the other U. S. military services, and the Royal Canadian Air Force. The basic form of the 1952 system incorporated six performance factors which were rated against graduated standards. The reverse side of the form cailed for an overall rating as well as providing space for the indorsement. Although there have been many forms as well as policy changes since the 1952 system was implemented, the basic form and aim of the system have remained consistent, with the exception of the 1974-1978 period, through the present. 1-3
  • 16.
    The changes whichhave occurred to the 1952 system include the timing of OER preparation. This has alternated between a prescribed date and occurrence of an event, e.g., a permanent change of station move. The period of supervision in which a supervisor must have observed the work of a subordinate for rater qualification purposes has gone from 60 to 120 days, to 90 days and back to 120 days. The relationship of the rater to the ratee have shifted from the officer in charge of career development in 1952 to the immediate supervisor in 1954. In addition, at various points the rank of the rater and of the indorser relative to the ratee has been variously controlled and uncontrolled. The number of top blocks which could constitute an outstanding overall rating has for psychological reasons, alternated between I block and 3. One top block supposedly sent the message that most officers should fall in the "middle of the pack." Three top blocks were thought to encourage greater differentiation. In 1960 the "9-4"system was begun. The 9-4 system continued to use the overall 9 point scale evaluation from previous systems but added to it a requirement to rate promotion potential on a scale from I to 4. Initially, the 9-4 system did bring some discipline to the ratings but eventually the ratings became "firewalled" at the top score of 9-4. This inflation occurred even with an extensive educationai program to warn evaluators against rating inflation. By 1968 ratings inflation had once again rendered the OER system ineffective. Nine out of ten officers received the highest rating, 9-4. Development work on a new system began in 1968 and continued through 1974 when the controlled OER came into being. During this six year period four major designs were put forth as collaborative efforts of the Air Force Human Resources Laboratory, industry, universities, government laboratories, foreign military services, the other Armed Services, the Air University, and the Air Staff. 1-4
  • 17.
    PHASE 5: 1974-1978 In1974 the controlled OER era began. The basic form of the previous OER was retained but raters were instructed to distribute their ratings as follows: 50% in the 1st and 2nd blocks (two highest) with a limit of 22% in the highest block. Although the system had been extensively discussed and pretested prior to implementation, it encountered almost immediate resistance. The basic problem with the controlled OER was that officers who were experienced in a system that gave top marks on just about all evaluations understandably resisted a system where top marks became the exception. Perceptions centered about the notion that a *3" rating was the end of an upward career track in the Air Force. Although educational efforts were made to overcome such misgivings and ultimately only the top block was controlled, the initial anxiety about the system was never overcome. In 1978 the controlled OER era ended when the Air Force leadership decided that individual need for a less stressful OER system was more important than the management benefits of differentiation. PHASE 6: 1978-PRESENT Since 1978, the OER has retained performance factors, narrative comment, and promotion potential ratings. The majority of ratings are again "firewalled* to the top blocks and the discriminating factor has become the rank of the indorsing official and the words in his/her narrative remarks. Table I-I shows various characteristics of the OER since 1947. I-5
  • 18.
    *d a 0 06C ao6 6 .- tnCL 05 C4 06' C6 V) > IL) v V ) 4 v: u 0. 0 CIS, ISJ 1- z u. w 3 3 3 0-3- <g 1-6 Li. L
  • 19.
    V 00 4.. LD .'o V*V) 0,; 51~~~ *OEV. oL~ 06 o. C1 . CA a in. a a a a a CA 0-0 3 cm~( E -o 0 0(66- 0. 2 U V C) 0 & C1-
  • 20.
    V ;I- I 0.... . . I u C: .il.• --.- • • .
  • 21.
    PROJECT OBJECTIVES &TASKING The Air Force leadership is concerned that the OER has again become less than effective for its intended purposes. Some of the features which have been observed to be deficient and which an acceptable revision should possess are: 1) focuses on the officer', current job performance, 2) provides good differentiation among officers on potential for promotion and for successfully executing higher responsibility, and 3) provides some vehicle for giving officers feedba,.k on their performance to support career development and counseling. In order to achieve these goals, the Deputy Chief of Staff for Personnel directed that a study of the OER be performed, to result in recommendations for an improved Air Force OER system and for its implementation. Three groups were tasked to perform this study. The first of these groups is composed of active duty and retired senior Air Force officers and is based at Randolph AFB, Texas. The second group is composed of twelve students at the Air Force Command and Staff College at Maxwell AFB, Alabama. They conducted their study as a class project. The Syllogistics/Hay team is the final study group. This team was chosen to provide an independent, outside view of the officer evaluation issue and to apply the expertise of the private sector to the solution of the problems. This study is thL basis of this effort. The Syllogistics-Hay team was specifically tasked to study the current Air Force Officer Evaluation Report piocess to determine its strengths and weaknesses, to apply their knowledge of available methods for performance appraisal, and to develop one or more conceptual designs for an improved OER process and recommendations for the implementation of the design(s). 1-9
  • 22.
    SECTION 1I METHOD The studywas carried out in five major phases: 1) a study of the background of the officer evaluation process in the Air Force, including review of documentation and briefings by Air Force personnel; 2) the field data gathering phase, which included interviews and focus group discussions; 3) a review of performance appraisal from non- Air Force sources; 4) the analysis of the data; and 5) synthesis of options and recommendations. Each of these phases will be described in some detail in the following sections. PHASE 1: BACKGROUND STUDY At the outset of the study, the Air Force provided a briefing to contractor personnel, covering several aspects of the OER, its purposes and the process by which it is completed. The briefing described the current officer evaluation report form and its evolution through the history of the Air Force, with information on the lessons learned as each change was implemented. It described the philosophy of officer evaluation, as it has evolved, and the difficulties which have recurred through time, especially inflation of ratings and "gaming" of the evaluation system. At the contractor's request, an additional briefing was provided, covering the Air Force promotion system and its interaction with officer evaluation. This briefing provided valuable background on the operation of promotion boards, on the use of the OER in promotion decisions, and on the officer force structure and factors affecting promotion opportunities. Copies of briefing materials, as well as pertinent reports, Air Force regulations and other publications were provided to the contractors. Contractor personnel carefully I1-1I
  • 23.
    reviewed these materials.This was an essential step in the preparation for the next study phase, the gathering of data from Air Force personnel and others. PHASE 2: DATA GATHERING The data gathering phase of the study had four components. The first was personal interviews with individual Air Force officers who are highly knowledgeable of the personnel policies and procedures relating to officer evaluation. These officers ranged from general officers in command and policy-making positions to mid-level officers responsible for administration of the OER system. In each case, an interview guide (see Appendix D) was used to direct the discussion and to ensure coverage of points which the contractors had determined to be of major importance to t!•I• study. Notes were taken in all interviews for later analysis by the study team. All interviews were conducted by senior team members with extensive experience and expertise in interview techniques. The interviews ranged in length from one to three hours. A list of the officers interviewed is displayed at page D-2. The second data gathering component was the convening of focus groups of six to eight Air Force officers each to discuss the OER process. The nine groups included ranks from lieutenant to major general, but each group was composed of officers of similar rank (e.g., lieutenants and junior captains, lieutenant colonels and colonels). Some groups included only rated officers or only support officers, while others were mixed. A list of the groups, their location, and composition is given in Table II-I. 11-2
  • 24.
    TABLE i!-1 FOCUS GROUPSIDENTIFICATION Group No. Location Ranks Other Information I Randolph AFB General Promotion Board Officers Members 2 Pentagon Colonel All Air Staff; mixed Rated/Non-rated 3 Randolph AFB Lt/Junior Capt Non-rated; support 4 Charleston AFB Lt/Junior Capt Rated; operations 5 Randolph AFB Sr Capt/Maj Rated: operations 6 Randolph AFB Sr Capt/Maj Nonrated; support 7 Randolph AFB Maj/LtCol Rated; operations 8 Charleston AFB Maj/LtCol Non-rated; support 9 Randolph AFB LtCol Mixed rated/non- rated; ops/support Each focus group was conducted by two contractor personnel, with additional personnel present as recorders at most sessions. One of the two served as chief facilitator and led the group discussion with the aid of a discussion guide (see Appendix D). The second facilitator was less active, entering the discussion only infrequently, and assisting in maintaining the focus of the session. The Air Force personnel in the groups were informed of the purposes and method of the study at the beginning of each session and were encouraged to be honest and open. The contractor's goal in these groups was to elicit information, not only on the operation of the OER system, but more importantly on how officers feel about the process and how it affects their careen. Each focus group met for approximately one and one-half to two hours. The third component of the data gathering effort was a series of interviews with persons responsible foi administering officer evaluation systems of the U.S. military services other than the Air Force and of the U.S. Department of State and the Canadian 11-3
  • 25.
    Armed Forces. Theseinterviews were conducted to learn about details of the officer performance evaluation systems of these services. The interviews focused upon identifying the ways in which these systems differ from the Air Force OER system and the significance of such differences. Each respondent was asked about specific strengths and weaknesses of the system which he/she administered, and most respondents provided documentation on their systems. The fourth data gathering component was a series of telephone interviews with representatives of major .orporations which have active management performance appraisal programs. These interviews were conducted to obtain information on current private sector performance evaluation practices. Fourteen interviews were completed, using an interview guide (see Appendix C) to ensure that all major points were covered. The interviews were performed by persons with expertise in private sector performance evaluation issues. PHASE 3: LITERATURE REVIEW In addition to the study of the background materials provided by the Air Force, the contractors searched and reviewed z large sample of historical and current literature on performance appraisal. Textbooks and review articles were used for an overview of "Otraditional" performance appraisal methods, anrl for information on the salient features of each of these methods. Special attention was given to cuirent research literature, with the goal of identifying and evaluating currently popular appraisal methods and systems. This literature was reviewed selectively, with emphasis on issues and methods which appeared especially relevant to the needs of the Air Force. 11-4
PHASE 4: DATA ANALYSIS

The data analysis effort included several elements, some of them performed concurrently. Since the literature review analysis produced a conceptual framework within which other information was analyzed, it will be discussed first.

The literature review findings were analyzed and organized in several ways. First, the information was searched to determine major features which are common to all or most performance appraisal systems. These features were listed and used in the analysis of data from other sources (see below). The study team also developed a taxonomy of performance appraisal systems, based on what is evaluated, what measures are used, and the techniques by which the measures are applied. The next step was to identify in the literature a consensus on the relationship between organizational characteristics and performance appraisal methods. This resulted in a number of principles relating organizational characteristics to the categories of appraisal methods which have been found to be appropriate to them.

The material from the briefings and documents provided by the Air Force was reviewed to extract major recurring themes or issues. These issues were listed and classified for use when evaluating alternative proposals for changes to the OER process. Those issues which emerged as most important were also compared with the data gathered in interviews and focus groups (i.e., are the historically important issues still seen as important by current officers?).

The notes from interviews with Air Force personnel and from the Air Force focus groups were analyzed to determine major issues. A capsule description of each issue was prepared, and where specific issues could be identified with particular
population groups, this was done. Certain issues, for example, were of concern more to rated than to non-rated officers; others were more salient to junior officers than to senior officers. The issues were categorized into groups according to their content or area of reference, for example, issues relating to the OER form, to the OER process, or to the matter of control of rating distributions. The study team was careful to document the perceived strengths of the present system as well as its perceived weaknesses. The study team also noted its impressions of Air Force cultural and organizational characteristics which interact with the OER process, since these are of great importance in determining the acceptability and feasibility of any proposed changes to the OER process.

The data from interviews with the other services and departments were reviewed and analyzed to extract the major features of each performance appraisal system. A comparison matrix was prepared to facilitate understanding of these systems and of their similarities and differences. These systems were also examined to determine how each deals with the issues which had been found to be of greatest importance to the Air Force.

The information gathered by telephone interview from large corporations was subjected to an analysis similar to that used for the other military services. Major features of each corporation's performance appraisal system were extracted, and a matrix was prepared comparing the features across companies.

PHASE 5: SYNTHESIS OF RECOMMENDATIONS

Upon completion of the data analysis, the study team began developing conceptual designs for improving the Air Force OER process. This involved careful consideration of the criteria which had been developed for a successful OER, the practical considerations which had emerged in the analysis phase, and the knowledge
gained from the literature and from other organizations concerning the feasibility and effectiveness of various potential solutions to the problems we had identified. Several preliminary OER designs were outlined, and their salient features were listed. These features were then discussed during interviews with 20 Air Force officers of various ranks, many of whom administer OER processing for their commands or activities, to obtain feedback on the value and feasibility of each feature. The feedback interview results were tabulated and analyzed, and decisions were made by the study team about features to be retained and those to be discarded or revised. The preliminary alternative conceptual designs were then revised into final recommended conceptual designs for presentation at the final briefing and in this final report.
SECTION III
FINDINGS ON PERFORMANCE APPRAISAL IN NON-AIR FORCE ORGANIZATIONS

This section gives the findings about performance appraisal in non-Air Force organizations. These were collected from a review of the performance appraisal literature, interviews with fourteen private sector organizations, and interviews with officials from the other armed services as well as the Department of State.

PERFORMANCE APPRAISAL: FINDINGS FROM THE LITERATURE

A literature search was conducted during the project with two purposes in mind. First, we wanted to determine recent trends and developments in the field of performance appraisal. Second, we hoped to cull from the literature an indication of standard elements for a performance appraisal system which could be used in our analysis of, and deliberations over, alternative OER designs.

In addressing these two purposes, this section is organized into four parts. The first part, Survey and Background, discusses the available literature and gives the historical development and current position of performance appraisal. The second part, Standards, offers a set of standards for all performance appraisal systems and discusses typical errors in appraisal. This part also includes a discussion of the components of any performance appraisal system. The third part, Methods, describes the primary forms of performance appraisal, with emphasis on subjective methods, and compares these methods. The fourth part, Implications, offers some conclusions from the literature search and their implications for the Air Force's inquiry into alternative OER designs.
SURVEY AND BACKGROUND

The literature on performance appraisal is both extensive and diverse, and touches on many side issues such as motivation, job satisfaction, and equity. The bulk of the literature focuses on different aspects of documentable performance measures, a focus which is understandable given the legal requirements of Equal Employment Opportunity legislation. At the same time, an area that is somewhat lacking in treatment is that which pertains to such broad organizational issues as the practical and meaningful implementation of performance appraisal within an organization and the matching of performance appraisal techniques with performance appraisal purposes.

Rating scales, as a performance appraisal technique, have been in use at least since the 1920s. Although several newer techniques have been introduced, rating scales still predominate. Much has been written about Behaviorally Anchored Rating Scales (BARS), but the developmental costs appear to outweigh the advantages associated with the technique. Outcome-oriented techniques such as management-by-objectives are increasing in popularity as management tools, but there is some indication that their popularity for appraisal purposes may be fading.

The thrust of the literature search was on current literature, which for our purposes was 1985 to the present. Certain standard texts were also used, primarily for the Methods section. These were Organizational Behavior and Personnel Psychology by Wexley and Yukl (1977); Personnel: A Diagnostic Approach by Glueck (1978); and, finally, Applied Psychology in Personnel Management by Cascio (1982).

Performance appraisal, evaluation, or, as it is alternatively called, employee proficiency measurement, is generally defined as "the assessment of how well an
employee is doing in his/her job" (Eichel and Bender, 1984). The activity of assessing job performance is certainly widespread in the United States. A Bureau of National Affairs (BNA) study in 1974, for example, found that three-fourths of supervisors, office workers, and middle managers have their performance evaluated annually. A second BNA study (BNA, 1975) showed that 54% of blue collar workers participate in performance appraisal.

How these assessments are used by organizations, however, varies widely and has shifted noticeably over time. Before 1960, performance appraisals were used by most organizations to justify administrative decisions concerning salary levels, retention, discharges, or promotions. In the 1960s, the purpose of performance appraisal grew to include employee development and organizational planning (Brinkerhoff and Kanter, 1980). In the 1970s, requirements of the Equal Employment Opportunity laws caused organizations to formalize performance appraisal requirements in order to justify salary, promotion, and retention decisions (Beacham, 1979). Currently, performance appraisal is used primarily for compensation decisions and often for counseling and training development. Performance appraisal is used less frequently as a basis for promotion, manpower planning, retention/discharge, and validation of selection techniques (Eichel and Bender, 1984; Hay Associates, 1975; Locker and Teel, 1977).

Although performance appraisal is widely practiced, the activity is still usually regarded "as a nuisance at best and a necessary evil at worst" (Lazer and Wikstrom, 1977). This attitude towards performance appraisal seems to be held often by both evaluator and evaluatee. Schneier, Beatty, and Baird (1986) note that the requirements of performance appraisal systems often clash with the realities of organizational culture and of managerial work. For example, a manager often has an interest in taking decisive action, whereas the performance appraisal may have ambiguous, indirect results.
Employee attitudes toward organizational promotional systems have also been found to be negative. In one study of such attitudes, it was found that respondents believed that personality was the most significant factor in career advancement and that promotion decisions were usually made subjectively and arbitrarily by superiors (Tarnowieski, 1973).

Regardless of these perceptions, performance appraisal is a necessary organizational activity. The following sections describe the current state of this activity.

STANDARDS OF PERFORMANCE APPRAISAL

Whatever performance appraisal system is used, there are certain standards which the system should meet. The literature identifies five such categories of criteria, namely: legality, validity, reliability, acceptability, and practicality (i.e., cost and time). These categories are closely related and must be defined in relation to one another.

Legality refers to the legal requirements for performance appraisal systems, which are the same as for any selection test in that they stipulate that the performance appraisal system be valid and reliable.

Validity, in turn, refers to the extent to which an instrument or method measures what it purports to measure. For example, suppose an organization decides to evaluate an employee's performance. If the goal of the performance appraisal is selection for promotion, then the performance factors to be evaluated must be selected based on an idea of what will be successful performance indicators for the next level position. This evaluation would not be valid unless it could be demonstrated that success in the selected factors was a predictor of success in the job to which the employee was being promoted. Apart from legal implications, it must be noted that the idea of validity is important at the more elementary level of organizational planning as well. If the organization were to evaluate job performance for developmental purposes, then the
evaluation must be designed to identify individual strengths and weaknesses and must incorporate a vehicle for communicating this information between the rater and ratee.

The third criterion, reliability, is the extent to which a personnel measurement instrument provides a consistent measure of some phenomenon. For example, given the assumption that a person's skills do not change, an instrument which measures skills repeatedly would be reliable only if it repeatedly produced approximately the same scores.

The fourth criterion, acceptability, refers to a system's having to be acceptable to both evaluators and evaluatees. By acceptable, we mean that the system be perceived as fair and supportable within the organizational culture. Findings from one study of middle-level managers indicate that the procedures by which appraisals were made seemed to affect the perception of fairness to the same degree as the ratings themselves (Greenberg, 1986). This study also found that procedures that give employees input to the performance appraisal system are seen as being fairer than those that do not. The issue of acceptability must be considered whenever there is an attempt to introduce a new appraisal system into an established organization. No matter how well-designed an appraisal system is from a technical standpoint, it is not likely to be effective if it requires behaviors which are incompatible with the customs and expectations of the organization's members. A well-designed and well-implemented program of education and training may improve the acceptability of any appraisal system, but it will not overcome a fundamental mismatch between the appraisal method and the corporate values or culture.

Finally, the criterion of practicality refers to the requirement that the performance appraisal system should be fairly simple to administer and reasonable in terms of time required and cost of development.
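The reliability criterion discussed above lends itself to a simple quantitative check. The sketch below, a minimal illustration in Python and not part of the original study, estimates test-retest reliability as the correlation between two administrations of the same rating instrument to the same ratees; the scores are invented for the example.

    # Minimal sketch: test-retest reliability as the correlation between two
    # administrations of the same instrument. Scores are invented examples.

    from statistics import mean

    def pearson_r(xs, ys):
        """Pearson correlation between two equal-length score lists."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    # Scores for the same five ratees, rated twice with no real change in skill.
    first_pass  = [6, 4, 5, 3, 7]
    second_pass = [6, 5, 5, 3, 6]

    r = pearson_r(first_pass, second_pass)
    print(f"test-retest reliability: {r:.2f}")  # near 1.0 = consistent instrument

A correlation near 1.0 indicates the instrument produces approximately the same scores on repetition, which is what the reliability criterion demands.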
Problems of Performance Appraisals

Although these standards could go a long way in promoting the integrity of performance appraisal systems, there are still typical, almost unavoidable errors made in the performance appraisal process, due to the subjective nature of most measurement techniques combined with the proclivities of the raters. Among these are central tendency errors, "halo" effects, contrast effects, similarity-to-self errors, and opportunity bias.

Central tendency error is the propensity to grade performance at an average point on a scale rather than rate at the very high or very low end. Leniency and strictness are different manifestations of the same theme -- leniency being defined as the tendency to rate constantly at the higher end of the scale, and strictness the reverse.

A second common difficulty is referred to as the "halo" effect. The halo effect occurs when an evaluator assesses all factors based on the evaluator's own feelings about one or more factors of performance, rather than assessing each factor objectively. The halo effect can be reduced either by changing the sequence in which the evaluator rates performance factors or by making the performance factors more specific.

Contrast effects occur when a person is evaluated against other people rather than against the requirements of a job. For example, suppose three people are up for a promotion, one an average and two less than average performers. The evaluator promotes the average performer because he or she looks better in contrast to the other two candidates, not because he/she is necessarily qualified for the promotion.

Similarity-to-self error occurs when an evaluator rates a person based on the evaluator's (often unconscious) perception of how similar that person is to him- or herself. This similarity could be in terms of job experience, educational background,
personal preferences, etc. Once again, the evaluator is not using a job-related criterion to make his/her rating decision.

Opportunity bias is a rating error which can manifest itself in two ways. The first is when objective data which may or may not be job related are used in an evaluation. Such objective data could be absenteeism, tardiness, sick leave, etc. These data are objective and readily available, but may be over-emphasized relative to other aspects of the job which cannot be measured objectively. The second way in which opportunity bias occurs is often associated with evaluations of employees at field offices, remote sites, etc., by headquarters personnel. In this manifestation, the evaluator tends to downgrade the field personnel because their work is not visible to the evaluator.
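Several of the rater errors described above leave statistical fingerprints that an organization can screen for in a rater's score history. The following sketch is a hypothetical illustration, not a procedure proposed in this study; the 1-9 scale and the thresholds are arbitrary assumptions chosen only to show the idea.

    # Hypothetical screen for leniency, strictness, and central tendency in one
    # rater's history of scores on a 1-9 scale. Thresholds are illustrative only.

    from statistics import mean, pstdev

    def screen_rater(scores, lo=1, hi=9):
        """Return a list of possible rating-error patterns in a score history."""
        m, sd = mean(scores), pstdev(scores)
        midpoint = (lo + hi) / 2
        flags = []
        if m >= midpoint + 2:
            flags.append("possible leniency (scores cluster near the top)")
        if m <= midpoint - 2:
            flags.append("possible strictness (scores cluster near the bottom)")
        if sd < 1.0:
            flags.append("possible central tendency (little spread in scores)")
        return flags

    history = [8, 9, 8, 9, 9, 8, 9, 8]   # one rater's scores across ratees
    for flag in screen_rater(history):
        print(flag)

For the invented history above, the screen flags both leniency and low spread -- the pattern a "firewalled" rating form would produce.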
Components of Performance Appraisal

Prior to discussing specific methods of performance appraisal, the actual components of the performance appraisal system need to be identified. These include goals, methods of performance appraisal, indicators of performance, schedule of appraisals, and evaluators.

Goals. The goal or purpose of performance appraisal is usually either to support the administrative needs of the organization or to facilitate individual employee development. The goal of the performance appraisal should drive the type of performance appraisal system used and the type of performance information collected. For example, the primary administrative uses of performance appraisal are for compensation and promotion decisions. One would assume, then, that an organization would make these decisions based on assessment of current performance and would choose a performance appraisal method which would provide that information. The same idea would hold for the organization whose performance appraisal goal is employee development. The method chosen in this case should give an indication of employee strengths and weaknesses.

There is indication in the literature that performance appraisal for multiple purposes which include development tends to fail on the development side. One important study showed that employees became defensive about performance counseling when a compensation decision was dependent on a favorable rating (Meyer, Kay & French, 1965). For this reason some authors argue for separate performance appraisal systems for different purposes, or at least for separating the counseling session in time from the formal evaluation.

Methods. Methods of performance appraisal can be categorized as objective or subjective for purposes of broad differentiation. Subjective methods, on the one hand, rely on the opinion of an individual or several individuals regarding an employee's performance. Most often subjective methods use some sort of scaling device to record these opinions concerning specified performance factors. There is tremendous variation in these techniques, mainly in the degree of accuracy attempted by the scale. Objective methods, on the other hand, use direct measures to rate employees. Such direct measures can be rates of production, personnel statistics (e.g., absence rates, sick days), accomplishment or non-accomplishment of specified performance objectives, or test scores. Objective methods are generally used with employees whose jobs are repetitive or production-oriented.

Objective measures carry the obvious advantage of not being dependent on evaluator judgment. However, they may not be as useful to many organizations as subjective measures because they often reflect outcomes which may not provide the total, or most important, picture of an individual's performance. In
addition, they frequently fail to provide a means for comparison of performance among employees. Finally, it is occasionally the case that plausible objective performance measures simply cannot be devised for a particular job. Practical considerations usually limit the use of objective techniques, although it is important to note that objective information can be helpful in supporting subjective ratings, even when correlations between subjective and objective ratings are low (Cascio & Valenzi, 1978).

Taylor and Zawacki (1984) categorized methods as traditional (i.e., use of quantitative or statistical tools, along with judgment by an evaluator, to evaluate performance) or collaborative (i.e., use of some form of joint, evaluator-evaluatee, goal-setting technique related to performance). In a study of Fortune 500 companies, these authors found that collaborative designs brought about improvements in employee attitudes more often than traditional designs. They also found that, although more companies were satisfied with collaborative than with traditional designs, there was a general shift in usage to traditional designs, perhaps due to legal requirements for precise measurement.

In another study, of the effects of goal-setting on the performance of scientists and engineers, nine groups were formed which varied goal-setting strategies (assigned goals; participatively set goals; and "do your best") and recognition vehicles (i.e., praise, public recognition, bonus) (Latham & Wexley, 1982). Those in the groups which set goals, either assigned or participatively, had higher performance than those in the "do your best" group. In addition, it was found that those in the participative group set harder goals and had performance increases which were significantly higher than the other two goal-setting categories.

Indicators. Indicators of performance can be behaviors displayed by employees, tangible results of employees' performance, and/or ratings on employee traits or qualities (e.g., leadership, initiative).
There is consensus in the literature that traits are not the preferred performance indicators. Traits are difficult to define and therefore can lead to ambiguity and poor inter-rater reliability. Trait rating may also not be helpful from a developmental position, as it is hard to counsel employees on, for example, "drive". Finally, a trait-oriented appraisal is likely to be rejected by the courts (Latham & Wexley, 1982). It is difficult to show, first, that a trait has been validly and objectively measured, and second, that a particular trait is a valid indicator of job performance level. Behavioral indicators can be shown through job analysis to be valid measures of performance. Research on these indicators suggests that rating both behaviors and results is the best course of action (Porter, Lawler & Hackman, 1975).

Schedule of the Appraisal. Most organizations appraise performance annually, usually for administrative convenience. Schedules are often based on employee anniversary dates with the organization, seasonal business cycles, etc. Appraisals scheduled once a year solely for administrative convenience are difficult to defend from a motivational viewpoint, since feedback is more effective if it immediately follows performance (Cook, 1968). In addition, if all appraisals are conducted at one time then managers have an enormous workload, although the annual dates for all employees need not coincide. Variable schedules for appraisals can be used when there are significant variations in an employee's behavior, although problems with this idea can include inconvenience and lack of consensus over what should constitute "significant variation."

Evaluators. An evaluator can be the employee's immediate supervisor, several supervisors, subordinates, peers, outside specialists, or the employee him/herself. In a study by Lazer & Wikstrom (1977), the employee's immediate supervisor was found to be the evaluator for lower and middle management in 95% and for top
management in 86% of companies surveyed. Use of the immediate supervisor as the evaluator is generally based on the belief that the supervisor is the most familiar with an individual's performance and therefore the best able to make the assessment.

Several supervisors can be used to make the appraisal, a method which has the possibility of balancing any individual bias. Eichel and Bender's study (1984) shows that in 63% of the responding companies another supervisor would join in the appraisal in some way. Another study (Cummings and Schwab, 1973) showed, however, that an evaluation by a trained supervisor was as effective as one by a typical rating committee. In any event, the research on the effectiveness of joint appraisal by several supervisors is sparse and inconclusive.

Peer evaluation, although rarely used, consistently meets acceptable standards of reliability and is among the best predictors of performance in subsequent jobs. Also, peer appraisals made after a short period of acquaintance are as reliable as those made after a longer period (Gordon & Medland, 1965; Korman, 1968; Hollander, 1965). Peer evaluations may not be used extensively because peers are often reluctant to act as evaluators or to be evaluated by their peers, supervisors may not want to relinquish their managerial input to evaluation, and it may be difficult to identify an appropriate peer group.

Outside specialists can be brought in to conduct appraisals, but this is rare. The assessment center technique incorporates outside personnel, but this technique is often expensive in terms of time and manpower. Use of outside specialists was so infrequent that it was not even reported in the 1975 BNA study.

Self evaluation, in the form of either formal or informal input to the appraisal process, was reported in three out of four responding companies in Eichel and Bender's survey (Eichel & Bender, 1984). Several studies which compared self and supervisory
assessments showed low agreement between the two techniques (Meyer, 1980). Self assessment appears to be used primarily for employee development purposes, while supervisory assessment is used mainly for evaluative purposes.

The role of the evaluator is key in most performance appraisal systems, because most performance appraisal systems rely on the judgment of the evaluator. On this point the literature supports the idea that evaluator training can be effective in reducing evaluator error, such as "halo", especially if the training includes practice (Landy & Farr, 1980).

Within the context of these components of any performance appraisal, specific methods of appraisal are described next.

METHODS

As discussed in the previous section, methods for performance appraisal can be divided into objective and subjective. An overview of methods is given below, with the subjective methods first. Appendix B offers a more complete discussion of each technique along with sample forms.

Subjective Methods

Nine subjective performance appraisal methods are identified in the literature, including:

Rating Scales. These have been and continue to be the most popular forms of performance appraisal. In this method, the evaluator is asked to score an employee on some characteristic(s) on a graphic scale. Characteristics can be personal traits such as drive, loyalty, enthusiasm, etc., or they can be performance factors such as application of job knowledge, time management, and decision-making. Scoring is sometimes left completely to the judgment of the evaluator; alternatively, standards can be developed
which give examples of what should constitute a particular score on the trait or performance factor. The scale on which the factor is scored may be a continuous line, or, in the multiple-step variation, the evaluator may be forced to score in discrete boxes. The widespread use of rating scales is probably attributable to administrative convenience and applicability across jobs. In their simplest forms, however, rating scales are prone to many types of evaluator bias.

Behaviorally Anchored Rating Scales, or BARS, were developed to address this problem. BARS provide specific behavioral examples of "good" performance or "poor" performance, developed and validated by supervisors for a particular job. The use of behavioral examples precludes much of the ambiguity of such descriptors as "exceptional". BARS, once developed, are fairly easy to use and can provide the employee with rather specific feedback. BARS are, however, very expensive to develop and usually are constructed for each specific job. There seems to be some consensus that on a job-by-job basis the expense may outweigh the value. Their most appropriate application is for very high density jobs such as telephone operators.

Checklists. In this method the evaluator is given a list of behavioral statements and asked to indicate, or check, whether he/she has observed the evaluated employee exhibiting these behaviors. A rating score is obtained by totaling the checks. Weighted checklists also use behavioral statements, but weights have been developed for each statement which correspond to some numerical point on a scale from poor to excellent. Evaluators indicate the presence or absence of each behavior without knowledge of the associated scores. The evaluatee's final score is obtained by averaging the weights of all items checked.
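The weighted-checklist arithmetic just described is easy to make concrete. The sketch below is a minimal illustration only; the statements and weights are invented for the example, whereas in practice the weights would come from a validation study.

    # Minimal weighted-checklist scorer. Statements and weights are invented
    # for illustration; in practice weights come from a validation study.

    # Each behavioral statement carries a hidden weight on a poor-to-excellent scale.
    WEIGHTED_CHECKLIST = {
        "Completes assignments ahead of schedule": 8.5,
        "Requires repeated guidance on routine tasks": 2.0,
        "Volunteers for additional duties": 7.0,
        "Submits reports with frequent errors": 1.5,
    }

    def checklist_score(checked):
        """Average the weights of the statements the evaluator checked."""
        weights = [WEIGHTED_CHECKLIST[s] for s in checked]
        return sum(weights) / len(weights)

    observed = [
        "Completes assignments ahead of schedule",
        "Volunteers for additional duties",
    ]
    print(f"final score: {checklist_score(observed):.2f}")  # (8.5 + 7.0) / 2 = 7.75

Because the evaluator sees only the statements, not the weights, the scoring step can be kept with the personnel office, as the text notes.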
Forced Choice. The forced choice method was developed during World War II by industrial psychologists as a means of reducing bias in the ratings of Army officers. In this technique, groups of statements are developed, two favorable and two unfavorable per group. The evaluator is asked to pick from each group of four statements those which are most and least descriptive of the employee being rated. One statement in each group is actually a discriminator of effective and ineffective behavior; the other statements are not. The rater does not know which statements are the discriminators and which are not. Scoring is done separately, usually by the personnel department. The obvious advantage of this technique is that the system, properly constructed, should reduce subjectivity. However, evaluators are often reluctant to use the method because they do not know how they are rating employees. In addition, considerable time is required to develop the discriminating statements properly. Finally, the system does not effectively support employee development needs.

Critical Incident. Like checklists, the critical incident technique involves preparing statements which describe employee behaviors. These statements, however, describe very effective or successful behaviors. Supervisors then keep a record during the rating period indicating if and when the employee exhibits these behaviors. This record can be used during the appraisal interview to discuss specific events with employees. The critical incident technique can be very effective for development purposes, but is not as useful for compensation or promotion decisions.

Forced Distribution. The forced distribution method asks the evaluator to rate employees in some fixed distribution of categories, such as 20 percent poor, 50 percent average, and so forth. This distribution can be done in sequence for different purposes, e.g., job performance and promotion potential. This technique is administratively simple, but there are several disadvantages to the use of a forced distribution. It is not useful in providing feedback to the ratee on his/her performance for use in developmental counseling. It often encounters resistance from the raters, who are uncomfortable assigning large numbers of subordinates to categories which are less than favorable. The use of forced distributions where the ratings of multiple groups must be combined may also lead to problems, because the groups may not all be seen as of equal "quality" by raters and ratees. For example, is an average performance in a highly selected work group the same as an average performance in a less elite group? If not, how can the difference be equitably dealt with in the system? Forced distribution is usually done to control ratings and to limit inflation.
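The mechanics of a forced distribution are part of its administrative appeal. The sketch below is an illustration only; the 20/50/30 split, the category names, and the scores are invented and are not a recommended configuration.

    # Minimal forced-distribution allocator. The 20/50/30 split and labels are
    # invented for illustration, not a recommended configuration.

    def force_distribution(scores,
                           split=((0.20, "poor"), (0.50, "average"), (0.30, "superior"))):
        """Rank employees by score, then fill fixed category quotas bottom-up."""
        ranked = sorted(scores, key=scores.get)          # lowest score first
        n, labels, start = len(ranked), {}, 0
        for fraction, label in split:
            count = round(fraction * n)
            for name in ranked[start:start + count]:
                labels[name] = label
            start += count
        for name in ranked[start:]:                      # rounding leftovers go to top
            labels[name] = split[-1][1]
        return labels

    group = {"Adams": 71, "Baker": 88, "Chase": 64, "Dietz": 93, "Evans": 80,
             "Field": 58, "Grant": 76, "Hayes": 85, "Irwin": 69, "Jones": 90}
    for name, label in sorted(force_distribution(group).items()):
        print(name, label)

Note that the quotas operate only within the group being rated; nothing in the computation supports comparing an "average" in one group with an "average" in another, which is exactly the cross-group problem raised above.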
Ranking. Ranking involves simply rating employees from highest to lowest against some criterion. The method carries about the same advantages and disadvantages as forced distribution, but is harder to do as the group size increases. Ranking also does not allow valid comparison across groups unless the groups share some of the individuals in common.

Paired Comparison. The paired comparison is a more structured ranking technique. Each employee is systematically compared one on one against each other employee in a defined group on some global criterion, such as ability to do the job. When all employees in the group have been scored, the number of times an employee is preferred becomes, in effect, his/her score. This method gives a straightforward ordering of employees; however, it does not yield information which might be helpful for employee development. Paired comparison, like ranking, does not allow comparison across groups.
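The paired-comparison tally can be expressed in a few lines. In the sketch below, the names and the stand-in preference judgment are invented for illustration; an employee's score is simply the number of head-to-head pairings in which he or she is preferred.

    # Minimal paired-comparison tally. Names and the prefer() judgments are
    # invented for illustration.

    from itertools import combinations

    def paired_comparison(employees, prefer):
        """Score = number of head-to-head pairings in which the employee wins."""
        wins = {e: 0 for e in employees}
        for a, b in combinations(employees, 2):
            wins[prefer(a, b)] += 1
        return sorted(wins.items(), key=lambda kv: kv[1], reverse=True)

    # Stand-in for the evaluator's one-on-one judgment on a global criterion.
    ability = {"Adams": 3, "Baker": 9, "Chase": 6, "Dietz": 7}
    prefer = lambda a, b: a if ability[a] >= ability[b] else b

    for name, score in paired_comparison(list(ability), prefer):
        print(name, score)  # Baker 3, Dietz 2, Chase 1, Adams 0

Because a group of n employees requires n(n-1)/2 judgments, the workload grows quickly with group size, which helps explain why the method is reserved for small, well-defined groups.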
Field Review. The field review approach uses an outside specialist, often someone from the personnel department, to conduct the evaluation. Both the manager and the subordinate are questioned about the subordinate's performance; then the specialist prepares the appraisal with managerial concurrence. The major advantage of the field review technique is that it reduces managerial time spent in the appraisal system and may provide more standardization in the appraisals. Managers may, however, delegate the appraisal function entirely to the personnel office, when in practice the technique is designed to be a collaborative effort.

Essay Evaluation. In this technique the evaluator writes an essay about the employee's performance. The essay is usually directed; that is, certain aspects of the employee's behavior must be discussed. Essays are often used in conjunction with graphic rating scales to explain a score. One disadvantage of this approach is that the writing ability of the rater can influence the employee's final rating if the evaluation is passed through the organizational hierarchy.

Objective Methods

Objective methods do not rely on the judgment of an evaluator and usually involve capturing direct information about an employee's proficiency or personal work statistics such as tardiness, etc. Objective methods are usually restricted to production-oriented and repetitive jobs, although they are also applied to jobs which are responsible for sales, profit, or other objective outcomes. Even though objective methods may not rely on subjective judgments, they are still not a panacea for performance appraisal, even for the jobs where they are applicable. This is because the objective data are most relevant to the assessment of current performance, but probably could not stand alone as a performance appraisal technique for promotion or development purposes. Judgment as to the relevance of the data still adds a level of subjectivity which is impossible to avoid. Two objective methods, proficiency testing and measurement against production standards, are discussed below.

Proficiency Tests. Proficiency tests measure the proficiency of employees at doing work and are basically simulations of the work a job entails. Typing tests and assessment center simulations are examples of this technique. Written tests can also be used to measure the employee's job-related knowledge. One disadvantage of the testing technique, in addition to those given generally above, is that some people are more anxious during a testing situation than in an actual work situation, and these people will be at a disadvantage if their anxiety affects their performance. A second disadvantage is that proficiency tests tend to measure what can be done as opposed to what is done daily on the job. For example, lack of motivation on the job may not be reflected in the test scores.

Measurement Against Production Standards. Production standards are levels of output which reasonably can be expected from an employee within a given amount of time. Standards can be set through sophisticated industrial engineering techniques, or they can be as simple as the average output of all employees in the given time. In any event, an employee's actual performance can then be measured against the standard rather than against other employees.

Other Methods

Management By Objectives (MBO). MBO, which can be a goal-oriented management tool, can be used either separately or simultaneously as a performance appraisal technique. When MBO is used as a performance appraisal technique, the supervisor and subordinate usually establish performance objectives, often in quantitative terms, for the rating period. At the end of the rating period, actual performance is compared to the objectives and scored. In an intuitive sense MBO is very appealing as a technique for performance appraisal, as it appears straightforward, can be used to convey broad organizational goals, and usually has a quantitative orientation. Many
organizations have adopted MBO or some form of goal setting for appraisal purposes, possibly for these reasons (Kane & Freeman, 1986; Eichel & Bender, 1984).

MBO as a performance appraisal technique is relatively new and therefore has not been studied extensively (for that purpose). The literature does indicate, however, some areas where MBO can be troublesome. MBO can be difficult as an appraisal technique if the appraisal is for promotion purposes, because MBO does not provide relative performance indicators (French, 1984). A second possible problem is that MBO tends to focus on goals which can be quantified: production rate, return on investment, etc. Such quantitative goals often do not include or address causal issues such as leadership, judgment, etc. In addition, quantitative organizational goals are rarely the result of the performance of an individual. Thus, the appraisal may incorporate factors beyond the control of the individual. For whatever reason, the literature indicates that MBO and, to some extent, goal setting as a performance appraisal technique may be decreasing in popularity (Schuster & Kindall, 1974; Kane & Freeman, 1986; Taylor & Zawacki, 1984).

Comparison of Methods

Table III-1 compares the various performance appraisal methods by purpose or goal of the performance appraisal and by cost in terms of development and usage. Examination of this table shows that there is no one method which would satisfy all three purposes: development, compensation allocation, and promotion. It also shows that costs associated with the various systems vary primarily as a function of the amount of information which must be collected or developed. Finally, the three employee comparison methods (ranking, paired comparison, and forced distribution) have the particular advantage/disadvantage of being useful for employee comparison within a group, but offering considerable barrier to comparing employees across groups.

[Table III-1, Comparison of Performance Appraisal Methods, is not legible in the source document.]
In the next part we will discuss conclusions from the literature and some possible implications for the Air Force.

IMPLICATIONS FOR THE AIR FORCE

The performance appraisal literature is frustrating in that it tends to dwell more on specific details of certain methods than on larger organizational issues. There are, however, some themes which appear relevant to the current OER considerations.

The Air Force is a huge and diverse organization which must recruit, train, develop, and retain its desired work force. In addition, through the up-or-out system, the Air Force must constantly pare away at each class of officers. With these thoughts in mind, the performance appraisal system and the information it can yield to the individual and the organization take on extraordinary importance. It is also clear, however, that attempts to increase accuracy in measurement, fairness in procedure, and information for developmental purposes must be assessed against the administrative realities and practicalities of a very large and somewhat decentralized organization.

The idea has been offered that the purpose of the performance appraisal system should drive the type of technique chosen, or at least the information collected. The Air Force, like most organizations, uses performance appraisal now for multiple purposes, but primarily for promotion. If the OER system is to be effective for the purpose of selection for promotion, then it should focus on that purpose and achieve its other, current purposes through alternative means.

A variety of performance appraisal methods was described, classified according to how performance is measured. Examination of these methods suggests that some methods may be more realistic for the Air Force than others. For example, the
employee comparison techniques of forced distribution, ranking, and paired comparison could not be used easily for promotion purposes, because once the rankings within a particular group have been established, there is no information to support comparisons across the ranked groups. The problem of equating rankings or distributions across work groups or commands does not have a simple solution and is one of the issues which contributed to the lack of acceptability of the 1974-1978 controlled distribution system.

Critical incident, BARS, and MBO are, or can be, extremely good techniques for employee development purposes. Each technique, however, carries some feature(s) which would seem to conflict with the administrative realities of such a huge organization as the Air Force. For example, BARS involves extensive development resources, and a single OER form could not be used across jobs. Critical incident requires the superior to keep a log on each subordinate throughout the rating period. MBO tends to focus on short-term quantitative effects and, like ranking, does not provide relative information across people, much less groups. The forced choice method appears to actually distinguish performance, but it is also associated with user resistance and high developmental costs.

Surprisingly, the method which may be the most feasible, given administrative workload and organizational culture, is the traditional graphic rating scale, which, in fact, the Air Force uses now. Rating scales provide relative information and can be made more or less specific through anchors or standards (such as the Air Force has now). Also, the performance factors can be used to transmit the emphases which the Air Force believes its officer corps should exhibit. The need may be not so much for a new technique to improve the OER system, but rather for control of the present technique to reduce inflation and improve the quality of performance information evaluated. Currently, the system works with
informal controls (such as the indorsement process) or with no controls (the tendency to firewall on the front side of the OER form). One means of controlling the technique is to influence the rater. This could be done by including "evaluation of subordinates" as a performance factor on the OER, by maintaining a history of the ratings given by the rater, or by some combination of these.

Evaluations can also be improved through rater training. This idea is very important if the Air Force wants to move away from the writing style and content habits currently in use. Raters can be given instruction on the type of behaviors (depending on technique) to be observed, as well as on the organizational desire to have some accurate means of distinguishing performance. Thus, the training would be two-pronged, focusing on 1) what and how to rate and 2) the need to rate accurately.

The Air Force currently does not include counseling as part of its overall performance appraisal system but has indicated a desire to do so. The literature seems to indicate that counseling is best done separately from the formal evaluation. Also, related to counseling, the literature points to participative goal setting as the most useful technique in actually changing employee performance and/or attitudes.

Peer evaluation is a promising source of information concerning leadership identification. Peer evaluation seems to be especially applicable in a military setting, where groups of people enter together and attend training schools, etc., where such evaluations could be conducted. Peer evaluations should only be used as a supplementary leadership indicator, however, as there is substantial opportunity for personal change over a 12-20 year career.

The most fundamental implication appears to be the need for organizational responsibility toward a performance appraisal system. In order to be useful, a
performance appraisal system cannot be an independent managerial tool, but rather must be a process which is an organic part of the organization in which it is operating. Organizational responsibility toward a performance appraisal system encompasses:

o stating the specific purposes of the performance appraisal;

o defining those behaviors or performance factors which the organization has established as being necessary to its mission and culture; and,

o supporting the performance appraisal system through education of the workforce and consistent enforcement of performance appraisal guidelines from the highest levels of the organization to the lowest.

PERFORMANCE APPRAISAL: FINDINGS FROM THE PRIVATE SECTOR

This section discusses the findings of a series of telephone interviews with representatives of large, well-known industrial organizations. The purpose of the interviews, which were conducted during the months of June and July 1987, was to obtain data about current performance appraisal practices and methodology in the private sector.

Individuals from fourteen organizations were interviewed using a semi-structured interview approach. The interviews were designed to acquire information about the following:

1. The purpose(s) of the performance evaluation system;
2. Process issues (who rates, ratings review, timing, etc.);
3. Rater training;
4. Type of system;
5. Feedback; and
6. Control mechanisms.

SAMPLE

Of the fourteen corporations covered, ten belong to the Fortune 100 list and the remaining four are in the Fortune 500 group. A special effort was made to contact organizations which were comparable to the United States Air Force in terms of budget and personnel dimensions, and this was successfully accomplished. The fourteen organizations are located in the eastern (9) and midwestern (5) regions of the country. Following is a breakdown of the organizations by industry sector:

Aerospace - 4
Electric/Electronics - 6
Chemicals - 3
Pharmaceutical - 1

The interviews were conducted with individuals who represented the human resource management function of their organizations and were knowledgeable of and/or responsible for the performance appraisal system for exempt employees.

FINDINGS

All the organizations had operational performance appraisal systems in place, and, with one exception, all were quite systematic in their approach to evaluating job performance. The findings about these performance appraisal systems will be discussed in aggregate and by the following categories:

1. Purpose(s);
2. Type;
3. Process (who, what, when);
4. Feedback;
5. Rater training;
6. Review; and
7. Controls.

Purpose(s)

In general, all performance appraisal systems were clearly compensation focused; i.e., the primary purpose of performance appraisals was for short-term compensation and salary administration issues (merit increases, incentives, etc.). The purposes of the appraisal systems in these private sector organizations were few (the maximum number of purposes reported was three) and clearly defined. Specific purposes were mentioned (all of which were secondary in importance compared to the short-term compensation purpose), among which are the following: promotion/succession planning, development, monitoring of performance, and feedback.

Type

Ten of the fourteen corporations reported the use of goal-setting/MBO-type performance appraisal systems, with varying degrees of flexibility. For example, some organizations described their systems as "straight" MBO procedures, while others reported that they employed a "loose" version of MBO.

Process

This section will discuss who conducts the rating, the things being rated, and the timing and frequency of the performance evaluations.
In nine of the fourteen organizations, the immediate supervisor was responsible for conducting the performance appraisal. In three organizations, the evaluation was performed by the direct supervisor and the rater's supervisor. In one organization the appraisal had two parts: one was completed by the ratee and the other by the direct supervisor. In the remaining organization, the rating was prepared by a group of directors.

All fourteen participants in the interview process reported that employees are rated against performance standards, rather than on a comparison with peers. This is an important distinction because, as shall be discussed later in the "Implications" section, comparison against peers is used for the most part for promotion/succession planning purposes, while ratings against performance standards are used almost exclusively for compensation-related activities.

The findings also yield a very interesting dichotomy of performance standards:

1. Results-oriented standards, which measure the results or output of the employee being rated. Examples would be sales or profit figures for the rating period.

2. Behavioral standards, which rate the employee's work behavior rather than results. The rating factors on the Air Force OER are examples of behavioral standards.

Again, there are important implications in terms of the purpose for which each set of standards is used, since results-oriented standards tend to be used for the immediate purpose of determining short-term compensation matters, while behavioral standards are instrumental in promotion/succession planning decisions.
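To make the dichotomy concrete, the sketch below models a single appraisal record carrying both kinds of standards. The field names and values are invented for illustration and mirror no particular company's form.

    # Illustrative record carrying both kinds of performance standards.
    # Field names and values are invented for the example.

    from dataclasses import dataclass, field

    @dataclass
    class Appraisal:
        # Results-oriented standards: measured output vs. an agreed target.
        results: dict = field(default_factory=dict)    # e.g. {"sales": (actual, target)}
        # Behavioral standards: 1-5 scale ratings of work behaviors.
        behaviors: dict = field(default_factory=dict)  # e.g. {"planning": 4}

        def results_met(self):
            return {k: actual >= target
                    for k, (actual, target) in self.results.items()}

    record = Appraisal(
        results={"sales_mm": (4.2, 4.0), "new_accounts": (11, 15)},
        behaviors={"planning": 4, "communication": 3, "leadership": 5},
    )
    print(record.results_met())   # {'sales_mm': True, 'new_accounts': False}
    print(record.behaviors)

The two components can then feed different decisions: the results entries a compensation review, the behavioral entries a promotion/succession review.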
Performance appraisals are conducted annually in thirteen organizations (every six months in one organization). More than 50% of the interviewees reported that the performance appraisal cycle is driven by the merit increase/salary administration schedule. (This reinforces the notion that performance appraisals in the private sector are primarily applied to compensation determinations.)

The timing of the performance appraisals is also a critical issue. Over 50% of the interviewed organizations execute the appraisals for all their employees during the same time period (usually at the end of the fiscal year). This is not an unexpected finding given the prevalence of MBO-type systems. In an MBO system -- at least conceptually -- individual goals are derived from the unit's yearly goals, the unit's goals are themselves derived from the division's yearly goals, and so forth. The goals at all the different levels of an organization are ultimately derived from the organization's overall goals; logic and efficiency dictate that accomplishment of goals at all levels be assessed simultaneously.

A related process issue refers to the length of time that appraisal forms are kept in the individual employee's record. For the present sample, the performance appraisal forms remain in the employee's record for an average of approximately three years. In one case, only the current appraisal form is part of the record, but the form includes a section on performance history.

Feedback

All fourteen organizations -- with the exception of one participant who indicated that this was a problem area -- encourage and emphasize feedback as an important component of the supervisor-subordinate relationship. In most of the organizations, rater and ratee meet at the beginning of the yearly cycle for a goal-setting exercise. The ratee usually signs off on a list of potential goals or accomplishments.
Two organizations have an "areas for improvement" section in the appraisal form, as well as a self-assessment section. In one instance, it was reported that feedback/coaching was one of the main performance factors on which supervisors themselves were rated.

Rater Training

Twelve of the fourteen organizations require and provide formal rater training for their supervisors. One person interviewed indicated that rater training was a problem area, and another reported that informal training was provided to their supervisors. The majority of the organizations place a strong emphasis on rater training, including the distribution of written materials on the subject. In one instance, outside consultants were hired to provide formal training to supervisors. Several of the organizations emphasize the goal-setting and feedback aspects of performance appraisal.

Review

In eight of the fourteen organizations the performance appraisal is reviewed by the rater's supervisor. In four cases, the appraisal is reviewed by a group (i.e., a group of supervisors, the central office, or the employee relations department). One organization did not provide information on this issue. One participant reported that there are three levels of review for performance appraisals when it comes to making promotion decisions.

Controls

Eight of the fourteen participants are currently employing a forced distribution scheme, with varying degrees of flexibility, in order to control the rating process, especially the problem of inflation. Two corporations are considering the implementation of a forced distribution process, while the remaining four do not have a control process at this time. In almost all cases, there is a very strong tendency to
carefully monitor performance ratings. (One of the four organizations without controls, interestingly enough, has encountered a central tendency rather than an inflation problem.) Several of the organizations with forced distribution schemes have defined a minimum group size at which the forced distribution will be implemented (e.g., 100 employees). In addition, the distributions conform to various shapes, although the tendency is to have small groups at the higher and lower extremes, plus a large group in the middle. Whether or not there is a forced distribution process in operation, performance ratings in general are very carefully monitored at levels several times removed from the rater, for promotion/succession planning purposes.

IMPLICATIONS

The purpose of this section is to discuss the implications of the private sector findings for the Air Force's OER system. The potential impact and applicability of the key features of performance appraisal systems in the private sector will be examined. This will be accomplished following the format of the previous section, i.e., by findings category.

Purpose(s)

Perhaps the single most important finding in the entire interview process was the fundamental difference between the primary purpose of performance appraisal in the private sector and in the United States Air Force. The primary purpose of performance appraisals in the private sector is to make short-term compensation-focused decisions. An OER in the Air Force has far-reaching promotion and career implications for the individual officer. This fundamental difference represents a major obstacle to the
application of private sector practices in the Air Force. However, several key features of appraisal systems in the industrial world can be successfully incorporated into the Air Force setting.

A second issue relates to the number of purposes for which performance ratings are used. Air Force regulations cite no fewer than six purposes for the current OER. It will be recalled that three was the maximum number of purposes reported by the private sector interview participants. A useful suggestion would be to reduce the number of purposes for which the OER is used in the Air Force, or at least to specify its primary purpose(s).

Type

The prevalence of goal-setting/MBO systems in the private sector was not surprising, given the compensation focus of the systems. Several features of an MBO-type system -- clear performance objectives, increased communications between rater and ratee, continuity, goal orientation -- could be considered for possible implementation by the Air Force. However, it should be kept in mind that without an organization-wide commitment to MBO, isolated features of the system should be adopted only after careful consideration.

Process

In all fourteen corporations the immediate supervisor was directly involved in the performance ratings. Significantly, the rater was removed from the potential-for-promotion decision. The practice of having the rater provide only performance ratings (without getting directly involved in the promotion decision) is an issue for consideration by the Air Force.
Regarding the criteria against which individuals are evaluated, the usual practice in the private sector companies is to rate the employee against a series of performance standards. Comparison with peers, on the other hand, is used for succession planning/promotion purposes, and the rater is usually not directly involved in this process. As already mentioned, the private sector sample tended to use two sets of performance standards -- results-oriented and behavioral. The Air Force could consider adopting two sets of performance standards, with the results-oriented standards applied to duty performance ratings and the behavioral standards used for future potential/promotion determinations at a higher level.

The timing of the appraisal is another process issue which was explored in the interviews. Most organizations conduct all of their appraisals at the same time. This is a good practice, but it probably cannot be easily implemented in the Air Force. However, the Air Force could consider the option of incorporating all OERs into the permanent record at the end of the year.

A final process issue refers to retaining the appraisal forms in the individual's record. The Air Force should consider whether all OERs should remain in the officer's selection record (as current practice dictates) or whether some limit should be imposed.

Feedback

Feedback is an important aspect of performance appraisal systems in the private sector. Formal feedback mechanisms could be established in the Air Force, with an "areas for improvement" section. This feedback/coaching exercise should probably be established as a parallel process, rather than forming part of the OER form. Informal and interim feedback/coaching can also be actively encouraged by evaluating the raters on this managerial aspect of their officer duties.
Rater training is a key feature of appraisal systems in the private sector. Formal and specific courses on performance appraisal are available, and in most cases required, in private sector organizations. Training programs emphasize different things (e.g., providing feedback, goal-setting, use of rating scales) depending on the kind of system being used. A stronger emphasis on training officers in performance appraisal matters -- as an integral function of their duties and responsibilities -- is recommended.

In virtually all the corporations that were interviewed, performance ratings are reviewed at a higher level (usually by the rater's supervisor). This review is conducted for the purpose of examining the correctness of the performance ratings per se. In some cases, higher level reviews are conducted with different objectives, i.e., promotion and succession planning. A similar process could be established, for example, at the Wing Commander level of the Air Force.

Controls

This is a particularly interesting topic given the evolution and history of the United States Air Force officer performance evaluation process. A similar evolutionary insight was gained from the present set of interviews, as virtually all participating organizations had either abandoned, implemented, or considered the implementation of a control mechanism. In addition, the controls issue in these large corporations, as in the Air Force, goes to the heart of the most pressing and evident performance appraisal problem of the OER system -- the inflation of ratings.

Ten of the fourteen private sector organizations either had implemented or were considering the implementation of a control mechanism for performance ratings. Even
though the four remaining organizations were not currently using formal control mechanisms, strong monitoring and training programs in these companies were making a significant contribution to a healthy variance in performance ratings.

From a more technical perspective, it was interesting to note that in the interview sample it was common practice to configure the forced distribution with small groups at the extremes and a large group in the middle (which in some cases consisted of two or three sub-groups). In hindsight, it seems that the "22-28-50" configuration which was implemented in the United States Air Force in 1974 was counter to the way in which most programs are designed.

An additional technical issue regarding forced distribution schemes refers to the minimum number of individuals on which the distribution is imposed. In the current interview sample, this minimum number ranged from 50 to 100. This issue calls to mind the often cited example of the Thunderbird pilots. Applying a forced distribution to the six (eight if you count the two alternates) most accomplished pilots in the Air Force is not a reasonable proposition. Having a minimum group size of 50 to 100, for example, would allow for more equitable and meaningful distinctions between higher and lower performers, as sketched below.
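To make the mechanics concrete, the following is a minimal sketch of such a threshold-guarded forced distribution, assuming a 22-28-50 style split. The function, the 50-person floor, and the data shapes are illustrative constructions, not a description of any surveyed organization's actual system.

```python
# Minimal sketch: apply a 22-28-50 forced distribution only when the
# rated group meets a minimum size. Names and thresholds here are
# illustrative, not drawn from any surveyed system.

def force_distribution(scores, top=0.22, middle=0.28, min_group=50):
    """Partition {name: score} into top/middle/bottom blocks.

    Returns None when the group is too small for a forced distribution
    to be meaningful (the Thunderbird problem described above).
    """
    if len(scores) < min_group:
        return None  # rate against standards instead of against peers
    ranked = sorted(scores, key=scores.get, reverse=True)
    n_top = round(len(ranked) * top)
    n_mid = round(len(ranked) * middle)
    return {
        "top": ranked[:n_top],
        "middle": ranked[n_top:n_top + n_mid],
        "bottom": ranked[n_top + n_mid:],
    }
```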
PERFORMANCE APPRAISAL: FINDINGS FROM THE OTHER SERVICES

Early in this study, data were collected from other uniformed services to learn how these organizations have responded to the challenges of conducting performance appraisals of their officers. These data were gathered in a series of interviews with representatives of the Army, Navy, Marine Corps, and Coast Guard. In addition to these uniformed services, an interview was held with representatives of the Department of State concerning performance appraisal of foreign service officers. (The study team judged that the conditions of employment for foreign service officers are sufficiently like those for Air Force officers to warrant inclusion of this information in the analysis.)

In each service, these interviews were held with representatives of the office in the service headquarters having proponency for policy toward, monitoring of, and quality control of the officer evaluation process. In each case, the person interviewed was the officer in charge, generally in the grade of colonel/GM-14, except for the Department of State, where the interviewee was the Deputy Director. (It is interesting to note that in two services, the Army and the Navy, the individual in charge of officer evaluation reporting is a civilian employee.) Each service furnished copies of its basic instructions for OER preparation, the forms used, and supporting pamphlets and materials. In the course of each interview, questions were asked to learn the issues each service has faced in developing a meaningful evaluation system. Each service was cooperative and without exception provided candid responses to our questions.

In addition to United States Government entities, data were collected from the Embassy of Canada on the evaluation of Canadian Armed Forces officers. It was not feasible to interview the Canadian officials having responsibility for operation of the OER system. For that reason, because there is nothing uniquely different in the Canadian OER system, and because the Canadians use a closed system, this information will not be included in the subsequent portions of this section of the report.

The remainder of this subsection will consist of brief discussions of the systems for officer evaluation used in each service, followed by a summary showing the central tendencies among these systems compared and contrasted to the Air Force OER system.
United States Army

The Army OER system uses a form and a procedure that were substantially revised in 1979 in response to unacceptable inflation in ratings. The preceding form had been in use for six years, and had also been introduced in response to inflation. Research had suggested that the strongest pressures to inflate ratings were placed on the immediate supervisor of the ratee. Therefore, the essence of the current system is to shift the responsibility for applying meaningful discrimination from the rater to the senior rater (the final indorser), who is typically the rater's supervisor.

Purpose

The purposes served by the Army OER system include the following:

1. Influence the selection of future leaders through maximum input from the field.

2. Improve the linkage between individual and corporate performance (modified Management By Objectives).

3. Strengthen the chain of command by bonding the ratee to the rater and encouraging continual, two-way communication between senior and subordinate.

4. Enhance professionalism by displaying the standards of professional competence and ethical behavior which Army officers are expected to display (teach through use of the form).
Process

The ratee must have been under the supervision of the rater for not less than 90 days and the senior rater for not less than 60 days. The OER is submitted under the following general conditions:

1. Annually, based on the date of the last report;

2. When there is a change in the ratee's principal duty (to include PCS);

3. When departing on extended temporary duty or long term schooling;

4. When there is a change of rater;

5. To complete the record when the ratee is scheduled to meet a promotion board (in or above the zone) and has not had a report in the current job.

The process begins at the start of the rating period, when ratee and rater are required to hold a face-to-face meeting to develop a duty description and set major performance objectives to be accomplished during the rating period. This information is recorded on the OER support form (see Appendix F). The rater is the ratee's supervisor. Throughout the rating period the ratee and rater are expected to meet periodically to assess whether the duty description and performance objectives are adequate. The rater is expected to coach the ratee on his/her personal and professional development.

At the end of the rating period the personnel support center initiates the OER preparation by forwarding the OER form to the ratee, who validates the rating chain and the administrative information thereon. The ratee then writes a description on the
support form of the significant contributions he/she has made in the job during this period and forwards the OER form and the support form to the rater. The rater and intermediate rater (if any) evaluate the performance and potential of the ratee on the OER form. They also provide comments on the OER support form and forward both to the senior rater. (An intermediate rater is used only when there is an officer in the chain of supervision between the rater and senior rater. This occurs most often when the rater's supervisor does not meet the grade test to qualify as senior rater.) The senior rater provides an independent evaluation of the ratee's potential and, in most cases, the final chain-of-command review of the OER. When the senior rater has completed the OER, the support form is returned to the ratee. The OER is dispatched to the Military Personnel Center. A copy of the OER is given to the ratee at this time.

At the Military Personnel Center, the senior rater's potential evaluation is entered into the automated personnel record and his/her rating history for that grade is recomputed. A profile of this rating history is pasted onto the OER next to the senior rater's potential evaluation of the ratee. The OER is then entered into the official military personnel file.
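The profile mechanism can be sketched simply, assuming the profile is just a running count of each 1-9 potential rating a senior rater has awarded within a grade. The data structures below are illustrative and do not describe the Military Personnel Center's actual implementation.

```python
# Rough sketch of the senior rater profile: each new 1-9 potential
# rating is folded into that senior rater's history for the grade, and
# the recomputed histogram is what gets attached to the OER.
# Illustrative only.

from collections import Counter, defaultdict

profiles = defaultdict(Counter)  # (senior_rater_id, grade) -> counts

def record_potential_rating(senior_rater_id, grade, rating):
    """Record one rating and return the recomputed profile (1..9)."""
    if not 1 <= rating <= 9:
        raise ValueError("potential rating must be 1-9")
    key = (senior_rater_id, grade)
    profiles[key][rating] += 1
    return [profiles[key][r] for r in range(1, 10)]
```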
Form

One form is used for all officer evaluations, warrant officer through major general. An example of the current Army OER form is displayed at Appendix F. The rater prepares the duty description, using the OER support form. He/she rates fourteen performance factors on a scale of 1 to 5 and may write optional comments on professional ethics. The rater also rates overall performance (scale of 1 to 5) and potential for promotion (scale of 1 to 3). Finally, the rater provides separate narratives on performance and on potential. The intermediate rater provides comments on performance and potential, but does not evaluate on any numeric scale. The senior rater evaluates the potential of the ratee for promotion, considering all other officers of that grade in the Army, on a scale of 1 to 9. The senior rater also completes a narrative section that focuses mainly on potential but which may refer to performance by the ratee or to the comments or ratings of the rater or intermediate rater.

Discriminating Factors

The results of surveys of Army selection board members show that the most useful discriminator on the OER is the senior rater's evaluation, taken as a whole (that is, the combination of the potential rating, the senior rater's rating profile, and the narrative). Other factors from the OER which the selection boards find useful in discriminating among officers are (in descending order of importance): the rater's narrative on potential, the rater's narrative on performance, and the duty description.

Feedback

In the Army system, the sources of feedback to the ratee are the OER support form and the face-to-face discussions which are mandated by Army regulation. Compliance with the system was not as good as was desired, and in 1985 a provision was added which requires ratee and rater to certify, by initialing the form, that the discussion required at the start of the rating period had occurred. Written feedback at the end of the rating period (using the support form) is optional. The ratee receives a copy of the completed OER, but the feedback is diluted by the fact that the senior
rater's profile is not attached and by the widespread inflation in rater evaluations. The ratee can review the official file, which includes the senior rater profiles on his/her OERs, by application to the Army Military Personnel Center.

Quality Control

The essence of the Army's quality control system is an attempt to influence the behavior of the approximately 10,000 senior raters through interventions initiated by the Military Personnel Center. To date, these interventions appear to be successful, as the rate of compliance by senior raters with the guidance is above 85 percent.

The most stringent control over senior rater behavior involves placing a form in his/her official military personnel file which displays that senior rater's rating history. This history reveals at a glance whether the senior rater is complying with the spirit of the system -- that is, creating a distribution of scores, over time, along the scale of potential for promotion. This information is available for promotion board review, thus placing those senior raters who inflate ratings in jeopardy of their own future promotions.

Second, the Army Military Personnel Center has a senior rater contact program through which it hopes to provide continuing education and training in the system. One of the themes of this education program is the concept of a "center of mass." Senior raters are urged to select one or two blocks on the nine-point scale (other than the top one) where they will place typical, high-performing officers, leaving room to rate exceptional officers on each side of this center of mass. The rationale provided to convince senior raters to use this approach is that they should want to:

1. Leave space to identify the very best;

2. Not ruin the careers of the others; and

3. Not de-motivate the officer corps.
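As an illustration of how such guidance might be monitored, the sketch below computes the rating-weighted mean of a 1-9 profile. Treating "center of mass" as a weighted mean, and the flagging threshold, are illustrative assumptions for this sketch rather than Army policy.

```python
# Illustrative inflation check on a 1-9 profile (list of counts for
# ratings 1..9). The weighted-mean reading of "center of mass" and the
# 8.0 alarm threshold are assumptions made for this sketch.

def center_of_mass(profile):
    total = sum(profile)
    if total == 0:
        return None  # no ratings recorded yet
    return sum(r * n for r, n in enumerate(profile, start=1)) / total

def looks_inflated(profile, threshold=8.0):
    com = center_of_mass(profile)
    return com is not None and com >= threshold
```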
Even the most conscientious senior raters are prone to inflation in score (however, it is the Army experience that few senior raters are attempting to game the system). A feature of the senior rater contact program is to offer a senior rater the opportunity to restart the profile if he/she decides that it has become so inflated as to obscure meaningful evaluations. The Army is also experimenting with an Army-wide restart (in warrant officer grades) and will observe the effect on inflation control.

Promotion boards are given a briefing by the OER Evaluation Office. The response of the boards to the senior rater profile technique, as measured by a confidential survey procedure, is quite positive. In fact, the boards have asked for rater profiles in addition; however, the evaluation staff doubt that rater compliance would be high enough to make this step meaningful.

United States Coast Guard

The Coast Guard OER system was substantially revised in 1984, and the resulting process and form are in many respects like those of the Army. The Coast Guard system protects the ratee-supervisor relationship by shifting the burden of discrimination to the next higher level (the reporting officer). Also, the most useful discriminator is the overall potential evaluation, for which the reporting officer's profile is maintained and added to the report at Coast Guard Headquarters. A distinguishing feature of the Coast Guard OER system is the degree of responsibility placed on the ratee. He/she is specifically tasked to clarify the duty requirements, to obtain feedback and counseling, and to manage his/her performance to meet or exceed the standards.

Purpose

The purposes served by the Coast Guard OER system include the following:
1. To provide information for central personnel management decisions, especially promotions and assignments.

2. To set the standards for officer character and performance.

3. To prescribe a common set of values by which Coast Guard aspirations for its officer corps can be described.

4. To teach each officer what is expected of him/her.

5. To provide a means by which officers can receive feedback about how well they are measuring up to the standards.

Process

The OER is submitted under the following general conditions:

1. Annually, batched by grade, for officers in grades lieutenant commander (0-4) through captain (0-6); semi-annually, also batched by grade, for officers in grades lieutenant (0-3) and below;

2. Transfer of the ratee;

3. Transfer of the reporting officer (Note: not the supervisor, but the supervisor's supervisor.);

4. Promotion of the ratee (Note: there are different forms for each grade, with different performance standards).

The process is initiated by the ratee, who is required to verify the administrative information on the OER form and forward it to the supervisor 14 days before the end of the rating period. The ratee may also record the duty description and a list of accomplishments during the rating period on the optional OER support form and
forward it along with the OER. (This OER support form is mandatory in the case of ensigns and lieutenants (junior grade). For these officers there are mandatory face-to-face meetings with their supervisors at the beginning and end of each rating period, at which times the OER support form is used.) Copies of these forms are displayed in Appendix F.

The supervisor evaluates the ratee's performance of duties, interpersonal relations, leadership, and communication skills using graphic scales and narrative. He/she also prepares the duty description. The supervisor completes the optional OER support form and forwards the OER and support form to the reporting officer.

The reporting officer is normally the supervisor's supervisor. He/she may be in the same grade as the ratee provided they are separated by two year groups. The reporting officer evaluates the ratee on a set of personal traits and a set of factors under the title "Representing the Coast Guard," using graphic rating scales and narrative. The reporting officer comments on overall leadership and potential for promotion and rates on an overall potential scale (range of 1 to 7).

The report is reviewed by a third officer, normally the reporting officer's supervisor. Only Coast Guard officers may act as reviewing officers. The reviewer's responsibility is to ensure that the report is consistent and that it reflects the Coast Guard standards for officer evaluation.

At Coast Guard Headquarters, the OER is reviewed for administrative accuracy and internal consistency. Unsatisfactory reports are returned for correction/revision. The reporting officer's potential rating is entered into the automated personnel record and his/her rating history for that grade is recomputed. A profile of that rating history is pasted onto the record copy of the OER, just below the reporting officer's evaluation for potential.
When accepted as correct at Headquarters, a copy of the report, without the rating profile, is returned to the ratee.

Form

A separate OER form is used for each officer grade. (Appendix F displays the form used for lieutenant commanders.) A distinguishing feature of the Coast Guard OER is that the evaluation standards for each rated factor are printed on the form; thus the need for a separate form for each grade. For each factor there is a brief description of what is to be rated and a scale of 1 to 7. For values 2, 4, and 6, a description is provided of the behaviors corresponding to those values on the scale. This is a variant of the behaviorally anchored rating scale described in Appendix B. The scales are so constructed (and the instructions emphasize) that a value of 4 describes the "typical, high performing Coast Guard officer" of that grade. It is expected (and, to date, experienced) that 70 percent of officers will be found in the range of 3 to 5 on the scale for most factors. Raters are encouraged to use the "not observed" block, if appropriate. (It should be noted that the instruction does not mandate minimum periods of observation for either supervisors or reporting officers.)
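A behaviorally anchored scale of this kind is straightforward to represent, as in the minimal sketch below. The anchor texts at 2 and 6 are invented placeholders; only the midpoint idea reflects the form (a verbatim midpoint description for one factor is quoted in the Leadership Skills discussion that follows).

```python
# Sketch of a Coast Guard style anchored scale: 1-7 with behavioral
# anchors printed at 2, 4, and 6, where 4 describes the typical, high
# performing officer. Anchor texts below are invented placeholders.

from dataclasses import dataclass, field

@dataclass
class AnchoredScale:
    factor: str
    anchors: dict = field(default_factory=dict)  # {2: ..., 4: ..., 6: ...}

    def validate(self, value):
        """Accept an integer 1-7, or None for 'not observed'."""
        return value is None or value in range(1, 8)

scale = AnchoredScale(
    factor="Evaluating Subordinates",
    anchors={
        2: "Evaluations are late, inflated, or returned for correction.",
        4: "Prepares timely, fair, accurate evaluations ...",  # midpoint
        6: "Sets the standard; rejects inflated reports from others.",
    },
)
assert scale.validate(4) and scale.validate(None) and not scale.validate(9)
```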
The supervisor is responsible for describing the duties performed. He/she also evaluates the ratee in four sections:

1. Performance of Duties Section. Consists of a narrative and five performance factors rated on the scale described above.

2. Interpersonal Relations Section. Consists of a narrative and two factors measuring how an officer affects or is affected by others.

3. Leadership Skills Section. Consists of a narrative and four factors. One of these factors is entitled Evaluating Subordinates. This factor is described as follows: "The extent to which an officer conducts, or requires others to conduct, accurate, uninflated, and timely evaluations for enlisted, civilian and officer personnel." The behavior identified with the midpoint on this scale is described as follows: "Prepares evaluations which are timely, fair, accurate, and consistent with system standards. Required narratives are concise, descriptive, and contribute to understanding subordinates' performance and qualities. Seldom gets reports returned for correction/adjustment. Provides constructive counselling where needed. Does not accept inaccurate, inflated, or poorly prepared reports from others."

4. Communication Skills Section. Consists of a narrative and three factors which measure the officer's ability to communicate in a positive, clear, and convincing manner.

The reporting officer may comment on the supervisor's evaluation. He/she then rates the officer in two sections:

1. Personal Qualities Section. Consists of a narrative and five personal traits related to the officer's character.
2. Representing the Coast Guard Section. Consists of a narrative and four factors which measure an officer's ability to bring credit to the Coast Guard through appearance and actions.

The reporting officer writes a narrative section which describes the ratee's demonstrated leadership ability and overall potential for promotion and command. He/she then rates the overall potential on a scale of 1 to 7. There is a space on the form for a label (added at Coast Guard Headquarters) showing the reporting officer's rating history for officers of this grade.

Discriminating Factors

The Coast Guard Evaluation Office reports that the current system is not experiencing substantial inflation. Therefore, the selection boards can review the reports on their face value without the need to search for hidden discriminators. However, the promotion board procedures are informal and are kept confidential. The Evaluation Office does not have data showing what sections of the OER are most important to these boards. The majority of the OER is oriented toward performance description rather than evaluation. However, it is prudent to assume that the reporting officer's potential rating, when reviewed in the light of his/her rating profile, is a significant factor.

Feedback

The Coast Guard places responsibility on each reported-on officer to seek feedback and counselling. The OER support form is but one means of gaining such feedback, and use of this form is optional for grades above lieutenant (junior grade) (0-2). The OER form provides substantial information to the ratee; and, since inflation is not widespread, the majority of reports provide useful information to the ratees on their job performance. The OER copy furnished to the ratees does not contain the reporting
officer's rating profile, but the system is open, and ratees can view this profile at Headquarters or write for a copy.

Quality Control

The central themes in the Coast Guard quality control process for the OER system are extensive review of reports at all levels and involvement of the chain of command in supervising the rating chain. The review process starts at the local level, where reports are reviewed first for administrative accuracy and then for excessive inflation. (Note that periodic reports on Coast Guard officers are batched, so that all reports on officers of a certain grade are reviewed at one time.)

At Coast Guard Headquarters, reports are routed through the assignment officers, who screen the reports for administrative accuracy and for internal consistency. In particular, the reports are checked to ensure that the narrative comments support the numeric ratings in each section. Reports containing administrative errors or inconsistent ratings are referred to the Evaluation Office. Many of these reports are returned to the rating chain for correction with an analysis of the errors or inconsistencies. Returned reports with inconsistent ratings are usually referred to the reviewing officer for resolution.

Compliance with this quality control program has been high. In recent months, 90 percent of rejected reports have been returned to Headquarters with additional narrative and, surprisingly, 50 percent with changed numeric ratings. It has not yet been necessary to adopt any special interventions focused on the reporting officers. The strong support of the chain of command has been adequate to control inflation. A strength of the Coast Guard OER system is that the officer corps
accepts it. This acceptance has been developed by and is maintained through a strong education program.

United States Navy

The current fitness reporting system was instituted in 1974 and has not changed substantially since then. The system is well accepted by Navy officers, particularly reporting seniors, who think they understand the system and believe that they are communicating well with selection boards. A distinguishing feature of the Navy fitness report (FITREP) is that there is only one evaluator, and only one signature appears on the form. This evaluator, the reporting senior, is normally the officer designated in law as the commander. Thus, for most Navy officers the FITREP is prepared not by his/her supervisor but at a higher level. Another distinction evolving from this procedure is that the preparation of FITREPs is an important function of command and, at least in theory, more responsive to direction from the Navy leadership.

Purpose

The prime use of the FITREP is to support the decisionmaking process of promotion selection boards, and reporting seniors view it so. A secondary purpose that the Navy views as valuable is to support judgments about future assignments. The instruction on preparation of the FITREP cites ten purposes, among which is counseling of junior officers. These other purposes are not viewed as particularly useful; and counseling, especially, is not done well in conjunction with the FITREP.

Process

The FITREP is prepared annually for all officers except lieutenants (junior grade), who are evaluated twice a year. FITREPs are prepared in batches by grade so that all
FITREPs for any particular grade are submitted at the same time. The FITREP is also submitted upon the transfer of the reported-on officer or the reporting senior.

The process begins thirty days prior to the end of the reporting period, when the ratee has the opportunity to provide information to the reporting senior about the performance of his/her duties during the reporting period. There is no specified format for this information, and the reporting senior is not required to include any of it in the FITREP. Also during this period, the ratee's supervisory chain provides information to the reporting senior. This also is an informal procedure, not specified in the instruction.

At the end of the rating period the reporting senior completes the FITREP. He/she enters a duty description and a narrative describing the job performance and potential for promotion. The reporting senior evaluates the ratee on twelve performance factors and six personal traits using a scale of 1 to 9. He/she also indicates whether or not the ratee would be desired as a subordinate in each of five types of possible future duties, using the same scale. Finally, the reporting senior makes a promotion recommendation. The reporting senior indicates the rank of the ratee (1 of 3, 3 of 3, etc.) among those officers of any particular grade recommended for early promotion.

There is an appraisal worksheet for use by reporting seniors in preparing the FITREP. In contrast to the procedures of the other services, the worksheet is not used by the ratee and remains in the reporting senior's possession when the FITREP has been completed. The completed FITREP is forwarded to the Navy Military Personnel Command without further review. A signed copy of the FITREP is given to the ratee. In the case of junior officers (0-3 and below), the copy is given at the time of completion. For other officers the copy may be given to the ratee at the time the relationship is severed.
Form

An example of the Navy FITREP form is displayed in Appendix F. The FITREP form requires the use of an optical character reader font. All but the narrative portions are entered into the automated personnel system. Subsequently, this system produces numeric summaries of each officer's performance record for use by selection boards.

Following the administrative information, there is space for a description of duties assigned. There is then space for the reporting senior to rate on twelve performance factors and six personal traits. The reporting senior also indicates the desirability of having the ratee assigned under his supervision in five types of jobs (command, operational, staff, joint/OSD, or foreign shore). Finally, there is space for an overall performance evaluation. All of these are rated on a scale of A to I (1 to 9), "A" being the highest. In the use of the overall performance evaluation (labeled "mission contribution"), the reporting senior is required to show the distribution of ratings for all officers of that grade being evaluated at that time.

Finally, the form provides space for the reporting senior to comment on the promotion potential of the ratee. The scale is 1 to 3 (promote early, promote, do not promote). The reporting senior is required to show the peer distribution among all officers of the grade given a rating of "promote early" (1 of 3, 3 of 3, 3 of 6, etc.). However, this peer distribution is used only for officers in grades lieutenant commander through captain (0-4 through 0-6).

Discriminating Factors

Navy promotion board procedures have a bearing on the relative usefulness of various ratings on the form and deserve a brief summary. In contrast to the Air Force and Army, where every panel member reads every file and records a vote, in Navy and
Marine Corps boards selection is by iterative voting by the panel, based on briefings given by one of the panel members. In each iteration, each panel member is given a small number of files (about five) for detailed review. After this review, the panel assembles in a briefing room where each panel member briefs his files to the other panelists, using visual aids consisting of numeric summaries of all previous FITREPs and qualitative summaries of previous experience and qualifications. The panel members vote on each officer simultaneously and secretly at the conclusion of that briefing. After voting on all officers in the zone, the clear winners and losers are removed, the files are redistributed, and another cycle occurs. This process is followed until the allowed number of selectees is attained.

An advantage of this procedure is that the briefer can spend much more time reviewing each file he is given than if he were required to look at the entire zone. This suggests that a better job can be done in integrating all aspects of the FITREP to arrive at a judgment, and that any one factor has less importance in discriminating among officers than is the case in other systems, such as those of the Air Force and Army. This explanation also supports the statement made to the study team by the Department of the Navy representative that the narrative is the most important discriminator on the form. The briefer has time to read the narratives on all the FITREPs and relate them to other rating sections.

Other factors cited as being important discriminators are the promotion recommendations (including the peer ranking) and the job description. Members of promotion boards have observed that promotion recommendations are evaluated in the perspective of the importance of the billet. For example, a promotion ranking of "3rd of 20" in a training command billet is recognized as weaker than a "4th of 8" in a deployed squadron for the fighter pilot community.
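For concreteness, the following is a highly simplified sketch of this iterative procedure. The 0-to-1 vote scale, the win/lose cutoffs, and the panel interface (objects with a `vote(officer)` method) are all invented for illustration.

```python
# Simplified sketch of the iterative board procedure: brief, vote
# secretly and simultaneously, pull out clear winners and losers,
# redistribute the rest, and repeat until the quota is filled. The
# 0-1 vote scale and the 0.8/0.3 cutoffs are illustrative inventions.

import random

def run_board(files, panel, quota, hi=0.8, lo=0.3):
    selected, remaining = [], list(files)
    while len(selected) < quota and remaining:
        random.shuffle(remaining)  # files are redistributed each cycle
        scores = {f: sum(m.vote(f) for m in panel) / len(panel)
                  for f in remaining}
        winners = sorted((f for f in remaining if scores[f] >= hi),
                         key=scores.get, reverse=True)
        losers = {f for f in remaining if scores[f] <= lo}
        if not winners and not losers:
            break  # no movement; a real board would re-brief and revote
        selected.extend(winners[:quota - len(selected)])
        remaining = [f for f in remaining
                     if f not in set(winners) and f not in losers]
    return selected
```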
Feedback

Although providing performance and career counseling is an objective of the officer evaluation system, the Navy believes that the feedback mechanism is not very effective. The FITREP, in particular, is perceived to be an unacceptable counseling tool. This situation derives from the fact that commanders tend to inflate the ratings of less than excellent officers. Therefore, the FITREP does not communicate an officer's strengths and weaknesses. Reporting seniors are encouraged to show reports to ratees (and are required to do so for junior officers). However, for officers in grades lieutenant commander and above, reporting seniors are not required to conduct counseling nor to show reports. There is no alternative mechanism, such as the Army OER support form, to foster counseling.

Quality Control

There is a substantial amount of inflation in the Navy evaluation system. For example, reporting seniors recognize that ratings of less than "A" for performance factors and traits are regarded as derogatory by promotion boards, so there are few ratings of "B" or less. Similarly, narratives are puffed up, although the feedback from promotion boards shows that most reporting seniors are communicating effectively on performance and potential through the narrative. The ranking among peers remains an effective discriminator for many reported-on officers, although some reporting seniors are known to game the system by artificially subdividing the population of officers rated in order to generate more "1" and "2" promotion rankings. However, the ranking system does not apply to officers in the grades of lieutenant (0-3) and below. The Department of the Navy has not chosen to intervene in the fitness reporting system. Consequently, there is no central management of a quality control system for officer fitness reports.
United States Marine Corps

The Marine Corps has also revised its officer evaluation system recently, in response to an inflation in ratings. The current Performance Evaluation System (PES) was installed in 1985 in response to a study which indicated that the degree of inflation posed a threat to the credibility of the promotion system. Distinguishing features of the PES are that counseling has been removed from the PES and that those marines rated as outstanding in "general value to the service" are ranked among each other. Like the Army, the Marine Corps has recognized the pressures on immediate supervisors to inflate evaluation reports and has installed measures to counter this tendency. Some of these measures include:

1. A policy which forbids the rating chain from showing completed reports to the ratee;

2. Strict requirements for accelerated promotions; and

3. A requirement to rank the outstanding against one another.

Purpose

The primary purpose of the PES is to support the central selection, promotion, and retention of the best qualified marines. A secondary purpose is to aid in the assignment process and other personnel management actions. The recent study of the Marine Corps evaluation system concluded that counseling is antithetical to the purposes of an evaluation system and a major source of inflationary pressure. Therefore, while effective counseling is encouraged, a substantial effort has been made to separate the counseling process from the PES.
Process

A report is not submitted on a marine unless he/she has been under the supervision of the reporting senior, who is the marine's immediate supervisor, for 90 days. The FITREP is submitted under the following general conditions:

1. Annually, batched by grade;

2. When the ratee's duty changes or he/she departs the unit;

3. When departing for extended temporary duty or long term schooling;

4. When there is a change in the reporting senior; or

5. Upon promotion.

At the end of the reporting period, the reporting senior prepares the FITREP, assisted in administrative processing by the supporting personnel office. He/she rates seven duty performance factors and fourteen personal quality factors, and estimates the ratee's "general value to the service." The reporting senior also completes a narrative describing duty requirements, performance, and general value to the service.

The reporting senior forwards the report to the reviewing officer, who is normally the reporting senior's supervisor. The reviewing officer is responsible for ensuring that the reporting senior has complied with the spirit and instructions of the Marine Corps order governing the PES. The reviewing officer may add comments, especially if he/she disagrees with the evaluation performed by the reporting senior.

The completed FITREP is transmitted to Headquarters, U.S. Marine Corps, where it is reviewed and entered into the official personnel record of the marine reported-on. Administratively incorrect or inconsistent reports are returned to the rating chain for correction. Copies are not maintained in unit files nor routinely furnished to the ratee.
Ratees are annually furnished a copy of the Master Record Brief, a report containing the numerical ratings from all FITREPs in his/her record. On entering the zone for promotion, each marine is furnished a complete copy of the microfiche containing all previous FITREPs. Additionally, marines can view their FITREPs at Headquarters, U.S. Marine Corps.

Form

One form is used to evaluate all marines in grades sergeant (E-5) through colonel (0-6). An example of this form is displayed in Appendix F. The administrative data is entered with an optical character reader font. Note that there is no space to enter a duty description, only a title. Additional duty requirements must be placed in the general narrative section.

The reporting senior evaluates seven performance factors and fourteen personal qualities on a six-point scale. He/she then estimates the ratee's "general value to the service" on a ten-point scale. The reporting senior is required to show how he/she has distributed ratings in this section ("general value to the service") for all other marines of the same grade during this rating period. The reporting senior then completes a narrative section. On the reverse of the form, the reporting senior is required to show the rank of the ratee, if he/she is rated as outstanding (10) in "general value to the service," among other marines of that grade also rated as outstanding, as sketched below. Finally, the reporting senior is required to list the names of all marines of that grade for whom he/she is the reporting senior.
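Stated as a rule, the ranking requirement looks like the sketch below. The data shapes are illustrative, and the rank order itself is supplied by the reporting senior rather than derived from the scores.

```python
# Sketch of the Marine Corps rule: every ratee marked 10 (outstanding)
# in "general value to the service" must be ranked against the
# reporting senior's other outstanding ratees of the same grade.
# Shapes are illustrative; the order is the reporting senior's call.

def rank_outstandings(general_value, senior_order):
    """general_value: {name: 1-10 score} for one grade and one senior.
    senior_order: names in the senior's preferred order (best first).

    Returns {name: (rank, total)} for the outstanding ratees,
    e.g. "2 of 3" among this senior's outstanding captains.
    """
    outstanding = {n for n, s in general_value.items() if s == 10}
    ordered = [n for n in senior_order if n in outstanding]
    if set(ordered) != outstanding:
        raise ValueError("every outstanding ratee must be ranked")
    return {n: (i + 1, len(ordered)) for i, n in enumerate(ordered)}
```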
The reviewing officer is provided a space to make comments. These comments are mandatory if he/she does not agree with the evaluations or comments by the reporting senior. Reviewing officers are encouraged to add a comment showing the ranking of the ratee among all marines of that grade whom the reviewing officer is responsible to review. The intended purpose is to evaluate the marine reported-on across a wider segment of his/her peers. This technique is especially encouraged when the reporting senior rates only one or two marines of a particular grade.

Discriminating Factors

Marine Corps promotion boards are conducted in about the same way as are the Navy boards. Therefore, the comments on discriminating factors in the previous section apply. Beyond this, the Marine Corps representatives informed the study team that the most important discriminators for promotion boards are:

1. The trend in the numeric ratings;

2. The rank among peers rated as outstanding in "general value to the service"; and

3. The narrative.

Feedback

Feedback to the ratees on performance of duties or career development is not a part of the PES. Reporting seniors and reviewing officers are specifically forbidden from using the FITREP as a part of counseling. Reinforcing this practice is a prohibition against even showing the FITREP to the marine reported-on. Although the Marine Corps encourages counseling of subordinate officers, such counseling is not related to the evaluation process, and there are no forms or other aids in the PES to assist marine officers in this task.

Quality Control

Improving quality control of the PES was one of the initiatives resulting from the 1985 study. The goal of the quality control program is to limit the impact of inflation
on the effectiveness of the PES. At Marine Corps Headquarters, the Promotion Evaluation Branch is responsible for quality control. This branch screens approximately 205,000 reports a year, of which about 6,000 (roughly 3 percent) are returned for corrections. A review of a list of the most common reasons for rejecting reports reveals that the Marine Corps is not able to audit for internal consistency to the extent of the Coast Guard, and most of the errors are failures to follow the instructions. However, these screenings, and the knowledge that they are done at Headquarters, are reported to positively affect the quality of the FITREPs accepted. Other elements, previously mentioned, that act to limit the inflation of reports include:

1. The requirement to rank those rated as outstanding;

2. The no-show policy;

3. Strict limits on accelerated promotions; and

4. Enhancement of the reviewing officer's responsibility to include certification of the accuracy of the report and the requirement to comment on reports that do not accurately reflect an officer's performance and potential.

Foreign Service

Foreign Service officers of the Department of State are evaluated annually through a process similar to those used by the armed services. The assignment and personnel management policies of the Foreign Service are similar to those used in the Air Force. Specifically, Foreign Service officers are subject to:

1. Frequent reassignments to overseas locations on an involuntary basis;

2. Competitive promotions based on a grade pyramid;
3. An up-or-out policy. Foreign Service officers not keeping up with their peers in promotions are selected for release by promotion boards (if they do not self-select by resigning); and

4. Central management of the personnel function, to include centralized promotions.

For these reasons, a review of performance appraisal in the Foreign Service is appropriate in the context of lessons that could be applied to the Air Force officer evaluation issues.

Purpose

The primary goal of personnel evaluation is to provide a just basis for career tenure, promotions, and separations. Other goals include:

1. The allocation of within-class salary increases and performance pay;

2. Support to the assignment process;

3. Planning for training; and

4. Improvements in efficiency through feedback on performance and collaborative goal setting.

Process

An annual report is submitted on each Foreign Service officer as of April 15th of each year, provided the ratee has been under the supervision of the rater for 120 days. Other reports are submitted covering any period of at least 120 days culminating in a change of duty or a change in rating officer (including transfer).
The Foreign Affairs Manual requires that the rater and ratee agree in writing on the duty requirements and performance standards within 45 days after the beginning of the rating period. This understanding is recorded on the evaluation report. The rater is required, in addition, to review performance at least twice during the year. (Representatives of the Office of Performance Evaluation indicated that these requirements are honored more often in the breach than in the observance.)

At the end of the rating period, the rater prepares the evaluation report and rates the employee on overall performance as well as potential. The rater is expected to show the evaluation to the ratee and discuss it. The rater is the ratee's supervisor.

The rating officer's supervisor is designated as reviewing officer. The reviewer checks the report and prepares a narrative assessing the ratee's performance and potential. The report is then forwarded to the ratee for comment. Space is provided for the rated officer to comment on the period of performance, to include specific accomplishments, areas not otherwise addressed in the report, and aspects which may need clarification or correction. The employee is also encouraged to remark on career goals, including training and future assignments.

Every bureau within the Department of State and every post abroad with more than ten Foreign Service members establishes a review panel which reviews all evaluation reports. The functions of these review panels include:

1. Checking reports for accuracy, consistency, inadmissible comments, and conformity with rules and policy;

2. Referring poorly prepared reports to the reporting chain for correction; and
3. Identifying on each report the officers responsible for any late submissions.

Reports are then forwarded to the Office of Performance Evaluation, where they are maintained in manual form only. The typical procedure for Foreign Service officers who are dissatisfied with their evaluation reports is to file a union grievance (most Foreign Service officers are union members). The 8,000 to 10,000 evaluation reports submitted each year typically generate about 100 grievances.

Form

One form is used in evaluating all Foreign Service officers. This form is displayed in Appendix F. The form is almost entirely narrative (which suits the Department of State, a writing culture). Despite the ample amount of white space on the form, the typical report has addendum sheets attached.

Part one of the report is a narrative description of the work requirements of the position, which is to be prepared at the beginning of the rating period. There is a section in which the ratee may explain, at the end of the period, special circumstances influencing his/her ability to meet the work requirements.

Part two is a narrative evaluation of the overall accomplishments in the job during the period, prepared by the rater.

Part three is a narrative evaluation of potential, together with a five-point rating scale, also prepared by the rater. The Office of Performance Evaluation has observed that both parts two and three are greatly inflated. Most Foreign Service officers expect a top block rating for potential and a narrative that complements this rating. There is a subsection in part three in which the rater is to cite areas in which the ratee should concentrate his/her efforts to improve performance. This section is widely
gamed so as to show innocuous or frivolous faults. Rarely does a rater put candid remarks about employee weaknesses in this section.

In part four, the rater is required to indicate the dates on which counseling sessions were held. Foreign Service officers generally do very little counseling (as reported by the Office of Performance Evaluation representatives), and this compliance section does not help in improving performance.

Part five is a narrative covering both performance and potential which is completed by the reviewer. He/she is asked to certify that the report is adequately documented. The reviewer's comments are also subject to inflation.

In part six, the rated employee provides his/her views on the period of performance. This is completed after the rater and reviewer have completed parts one through five. Therefore, it is an opportunity to rebut any negative comments. Finally, there is a section in which the review panel may certify their review of the report.

Discriminating Factors

There is little on the form to review apart from the narratives, the work requirement statement, and the overall potential scale. Yet the inflation in rating of the overall potential makes that factor useless in discriminating. Nevertheless, the promotion boards report that they are able to discriminate among the officers being considered through close reading of the evaluation report files.

Feedback

Feedback is an integral part of the Foreign Service evaluation reporting process. The mechanisms for feedback are mandatory counseling sessions and the referral of OER
reports to ratees for comment. Yet inflation in the reports renders the reports themselves less than useful for counseling purposes. Perhaps this influences the general reluctance to perform counseling which was reported to the study team.

Quality Control

The system design provides for quality control through a reviewing officer and a review panel. However, the system is not now working to control inflation, nor does it result in uniform compliance with such administrative requirements as timely submission of reports. The Office of Performance Evaluation does not have adequate staff to perform substantial amounts of quality control. However, they do read each report (a staff of sixteen reads the 8,000 to 10,000 reports, mostly arriving in May -- roughly 500 to 600 reports per reader). Most of the reports which are returned for correction contain inadmissible comments or administrative errors that cannot be corrected in the Office of Performance Evaluation.

A revision of the evaluation system is in progress at the Department of State to deal with rating inflation and the excessive amounts of narrative. The proposed solutions being considered include a system of rating the rater (similar to the U.S. Army or U.S. Coast Guard) and computerization of the evaluation process.

IMPLICATIONS FOR THE AIR FORCE

This subsection will address some of the central tendencies observed among the other services discussed above. There are some features, for example, that reflect lessons previously learned by other services that have application to the issues facing the Air Force. Table III-2, at the end of this section, summarizes the major features of each service's OER system.
Purpose

While each of the services has a different list of objectives for its OER system, the central theme of each is to provide evaluations that support a central promotion system. Most also state that the OER supports the centralized officer assignment system, but as a secondary objective. The further the stated objectives depart from these two, the less efficiently the systems accomplish those additional objectives.

One purpose which appears contradictory to the central purpose is that of feedback on performance. It is generally observed that raters, recognizing the importance of the OER to the long-range career aspirations of the ratee, will not be truly candid about current job performance in the OER. Also, the necessity to brief the OER to the ratee as part of the feedback process results in inflated ratings. Two of the services have recognized that contradiction by removing feedback on performance from their lists of objectives (USA, USMC), and the others acknowledge that the feedback link is not working.

Protect the Ratee-Rater Relationship

The uniformed services also recognize that there is a special relationship between an officer and his supervisor that is unique to military service. Part of this relationship is rooted in the dictates of military discipline and obedience to authority. Second, there is a military concept of loyalty between the two that works in both directions among officers. Finally, there is a sense of responsibility for the junior's career development which is fostered in all the services. The requirement to evaluate subordinates, and particularly to evaluate potential, is threatening to this relationship. Therefore, the services have taken steps to reduce the conflict. In two (USA, USCG), the requirement to perform meaningful discrimination has been placed on the second writer of the OER, the supervisor's supervisor. In the Navy, the supervisor doesn't even
write on the OER (except for those officers directly supervised by commanding officers). Finally, in the Marine Corps, this relationship is protected by a no-show policy and the complete separation of evaluation and feedback.

Inflation

All the services have suffered from unacceptable levels of inflation, and all have developed mechanisms to induce a distribution of potential ratings among officers of a cohort along some scale. Two services rely on a forced, auditable peer ranking (Navy and Marine Corps), and two use persuasion and a rate-the-rater system that carries an indirect threat for those officers who do not comply (Army and Coast Guard). The Foreign Service has also begun to consider adopting such a rate-the-rater system.

Quality Control

There is an evident movement toward managing the quality of OERs from the service headquarters level. Three services (Army, Coast Guard, and Marine Corps) have substantially increased their level of intervention in the system in recent years, and another has stated the intention to do so (Department of State).
[Table III-2: Major Features of Each Service's Officer Evaluation System (Army, Coast Guard, Navy, Marine Corps, Foreign Service)]
SECTION IV

FINDINGS: AIR FORCE OFFICER EVALUATION SYSTEM

This section discusses the current Air Force Officer Evaluation System, beginning with a review of the major features of the OER, as determined in our information gathering efforts. This part includes the purpose of the OER and a description of the OER preparation process as well as the form itself. It also discusses the discriminating factors operating in the current Air Force system, the provision of feedback to the officer being evaluated, and the provisions for quality control of OERs. The second part of the section discusses the issues identified by the study group in our interviews and focus groups, including those which are cultural as well as those dealing with the OER form and process directly. The third part briefly summarizes these findings.

MAJOR FEATURES OF THE CURRENT OER SYSTEM

PURPOSE OF THE AIR FORCE OER

According to Air Force Regulation 36-10:

"The purpose of the officer evaluation system is to provide the Air Force with information on the performance and potential of officers for use in making personnel management decisions, such as promotions, assignments, augmentations, school selections, and separations. It is also intended to provide individual officers information on their performance and potential as viewed by their evaluators."
Our guidance from Air Force leadership has reinforced this statement, but has placed emphasis on the objectives of accurately assessing current job performance, differentiating among officers in potential for promotion, and facilitating the provision of feedback to officers which will help them to improve their performance and thus to increase their value to the Air Force. We have kept these purposes in mind throughout the study, and our assessment of the Air Force OER has been performed with these objectives as its criteria.

THE AIR FORCE OER PROCESS

The Air Force OER process begins when the Consolidated Base Personnel Office (CBPO) determines that an OER is required for a given officer. AFR 36-10 lists all of the events which require completion of an OER, but the most common are a PCS move by the rater or ratee, or a change of assignment. As a minimum, an OER must be completed at least every six months for lieutenants with less than three years of service, and annually for all other officers through colonel.

The rating officer receives two copies of the computer-generated notice that the OER is required. This notice includes the Ratee Identification Data for the OER, and it is recommended that it be verified by the ratee. The rater then is responsible for collecting all the additional information he/she needs to complete the OER. Typically, the rater may ask the ratee to provide an update on his/her accomplishments during the rating period, and may solicit information on the ratee's performance from other supervisors who have observed the ratee's work. The rater completes the rater portions of the OER, and then submits it to the additional rater for completion of the next portion. The additional rater adds comments, signs the form, and forwards it to the indorser for final comments and signature. The indorser returns it to the CBPO for further processing and quality control in most cases.
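Schematically, this idealized routing is a fixed pipeline, as in the sketch below. The chain order follows the process just described, while the handler machinery is purely illustrative.

```python
# Sketch of the idealized OER routing described above. The chain is
# from AFR 36-10's process; the handler machinery is illustrative.

ROUTING_CHAIN = ["rater", "additional_rater", "indorser", "CBPO"]

def route_oer(oer, handlers):
    """Pass the OER through each role in chain order; each handler
    adds its portion (ratings, comments, signature) and returns the
    updated report."""
    for role in ROUTING_CHAIN:
        oer = handlers[role](oer)
    return oer
```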
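The minimum-frequency rule just described is mechanical enough to capture in a few lines. The sketch below is our own hedged illustration, not part of any Air Force or CBPO system: the grade labels, field names, and the 182-day approximation of "six months" are assumptions, and the event-driven triggers listed in AFR 36-10 (PCS moves, changes of assignment, and so on) are deliberately omitted.

```python
from datetime import date, timedelta

# Hypothetical sketch of the minimum OER frequency rule described above.
# Event-driven triggers (PCS moves, changes of assignment) are omitted;
# 182 days is used as a rough "six months".

LIEUTENANT_GRADES = {"2Lt", "1Lt"}

def next_oer_due(grade: str, years_of_service: float, last_closeout: date) -> date:
    """Latest closeout date for the next OER under the minimum-frequency rule."""
    if grade in LIEUTENANT_GRADES and years_of_service < 3:
        interval = timedelta(days=182)   # at least every six months
    else:
        interval = timedelta(days=365)   # annually, through colonel
    return last_closeout + interval

# Example: a second lieutenant with 1.5 years of service
print(next_oer_due("2Lt", 1.5, date(1986, 3, 31)))   # 1986-09-29
```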
The above is the idealized route of an OER. Our interview and focus group subjects indicated that the actual routing is more complex, with extensive communications passing up and down the rating chain, and within the indorser's organization, to determine the level of indorsement for any given officer's OER and to provide the additional rater and the indorser with information to use in generating their comments and recommendations. We were also informed by many officers that it is common for the rater to ask the officer being evaluated to provide a rough draft of his or her own OER, a questionable extension of the practice of providing the rater with an update on activities and accomplishments during the rating period.

THE AIR FORCE OER FORM

The current Air Force Officer Effectiveness Report, AF Form 707, has been in use since the end of the control era in 1978, although the current form is dated 1982. A copy of the form is shown in Figure IV-1. The form consists of eight parts. Part I contains ratee identification data, which is provided to the rater by the CBPO, and verified by the rater and ratee. Part II is the job description, which calls for duty title, key duties, tasks and responsibilities. Part III is the rating of specific performance factors. As shown in Figure IV-1, the form provides for the rating of 10 specific factors on a five-point scale, and requires narrative comments with specific examples for each factor. The OER regulation, AFR 36-10, provides specific standards for use in rating these factors, although our respondents report that this guidance is seldom consulted. Part IV is the first section of the reverse side of the OER, and provides space for the rater to make recommendations for the ratee's next assignment. Part V is the overall evaluation of potential, with a six-point scale to be used by the rater, additional rater, and indorser. Part VI, the rater comments section, is the last portion of the form completed by the rater, and provides space for comments on the promotion recommendation, as well as for any other information the rater wishes to provide. Parts VII and VIII are for additional rater and indorser comments, respectively.
FIGURE IV-1

AF Form 707, Officer Effectiveness Report (front side). Sample form per AFR 36-10, Attachment 1, 26 October 1982, effective 1 November 1982. The front side contains Part I (Ratee Identification Data), Part II (Job Description), and Part III (Performance Factors), in which ten factors -- job knowledge, judgment and decisions, planning and organizing work, management of resources, leadership, adaptability to stress, oral communication, written communication, professional qualities, and human relations -- are each rated on a five-point scale (with a "Not Observed" option) and supported by narrative examples.
FIGURE IV-1 (Continued)

AF Form 707 (reverse side). The reverse side contains Part IV (Assignment Recommendation: strongest qualification, suggested job and AFSC, organization level, and timing), Part V (Evaluation of Potential: a six-block graphic scale marked by the rater, additional rater, and indorser), Part VI (Rater Comments), Part VII (Additional Rater Comments, with concur/nonconcur blocks), and Part VIII (Indorser Comments, with concur/nonconcur blocks). Significant disagreement by the additional rater or indorser (para 2-26) requires justification.
DISCRIMINATING FACTORS IN THE AIR FORCE OER

Our respondents indicated that the indorser comments, especially regarding promotion, and the indorser's rating of potential, as well as the rank and position of the indorser, have become the most important factors in differentiating between officers for selection purposes. The explicit ratings of performance factors have become so inflated that they differentiate only the most deficient officers, with virtually all others "firewalled" in the highest block. Thus the words used by the indorser to communicate his or her enthusiasm for the ratee and to justify the promotion recommendation have taken on great importance. The rank and position of the indorser, considered with his/her narrative comments, are perhaps the most important differentiators for promotion. Because of this, indorsement inflation has occurred, and it has become necessary to place considerable pressure upon the major commands to limit the highest level indorsements they provide. In fact, Headquarters, US Air Force, provides guidelines to the major commands on the upper limit of reports for each grade which should be indorsed by senior general officers. The pressure of these guidelines and other informal communications has led to the establishment of elaborate but largely invisible procedures within each command to determine which officers receive which levels of indorsement. De facto quotas of high level indorsements are thus apportioned among the officers in a manner quite similar in effect to the apportionment of "one" and "two" ratings during the control era, although different in application and method. Officers in the field perceive the similarity to the controlled era. In addition, it was widely reported to the study team that indorsements are often managed so as to "peak" when an officer is about to meet a selection board, just as there was management of controlled ratings for this purpose.
FEEDBACK TO THE RATEE ON PERFORMANCE

The Air Force regulation on officer evaluation, AFR 36-10, specifically states that the OER is not to be used as a "counseling device", but it does instruct the supervisor to counsel ratees "as the need arises" and suggests that periodic counseling is advisable as well. The Air Force provides no formal counseling or feedback form, however, to facilitate such a process. The ratee has access to his OER as soon as it has become a part of the permanent record, although he/she is not given a copy as part of the normal OER preparation and routing process. Our focus group respondents were mostly in agreement that supervisors should provide job performance feedback to their subordinates, although the term "counseling" was not comfortable for some of them. Few officers reported receiving sufficient job performance feedback at any time in their careers, and many admitted that as supervisors they did not give as much feedback as they should. Some officers expressed the feeling that, although they gave little formal counseling, their subordinates "know where they stand", and nearly all said that they were quick to inform a subordinate when his performance was seriously deficient. Many officers appeared uncomfortable with the idea of compulsory periodic counseling, and they agreed that considerable training would be required to prepare most Air Force officers to counsel effectively. Some were familiar with the Army OER Support Form, but we found no consensus on whether a similar counseling and feedback form would be effective in the Air Force. Most officers who were asked felt that the Air Force was not currently in a position to implement management by objectives (MBO) performance management techniques.
AIR FORCE OER QUALITY CONTROL AND RATINGS CONTROL

The current Air Force system relies on the CBPOs to perform quality control checks on OER forms, with the Headquarters USAF level retaining the responsibility to "administer rating policy and to determine qualitative adequacy, rating trends, and adequacy of command management" (AFR 36-10). Guidelines for quality control, including statements on what subjects are appropriate and inappropriate for discussion on the OER, are given in AFR 36-10. The Headquarters USAF quality control capability is resident at the Military Personnel Center, where approximately three manpower spaces are devoted to OER policy development and interpretation. Quality control of Air Force OER ratings distributions is the responsibility of the major commands and agencies. There is currently no published system of ratings control or distribution in the Air Force, and no control is imposed on the numerical ratings of performance factors or of potential for promotion. However, our briefings and interviews revealed that there is an unpublished mechanism in use to limit the number of three- and four-star level indorsements given within the major commands. As discussed above, this pressure to limit the number of high level indorsements has given rise to fairly elaborate unwritten guidelines within the commands, which serve as an implicit control mechanism. In our interviews and focus groups, officers indicated that they were aware that such a system exists, though few were able to describe its operation in their own commands. Some officers expressed dissatisfaction with the "invisibility" of this system, and clearly wished it were more open, but many were quite accepting of the status quo.

ISSUES AFFECTING OFFICER EVALUATION

Our information gathering activities yielded much data on the Air Force OER system, and in our analysis of these data it became clear that several major issues could be identified.
These issues chiefly are the outcomes of interactions between the people (Air Force officers) and the OER system. These interactions produce reactions: values, opinions and beliefs which must be taken into account if modifications are to be successful. We have organized these issues into four categories:

1. Air Force Culture
2. OER Process
3. OER Content
4. Non-OER Promotion Issues

AIR FORCE CULTURE

Over the past few years a great deal has been written about the topic of culture as applied to corporate environments. Through our information gathering in the Air Force we observed a number of cultural characteristics and beliefs which have a very important bearing on the question of how likely it would be for a new OER process to be successful. The following is a description of these characteristics and beliefs.

All officers are above average

The focus group discussions revealed a strong belief that because of the successive screening processes an individual must go through to become an Air Force officer, the resulting group is an elite corps well above an "average" population in many ways. From a statistical standpoint it seems quite likely that the selection process would indeed produce an above average population in terms of intelligence, education, persistence, and energy level. The consulting team members strongly concurred that the group of Air Force officers with whom we had come in contact were comparable or superior to most professional and managerial groups we had worked with in other client settings.
The implication of this very strongly held Air Force belief is that for an officer to be labelled as "below average" is a very severe blow to his/her ego and perceived career potential. Our respondents indicated that this factor was a major cause of the very strong negative reaction which the "controlled" system elicited. Thus, any newly designed system should avoid the need to label as "below average" any officers who are viable candidates for future promotion. In today's Air Force culture any rating of "below average" is a strong signal to the individual to seek his/her future career elsewhere.

Unwillingness to differentiate openly

Two major reasons were given for the unwillingness of most officers to differentiate openly among the officers they must rate. The first goes back to the previous discussion. Since there is a strong feeling that all officers are above average, rating officers strongly resist any system whereby they must identify those officers who are below average. In our interviews, however, there was some willingness to identify the truly outstanding individuals, and the individuals whose performance or potential is so poor that they should be released from the Air Force. A second factor concerns the closeness of the superior/subordinate relationship. Here, officers feel that to advise an individual that he/she isn't meeting performance expectations is demotivating and may have negative effects on the individual's job performance. In the absence of potential merit increases or bonuses for short-term performance, rating officers feel they have to give "pats on the back" through the OER system, even to those whose performance is acceptable but not outstanding. The superior/subordinate relationship, along with the group cohesiveness encouraged by the Air Force culture, also leads to officers' feeling an obligation to "promote their people". It is a matter of pride for an officer to have his or her subordinates receive promotions, and it reflects adversely upon his/her ability to develop subordinates if they are passed over.
The importance of this value sometimes appears to override the need to select the best possible leaders for the Air Force. However, most officers expressed the belief that there are many more good officers than there are promotion opportunities at the higher grades. They consequently believe that there seldom is a conflict between promoting "one's own" and promoting the best leaders for the Air Force.

Up or out system

Because of budget requirements, legislative controls and a number of other factors, the Air Force system requires an officer either to be promoted at each opportunity or to leave the service at some point prior to completion of a full career. It is this fact that places so much of a burden on the OER system. There is no parallel in private industry whereby one performance appraisal can, in effect, dictate a decision to lay off a person many years in the future. While we did not take a random sample, the bulk of officers we questioned believed that the "up or out" system was good for the Air Force insofar as it assured that officers would continue to be motivated to perform well throughout their careers.

The controlled OER system

Our interviews and focus groups indicated that the controlled system has left deep scars within the officer ranks. It has an almost uniformly negative image, and people are quick to relate instances of "good" officers leaving or being forced out of the service because of a "three" rating. There is thus a negative feeling toward any type of statistically-based controls on ratings. However, as our interview and focus group discussions of the problems of inflation unfolded, many participants offered suggestions which amounted to some type of control. Thus, the desire to curb rating inflation is expressed as a willingness to see some type of "controls" implemented at an appropriate level. Most frequently mentioned in such discussions is the Wing level.
It is also clear that if a system that limited ratings in some way were to be installed, a terminology avoiding the word "control" might avoid the worst of the negative reactions.

Distrust of promotion board sensitivity

There appears to be a feeling, among junior officers in particular, that individuals on promotion boards may look at surface data only, and therefore miss many of the more important aspects of an officer's record. For instance, some officers were concerned that if the level of indorsement declines from one OER to the next, the board will automatically treat this as a very negative factor without looking any further, when in fact the person had changed assignments to where he/she was much further removed organizationally from an indorser of the same rank. One source of this belief is the common knowledge that boards cover so many candidates in so little time. A simple division of time by candidates yields only a few minutes per candidate, so the general feeling among many junior officers is that no in-depth reading or understanding can be achieved. Promotion board members report, however, that they need spend little time on those records that clearly go in either the "yes" or the "no" piles. They then report spending much more time with those on whom there is more doubt (the records in the "gray" zone). Also, as one might expect, promotion board members report that they do look behind the surface facts when inconsistencies appear in a record.

Careerism/focus on peripherals

Because of the lack of differentiation in OER ratings a cultural phenomenon of "focusing on peripherals" has developed. That is, many officers feel that since they cannot stand out on the basis of their ratings they must pursue certain types of education and assignments, which may have nothing to do with preparing them to assume greater responsibility, in order to provide the promotion board with the proper "image". A corollary to this phenomenon is the feeling of unfairness caused by the fact that certain primary assignments make it much more difficult to accomplish these peripheral activities.
For instance, certain aircrew members may find it impossible to attend evening classes to improve their educational attainments on a regular basis, if much of the time they are away on temporary duty (TDY). These, then, are some of the cultural issues we discovered which surround the OER and promotion process. The next sections deal with some of the issues concerning the process and form itself.

OER PROCESS ISSUES

Nomination process for determining indorsements

An extensive system currently exists for differentiating among officers on OERs for the purpose of promotion recommendations. Because the ratings have become so inflated, the differentiation no longer appears in the ratings themselves, but rather is found in the level of the final indorsing official and the words which that individual uses or does not use to recommend the officer for promotion. Clearly, higher level indorsements indicate more favorable OERs. The choice of who will receive the highest indorsements is made with great care. This choice is the result of considerable dialogue, both verbal and written, between levels of command to determine who are the best performers and those most worthy to "push" for promotion. Thus, the overt rating process for which the OER form was designed has really been replaced with one which is not visible to the ratee. While most officers we interviewed were well aware of the fact that the level of the indorsing official was the primary differentiator, there was little spontaneous conversation in the focus groups on how the decision of who will indorse the OER is made. It may be that officers do not wish to offset the positive feelings they receive from inflated OERs with a more critical examination of how they will or will not be differentiated from others in the promotion decision.
    "Creative" use oflaneauze Because officers feel they must "firewall" the ratings, and because the form requires a description of performance to justify each rating, the result is that much description of meritorious behavior is exaggerated. This results in an ethical and an administrative issue. Many officers report that they are disturbed about having to say things which they do not truly believe, but they feel forced to do so to avoid destroying the career of an acceptable officer. In general, the level of ethical discomfort expressed was not severe, but in a few cases it was quite intense. In addition, there is some feeling that by encouraging such behavior in the writing of OERs the Air Force is setting the wrong example for what might be expected in other areas of behavior, especially for junior officers. The need to provide verbal descriptions for superlative ratings also creates an administrative burden. That is, since the rating officer must back up any rating with "facts" about the person's performance that justify the ratings, rating officers spend a good deal of time marshalling their facts. The process becomes a maximization game. The rating officer knows he/she must fill ten spaces for the performance ratings and a larger space for the rater comments. The rater also knows that promotion board members normally will not read the comments on the front of the OER. Therefore, his/her "best" facts are saved for the rater comment section on the back. However, given this number of spaces to fill, many separate facts n'ust be described, and a good deal of time is spent collecting and documenting them. In addition, some rating categories are more easily observed in peripheral activities than in the major assignment (such as oral communication for a fighter pilot). Such ratings are often made on the basis of a performance as peripheral as conducting a tour of an airplane for a grammar school class, rather than on flying performance. IV-14 ______|_|_____--_____________________________________________
  • 111.
Administrative burden

Some of the sources of the administrative burden of OER preparation were discussed in the section above. In addition, the need for absolute correctness and neatness with no erasures, and the unwritten ground rule that all spaces must be filled with verbiage, have led to a situation where OERs often are retyped and proofread many times at the originating unit, and read and reviewed for correctness at higher level units as well. Although word processing equipment is used in some cases, it is estimated from survey data that Air Force officers may spend an aggregate 650,000 hours a year in the writing process alone. Adding to this the repeated proofreadings, the typing time, and the successive reviews and indorsements, the total time involved in the OER process is enormous. Most importantly, this time is all spent in the process of documenting performance; it is not the very productive time that might be spent by rating officer and subordinate in a performance planning or review session to actually improve performance.

Control of inflation

While reactions to the control program that was instituted in the 1970s are still very negative, many officers expressed the belief to the project team that there was a need for some way to remedy the current inflated ratings situation. Most often the Wing level was mentioned as a logical place for a review and differentiation process to take place, and for controlling influences to be applied.

Frequency of OERs

The yearly time cycle of an OER is not an issue with the officer corps, but certain aspects were mentioned as problems. The six-month interval for lieutenants' OERs is felt to be overly burdensome and not very useful, since a lieutenant typically shows little change in his/her level of performance in six months.
The other problem mentioned was the requirement to produce a report on an individual because of a change of assignment by either the rater or ratee, when the period of the report was only a few months. The same problem of lack of sufficient time for observation of significant performance changes applies in this case.

Implementation of change

The Air Force is a relatively conservative institution with a strong staff orientation. In such organizations, except under crisis conditions, change must be evolutionary rather than revolutionary. Thus, new systems must be tied to old and must flow out of established values and practices. Given the strong concentration of authority in the major commands it is imperative that the command staffs be part of developing and implementing any change to the OER system. Our respondents felt strongly that any change would need reinforcement through as many channels as possible.

Need for training

The officers we spoke to all agreed upon the need for training raters, reviewers, indorsers, personnel staff, promotion board members and anyone else involved with the OER so that they will be prepared for their changed roles in any new system. While the requirements to accomplish such training may be very substantial, it will be necessary if any significant cultural change is to take place. Training and information distribution deficiencies were seen by many officers as having contributed to the failure of the controlled OER.

OER accessibility

There are two issues here, one concerning the availability of past OERs to the rating and indorsing officers during the preparation of an OER, and the other having to do with the number of past OERs which are made available to the promotion board.
On the first issue there was some concern that raters and/or indorsers referred to previous OERs in preparing the current one. Some officers interviewed believe that this is unfair in the case of someone who may have had a bad experience (such as a personality conflict with his rater) in the past, but who has performed differently over the period of the current report. By referring to past reports for making current ratings, a rater would, in effect, be usurping the function of the promotion board, which is charged with reviewing the entire record. The second issue is the question of how long OERs should be kept in the personnel and selection record. Presently, the record consists of all OERs from the time the officer was commissioned, but there are reasons why this may be inappropriate. For example, many senior officers, who had been in the Air Force during the controlled OER period, felt that they or their peers were still feeling the ill effects of that period, since many still had "3" ratings from that time in their selection folders. They were certain that if a selection board had to decide between two folders which were otherwise equivalent, the one with a "3" from 1977 would be at a disadvantage. The expression "a one-mistake Air Force" was another phrase we heard, referring to the perception that one poor OER, even when followed by years of fine performance, could jeopardize an officer's career. This was seen by most officers as unfortunate, if not unjust.

Feedback to officers being rated

For the most part, the officers we interviewed expressed strong interest in obtaining feedback on their performance from their immediate superiors. They agreed, however, that the OER was not an effective vehicle for accomplishing this. This desire for feedback was keenest among younger officers--a phenomenon that is not unlike that found in private industry. The current generation of professionals coming out of our colleges is much more attuned to an "open" environment where performance feedback, career planning, and the use of individual initiative are an expected part of the job environment.
CONTENT OF THE OER FORM

Job description

It was unanimously stated that the job description was an important part of the OER and definitely should be retained. There was, however, a feeling that the description could be improved by greater concentration on what the officer actually does and on the scope of his or her responsibility and authority (e.g., number of people, budgets, etc.).

Greater focus on job performance

Many officers believe that the OER as it is now constituted encourages excessive attention to peripheral activities at the expense of the primary job and performance in that job. The performance rating factors were seen to engender this problem, especially for rated officers in flying jobs. These jobs provide little opportunity to demonstrate performance factors such as "oral communication" or "management of resources", but since a rating of "Not Observed" is culturally unacceptable, the rater must find something to justify his ratings. It is in these cases that peripheral duties, such as management of a coffee fund, or presentations to community groups, may be assigned as opportunities for the officer to perform on these factors. Not surprisingly, many rated officers feel that this is not a productive use of their time, nor is it seen to promote the best long-term interests of the Air Force. The general feeling was expressed that too many factors were being rated that were not directly related to job performance in many jobs. There was a strong desire to rate factors that were directly pertinent to performance in the primary position together with significant additional duties.
Performance ratings

There was general agreement that because of inflation the performance ratings no longer perform the function for which they were designed. There were, however, few suggestions for improvement of these ratings. In those instances where differentiated ratings were discussed, respondents talked about identifying the extremes rather than finding differences at all levels of performance. Also, where differentiation was discussed, the suggestion was made that such differentiation could best be introduced at the Wing level. There was almost universal agreement that the required comments on the performance ratings should be eliminated since they are not useful. Promotion board members acknowledged that they did not read these descriptions of performance except in very, very rare cases. While the suggestion was made that perhaps these comments are useful for assignments, our discussions with those responsible for assignments indicated that they were not read for that purpose either.

Format of narrative portions

Air Force Regulation 36-10 suggests that narratives be written in straight prose style and discourages the use of headings, underlining, or capitalization to add emphasis. Many officers felt that bulleting and similar techniques should be used to shorten the required prose and to highlight the points that are most important. Such techniques are currently used by some of the other services on their OERs.

Statement of promotability

Promotion boards indicated that they put considerable weight on what the indorsing officer writes about promoting the individual. Thus, an indorsing officer can inadvertently hold a person back from being promoted by not making an overt statement about "promotion now" even though he/she has described the officer's performance and potential in glowing terms.
    about "promotion now"even though he/she has described the officer's performance and potential in glowing terms. It appears that a more structured process for obtaining a statement of promotability from indorsing officials would avoid potential misunderstandings. NON-OER PROMOTION ISSUES Role of auuumentation Today, nearly all officers are augmented to the regular Air Force by their seventh year. It is possible that some greater degree of selectivity in augmentation may serve to eliminate people with lesser chances for a long and successful career at a time when they are more employable on the outside and to assure an almost universal promotion to major for all who are interested in an Air Fo-ce career and pzss through the augmentation screen. This is, however, a subject which has implications far beyond our ability to generate the appropriate facts and we merely raise it as an issue that might be pursued more aggressively by the Air Force staff. Picture in the folder A good deal of hostility is expressed over the inflated importance of details which have become associated with the photograph of the officer in the selection folder. Variables such as the skill of the photographer, how photogenic the officer is, or individual likes and dislikes of those serving on promotion boards are all factors which are seen as unnecessarily biasing in relation to the picture. Many officers would prefer removal of the photograph from the folder. Instruction to boards It appeared to us that selection boards receive a good deal of instruction on techniques for making their selections and coming to agreement but only very general IV-20
  • 117.
It seems that if the Chief of Staff were trying to emphasize certain criteria, then specific instructions about such factors should go to promotion boards. This could relate to such policy issues as the Chief's desire to view a record of good performance in cockpit jobs as sufficient reason for promotion through lieutenant colonel. The instruction mechanism could also be used to assure that boards pay particular attention to the needs of the service at any particular time for particular types of skills or backgrounds. In general, more pointed instructions about the philosophy the Chief of Staff is trying to reinforce can be given to promotion boards as one of the major factors in the reinforcement system.

SUMMARY

This section has identified many issues and problems relating to the Air Force OER system. Some of these are vitally important to the functioning of the system while others are minor or peripheral issues which will not be given high priority in the search for ways to improve the OER. The issues and problems which the study team considers most important are those relating to:

1. The honesty and integrity of the OER system;
2. The adequacy of the OER's focus on job performance;
3. Means for differentiating and identifying promotion potential;
4. The provision of performance feedback to the officer being evaluated;
5. Discipline or control of OER ratings and indorsements;
6. The administrative burden associated with the OER process.
Of all the issues we identified, these are the ones which relate most directly to the fundamental objectives of the OER system, as stated in AFR 36-10 and as expressed in the guidance we received from Air Force leadership. Thus these are the ones which must be addressed by any conceptual designs for an improved OER system. The next section will discuss the process by which the study team developed its proposed conceptual designs to deal with these issues and will present the three designs in detail.
SECTION V

CONCEPTUAL DESIGNS FOR THE AIR FORCE OER

This section describes the process by which the conceptual designs were initially formulated and refined. The specific designs are then explained in detail.

FORMULATION OF CONCEPTUAL DESIGNS

The first step taken by the project team in developing conceptual designs for Air Force officer evaluation was to determine what tests would be applied to each design in order to determine whether it would have potential use to the Air Force. Given all of the previous input, the project team developed the following set of design criteria as the most pertinent against which to test any recommended design. An improved OER system should:

1) focus on job performance, not peripherals;
2) provide differentiation in potential for promotion;
3) be acceptable to the Officer Corps;
4) provide means for developing subordinate officers; and
5) minimize administrative burden.

GUIDING CONSIDERATIONS

In addition to the design criteria outlined above, the project team worked with a number of considerations which had emerged from interviews and discussions with members of the Air Force officer corps as well as from corporate knowledge and experience of human resources management. These guiding considerations are discussed below.
Alternative OER Designs Should Reflect the Larger Air Force Culture.

This consideration takes into account that the Air Force officer corps is a group of highly trained professionals which perceives itself to be above average in ability and performance. Along with this perception is the historical inclination by the Officer Corps to place great emphasis on rewarding subordinates and assisting in their promotion opportunities by rating subordinates very highly on their OERs. In conflict with these realities is the fact that the Air Force, like all other services, must work within the constraints of the "up or out" system, which mandates selection of an ever smaller population at each officer grade. This conflict breeds an unwillingness to differentiate openly for appraisal purposes. In consequence, the Air Force OER process, like many other performance appraisal systems, has been characterized by high inflation in overall ratings. The controlled OER (1974-1978) struck directly at the inflation problem by requiring a forced distribution of ratings. Initially, the top two blocks were controlled such that no more than 50% of the officer corps could be in these two blocks. The perception at that time was that a 3 rating or below was akin to the end of an upward Air Force career track. Terminated in 1978, the controlled OER generated a great deal of anxiety and loss of morale which are well remembered today. A lesson to be learned from this era is that the requirement to rate a subordinate in an "unpromotable" category, real or perceived, is at odds with the culture and probably will not be accepted. A second lesson is that avoidance of design features which resemble the controlled system should ease implementation and acceptance of a new system.
Alternative OER Designs Should Encourage Change in Cultural Attitudes and Habits Concerning the OER.

This consideration recognizes that over time and many changes to the OER, certain cultural habits surrounding the OER have become ingrained within the Officer Corps. These habits include not only the inclination to give high ratings on potential across the board, but also puffery in narrative comments. In addition, there is the understandable tradition of seeking the highest level indorsement possible. To encourage change in these habits the project team decided that alternative OER forms and indorsement patterns should be sufficiently different to require raters, indorsers, and promotion boards to adopt new modes of behavior and not merely apply old habits to substantively different report forms.

Judgment, Not Statistics, Should Be the Ultimate Method of Making Career Decisions.

While numerous interviewees mused about the possibility of being able to "score" OERs to make a promotion decision, it is the project team's firm belief that this is the wrong direction in which to head. The Air Force created promotion boards for good and sufficient reason. The human brain is far more powerful than any computer even envisioned at the present time. Also, the field of psychophysical measurement (the physical measurement of psychological phenomena, e.g., a rating of "leadership traits") is worlds behind computer technology. To suggest that these technologies replace the judgment of a small group of experienced and mature officers in the interest of "fairness" is folly. We have therefore directed our efforts not toward mathematical exactitude, but toward producing the richest collection of information practically obtainable for promotion boards to use in their deliberations.
Alternative OER Designs Should Be Practical to Implement.

Apart from the criterion of minimizing administrative burden, the project team felt that any alternative OER design should be formulated to take advantage of available technology to the extent possible. This would apply to storage as well as processing of OER information for both individual rater and promotion board purposes. Practicality as a consideration also extended to implementation of an alternative OER system. Again, drawing from lessons of the controlled OER, the project team believed that gradual and perhaps evolutionary implementation might be more acceptable to the officer corps than an abrupt full scale implementation. For example, each alternative OER design assumed voluntary conformance with rating procedures; if sufficient conformance did not occur, then stronger review techniques could be added to the system as needed.

RANGE OF FEASIBLE ALTERNATIVES

Given the criteria established for an improved OER system together with our guiding considerations, a range of feasible alternatives was determined to exist. Although the initial alternatives formulated by the project team varied according to certain individual features of form, process and content, this range can best be expressed in terms of degree of change -- from alternatives causing the current OER system to change very little to alternatives causing rather radical change to the OER process.

The preliminary designs shared some common components. All of the preliminary designs assume greater usage of computer technology than currently exists. In addition, all of the designs retain job performance factors, although the number of factors has been reduced. In each design, however, the requirement for supporting narrative for the rating on each performance factor has been eliminated. In addition, each design incorporated a space for the rater to define job accomplishments for the rating period.
Finally, each design assumes use of an off-line OER worksheet for job counseling purposes. The designs varied one from another primarily in the way discipline would be introduced. This variance ranged from no change in the current, covert indorsement system to overt control of the top block. Once the preliminary design ideas had been formulated, the project team entered into a second stage interview process to test major elements of the designs by gathering the views of selected members of the Air Force officer corps.

TESTING AND REDESIGN OF CONCEPTS

The interview guide used in the second stage interview cycle is given in Appendix E. These interviews, held with 20 Air Force officers ranging from O-3 to O-6, were fairly informal discussions to determine respondents' reactions to the various design features and to obtain their opinions on issues surrounding implementation of these features. A summary of the results from the interviews is given below, while a complete tabulation of the results of these interviews, broken out for junior and senior officers, is shown in Appendix E. The overall impression from these interviews is that there is a desire for a streamlined and discriminating OER process. Computerization of OER processing was strongly supported, as was the proposal to use pre-developed job descriptions which could be revised or amended at the time of OER preparation. The idea of having a separate OER for company and field grade received fairly strong support but was accompanied with concerns over increasing the administrative burden. Retention of the twice-a-year OER for lieutenants received very little support (only 27% of the respondents were positive overall).
A proposal to institute an off-line OER worksheet for use in setting goals and reviewing past performance received a very favorable reaction from the respondents. By contrast, proposals to show a developmental goal for an individual officer on the formal OER form or to show the officer's strongest performance area were not well received. A number of officers believed such additions would simply be gamed and that raters would have a difficult time in forming such opinions. Officers did want to retain the graphic scale for potential but did not have strong feelings about omitting numeric scales on performance factors. Elements which would introduce greater discipline in ratings also received strong support. Such elements included limiting the Wing Commander to giving top potential scores to only 10% of ratees; providing rater histories to supervisors; and showing rater and indorser tendencies to the selection boards. The preliminary designs were reviewed in the light of these findings and appropriate revisions were made. The final forms of the conceptual designs are explained next. First, however, the sketch below illustrates how the top-block limit mentioned above might operate.
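The 10% top-block limit reduces to a simple quota check at the level where it is applied. The sketch below is a hypothetical illustration only: the interviews did not settle rounding or tie-breaking rules, so floor rounding is assumed here, and the function names are ours rather than part of any tested design.

```python
import math

# Hypothetical sketch of the top-block limit discussed above; floor
# rounding is an assumption, not a tested design decision.

def top_block_quota(num_ratees: int, fraction: float = 0.10) -> int:
    """Maximum number of ratees who may receive the top potential block."""
    return math.floor(num_ratees * fraction)

def within_quota(top_block_awards: int, num_ratees: int) -> bool:
    """Check a wing's top-block awards against the quota."""
    return top_block_awards <= top_block_quota(num_ratees)

print(top_block_quota(87))    # 8 -- of 87 ratees, at most 8 top blocks
print(within_quota(12, 87))   # False -- exceeds the 10% limit
```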
CONCEPTUAL DESIGNS FOR OFFICER EVALUATION

This section presents three conceptual designs for Air Force officer evaluation. Presentation of these conceptual designs will be in three main parts. First, a set of features will be discussed which will be uniform across all of the designs. These are features which the study recommends for adoption, no matter what specific design for evaluation may be chosen. Second, the variable features of the three designs will be presented. Finally, each of the conceptual designs will be compared to the design criteria which were presented on page V-1.

UNIFORM ELEMENTS OF THE CONCEPTUAL DESIGNS

There is a set of features which the study team believes should be adopted by the Air Force and incorporated into any evaluation system which may be selected. These features are:

1. Use computer technology to reduce the administrative burden and provide reports and summaries not now available to the evaluation system;
2. Improve job descriptions, incorporating computer technology wherever feasible;
3. Provide a separate OER worksheet to assist in the evaluation process and to enable off-line counseling and feedback;
4. Enhance the information given to the promotion boards bearing on the discrimination among officers;
5. Provide additional training to the participants in the OER process.

Use Computer Technology

Currently, OERs are largely hand-processed, although many activities employ word processing equipment to generate OERs. Our recommendation is that the Air Force take greater advantage of available data processing capability, to include: using ADP equipment to store OER data, tracking the schedule of OERs (in coordination with other personnel actions), and providing some review and quality control functions. In addition, statistical analysis of OERs can and should be performed by computer. A centralized database for OERs (probably at MPC) could provide information as needed to be distributed to (command, wing, or base level) databases and, in turn, receive input from them for storage, tracking, and analysis. The evolving "PC3" system would be one potential host for such a database and its software. (A minimal sketch of what such a record store might look like follows.)
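To make the centralized-database idea concrete, the following sketch shows one minimal shape such a store might take. It is illustrative only: the field names, the six-point potential rating drawn from Part V of the form, and the rater_history helper are our assumptions and do not reflect any actual MPC or "PC3" schema.

```python
from dataclasses import dataclass, field

# Illustrative record shape for a centralized OER store. All field names
# are hypothetical; the 1-6 potential rating mirrors Part V of AF Form 707.

@dataclass
class OERRecord:
    ratee_ssan: str
    rater_ssan: str
    indorser_ssan: str
    closeout: str            # e.g. "1986-10-31"
    potential_rating: int    # six-point scale from Part V
    indorsement_level: str   # e.g. "Wing", "MAJCOM", "HQ USAF"

@dataclass
class OERStore:
    records: list[OERRecord] = field(default_factory=list)

    def add(self, rec: OERRecord) -> None:
        self.records.append(rec)

    def rater_history(self, rater_ssan: str) -> list[int]:
        """Potential ratings a given rater has awarded -- the raw material
        for the rater-tendency reports discussed below."""
        return [r.potential_rating for r in self.records
                if r.rater_ssan == rater_ssan]
```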
The increased use of computer technology is envisioned in each of the three conceptual designs that form the core of this section. A computer would be useful in generating reports on rater and indorser tendencies, in tracking the distribution of top block ratings, and in analyzing the pattern of senior levels of indorsement. Computer technology offers the promise of a major reduction in administrative costs in the preparation of OERs. By linking the computer to an advanced printer, the need to procure, distribute, and store forms can be eliminated. A related, indirect cost savings that could be realized is in the elimination of the many iterations in producing OERs to conform to the current notion that exceptionally high standards of typing, word and line spacing are required. We also suggest that software be developed which will provide user-friendly, menu-driven data entry screens for use by either rater/indorser or clerks.

Improve Job Descriptions

Nearly all of our Air Force sources, in interviews and focus groups, expressed the opinion that the job description is an important part of the OER, and that it should be strengthened and made more informative. The job description can provide important information to selection boards, especially for officers whose jobs are not well-known "standard" operational positions. Our recommendation is that standard "shell" job descriptions be prepared for as many officer jobs as possible and stored in a central database. The rater will update the "shell" description as needed, add specifics where applicable, and ensure that the final job description provides a clear, complete picture of the officer's duties and responsibilities.
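A minimal sketch of the "shell" mechanism, under our own assumptions, is given below; compare Figure V-1, from which the abridged template text is taken. The dictionary store, key names, and format-string blanks are illustrative conveniences, not a proposed implementation.

```python
# Hypothetical sketch of shell job-description retrieval and modification
# (compare Figure V-1). Template text is abridged; the store and keys are
# illustrative only.

SHELLS = {
    "MATERIEL MANAGEMENT OFFICER": (
        "The Materiel Management Officer (MMO) directs and supervises the "
        "administration, maintenance and availability of supplies and "
        "equipment in the Materiel Management Branch of the {unit}. "
        "Account class: {account_class}. Personnel supervised: {supervised}."
    ),
}

def build_job_description(job_title: str, **site_specifics: str) -> str:
    """Fill a stored shell with the rater's site-specific details."""
    return SHELLS[job_title].format(**site_specifics)

print(build_job_description(
    "MATERIEL MANAGEMENT OFFICER",
    unit="1776th Supply Squadron",
    account_class="I-IV",
    supervised="3 enlisted, 3 U.S. civilians",
))
```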
(We envision participation by the ratee in this process, through the medium of the OER worksheet, at the beginning of the rating period.) This product should provide promotion boards and other OER users with accurate, up-to-date information to aid their decision-making, while the process of defining the job should facilitate job counseling and communication between the rater and his/her subordinates. An illustration of what such a shell might look like and how the rater might modify it is displayed at Figure V-1. It should be clear that this recommendation is not offered as a means to inhibit the freedom of the rater to describe/establish job requirements, but rather as a job aid with the potential to make job descriptions more useful both for promotion boards and for job incumbents.

Provide Separate OER Worksheet

Again, through the first round interviews, we found that many young officers want the opportunity for job counseling from their superior officers. This need for institutionalized counseling was also part of the overall guidance for the project objectives. After evaluating the findings about other organizations and some of the opinions expressed by officers, the study team decided to recommend that a separate OER worksheet and counseling form be used to support communications between the rater and ratee. This worksheet would be used at the beginning of the rating period to document the rating chain and to clarify the job requirements. At the end of the rating period, the worksheet would be used by the ratee to cite accomplishments during the period and by the rater to counsel the ratee on performance and career development. A model of such a worksheet is displayed at Figure V-2.
FIGURE V-1

SAMPLE JOB DESCRIPTION

A. Computerized Shell. (This model job description would be provided to the rater from the computerized OER data base.)

MATERIEL MANAGEMENT OFFICER, ________ SUPPLY SQUADRON, ________

The Materiel Management Officer (MMO) directs and supervises the administration, maintenance and availability of supplies and equipment in the Materiel Management Branch of the ________ Supply Squadron. The MMO is responsible to the Supply Squadron Commander/Chief of Supply for the efficient management of all items in the supply accounts. The Materiel Branch monitors stock levels, projects future supply needs, responds to requests covering a wide variety of items, and protects against shrinkage or theft of supplies. Principal challenges include responding promptly and effectively to normal and emergency supply requests, supervising subordinates, and assuring adherence to very stringent and detailed administrative controls. Additional challenges include determining priorities for responding to conflicting requests and using ingenuity when normal channels do not suffice.

Important dimensions include:
Account class: ____
Number of subaccounts: ____
Value of equipment accounts: ____
Personnel supervised (direct/indirect):
  Officers: ____ / ____
  Enlisted: ____ / ____
  U.S. civilians: ____ / ____
  Foreign nationals: ____ / ____
FIGURE V-1 (Continued)

SAMPLE JOB DESCRIPTION

B. Modified Job Description. (This is an example of how a rater might revise the shell job description to fit the particular circumstances at that job site.)

MATERIEL MANAGEMENT OFFICER, 1776TH SUPPLY SQUADRON, ANDREWS AFB

The Materiel Management Officer (MMO) directs and supervises the administration, maintenance and availability of supplies and equipment in the Materiel Management Branch of the 1776th Supply Squadron. The MMO is responsible to the Supply Squadron Commander/Chief of Supply for the efficient management of all items in the supply accounts. The Materiel Branch monitors stock levels, projects future supply needs, responds to requests covering a wide variety of items, and protects against shrinkage or theft of supplies. Principal challenges include responding promptly and effectively to normal and emergency supply requests, supervising subordinates, and assuring adherence to very stringent and detailed administrative controls. Additional challenges include determining priorities for responding to conflicting requests and using ingenuity when normal channels do not suffice. The MMO services and balances the needs of several organizations located at Andrews AFB, such as the Reserve and Systems Command HQ, and acts as Chief of Supply in the absence of the Squadron Commander.

Important dimensions include:
Account class: I, II, III, IV
Number of subaccounts: (see note)
Value of subaccounts: (see note)
Personnel supervised (direct/indirect):
  Officers: --
  Enlisted: 3 / --
  U.S. civilians: 3 / --
  Foreign nationals: --

Note: This sample job description was prepared by interviewing an incumbent materiel management officer. The missing data was not available at the time of the interview but should be available to the rater if sufficient advance notice were given.
FIGURE V-2

OER WORKSHEET AND COUNSELLING FORM

PART I   RATEE IDENTIFICATION DATA
1. NAME   2. SSAN   3. GRADE   4. DAFSC   5. ORGANIZATION, COMMAND, LOCATION   6. PAS CODE

PART II   RATEE - YOUR RATING CHAIN FOR THE EVALUATION PERIOD IS:
RATER: NAME, GRADE, POSITION TITLE
ADDITIONAL RATER (if any): NAME, GRADE, POSITION TITLE
INDORSER: NAME, GRADE, POSITION TITLE

PART III   RATEE - YOUR UNDERSTANDING OF THE JOB REQUIREMENTS IS:
JOB TITLE:
Significant duties and responsibilities:

PART IV   RATEE - LIST YOUR SIGNIFICANT ACCOMPLISHMENTS DURING THE REPORT PERIOD ____ TO ____

signature / date
    FIGURE V-2 PARTV RATERIDENTIFICATION DATA 1. NAME 2. SSAN '3. Grade 4. DAFSC 5. ORGANIZATION, COMMAND. LOCATION 6. PAS CODE PART VI DESCRIPTION OF RATEE'S JOB 7. PERIOD OF REPORT 8. NO. DAYS OF SUPERVISION 9. REASON FOR REPORT From: IThru: 10. JOB TITLE: 11. JOB DESCRIPTION PART VII COMMENTS ON JOB PERFORMANCE PART VIII AREAS OF CONCENTRATION FOR IMPROVEMENT OF PERFORMANCE PART IX AREAS OF CONCENTRATION FOR CAREER DEVELOPMENT signature date V-13
The OER worksheet provides a means for a ratee to influence his/her report by giving the rater specific information on the manner in which duties were performed. This merely gives structure and a specific form to what has been an informal procedure. However, adding the requirement for the ratee and rater to agree on the job description and job requirements at the beginning of the rating period provides a means to influence job performance positively. The other feature of the worksheet proposed as a means of improving job performance is the rater's comment on job performance at the end of the rating period. The subsection labeled "areas for . . . improvement" was included specifically to encourage the rater to identify negatives if they exist and to influence changes in the direction of desired performance. The Air Force culture is such that rating officers are not likely to include such comments in the OER itself. This concept proposes that the worksheet, not the OER, will be the principal mechanism providing performance feedback to the officer corps. The decision not to rely on the OER for feedback recognizes that the primary purpose of the OER is to discriminate among officers for the purpose of making selections (primarily for promotion). The use of one form for both counseling and discrimination would create conflicting demands on the author: the rater is asked on the one hand to provide documentary evidence that will help get a good officer promoted and, on the other, to list that officer's weaknesses needing improvement. Resolving this conflict has been the most difficult challenge to revisers of the OER for decades. The solution proposed here is to divorce the OER from the counseling process.
    "ReviseInformation Provided toPromotion Boards This element addresses the file information provided to the selection boards on each officer under consideration for promotion. First, it is recommended that the number of OERs in the promotion folder be limited. Current practice dictates that all the evaluation reports generated during an individual's career be it, luded in the promotion folder. We are proposing to limit the number of evaluation reports to all reports generated in the present grade, or five evaluation reports (whichever number is higher). For example, if an individual has received four evaluation reports as a captain, then these four reports, and the last OER as a first lieutenant, would be included in the promotion folder. Similarly, if a lieutenant colonel has received six evaluation reports, all six would be part of the promotion folder. This measure would have considerable impact upon the Air Force officer corps. First, it would reinforce the message that the performance evaluation system has been re-focused to accentuate current or recent performance. In addition, it would take some pressure off both the rater and ratee; since the OER would not have the long-term impact that it has today. This should result in more candid and accurate evaluations. Finally, it would focus promotion board members' limited time on those reports which should have the greatest impact on the promotion decision. Second, there is a group of special category organizations (SPECtT) which, according to Air Force regulations, receive preferential manning considerations as a matter of policy. In a study of major, lieutenant colonel, and colonel temporary promotion boards for fiscal years 1972-1974, 25 agencies identified as SPECAT were recognized as having "higher quality" officers than did the highest MAJCOM. It is recommended that such a study be updated and those units identified which, by regulation, receive special consideration in terms of the quality of officers assigned Wn are shown to have significantly higher promotion board scores than the MAJCOMs. It is V-15
Second, there is a group of special category organizations (SPECAT) which, according to Air Force regulations, receive preferential manning consideration as a matter of policy. In a study of major, lieutenant colonel, and colonel temporary promotion boards for fiscal years 1972-1974, 25 agencies identified as SPECAT were recognized as having "higher quality" officers than the highest MAJCOM. It is recommended that such a study be updated and that those units be identified which, by regulation, receive special consideration in terms of the quality of officers assigned and are shown to have significantly higher promotion board scores than the MAJCOMs. It is further recommended that the list of such organizations and a summary of recent promotion selection rates be provided to each promotion board, with instructions that the board is to recognize that the proportion of outstanding officers assigned to such organizations is probably significantly higher than in most other units.

Finally, it is proposed as a part of each of the conceptual designs that pertinent rating tendencies be furnished to selection boards. Through the use of the computer technologies recommended earlier in this section, the rating/indorsing history of the persons or commands (depending on what level is chosen to provide the discrimination on individual OERs) can be displayed to the promotion boards. Through such reports, individual OERs can be interpreted accurately, differentiating those reports which are inflated from those which represent the candid judgment of the writer about the rated officer's potential.

Train All Participants

Any change in administrative procedures would require additional training for those responsible for executing the procedure. However, any substantial change in the officer evaluation system will require training and educating the entire officer corps, because the OER process affects every Air Force officer as a participant. This is even more significant in light of the study finding that successful implementation of any major change in the system will require changes in Air Force culture that go far beyond procedure. Thus, training is a major activity addressed in the implementation plan presented in Section VI. To ensure continued success in any officer evaluation process, training must be on-going and continuous.
CONCEPTUAL DESIGN 1: DIFFERENTIATION THROUGH COMMAND PERSUASION

This alternative OER design, recognizing the strong culture surrounding the current OER process and the potential stress that will be associated with any change, seeks to improve the process while retaining the method of providing discrimination among officers that, to date, has widespread acceptance: the level of final indorsement. Distinguishing features of this design are:

1. The list of performance factors has been reduced in number, and the requirement to comment on each has been eliminated.

2. The rater is no longer required to evaluate potential.

3. The discriminating factor will continue to be the level of indorsement.

Process

The OER will be prepared annually and batched so that all reports for officers of the same grade are closed out on the same date. Since the discrimination for potential is to be the level of indorsement, and since there is a closed process following command lines to determine which officers receive the higher-level indorsements, it appears prudent to rate all officers in a peer group together to provide a fair assessment of each officer in the command. The argument supporting this statement is that, if the major commands are going to discipline the system, then competition among officers must be within the command. Otherwise, the commands would be competing with each other for promotion opportunity, an anarchic situation that would work to defeat the proposed system of discrimination.
The identity of the rater, additional rater, and indorser would remain the same as under the current system. The allocation of indorsements at each level of command would be determined in accordance with major command policy. At the completion of each rating cycle, the military personnel center would produce a report which displays the indorsement tendencies of each major command and separate activity. This report, together with the analysis of the distribution of quality officers to SPECAT units, would give promotion boards the tools needed to interpret OERs and to select the best Air Force officers for promotion.
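As a rough illustration of the indorsement-tendency report described above, the sketch below tallies, for each major command, what fraction of closed-out reports received each level of indorsement. The record format and the level names are assumptions of this example, not a specified Air Force format.

```python
# Illustrative sketch: indorsement tendencies by major command.
# Each record is (command, indorsement_level); both fields are assumed.
from collections import Counter

def indorsement_tendencies(records):
    """Return {command: {level: percent of that command's reports}}."""
    by_command = {}
    for command, level in records:
        by_command.setdefault(command, Counter())[level] += 1
    return {
        cmd: {lvl: 100.0 * n / sum(counts.values())
              for lvl, n in counts.items()}
        for cmd, counts in by_command.items()
    }

records = [("SAC", "wing commander"), ("SAC", "rater only"),
           ("TAC", "wing commander"), ("TAC", "wing commander")]
print(indorsement_tendencies(records))
# {'SAC': {'wing commander': 50.0, 'rater only': 50.0},
#  'TAC': {'wing commander': 100.0}}
```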
OER Form

A model form that could be used in this design is displayed at Figure V-3. In this scenario, the rater will provide numerical ratings for each of a list of six job performance factors on a five-point scale. The performance standards will be displayed in the OER regulation. The rater will also provide comments on duty performance. The regulation will emphasize that the narrative should focus on the performance factors and that it should emphasize accomplishments, not adjectives. There is space for a career development recommendation. This is a narrative in which the rater may make any comments about the future development of the ratee as a career Air Force officer. Appropriate comments would include future assignment patterns, training and education, and self-improvement. In this section, the rater will make a recommendation on whether or not to augment a reserve officer. On the reverse side of the form, the additional rater and indorser will add narrative comments on performance of duties and potential and will evaluate potential on a six-point scale. The rater will not evaluate potential.

FIGURE V-3. CONCEPTUAL DESIGN 1 (front side). Officer identification data (name, SSAN, grade, DAFSC, duty title, PAS code, organization/command/location, period of report, days of supervision, reason for report); job description and other assigned duties; assessment of performance by rating official on six job performance factors (application of technical knowledge and skills; planning and organization of work; the exercise of leadership; management of resources; identification and resolution of problems; communications), each rated on a five-point scale; comments on performance (describe accomplishments for the rating period); rating official's name, grade, branch of service, command, location, duty title, date, SSAN, and signature.
FIGURE V-3 (Continued; reverse side). Career development recommendation; evaluation of potential (compare the ratee's capacity for increased responsibility with that of other officers whom you know in the same grade, and indicate the rating by marking the most appropriate block) on a six-block scale from highest to lowest, with separate marking rows; comments by additional rater (name, grade, branch of service, command, location, duty title, date, SSAN, signature); comments by indorser (with the same identification block); certification of report by command/agency (name, grade, branch of service, organization, command, location, duty title, date, SSAN, signature of certifying official).
This design enhances the evaluation of job performance by reducing the number of performance factors to those which are demonstrably pertinent to all jobs. By tying the rater's narrative to these factors, a more meaningful description of job performance can be expected. This expectation is heightened by the fact that the rater is directed to focus on performance, not potential. There is also an expectation that the narrative will focus more on accomplishments and less on puffery, although this may be an unreasonable expectation. The rater-ratee relationship is protected by retaining the discrimination at the level of the indorsement. The results of the study team's interviews suggest that, absent meaningful numeric ratings, promotion boards can discriminate among officers based on narratives and level of indorsement. The thrust of this design is to enhance the discipline which the major commands are already providing the system. The effect would be to increase the specificity of discrimination on each report and to give Air Force leadership more visibility of (and influence over) the process of differentiation being performed by the major commands. This result is achieved by generating more detailed reports on the indorsement patterns in each command and by requiring that annual reports be batched. Feedback from Air Force officers of all grades suggests that the enhancement to morale offered by inflated reports is important to the culture. The effect of the changes offered in this design is to retain a morale-enhancing report that discriminates for promotion purposes and that substantially reduces the administrative burden now experienced throughout the Air Force in preparing OERs. What this method does not accomplish is the elimination of grossly inflated ratings and their concomitant dangers.
CONCEPTUAL DESIGN 2: DIFFERENTIATION THROUGH RATER PERSUASION

This alternative OER design concept would alter the existing Air Force OER system substantially. Therefore, there is a risk that the culture would not adapt to the change and that the design would not be accepted by the officer corps. The major features, however, are now being used in other uniformed services' OER systems. As such, they have been demonstrated to be feasible, and there is an existing body of information concerning the effectiveness of each feature used. (This does not suggest that, removed from the parent services' cultures and their integrated OER systems, each feature will work in the same way in an Air Force environment and context.) The distinguishing features of this design are as follows:

1. The rater is required to focus on duty performance only.

2. The indorser provides the principal information used in discriminating among officers.

3. Raters and indorsers would be persuaded to distribute their rating scores along the available scales by publication of their rating tendencies, for use both in interpreting their ratings and in evaluating their own leadership abilities. This concept is sometimes referred to as the "rate the rater" technique.

Process

The OER will be prepared annually and batched so that all reports for officers of the same grade are closed out on the same date. The purpose of this procedure is to reinforce the guidance to indorsers to consider all officers of a grade when preparing the promotion recommendation, so as to achieve a realistic distribution of scores.
The rater should be the ratee's immediate supervisor. This is the person who determines what the duty requirements will be and who is best situated to evaluate how well the ratee accomplishes the duties. Criteria will be established for the selection of indorsing officers to ensure that responsible, mature officers perform this duty, but unnecessary inflation of the level of the indorsing official will not be permitted. For example, the indorsing officer might be designated as the rater's supervisor, with the additional requirements that he/she be at least a field grade officer and at least one grade senior to the officer being rated. There would be provision for an additional rater if there were a level of supervision between the rater and the indorser. This might happen most often when the additional rater was not at least a field grade officer or was not one grade higher than the ratee. There would not be a space on the OER form for an additional rater's narrative; rather, that narrative would be attached on an additional sheet. This is predicated on the belief that additional raters would be needed on only a small minority of reports. The report will be prepared on a computer so that, when completed and reviewed at the installation, the administrative information and quantitative ratings become part of the data base at the base level. This data base can be transmitted electronically to the Air Force Military Personnel Center. At the base level, the ratings would be used to re-compute the rating histories of both rater and indorser. These historical summaries would then be available for review by their supervisors when subsequent evaluations are prepared. Thus, when officer "A" is evaluating officer "B", "A" should consider "B's" evaluation history and whether "B" complies with Air Force policy.
The operative policy here is that the ability to make candid, realistic evaluations of subordinates is a measure of good leadership. At the Military Personnel Center, the updated data base would be used to generate electronically a label showing the rating history of each rater and indorser. This label would be affixed to the record copy of each official OER. Thus, selection boards and assignment officers would be able to evaluate ratings for performance and potential against the rater's and indorser's long-term tendencies, isolating and discounting the worth of those ratings that are inflated. The concept envisions that a three-year running average would constitute the rating history for each officer with evaluation responsibilities. Finally, it is proposed that a report showing each officer's rating history be prepared and placed in the selection folder when he/she is being considered for promotion.
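The following is a minimal sketch of the three-year running rating history described above, assuming each stored rating carries the evaluator's identity, the date, the ratee's grade, and a score on the seven-point scale; all field names are illustrative assumptions rather than elements of any specified system.

```python
# Illustrative sketch: a three-year running rating history for one
# evaluator. Record fields (evaluator, date, grade, score) are assumed.
from datetime import date, timedelta

def rating_history(records, evaluator, as_of, window_years=3):
    """Return {ratee grade: (mean score, report count)} over the window."""
    cutoff = as_of - timedelta(days=365 * window_years)
    summary = {}
    for who, when, grade, score in records:
        if who == evaluator and cutoff <= when <= as_of:
            total, n = summary.get(grade, (0, 0))
            summary[grade] = (total + score, n + 1)
    return {g: (round(t / n, 2), n) for g, (t, n) in summary.items()}

records = [
    ("Col Smith", date(1986, 6, 1), "CPT", 7),
    ("Col Smith", date(1987, 6, 1), "CPT", 6),
    ("Col Smith", date(1987, 6, 1), "MAJ", 7),
]
print(rating_history(records, "Col Smith", date(1988, 1, 1)))
# {'CPT': (6.5, 2), 'MAJ': (7.0, 1)}
```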
OER Form

A model of the form that could be used in this design is displayed at Figure V-4. The rater will provide numerical ratings for each of a list of six job performance factors on a seven-point scale. The performance standards will be displayed in the OER regulation. The rater will also provide comments on duty performance. The instructions will emphasize that the rater is to structure his/her narrative around the job performance factors as an outline and that the narrative should focus on deeds, not adjectives. The indorser prepares the reverse of the form, beginning with a career development recommendation. This is a narrative section in which the indorser may make any comments about the future development of the ratee as a career Air Force officer. Appropriate comments would include future assignment patterns, training and education, and self-improvement.

FIGURE V-4. CONCEPTUAL DESIGN 2 (front side). Ratee identification data (name, SSAN, grade, DAFSC, duty title, PAS code, organization/command/location, period of report, days of supervision, reason for report); job description and other assigned duties; assessment of performance by rating official on six job performance factors, each rated on a seven-point scale anchored by DNM (does not consistently meet the performance standards), MSE (meets and sometimes exceeds the performance standards), and CE (consistently exceeds the performance standards); a space reserved for MPC use displaying this rater's grading percentages for all officers of the grade over a stated period; comments on performance (describe accomplishments for the rating period); rating official identification and signature.
FIGURE V-4 (Continued; reverse side). Career development recommendation; assessment of five officership factors (initiative; responsibility; decisiveness; adaptability to stress; professionalism) on the same seven-point DNM/MSE/CE scale, with a space reserved for MPC use showing this rater's grading history for the grade over a stated period; indorser comments and promotion potential (do not promote; promote with peers; promote ahead of peers), with a space reserved for MPC use showing this indorser's ratings for officers of the grade during a stated period; indorser identification and signature; reviewing officer concurrence (concur/nonconcur, with comments only if nonconcur) and the reviewer's identification and signature.
Next, the indorser would evaluate five officership factors on the same seven-point scale. Again, the standards would be displayed in the regulation. These traits are assigned to the indorser under the philosophy that traits are more closely related to potential than to current performance, and that the burden of estimating potential should therefore be placed on the indorser rather than the rater. Finally, the indorser would evaluate the promotion potential of the ratee (on a scale of 1 to 7), reflecting the potential of the ratee to perform the duties associated with the next higher grade in comparison with all other Air Force officers of the ratee's grade. The indorser will also provide a narrative that justifies the officership ratings and the estimate of potential. The report should be reviewed by the indorser's supervisor unless the indorser is in the grade of colonel or higher. Under most circumstances, when a reviewer is used, he/she should be in the grade of colonel or higher. The purpose of the review is to ensure that a senior Air Force officer has viewed the report. In interviews conducted by the study team, colonel was the lowest grade at which officers consistently expressed concern about the relationship between a credible OER system and the future well-being of the Air Force officer corps. The focus of quality control measures will be on the behavior of indorsing officers. This behavior can be influenced by publishing the indorser's rating history in two forms. First, on each OER a computer-generated indorser rating history reveals to selection boards whether the indorser is complying with the spirit of the regulation; an indorser who inflates all reports degrades the value of those OERs which he/she prepares. Second, a computer-generated rating history will be placed in the selection folder of each officer being considered for promotion, showing how that officer has performed the responsibilities incumbent on indorsing officers. These computer-generated reports will create stress for those indorsing officers who do not comply with the spirit of this OER concept.
In addition, inflation of scores can be influenced by a thorough education program for indorsing officers. This program should provide periodic updates of information about statistical trends in OER inflation, a means of reassuring indorsers who comply and pressuring those who do not. The OER process protects the relationship between an officer and his immediate supervisor by not requiring the supervisor to furnish the most obvious promotion discriminators in the OER. The indorser, who is required to provide quantitative discriminators, is separated from the ratee by one level of supervision; the indorser is thus presumed to be more impartial to the conflict between the needs of the individual (recognition through promotion to a higher grade) and those of the organization (selection of the best qualified through Air Force-wide competition). Even with the computation of rater histories, the rater cannot be expected to contribute much discrimination using the job performance and officership factors on the front side of the form; the culture would not permit this much of a change in behavior from current traditions. However, these factors should be included -- somewhat for the discrimination (a chance to separate the sub-marginal) but more for the purpose of educating the officer corps on Air Force expectations about performance of duty and the qualities of officership. The principal discrimination on the OER will be the indorser's rating for potential. This rating would not be specifically controlled; however, by requiring that annual reports be batched by grade, and through persuasion, it is reasonable to suppose that the majority of indorsers can be influenced to distribute their ratings along the potential scale. The value of a maximum rating will be degraded in those cases where an indorser gives everyone a maximum score. This distribution of scores, observed over a period of time that provides a number of reports on each officer, will be the basis for discrimination among levels of potential for promotion.
CONCEPTUAL DESIGN 3: DIFFERENTIATION THROUGH TOP BLOCK CONSTRAINT

The third alternative OER design also alters the existing Air Force OER substantially. In this third alternative, discipline is introduced overtly through a 10% limitation on the number of top block ratings allowed. This alternative runs the risk of being negatively compared to the controlled system, although specific identification of a small percentage of high achievers is now being done through the covert indorsement allocation process. The distinguishing features of this design are:

1. The entire system is envisioned as a computer-based process. That is, all information on an OER is entered directly into a remote terminal/PC, where it is stored for future access while certain decisions are made about its viability. It is not released to the official record until it has been validated.

2. Rating officers make differentiations between officers, but only at the extremes.

3. The indorsing officer is limited to rating only ten percent of the officers in each grade in the top block for potential.

OER Process

This design does not incorporate a change in the current timing of OERs. That is, they would continue to be based on anniversary dates, changes of assignment, etc. The major change in this system is that OERs would not enter the official record until the end of each year. Using current computer technology, OERs would be written or entered on a personal computer or computer terminal so that the ratings are immediately "banked."
In addition, a printout of the form (which is printed entirely by the computer) is signed and sent through the chain of command to any intermediate commanders, who enter their indorsements on the form and into the computer data bank. The form is ultimately forwarded to the wing commander. The wing commander's promotion rating is entered into the computer, but not on the physical form, which is maintained at wing headquarters until the end of the year. At that time, the wing commander's ratings are validated against the ten percent limitation (see the following section). As will be explained later, the primary promotion recommendations will be made at the wing commander or equivalent level. The wing commander will be limited to recommending no more than 10% of each grade for below-the-zone promotion. The form will allow intermediate supervisors to make a recommendation on promotion, but these recommendations will not have to meet the 10% test. These intermediate recommendations are vehicles for supervising officers to encourage the promotion of their best people, those with the greatest potential for greater responsibility in the Air Force. Clearly, it is in the interest of intermediate raters to be selective in their ratings, since if they rate all officers as "promote early," they would in effect be leaving the decision entirely to the wing commander, with no real input from themselves. This identification of highest potential, together with some amount of variation in performance ratings, provides the promotion board with more overt and factual input than is now available. It is anticipated that this input will be most useful initially in making decisions on below-the-zone promotions. However, with the passage of time, as the number of OERs in a file builds, individuals will:

1. Be rated as outstanding on some performance factors and not others;
2. Receive different ratings on the same factors for different time periods; and

3. Receive different indorsements at different times.

Given this type of variation, boards will be able to differentiate reliably between officers across a much wider spectrum than just identifying the "top" ten percent.

"Wing commander" is used here as the most typical command level at which rating distributions would be tested. For commands which are not organized into wings, an equivalent level would have to be determined. Also, for levels above the wing, the indorsing officer would be at least a step removed from the individual, at a rank of O-6 or higher. In any case, the final indorser must have at least ten officers of the rank to be indorsed reporting through the chain of command to him/her, or the OER would be forwarded to the next level for indorsement. This concept also envisions that an additional rater will evaluate the ratee. This additional rater will be the rater's supervisor, unless the rater's supervisor is a wing commander or the equivalent, in which case there will be no additional rater. Space will be provided for a narrative where the additional rater can comment on both performance of duties and potential. There will also be a space for a promotion recommendation. As each OER is indorsed and the promotion recommendation entered into the computer, the computer will "bank" these ratings against the indorser's "account." This bank will be available for examination by the indorsing officer and/or his designated staff members (through use of an access code) at any time during the year. Thus, the officer (and his/her staff) will be able to verify from his/her own records whether the indorsing officer is staying within the 10% top block limitation. At the end of the year, the total pattern can be reviewed and changes made. This is intended to give the indorsing officer a chance to review his/her recommendations in light of all officers rated. This is done simply by changing the recommendation in the computer.
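As a hedged illustration of the end-of-year check described above, the sketch below validates an indorser's banked recommendations against the 10% top-block limitation before release to the official record. The record layout, category labels, and function name are assumptions of this example, not part of any specified Air Force system.

```python
# Illustrative sketch: validate an indorser's banked top-block
# recommendations against the 10% limitation. Record layout is assumed.
import math

def validate_top_block(banked, grade, limit=0.10):
    """banked: list of (ratee_grade, recommendation) for one indorser.
    Returns (within_limit, top_block_count, allowed) for the grade."""
    in_grade = [rec for g, rec in banked if g == grade]
    top = sum(1 for rec in in_grade if rec == "promote ahead of peers")
    allowed = math.floor(limit * len(in_grade))  # 10% of officers rated
    return top <= allowed, top, allowed

banked = [("CPT", "promote ahead of peers")] + \
         [("CPT", "promote with peers")] * 19
print(validate_top_block(banked, "CPT"))
# (True, 1, 2) -- at most 2 of these 20 captains may receive the top block
```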
When the indorsing officer is satisfied with his/her final ratings, the recommendations are entered on the hardcopy OERs, which are then signed and forwarded to the appropriate MAJCOM and ultimately to MPC. The process then begins again for the new year. As the performance ratings are entered by the original rater (or a staff person), they are also "banked" against the rater's "account." It is envisioned that this account will contain a running, three-year average of the performance ratings given by each rater for each officer grade. This account can be maintained in the exportable OER data base. Each rating officer will be supplied with a computer report at the end of the year on the distribution of ratings he/she has given. This distribution will go to the rating officer and his/her immediate superior. Space has been provided in the job performance factors section of each OER to display the rater's rating distribution history. This distribution will be produced by the computer at the end of the year, before indorsing officers make their final review. This information will also be on the OER when it is considered by the selection board.
It is recommended that the FY 72-74 study of Special Category Units (SPECAT) be updated to identify those units which, by regulation, receive special consideration in terms of the quality of officers assigned and are shown to have significantly higher promotion board scores than the MAJCOMs. It is further recommended that the list of such organizations be provided to each promotion board with instructions that the board is to recognize that the proportion of outstanding officers assigned to such organizations is probably significantly higher than 10%. This design does not recommend having indorsing officers rate promotion potential within such organizations against a standard different from the 10% used for the entire Air Force.

FIGURE V-5. CONCEPTUAL DESIGN 3 (front side). Ratee identification data (name, grade, DAFSC, PAS code, organization/command/location, period of report, days of supervision, reason for report); job description and other assigned duties; six job performance factors (application of technical knowledge and skills; planning and organization of work; the exercise of leadership; management of resources; identification and resolution of problems; communications), each rated Not Observed, DNM, MSE, or CE, where DNM = does not consistently meet the performance standards, MSE = meets and sometimes exceeds the performance standards, and CE = consistently exceeds the performance standards; a space reserved for MPC use showing this rater's rating percentages for all officers of the grade in the year; comments on performance (describe accomplishments for the rating period); identification block and signature.
FIGURE V-5 (Continued; reverse side). Career development recommendation; additional rater comments with a promotion recommendation (do not promote; promote with peers; promote ahead of peers) and identification/signature block; indorser comments with the same promotion recommendation blocks and identification/signature block; certification of report by command/agency (name, grade, branch of service, organization, command, location, duty title, date, SSAN, signature of certifying official).
OER Form

The proposed OER form for this design is displayed as Figure V-5. This design reduces the number of performance factors to six, on the basis that the more overall performance is fractionated, the less the rater is able to distinguish between individual aspects (which are frequently interdependent) and the more the overall attitude toward the individual, or "halo effect," will operate. Also, this list isolates those aspects which are separate and critical to the widest variety of jobs. Narratives for each factor will not be required. These performance factors are:

1. Application of Technical Knowledge and Skills;
2. Planning and Organization of Work;
3. The Exercise of Leadership;
4. Management of Resources;
5. Identification and Resolution of Problems; and
6. Communication.

This design also provides for only the rating officer to fill out the performance factor ratings. Each factor will be rated in three categories:

1. Does not consistently meet the requirements of the job.
2. Consistently meets and may sometimes exceed the requirements of the job.
3. Consistently exceeds the requirements of the job in significant and substantial ways.

In the Comments on Performance section, the rater makes narrative comments on what the individual has accomplished during the rating period. Orienting the comments in this manner clearly directs the rater toward discussing things that have to do with the primary job. The narrative should be as factual as possible, with the use of descriptive adjectives kept to a minimum. Key points should be bulleted or highlighted to draw the attention of those reading the OER.
The Career Development Recommendation is a narrative section in which the rater may make any comments about the future development of the ratee as a career Air Force officer. Appropriate comments would include future assignment patterns, training and education, and self-improvement. In this section, the rater makes a recommendation on whether or not to augment a reserve officer. This section ends the portion of the OER prepared by the rater. Space is provided on the form for a unit administrator to certify that the report is correct. It is envisioned that this will be completed at the end of the reporting year by the administrative office having visibility of the wing commander's evaluations during the past year. This section would be completed when the administrator had certified that the number of top block promotion recommendations during the year had not exceeded the 10% limit.

Rationale

Given the history of "firewalled" ratings, it is the intention of this system to have rating officers make some differentiations between officers, but only at the extremes. While this is certainly far from an ideal system, it is one which may be workable, given the recent OER history and the Air Force culture. Furthermore, because different people will be considered outstanding on different performance factors at different times, it will, over time, be possible to make much broader distinctions between records than just the extremes. Specifically, the system was built to recognize that:

1. Air Force officers are not a random selection from the general population, but rather an elite group of highly achieving individuals.
2. In any elite group, there is still a range of talent, including those individuals who stand noticeably above their peers, having an unusually high level of skill and energy for recognizing problems or opportunities and applying the leadership to deal with them. The opposite is just as true: no matter how select the group, there are always some individuals who fail to live up to the standards.

3. Since most officers are well qualified to perform any assignment for which they have the technical skills, it is not necessary to make fine differentiations in either performance or potential for most of the officer force. There are, however, certain highly challenging and vital positions for which it is necessary to identify that small percentage of our officers who perform best in particular aspects of their current positions and are the natural leaders among their peers.

EVALUATION OF CONCEPTUAL DESIGNS

Section IV presented several critical design criteria which the study team derived from our data analysis. These criteria are not all equally well satisfied by all three of our conceptual designs for the OER. We realized that it was probably not feasible to satisfy all of these criteria in any one design, so each design concentrated on particular criteria and often failed to satisfy some of the others completely. Table V-1 presents a summary of our evaluation of the extent to which each of the three designs is likely to satisfy each criterion, if it is implemented as we suggest. The following paragraphs evaluate each design, in turn, against the five criteria.
TABLE V-1. CONCEPTUAL DESIGNS COMPARED TO DESIGN CRITERIA

Probability of satisfying criterion:

DESIGN CRITERION                                     COMMAND PERSUASION   RATER PERSUASION   TOP BLOCK CONSTRAINT
Focus on job performance                             HIGH                 HIGH               HIGH
Provide differentiation on potential                 MODERATE             HIGH               MODERATE/HIGH
Be acceptable to officer corps                       HIGH                 MODERATE           MODERATE
Provide means for developing subordinate officers   MODERATE             MODERATE           MODERATE
Minimize administrative burden:
  Short-term                                         LOW                  LOW                LOW
  Long-term                                          MODERATE             HIGH               MODERATE/HIGH
CONCEPTUAL DESIGN 1 - COMMAND PERSUASION

Focus on Job Performance

Conceptual Design 1, the one which requires the least change from current OER practices, does provide an improved focus on job performance, with the number of performance factors reduced to six and the narrative comments on each eliminated. The regulation accompanying this form would emphasize that the rater should focus on job accomplishments in writing his narrative.

Differentiation on Potential

Differentiation of potential would be provided much as it is on the present form, although the additional information provided to selection boards should give more insight into the true value of the potential rating. This design is therefore moderately likely to improve the differentiation of potential.

Acceptability to Officer Corps

This design would probably be quite acceptable to the officer corps because of its similarity to the current form and process: it requires few painful adjustments. This is one of the strong points of this design, and one of the reasons for its inclusion.

Developing Subordinates

This design and the other two are virtually identical in the way in which they provide for the development of subordinate officers; therefore, they will not be discussed separately. All would be accompanied by an off-line counseling form designed to facilitate the provision of performance feedback and career counseling to the officer being rated. The study team feels that this will constitute an improvement over the current system, which lacks a formal feedback mechanism, but that its real success will depend upon the effort devoted to training officers to provide effective counseling and feedback to subordinates.
The effectiveness of the off-line counseling provisions will also depend upon the Air Force leadership's commitment to, and enforcement of, the counseling and feedback requirement.

Administrative Burden

Conceptual Design 1 will have little effect on the administrative burden of the OER system in the short term, although the removal of some narrative sections and the use of automation in form preparation will reduce the burden somewhat. The tracking of indorsement histories will require some administrative investment in the short term to develop an automated system, but in the long term it is likely to reduce the burden on the commands and the selection boards.

CONCEPTUAL DESIGN 2 - RATER PERSUASION

Focus on Job Performance

Design 2 has a strong focus on job performance, separating the performance factors, which have been chosen to be applicable to all Air Force officers' jobs, from the "officership" factors. The instruction accompanying this form would give clear examples of exemplary behaviors for each factor, further emphasizing the focus on how well the officer performs his primary duties.

Differentiation on Potential

Design 2 provides distinct rating factors for officership, or potential, which are rated by the indorsing officer. These would support the overall potential recommendation by the indorser. This design, therefore, provides for clear and explicit rating of potential, separate from job performance, and is likely to yield better differentiation than the current system, without the current "covert" component.
Acceptability to Officer Corps

Conceptual Design 2 should be moderately acceptable to the officer corps, although there will be some risk in this respect, since it requires some major changes in rating behaviors. The major risk with this design is that officers will continue to perceive that any rating or indorsement other than top block will be devastating to their careers, as it is now. Only time and experience would reduce this fear, and the risk is that the officer corps would not give it that time. The keys to such acceptability will be the effectiveness of the training and indoctrination which accompany the introduction of the design, and the widespread credibility of the Air Force leadership's commitment to the new system. The mechanisms for controlling rating inflation should be acceptable if they are applied uniformly across all officer grades and commands.

Administrative Burden

This design, like the first, will require administrative effort to be invested in start-up procedures, such as development of software to produce statistical summaries and rater/indorser histories. However, once the system is in place and operating, it should be simpler and less burdensome for the officers and the MPC than the current system, since it will be highly automated and it decreases the amount of narrative material to be written and edited.

CONCEPTUAL DESIGN 3 - TOP BLOCK CONSTRAINT

Focus on Job Performance

Conceptual Design 3 has a strong focus on job performance, with an improved job description and simplified performance factor ratings. The performance factors have been chosen to be applicable to the widest possible variety of Air Force officer jobs and to represent truly critical behaviors.
Narrative comments on performance will be required to deal with accomplishments on the job.

Differentiation on Potential

Design 3 provides for the differentiation of potential for promotion through the indorser's explicit promotion recommendation. Indorsement level will not be used to provide this differentiation. The limitation to 10% top block promotion recommendations by the wing commander will force the selection of the very best officers for this rating, although there will be no differentiation among the large number of good but not outstanding officers on this item. However, over time and through a series of reports, discrimination can be made through a much wider range than 10%. Therefore, we estimate that this criterion is quite likely to be satisfied by this design.

Acceptability to Officer Corps

It is our opinion that this design is moderately likely to be accepted by the officer corps, after some initial resistance to the idea of explicit constraint on ratings. As with the other designs and other criteria, much will depend upon the credibility of the Air Force leadership's commitment to this design, and upon how well this commitment is communicated to the officer corps.

Administrative Burden

Design 3 will be similar to Design 2 in requiring a fairly heavy administrative investment in the initial implementation phases. A mechanism will be needed to track wing commander rating distributions and to keep statistics on performance ratings. However, once the system is up and running, the administrative burden should be reduced from that of the current system.
There will be less narrative to write and edit, and much of the work will be computer-aided.

Viewed against the criterion of acceptability to the officer corps, Design 1 is predicted to do the best, since it requires the least change in "business as usual." The other two designs are somewhat more threatening to the status quo and are likely to meet stronger resistance. They will require carefully developed and intensive training and information programs to ensure acceptance. All three designs use the same method, an off-line counseling and feedback form, to provide a means for fostering the career development of subordinate officers. As mentioned above, the success of this method will depend largely upon the preparation, training, and reinforcement provided to the officers who must work with it. The criterion of minimizing the administrative burden of the OER system is best satisfied in the long run by Design 2, with Design 3 nearly as efficient. Design 1, with the least change from the current system, is not expected to reduce the burden as much. All would require a front-end investment of resources to develop the requisite hardware, software, documentation, etc., but Designs 2 and 3 would eventually return this investment through automation and aiding of some of the more onerous OER functions.
SECTION VI

IMPLEMENTATION PLAN

This section presents the study's recommendations for implementing a revised officer evaluation system in the Air Force. Obviously, in an effort as large as implementing a new OER system, there are literally thousands of details which must be addressed before the system becomes a reality. Such an effort is clearly beyond the scope of our contract or our capabilities. What follows are our conclusions about the major issues and aspects of implementation. The need for a detailed and well-thought-out plan for introducing the new system can best be appreciated through a review of the lessons learned from the controlled OER era (1974-1978). That OER system is not viewed as successful, and one of the reasons given for its failure was the way it was introduced into the Air Force. This recommended implementation plan takes account of the mistakes and successes of that period, as reported in the Air University study of May 1979 (Phillips, 1979). This plan is based on the assumption that the Air Force will select a new OER system concept that is substantially different, both in process and in form, from the current OER system. Adopting a minor revision to the current system (such as Conceptual Design 1) would not take as long to complete, although the case could be made that all of the steps described below would still be necessary. A conclusion presented elsewhere in this study is that the principal flaw of the current system lies neither in the process nor in the form but in the culture surrounding the OER and the resulting behaviors, which have inflated scores and compromised the value of the ratings placed on the OER forms. Consequently, a strong emphasis should be placed on actions necessary to influence a change in officer attitudes about the OER process.
    process. A substantialnumber of such recommended actions are grouped below under the topic of training. However, the scope of actions needed is broader than training, and an effort has been made to integrate this indoctrination program throughout all phases of the implementation plan. The plan is divided into eight phases: 1. feasibility assessment and final decision; 2. design; 3. development, 4. testing; 5. full scale training; 6. full scale operation; 7. evaluation; and 8. refinement and maintenance. Each of these phases will be discussed below. Table VI-I at the end of this section is an implementation schedule. This schedule suggests that, in an orderly transition, the first rating periods under a new OER system could begin about twenty-four months after a decision is made to proceed. FEASIBILITY ASSESSMENT AND FINAL DECISION The plan assumes that the Air Force, at the staff level, will select one of the OER concepts under consideration. The first phase of this implementation plan is to prepare the concept for scrutiny by the top leadership and to make a decision to commit significant Air Force resources to implementation. A second assumption is that, rather than entering the Planning, Programming, Budgeting System to compete for resources, the implementation will receive suffi-.ient priority to be funded by diversion of resources from other missions. VI-2
In this phase, the Air Staff and the Military Personnel Center will test the feasibility of adopting the changed OER system and estimate the resources, in terms of dollars, manpower, and time, needed to adopt the new system successfully -- in other words, conduct a feasibility analysis. An important aspect of feasibility is the assessment of how the proposed change in the OER system will affect other systems in the larger human resource management function. A part of this feasibility analysis should be to present the recommendation to the major commands and staff agencies for comment. These comments should be incorporated into a decision briefing for Air Force senior leadership. The outcome of this phase will be the decision to implement the change and an allocation of the resources necessary to execute it.

DESIGN

So far the change to the OER system has been worked out in terms of outcomes and process. In the design phase of implementation, the specifications of the system will be written, as well as the specifications for each subsequent phase of the implementation plan. It would be of great future benefit to the success of the revised OER system to integrate the major commands into the planning process so that they share ownership of the outcome. For this reason, and to provide a staff knowledgeable about a wide spectrum of Air Force issues, it would be beneficial to assemble a multi-command task force to complete the detailed implementation plan. In this phase the detailed plan will be developed to implement the change. Some aspects requiring particularly fine detail include:

1. systems requirements and specifications;
2. identification of implementing agencies (Air Staff, MPC, Air University, contractor, etc.);
3. test plan;
4. training;
5. publicity;
6. time-phased start-up; and
7. evaluation.

The outcome of the design phase will be a detailed plan encompassing each phase of the implementation program. A particularly significant element of this plan is evaluation. In the evaluation plan, the design team will write the standards by which the success of the implementation will be measured. The importance of designing the evaluation plan early is that evaluation can then begin early, giving the developers and implementers an on-going evaluation as a control to assist them in maintaining standards of quality throughout the implementation cycle. A second significant aspect of the design phase is the designation of the lead agency and supporting activities to accomplish the implementation. Public relations activities should begin immediately after the decision is made to proceed with a revision to the OER system. This activity should be integrated with each phase of the implementation and, therefore, is not appropriately a separate phase. During the design phase, the Air Force officer corps should be informed that the decision has been made to revise the OER, that design of the revised system is underway, and of the reasons militating for a change. Thorough planning for publicity in the design phase will strongly support success in shaping officer attitudes about the OER change.
DEVELOPMENT

In the development phase, the materials, programs, and systems envisioned in the design will be created. These are the tangible assets of the revised OER system which must be in place before the changeover to a new process and form can be made. The development phase will also produce the training and education materials that will be used to influence officer attitudes and behaviors toward the cultural changes needed if the revised OER system is to be a success. Development need not be deferred until all design work has been completed. The proposed milestone schedule at Figure VI-1 suggests that design and development can proceed to some extent in parallel, with a phase lag in development to preclude the double effort that could result when a design change is made in a sub-system for which products might otherwise have been developed. Some of the activities during the development phase include the following:

1. Validate the information management system requirements and write the detailed systems specifications.

2. Procure or identify existing information processing equipment which will be used to support the revised OER system.

3. Write, test, and debug the software which will be needed to enter, process, store, and retrieve the OER data to be developed in the new system. (This may be a step on the critical path toward completion of a successful implementation.)

4. Write and validate the OER and related forms to be used in the new system.
5. Prepare revised regulations, instructions, and supporting information that will be used by administrators, raters, and indorsers under the new system. An important subset of this information would be the documentation of the automated information system needed by users. These materials should be prepared, coordinated, and published prior to the next phase.

6. Develop training materials to be used in training users and administrators of the new system.

7. Prepare additional publicity and promotion materials.

TESTING

A test of the new OER system should be conducted prior to proliferating the system Air Force-wide. This test should be constructed to simulate as closely as possible the system's projected use when fully in place. For that reason, the test should not be conducted until the completion of the development phase. The test should be conducted in representative smaller units of each of the major commands and several of the more significant separate activities (Air Staff, MPC, Air University, etc.). The size of each test unit should be restricted to the smallest necessary to exercise the system fully and to yield a statistically significant sample of reports. On the other hand, as many different commands should be included as resource availability will allow. Some mechanism should be included in the test which will heighten the realism of the exercise. (One of the lessons learned from the controlled OER period was that the test did not reveal the extent of resistance to the change which the officer corps would express when the new system was fully operational.) An example of a mechanism which might make the test more realistic would be a requirement for the rater and/or indorser to brief the report to the ratee, and for the Air Force to collect attitude data from all three by means of a survey conducted in the evaluation of the test.
Some actions which should be conducted in the testing phase include the following:

1. Select and notify the test units;

2. Train representatives from each test unit to train their units and administer the test;

3. Train administrators, raters, and indorsers in the test units;

4. Conduct a rating cycle using the new system;

5. Evaluate the results. Issues to be evaluated would include:
- administrative procedures;
- effectiveness of information systems;
- the distribution of ratings;
- the usefulness of the OER data to selection boards;
- counseling compliance and its effectiveness;
- officer attitudes about the revised system; and
- success of the training programs.

6. Following the test evaluation, consideration should be given to adjusting the system to account for lessons learned from the test.
The study team believes that the best control group is either an external set of units or a set of previous reports on the same officers. Doing simultaneous reports under the new and old systems is likely to introduce an auto-correlation error that will confound the results; therefore, such a technique would not provide an effective control.

FULL SCALE TRAINING

Lessons learned from the implementation of the controlled OER in 1974 suggest that a good training program is essential to a successful conversion to a different OER system. Therefore, the training phase should be carefully planned and vigorously executed. The training conducted for the test units as a part of the previous phase should be carefully evaluated and the results incorporated into the full-scale training programs. Training is needed in two major areas. First, there is an obvious need to train officers in the procedural steps they will take in executing the OER system cycle. As a part of this aspect of the training program, provisions should be made for training that will change officers' attitudes about the OER process. It is an observation of the study team that it would not be practical to design an OER system which cannot be "gamed" by officers determined to do so. Therefore, in concert with the persuasion and control mechanisms built into the system, the training program should seek to create an attitude in the officer corps in which the majority of officers comply with the spirit of the revised system. A second area on which training should be focused is the counseling of subordinates. The experience of the other Services and of the firms observed in private industry parallels that of the Air Force -- counseling is a task that supervisors are reluctant to do, which most do poorly absent adequate preparation, and one for which good training programs can increase the effectiveness of most supervisors.
This is a chronic rather than an acute challenge, and thus suitable for a long-range training perspective. In that regard, counseling may be a subject best addressed through a combination of pre-commissioning and professional military education programs.

Steps which may be included in the training program include:

1. Develop sets of training programs suitable for use in units as well as in the various institutional environments;

2. Train major command and separate activity training teams;

3. Major command and separate activity teams train raters to perform evaluations and counseling, and train indorsers to evaluate and maintain quality control of OERs;

4. Train the promotion secretariat in the revisions and to prepare materials for orientations of promotion board members; and

5. Begin revised training/education in the OER system in the Air Force institutional programs.

FULL SCALE OPERATION

Air Force-wide implementation of the revised OER system is dependent on the speed with which the supporting systems can be developed and proliferated. The milestone schedule at Figure VI-1 suggests that evaluations under a revised OER could begin two years after the decision to proceed is made. The principal question concerning full scale operation is: what schedule should be followed in converting from evaluations using AF Form 707 to the new form and procedure? The operative consideration is that the revised OER system requires that a cultural change be effected among the officer corps.
This change must be such that evaluators are more candid in their ratings. Therefore, it is desirable that the conversion be accomplished in a short period of time, and that the Air Force not operate two OER systems simultaneously which have different perspectives on what honest and candid evaluations should say about the officers being evaluated.

The transition should be initiated with a close-out report for all officers using AF Form 707. This will be the opportunity for all units that are now manipulating the system to complete whatever distribution of indorsements they are working toward. Having a close-out report for all officers means that all start under the new system from the same point and have more or less equal opportunity to receive favorable evaluations in the future. It would be desirable to make all the close-out reports effective on the same day, but such a procedure would create an extraordinary administrative burden. Therefore, the transition should be planned to occur, by grade, over a period of not more than 90 days.

Following the close-out, reporting would begin on a routine basis for each grade. The transition will be smoothest if the sequence is in inverse grade order (beginning with colonels). Thus, in the transition to the new system, each evaluator (rater and indorser) is already being evaluated under the system before he/she is required to complete a report. It is also prudent to schedule the close-out report for lieutenant colonels immediately prior to a primary zone promotion board for selection to colonel. Therefore, lieutenant colonels, who have relatively low promotion opportunity, will be the last grade group to meet a promotion board under the new system. Similarly, the promotion boards for selection to captain and major, where the promotion opportunity is relatively high, should be scheduled so that many officers meet the board with an evaluation under the new system in their file. The high selection rate of these officers should be publicized to demonstrate that the new system will operate fairly and that the right (high performing) officers will be promoted.
Steps in the full scale operation phase include:

1. Expand the information program;

2. Disseminate regulations and instructions;

3. Install and test hardware and software;

4. Phase out AF Form 707 with close-out reports by grade;

5. Begin reporting under the revised system, also by grade; and

6. Continue training.

EVALUATION

There is a need for continuing evaluation from the outset of the implementation period, but a well thought-out and energetic evaluation phase should begin with full scale operation under the revised OER system. The evaluation program should be centralized in the Air Force rather than being delegated to the major commands, as it is today. Also, there should be provision to continue the evaluation phase indefinitely into the future as an Air Force headquarters function. (In this regard there is a separate recommendation, elsewhere in this report, that the Military Personnel Center OER quality control capability be augmented.)

Some of the items which should be evaluated include:

1. Operation of the developed technology;
2. Compliance of raters and indorsers with the instructions and the spirit of the new system, including an evaluation of the distribution of ratings;

3. Quality of OER-related information furnished to promotion boards;

4. Promotion board results using the new OER input;

5. Compliance with the counseling provisions of the system; and

6. Officer corps attitudes concerning the changes.

REFINEMENTS AND MAINTENANCE

An effective evaluation program will provide the basis for making changes to improve the operation of the OER system. In this regard it is the view of the study team that future changes would be feasible and desirable if they could be accomplished by an evolutionary rather than a revolutionary process. Such changes could be viewed as necessary maintenance to the system. The conceptual designs proposed in Section V are thought to be feasible, but may not accomplish all that the ideal evaluation program would do. Some future refinements which might be necessary or desirable include:

1. More stringent discipline in the distribution of ratings may be necessary if inflation is excessive;

2. If counseling does not prove to be adequately performed, compliance measures may be added to the system; and

3. The Air Force may wish to institute performance improvement measures that more closely resemble management by objectives, such as participative goal setting.
[Figure VI-1: Milestone schedule for implementation of the revised OER system]
SECTION VII

CONCLUDING COMMENTS AND RECOMMENDATIONS

In the course of this project we have studied performance appraisal from a historical perspective, as it is practiced in the private sector, as it is conducted in the military services, and, of course, as it is conducted in the Air Force. While each organization has some distinguishing needs or cultural characteristics, it may be said overall that performance appraisal is at best an inexact science as well as a highly emotional issue. Inflated ratings are typical and recurring in almost all organizations. In short, performance appraisal is a very onerous but necessary human resource management function.

Performance appraisal in the United States Armed Forces is differentiated from that of almost all other organizations because of the up-or-out system. Most organizations use performance appraisal for short-term compensation decisions, e.g., annual merit increases, bonuses, etc. Performance appraisal in the Armed Services, however, is the basic tool for shaping the officer workforce; the ultimate function of the process is to select an ever smaller population at each successive officer grade. With this thought in mind, the case could be made that the military services have a greater responsibility toward achieving accuracy in performance appraisal than most organizations. This need for accuracy in leadership identification is extremely important for each service, in part because of the training and development costs invested in each officer, but more importantly, to assure that the best possible leaders reach the higher grades. In addition, this consideration extends to the need to provide individual officers with the information necessary to make career and career development transition decisions.

The current Air Force performance appraisal instrument, the OER Form 707, is probably as sound as most performance appraisal instruments used in large organizations. The process surrounding this instrument, however, as well as the culture, do not support efficient or accurate use of it, precisely because of the possible negative implication of such accuracy, i.e., a terminated Air Force career.
During most of the history of the Air Force OER, this cultural orientation toward inaccuracy, seen in inflated OER ratings and the gaming of the system in a multitude of ways, has become ingrained as basically acceptable, and has become an almost obligatory responsibility of principal raters. A primary observation of this study is that it is not so much the OER form which must be changed to introduce control, nor is it the process: the ingrained cultural attitude of the officer corps must be reoriented from acceptance of inaccuracy in OER preparation to a requirement for accuracy. We realize that such an attitudinal/cultural change would have to occur gradually and would have to be reinforced from several different sources.

RECOMMENDED INITIAL STEPS

DEFINE THE PURPOSE(S) OF THE OER

Air Force regulations cite no fewer than six purposes for the OER, substantially more than the number of purposes for evaluation systems reported by other organizations. The Air Force should focus the purposes for which the OER is to be used on those for which it is most effective, and communicate those purposes to the users of the system.

PROVIDE STRONG LEADERSHIP SUPPORT

First, the Air Force leadership should clearly define and publish the exact purpose(s) of the OER as it is intended to be used on a day-to-day basis. Along with this definition should come criteria for the selection boards for promotion decisions, which would again be public knowledge. (For example, the Chief of Staff's desire to view a record of good performance in cockpit jobs as sufficient basis for promotion through lieutenant colonel.)
Different criteria are relevant for different grades, and these differences should be articulated and published so that junior officers become familiar with and internalize the fact that their perspectives and leadership abilities must grow if they are to continue to be promoted to higher grades throughout their careers.

In addition, it is essential that the Air Force leadership give a strong signal that it is committed to a candid, accurate OER process. This could include such actions as advising MPC to return OERs from raters, indorsers, or commands with inflated distributions, or advising the selection boards to give less credibility to the ratings of such raters, indorsers, or commands. "Accuracy in OER preparation" could also be included as a performance factor on the OER.

RECOMMENDED CHANGES TO OER PROCESS

INSTITUTE NEW RATING PROCEDURES

Although we believe that an attitude change toward the OER process is more important than a "fix" of the current form, we do not want to discount the assistance that procedural change could lend in achieving cultural change. As described previously in this report, there are many habits in OER writing and rating which have become institutionalized. Adoption of one of the conceptual designs given in Section V would, at the very least, appear different from the current process and would require changes in how an OER is prepared. In addition, adoption of the second or third conceptual design should mandate substantive change in the ratings officers receive. Of these two alternatives, we believe that the alternative of having the wing commander select 10% for top block ratings would be the more acceptable to the officer corps. This is recommended because the results of the data collection showed that Air Force officers are willing to differentiate the top and bottom extremes of performance, but are uncomfortable making finer distinctions or differentiating among the majority of competent officers, as would be required in the second alternative.
PROVIDE FEEDBACK ON PERFORMANCE

Each of the three conceptual designs described in Section V includes provisions for off-line job/career counseling. In addition to the valuable advice a subordinate could receive from his/her supervisor, we see such counseling as another opportunity for institutionalizing a commitment to accuracy in evaluation. This institutionalization could occur if the rest of the overall scenario were functioning as recommended. For example, we have recommended that criteria for selection be better defined to the boards and that these criteria be made public knowledge. In turn, through PME and other training, raters would learn these criteria, receive instruction on how to counsel subordinates relative to these criteria, and finally, receive guidance as to the importance of giving advice as well as accurate assessments of performance during the off-line counseling sessions. Over time it would become apparent to the population at large that OER assessments and promotion results were congruent with each other, and the system would develop the required credibility.

REDUCE THE FREQUENCY OF OERS FOR LIEUTENANTS

The current Air Force policy is for lieutenants to be formally evaluated every six months. The study's conclusion is that lieutenants should be evaluated on the same basis as all other officer grades (yearly). There are two reasons supporting this recommendation. First, not enough additional information accumulates in a six-month period for a rater to add significantly to the previous report of performance. We recognize the need for added feedback at this early stage, but feedback could be provided through non-OER channels. Second, reducing the number of evaluation reports would significantly decrease the administrative burden of performance evaluations upon the units.
RECOMMENDED IMPLEMENTATION ACTIONS

Implementation of a new OER form will, of course, be the first opportunity to publicize the changes in policy. We assume that this will be done through promotional literature, PME, OER-specific training, and guidance through the chain of command. We would also expect that a rather high percentage of the officer corps will view the new form as simply another drill in procedural change. For this reason we recommend that heavy emphasis be placed on advertising the other steps recommended above. No matter how thorough the implementation phase is, these other steps are required to form the foundation, as well as the maintenance structure, for a real and continued commitment to accuracy in OER preparation.

PROVIDE TRAINING AND INDOCTRINATION SUPPORT

A commitment to accuracy in OER preparation should be supported by appropriate instruction included in pre-commissioning training, transition training, and Professional Military Education (PME) schools and courses throughout an officer's career. The idea here is to bring about and continually support a code of accuracy -- akin to an honor system -- toward the OER. This training, as well as the other actions recommended, could also assist in removing some of the discomfort which some officers, particularly younger ones, feel toward the current system. Apparently there is a heavy emphasis in the current training and indoctrination materials concerning the honesty and integrity of the Air Force officer corps and its systems. Some officers see the current and conflicting practice of allocating indorsements covertly while firewalling ratings publicly as being in contradiction to "honesty and integrity."
CHANGE INFORMATION PROVIDED TO SELECTION BOARDS

Limit the Number of OERs in the Promotion Folder

Current practice dictates that all the evaluation reports generated during an individual's career be included in the promotion folder. The Air Force should consider limiting the number of evaluation reports to all reports generated in the present grade, or five evaluation reports, whichever number is higher. For example, if an individual has received four evaluation reports as a captain, then these four reports, plus the last OER as a first lieutenant, would be included in the promotion folder. Similarly, if a lieutenant colonel has received six evaluation reports, all six would be part of the promotion folder. This measure would have considerable impact upon the Air Force officer corps. First, it would reinforce the message that the performance evaluation system has been re-focused to accentuate current or recent performance. In addition, it would take a fair amount of pressure off both the rater and the ratee, since an individual OER would not have the long-term impact that it has today. This should result in more candid and accurate evaluations.
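A minimal sketch of the proposed folder rule follows, assuming a simple list of (grade, date) records; the data layout and function name are ours, not an Air Force specification.

    # Hypothetical sketch of the folder rule: all reports from the present
    # grade, padded with the most recent prior-grade reports until at least
    # five are in the folder.
    def promotion_folder(reports, present_grade, minimum=5):
        """reports: list of (grade, year) tuples, oldest first."""
        folder = [r for r in reports if r[0] == present_grade]
        prior = [r for r in reports if r[0] != present_grade]
        # Pad with the most recent prior-grade reports, newest first.
        while len(folder) < minimum and prior:
            folder.insert(0, prior.pop())
        return folder

    # The captain example from the text: four reports in grade plus the
    # last report as a first lieutenant, for a total of five.
    reports = [("1LT", 1985), ("1LT", 1986), ("CPT", 1987), ("CPT", 1988),
               ("CPT", 1989), ("CPT", 1990)]
    print(promotion_folder(reports, "CPT"))   # five reports

A lieutenant colonel with six reports in grade would see no padding at all: the in-grade reports alone exceed the five-report floor.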
Identify Special Category Organizations (SPECAT)

According to Air Force regulations, certain organizations receive, as a matter of policy, preferential manning consideration. In a study of FY72-74 major, lieutenant colonel, and colonel temporary promotion boards, 25 agencies identified as SPECAT were found to have "higher quality" officers than did the highest MAJCOM. It is recommended that such a study be updated to identify those units which, by regulation, receive special consideration in terms of the quality of officers assigned and are shown to have significantly higher promotion board scores than the MAJCOMs. It is further recommended that the list of such organizations be provided to each promotion board with instructions that the board is to recognize that the proportion of outstanding officers assigned to such organizations is probably significantly higher than ten percent.

Reduce the Importance of the Photo in the Promotion Folder

A considerable degree of hostility was expressed to the study team over the inflated importance of details which have become associated with the picture in the folder. Variations such as how good the photographer is, how photogenic the officer is, or the individual likes and dislikes of those serving on promotion boards are all factors which are seen as unnecessarily biasing in relation to the picture. It is recommended not to eliminate the picture from the promotion folder, but to reduce its size (e.g., to 3" X 5"), in order to decrease the amount of attention given to potentially biasing minute details.

OTHER ISSUES

Several issues not directly associated with officer evaluation were identified during the data collection and analysis stages of the project. The scope of the study did not allow for the development of each of these issues into a well substantiated conclusion and recommendation, but the project team was motivated to mention several of them because of the breadth of concern observed among the Air Force officers interviewed.
CAREER DEVELOPMENT ISSUES

First, the team observed widespread uncertainty over the fundamental question of what the desired or expected career paths for Air Force officers are. It is suggested that a more precise concept of professional development should be articulated by the Air Force to the officer corps. For example, in today's Air Force, is it valid for an individual to say that he/she just wants to be a pilot? The answers to these and other career-related questions should be pursued, along with an assessment of their impact on the performance evaluation system.

Second, it was observed that many junior and mid-grade officers are reluctant to admit, or are ignorant of, their reasonable promotion expectations. The existence of the grade pyramid is a fact bearing heavily on attitudes about the OER system, yet the observations accumulated by the project team suggest that the Air Force has not clearly articulated the implications of this grade pyramid for the career planning of officers.

Finally, there is a group of career development issues that center around the phase points for promotion. Among these are:

1. The large opportunity for below-the-zone promotion selection has a profound impact on the OER system. Among other implications, it encourages widespread "gaming" of the distribution of top indorsements.

2. The selection for promotion to major has a profound psychological effect on officer attitudes, as this is the first point where significant numbers of competent officers are selected out of the Air Force. The phase point occurs at a time when it may be difficult for the officer selected out to transition back to a civilian career, because of his/her age and lack of recent civilian experience. Under the current OER system, many of these officers have not been prepared for the prospect that they might be released. The anxiety extends far beyond the cohorts who might be affected.
It is the conclusion of this study that these issues are not readily addressed by changes to the OER system. Rather, it is recommended that the Air Force look to other career development solutions to these challenges.

AIRMAN PERFORMANCE REPORT

Senior non-commissioned officers are evaluated using the Airman Performance Report (APR). This report is allowed to escalate above the level of immediate supervision for final indorsement, in a manner similar to the OER. It is recommended that, if the Air Force chooses to change the OER process, an evaluation of the APR be immediately undertaken with a view toward coordinating the two systems and the policies which underlie them.
APPENDIX A

REFERENCES

Beacham, S. (1979). "Managing Compensation and Performance Appraisal under the Age Act." Management Review, January.

Brinkerhoff, D. W. and Kanter, R. M. (1980). "Appraising the Performance of Performance Appraisal." Sloan Management Review, Spring, pp. 3-16.

Bureau of National Affairs (1974). Labor Policy and Practice -- Personnel Management. Washington, D.C.

Bureau of National Affairs (1975). "Employee Performance: Evaluation and Control." Personnel Policies Forum, No. 108, February 1975.

Cascio, W. F. (1982). Applied Psychology in Personnel Management. Reston, Virginia: Reston Publishing Co., Inc.

Cascio, W. F. and Valenzi, E. R. (1978). "Relations Among Criteria of Police Performance." Journal of Applied Psychology, 63, pp. 22-28.

Cook, D. (1968). "The Impact on Managers of Frequency of Feedback." Academy of Management Journal, 11 (2), pp. 263-77.

Cummings, L. and Schwab, D. (1973). Performance in Organizations. Glenview, Illinois: Scott, Foresman & Co.

Eichel, C. and Bender, H. (1984). Performance Appraisal: A Study of Current Techniques. New York: American Management Association.

French, W. L. (1982). The Personnel Process: Human Resources Administration and Development. Boston, Massachusetts: Houghton Mifflin Co.

Glueck, W. F. (1978). Personnel: A Diagnostic Approach. Dallas, Texas: Business Publications, Inc.

Gordon, L. V. and Medland, F. F. (1965). "The Cross-Group Stability of Peer Ratings of Leadership Potential." Personnel Psychology, 18, pp. 173-77.

Greenberg, G. (1986). "Determinants of Perceived Fairness of Performance Evaluations." Journal of Applied Psychology, 71 (2), pp. 340-342.

Hay Associates (1978). Survey of Human Resources Practices. New York: Hay Associates.

Hollander, E. P. (1965). "Validity of Peer Nominations in Predicting a Distant Performance Criterion." Journal of Applied Psychology, 49, pp. 434-438.

Kane, J. S. and Freeman, K. A. (1986). "MBO and Performance Appraisal: A Mixture That's Not a Solution, Part I." Personnel, December, pp. 26-36.
Korman, A. K. (1968). "The Prediction of Managerial Performance: A Review." Personnel Psychology, 21, pp. 295-322.

Landy, F. J. and Farr, J. L. (1980). "Performance Rating." Psychological Bulletin, 87 (1), pp. 72-107.

Latham, G. P. and Wexley, K. N. (1980). Increasing Productivity Through Performance Appraisal. Massachusetts: Addison-Wesley.

Lazer, R. I. and Wikstrom, W. S. (1977). Appraising Managerial Performance: Current Practices and Future Directions. New York: The Conference Board.

Locher, A. H. and Teel, K. S. (1975). "Performance Appraisal - A Survey of Current Practices." Personnel Practices, pp. 245-247.

Meyer, H. H. (1980). "Self-Appraisal of Job Performance." Personnel Psychology, 33, pp. 291-295.

Meyer, H. H., Kay, E., and French, J. R. P., Jr. (1965). "Split Roles in Performance Appraisal." Harvard Business Review, 43 (1), pp. 123-129.

Phillip, Thomas D., Major (1979). Evolution of the Air Force Officer Evaluation System: 1968-1978. The Air University, Maxwell Air Force Base, Alabama.

Porter, L. W., Lawler, E. E., and Hackman, J. R. (1975). Behavior in Organizations. New York: McGraw-Hill.

Schneier, C. E., Beatty, R. W., and Baird, L. S. (1986). "How to Construct a Successful Performance Appraisal System." Training and Development Journal, April, pp. 38-42.

Tarnowieski, D. (1973). The Changing Success Ethic (An AMA Survey Report). New York: American Management Association.

Taylor, R. L. and Zawacki, R. A. (1984). "Trends in Performance Appraisal: Guidelines for Managers." Personnel Administrator, March, pp. 71-80.

Wexley, K. N. and Yukl, G. A. (1977). Organizational Behavior and Personnel Psychology. Illinois: Richard D. Irwin, Inc.

OTHER READINGS

Bjerke, David G., Cleveland, Jeanette N., Morrison, Robert F., and Wilson, William C. (1986). "Officer Fitness Report Evaluation Study." Unpublished report. Navy Personnel Research and Development Center, San Diego, California.

Davis, B. L. and Mount, M. K. (1986). "Design and Use of a Performance Appraisal Feedback System." Personnel Administrator, pp. 91-97.

Harper, S. C. (1986). "Adding Purpose to Performance Reviews." Training and Development Journal, pp. 53-55.
Kelly, C. M. (1986). "Reasonable Performance Appraisals." Training and Development Journal, January 1986, pp. 79-82.

Lewin, A. Y. and Zwany, A. (1976). "Peer Nominations: A Model, Literature Critique and a Paradigm for Research." Personnel Psychology, 29, pp. 423-447.

Martin, D. C. (1986). "Performance Appraisal 2: Improving the Rater's Effectiveness." Personnel, August 1986, pp. 28-33.

United States Air Force (1982). "Officer Evaluations." Department of the Air Force Regulation 36-10.

United States Air Force (1983). "You and Your Promotion System." Department of the Air Force Pamphlet 36-32.
APPENDIX B

SUMMARY OF PERFORMANCE APPRAISAL METHODS

Numerous techniques or formats have been developed in attempts to evaluate ratee performance accurately, reduce the judgmental and measurement difficulties associated with performance appraisal, assist in providing feedback to ratees, and lessen the administrative burden appraisals place on an organization. Each type of appraisal method has, of course, both advantages and disadvantages, depending on the specific objectives intended for it and the organizational setting in which it is to be employed. The purpose of this appendix is to describe the major performance appraisal methods in use today. Evaluations of the potential usefulness of these methods to the Air Force are contained in Section III of the text of the report. The following is a list of the methods to be described:

A. Graphic Rating Scale
B. Trait Appraisal
C. Narrative Essay
D. Work Sample Tests
E. Critical Incident Technique
F. BARS/BES
G. Behavioral Observation Scales
H. Behavior Discrimination Scales
I. Weighted Checklist
J. Simple Ranking System
K. Forced Choice
L. Forced Distribution Ranking
M. Paired Comparison
N. Mixed Standard Rating Scales
O. Management by Objectives

A. GRAPHIC RATING SCALE

The graphic rating scale is an appraisal method in common use, particularly for positions below managerial levels. All rating scales share the property of calling for the rater's judgment of the ratee's job performance along an unbroken continuum (e.g., excellent to unacceptable), or into discrete categories (e.g., superior, satisfactory, unsatisfactory) within a continuum. In the typical appraisal using graphic rating scales, the rater is given a list of job dimensions and told to rate the employee on each of the dimensions using the scale. A major problem with such scales is that words like "superior" and "average" have different meanings to different raters, which affects the reliability of the instrument. Contemporary versions are likely to use scales featuring descriptive statements of different levels of performance for each dimension. Choices along the scale for each dimension may be assigned points, and total scores may then be computed for each employee. The Performance Factors section of the Air Force Form 707 is an example of a graphic rating scale technique.
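The point-scoring variant described above can be made concrete with a short sketch; the dimension names, labels, and point values below are invented for illustration.

    # A minimal sketch of graphic-rating-scale scoring: each job dimension
    # is rated on a labeled point scale and the points are summed into a
    # total score per employee.
    SCALE = {"unacceptable": 1, "marginal": 2, "satisfactory": 3,
             "superior": 4, "excellent": 5}

    def total_score(ratings):
        """ratings: dict mapping job dimension -> scale label."""
        return sum(SCALE[label] for label in ratings.values())

    ratings = {"job knowledge": "superior",
               "leadership": "excellent",
               "judgment": "satisfactory"}
    print(total_score(ratings))   # 4 + 5 + 3 = 12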
B. TRAIT APPRAISAL

In the trait approach, the ratee is understood as an individual composed of various amounts of initiative, cooperativeness, loyalty, creativity, commitment, and the like; the approach is based on such personality characteristics. The appraiser focuses on the personality traits of the employee, and uses these to rate the employee's performance. For instance, employee A shows initiative and is therefore judged to be committed to the job. The emphasis is on the potential predictors of performance and not on performance itself. A typical trait performance appraisal form contains a number of employee qualities and characteristics to be judged, such as leadership, emotional stability, attitude, job knowledge, communication skills, ability to adapt, and so on. These traits are then evaluated on rating scales. The scales may be broken into many parts or points, and the appraiser is required to mark the point which best describes the employee. For example, on employee dependability, the points may be: a) above average; b) usually dependable; c) sometimes careless; and d) unreliable. It is also usual to find a question like, "What traits may help or hinder the employee's advancement?" The trait approach is more inclined toward the individual as a person, and rates the individual as such, rather than his or her job performance.

C. NARRATIVE ESSAY

The rater prepares a written subjective report of the performance of the ratee. Specific issues or performance in given areas can be highlighted by the rater. Frequently raters are asked by their organizations to indicate the ratee's performance in certain areas, e.g., equal employment opportunity and affirmative action.

D. WORK SAMPLE TESTS

Individuals being rated are given tests, usually hands-on exercises, of specific critical skills of their job. These tests are then scored to determine the individual's proficiency in the job.
E. CRITICAL INCIDENT TECHNIQUE

Job incumbents and/or supervisors are asked to develop incidents that discriminate between successful and unsuccessful performance, or those behaviors which are crucial to the job. This method requires the observer (usually the supervisor) to be knowledgeable of the requirements and goals of a given job. He/she must be a person who sees people perform the job on a regular basis, so that he/she can describe to a job analyst incidents of effective and ineffective job behavior observed over the past six to twelve months. The specific steps in conducting a job analysis based on the critical incident technique are as follows:

1. Introduction - The job analyst tells the observer to determine what makes the difference between an effective and an ineffective (position) (e.g., secretary, engineer, or technician). The analyst must then explain exactly what he/she means by effective and ineffective.

2. Interview - The observers are asked to think back over the past six to twelve months and come up with specific incidents that they themselves have seen occur, without mentioning any specific employees' names. They are asked to report at least five effective and five ineffective incidents, and in order to collect a representative sample of incidents it is recommended that at least 30 people be interviewed, for a total of 300 incidents.
This method focuses on key dimensions of responsibility, which then help in the selection and appraisal of personnel for such positions. Examples of critical incidents are:

POSITION: PERSONNEL OFFICER

In classifying a position, fails to take into account other functions in the unit or in the larger organization which impact the position being classified.

In discussions related to filling a difficult position, will explore all possible mechanisms for filling the position and talk to program officials to ascertain the cause of difficulty in locating applicants before making a recommendation.

Does not ask employees for additional information which might help in becoming qualified for a position.

Agrees with a supervisor's request that an overgraded employee be overlooked during the review period.

Identifies potential interpersonal conflicts due to differences in personality, age, race, etc., between parties to a grievance before making a decision.

F. BARS/BES PERFORMANCE APPRAISAL SYSTEM

BARS/BES, developed by Smith and Kendall in 1963, is based on job analysis, notation of critical incidents, and a rating scale. The critical incidents of the employee must be observed by the supervisor. This system deals with expected behavior.
This system requires the manager to work with the employee to achieve mutually acceptable goals and desirable behavior. BARS/BES forces the supervisor and the employee to communicate ideas, which promotes better understanding as well as the behavioral changes needed to improve employee performance.

Critical Incidents - Illustrate what the employee has done, or failed to do, that has resulted in unusual success or failure. They are NOT opinions or generalizations concerning the employee.

BARS - Behaviorally Anchored Rating Scale - Uses a rating scale and behavioral anchors (or critical incidents) related to the criterion being measured.

BES - Behavioral Expectation Scale - Focuses on expected performance.

Development of the BARS/BES System

Group I - Using job analysis, critical incidents are gathered describing competent, average, and incompetent behaviors in categories relevant to the job (e.g., math/technical skill, administrative ability). Each category corresponds to a criterion for evaluating the employee.

Group II - The group allocates each critical incident to a criterion category. If an incident is not assigned to the same dimension by 80% of the group, it is omitted, thus eliminating ambiguous incidents.
Group III - Members receive a booklet containing the criterion categories plus a list of incidents defining each criterion. The group rates each incident, typically using a 7-point system (7 - outstanding; 1 - poor job performance). The numeric value is derived from the mean of all the members' ratings. These values become the ANCHORS on the rating scale. Anchors aid the supervisor when defining the employee's behavior. Items are worded as "could be expected to work overtime" rather than "works overtime".

RATING SCALE TO DETERMINE ANCHORS RELATED TO THE CRITERION OF "PERSEVERANCE" (COMPUTER PROGRAMMER)

How perseverant is the employee?

_ Could be expected to keep working until a difficult task is completed.
_ Could be expected to continue working on a task beyond normal working hours.
_ Could be expected to continue on a task until an opportunity arises to work.
_ Could be expected to need frequent reminders to continue on a task.
_ Could be expected to ask for a new assignment rather than face a difficult task.
_ Could be expected to stop work on a difficult task at the first indication of the complexity of the task.
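A hypothetical sketch of the Group III computation follows. The mean-of-ratings rule comes from the text; the dispersion screen for dropping incidents on which raters disagree is a common refinement that we assume here, not something the text prescribes.

    # Compute an anchor value as the mean of the Group III members' 1-7
    # ratings for one incident; drop the incident if raters disagree too
    # much (the max_spread threshold is an assumption).
    from statistics import mean, stdev

    def anchor_value(ratings, max_spread=1.5):
        """ratings: one 1-7 rating per Group III member for one incident."""
        if stdev(ratings) > max_spread:
            return None          # too much disagreement; drop the incident
        return mean(ratings)

    # "Could be expected to work overtime" rated by six members:
    print(anchor_value([6, 7, 6, 5, 6, 6]))   # the anchor value is 6
    print(anchor_value([1, 7, 2, 6, 4, 3]))   # None: raters disagree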
G. BOS - BEHAVIORAL OBSERVATION SCALES

BOS is a behaviorally based appraisal measure whereby judges rate incidents obtained in the job analysis in terms of the extent to which each incident represents effective job behavior. The specific steps in developing a BOS appraisal system are as follows:

1. Individuals who are aware of the aims and objectives of a given job, who frequently observe people performing that function, and who are capable of determining whether the job requirements are being performed satisfactorily are interviewed. These individuals are asked to describe incidents that are examples of effective or ineffective behavior (critical incidents). Incidents which describe essentially the same behavior are grouped into a behavioral item.

2. Clusters of behavioral items which are similar are grouped together to form one overall criterion, or behavioral observation scale (BOS). The grouping can be done by job incumbents or analysts.

3. Incidents are placed in random order and given to a second individual or group, who reclassifies the incidents. Interjudge agreement is assessed by counting the number of incidents that both groups agree should be placed in a given criterion, divided by the combined number of incidents both groups placed in that criterion. If the ratio is below a previously agreed upon number, the items under the criterion are reexamined to see if they should be reclassified under a different criterion and/or if the criterion should be rewritten to increase specificity.
4. The BOS criteria are examined regarding their relevance to content validity. People who are intimately involved with the job evaluate the system to see if the criteria include a representative sample of the behavioral domain of interest as defined by the job analysis.

5. A 5-point Likert scale is assigned to each behavioral item. Percentages are assigned to the five points on the Likert scale, designating the number of times an employee has been observed engaging in a particular behavior.

6. A decision must be made as to whether the scales will be weighted. This is needed because each scale or criterion contains a different number of behavioral items. An overall performance rating is usually compiled by averaging across all criteria regardless of the number of items used in each criterion. The score received on each BOS criterion can be used to compute the overall performance rating for each incumbent.

Below is an example of one BOS criterion for evaluating managers. For each behavior, a 5 represents almost always, or 95% to 100% of the time; a 4 represents frequently, or 85% to 94% of the time; a 3 represents sometimes, or 75% to 84% of the time; a 2 represents seldom, or 65% to 74% of the time; and a 1 represents almost never, or 0% to 64% of the time.
Overcoming Resistance to Change(1)

1. Describes the details of the change to subordinates.
   Almost Never  1  2  3  4  5  Almost Always

2. Explains why the change is necessary.
   Almost Never  1  2  3  4  5  Almost Always

3. Discusses how the change will affect the employee.
   Almost Never  1  2  3  4  5  Almost Always

4. Listens to the employee's concerns.
   Almost Never  1  2  3  4  5  Almost Always

5. Asks the employee for help in making the change work.
   Almost Never  1  2  3  4  5  Almost Always

6. If necessary, specifies the date for a follow-up meeting to respond to the employee's concerns.
   Almost Never  1  2  3  4  5  Almost Always

Total: Below Adequate, 6-10; Adequate, 11-15; Full, 16-20; Excellent, 21-25; Superior, 26-30.

(1) Latham, Gary P. and Wexley, Kenneth N., Increasing Productivity Through Performance Appraisal, 1982, p. 56.
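Scoring against this criterion can be sketched directly from the example: sum the six 1-5 observation-frequency ratings and map the total into the bands shown. The band cut-offs come from the example; the function itself is our illustration.

    # BOS scoring for the "Overcoming Resistance to Change" criterion:
    # sum the per-item ratings, then band the total.
    BANDS = [(6, "Below Adequate"), (11, "Adequate"), (16, "Full"),
             (21, "Excellent"), (26, "Superior")]

    def bos_score(item_ratings):
        """item_ratings: one 1-5 rating per behavioral item on the criterion."""
        total = sum(item_ratings)
        label = next(name for low, name in reversed(BANDS) if total >= low)
        return total, label

    print(bos_score([4, 5, 3, 4, 4, 3]))   # (23, 'Excellent')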
H. BEHAVIOR DISCRIMINATION SCALES

In "Behavioral Discrimination Scales: A Distributional Measurement Rating Method," Kane and Lawler state that the BDS "represents an attempt to achieve the ideal operationalization of the distributional measurement model." The steps of the BDS are:

1. A pool of statements describing the full range of satisfactory and unsatisfactory job behaviors and/or outcomes is generated. This should be accomplished by having supervisors and their subordinates list all job functions. Then the subordinates should list all of the satisfactory and unsatisfactory ways of carrying out these duties.

2. All incidents should be pooled to remove duplications, and incidents that are similar should be grouped together. The resulting groups are called performance specimens; the grouping is done so that the number of items rated for each ratee is reduced. A general statement is then written to express the behavior.

3. The performance specimens are then inserted in a questionnaire administered to at least 20 supervisors and their subordinates. There are two different forms of the questionnaire, and each form is given to half of the sample. One form asks three questions in regard to each specimen:

a. During a normal six-month period, how many times would a person have the opportunity to exhibit this behavior or outcome?

b. It would be moderately satisfactory performance to exhibit this behavior or outcome on how many of these occasions?

c. How good or bad is the performance described by this behavior or outcome? (1 = very bad; 8 = very good.)

The other form is exactly the same, except that question two refers to moderately unsatisfactory performance.

4. The results should be analyzed by converting question two responses to percentages of question one responses for each specimen, and then computing the t-statistic for the difference between the mean percentages of the two subsamples for each specimen. All specimens for which the t-value does not reach significance at the .01 level should be eliminated.
5. Each specimen's median occurrence percentage and mean rating on question three are computed for the combined sample. From these, the extensity (occurrence-rate goodness) scale value for each specimen can be derived.

6. Next, the appraisal form is constructed by listing each specimen in random order at the left side of the form. To the right of each specimen is a column headed by the following question: "To your personal knowledge, how many times did this person have the opportunity to exhibit this behavior or outcome during the appraisal period? (Note: If zero, so indicate and proceed to the next item.)" If the response is greater than zero, the rater is asked to complete the following statement: "This person actually exhibited this behavior or outcome on ___ of these occasions."

7. The rating should be scored in the following manner:

a. The frequency assigned to the ratee on each specimen should be converted to a percentage of his/her opportunities to exhibit the specimen.

b. The extensity scale value corresponding to this percentage for each specimen should then be determined.

c. The value should then be multiplied by its intensity weight, which can consist of the specimen's t-value.

d. The overall performance score is then formed by summing the dimension scores.
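A sketch of the step 7 arithmetic follows, under heavy simplifying assumptions: the extensity lookup of step 7b is reduced to linear interpolation over a few (percentage, value) points, whereas the actual derivation in steps 4 and 5 is more involved, and all data are invented.

    # Simplified BDS scoring: percentage of opportunities (7a), extensity
    # lookup (7b), intensity weighting by t-value (7c), and summation (7d).
    def extensity(pct, scale_points):
        """Interpolate an extensity value for an occurrence percentage.
        scale_points: sorted (percentage, extensity value) pairs."""
        for (p0, v0), (p1, v1) in zip(scale_points, scale_points[1:]):
            if p0 <= pct <= p1:
                return v0 + (v1 - v0) * (pct - p0) / (p1 - p0)
        return scale_points[-1][1]

    def bds_score(specimens):
        """specimens: (exhibited, opportunities, scale_points, t_weight)."""
        total = 0.0
        for exhibited, opportunities, scale_points, t_weight in specimens:
            pct = 100.0 * exhibited / opportunities           # step 7a
            total += extensity(pct, scale_points) * t_weight  # steps 7b-7c
        return total                                          # step 7d

    points = [(0, 1.0), (50, 4.5), (100, 8.0)]
    print(bds_score([(9, 10, points, 3.2), (2, 8, points, 2.1)]))  # about 29.1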
    Example: "Kane and Lawler(1980) presented the following items for grouping: 1) "Had to stop a press run to remove grease from a roller." 2) "Had to stop a press run to make a paper adjustment that should have been made before the press run started." 3) "Failed to check the ink reservoir before a press run started." 4) "Had to stop a press run to fix a mechanical problem that should have been discovered in the routine inspection." These items were grouped, and the following statement was written to reflect the meaning: "Had to stop a press run because of a problem caused by the failure to properly make normal checks and adjustments before the run started." These are known as performance specimens. I. WEIGHTED CHECKLIST The weighted checklist performance appraisal system was introduced by Knauft in 1948. It consists of statements, adjectives, or individual attributes that have been previously scaled for effectiveness in worker's behavior. The most common type of item used in the weighted checklist is behavioral in nature. The first step in constructing a weighted checklist is to generate a large number of behavioral statements relevant to all aspects of the job. These statements should represent all levels of effectiveness in that job. A list of rules for writing these statements were developed: B-13
1. Express only one thought per statement or scale.

2. Use understandable terminology, and eliminate double negatives.

3. Express thoughts clearly and simply; avoid vague and trait-oriented statements.

The second step consists of having a panel of job experts judge the extent to which each statement represents effective or ineffective job behavior. One method for accomplishing this is called "equal-appearing intervals." This method asks the experts to classify each statement into one of 11 categories ranging from highly effective to highly ineffective job behavior. The ratings are then summarized in order to identify those statements which are consistently placed at some point on the continuum of effectiveness. On the basis of this scaling procedure, the most reliably rated items are selected for use on the checklist. The mean or median effectiveness rating calculated by the experts becomes the scale value for each item. Statements are then selected so that every point on the continuum of effectiveness is represented on the checklist. Items are usually randomized in terms of their relative levels of effectiveness, and the scale values are unknown to the rater. The rater simply checks those statements believed to be descriptive of the ratee. The method of scoring is based either on the sum total of the scale values, or on the median score, of the checked statements.
Ratings by 15 Experts on Four Behavioral Statements Using a Behavioral Checklist

[Table: for each of four statements, the number of experts placing it in each of the 11 categories of effectiveness, from 1 (highly ineffective) to 11 (highly effective); the individual cell counts are not legible in the source.]

Examples of Items from a Weighted Checklist Performance Rating for a Bake Shop Manager

Item                                                          Scale Value
His window display has customer appeal.                          8.5
He encourages his employees to show initiative.                  8.1
He seldom forgets what he has once been told.                    7.6
His sales per customer are relatively high.                      7.4
He has originated 1 or more workable new formulas.               6.4
He belongs to a local merchants' association.                    4.9
His weekly and monthly reports are sometimes inaccurate.         4.2
He does not anticipate probable emergencies.                     2.4
He is slow to discipline his employees even when he should.      1.9
He rarely figures the costs of his products.                     1.0
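Scoring a completed checklist can be sketched using a few of the bake shop items above; the rater sees only the statements, while the scale values stay hidden and are summed (or their median taken) afterward. The subset of items and the function are our illustration.

    # Weighted-checklist scoring: sum (or take the median of) the hidden
    # scale values of the statements the rater checked.
    from statistics import median

    SCALE_VALUES = {
        "window display has customer appeal": 8.5,
        "encourages employees to show initiative": 8.1,
        "belongs to a local merchants' association": 4.9,
        "reports are sometimes inaccurate": 4.2,
        "rarely figures the costs of his products": 1.0,
    }

    def checklist_score(checked, method="sum"):
        values = [SCALE_VALUES[item] for item in checked]
        return sum(values) if method == "sum" else median(values)

    checked = ["window display has customer appeal",
               "encourages employees to show initiative",
               "reports are sometimes inaccurate"]
    print(checklist_score(checked))             # 20.8
    print(checklist_score(checked, "median"))   # 8.1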
J. SIMPLE (ALTERNATE) RANKING SYSTEM

Description of the System

The simple ranking system is a comparative approach to the evaluation of employee performance. Regarded as one of the oldest and simplest methods of performance appraisal, this system is so popular that it is used, in practice, by many personnel administrators to make decisions related to merit pay increases, promotions, and organizational rewards. It aims at providing an overall ranking of a group of employees.

Specifically, the simple ranking system involves comparing an employee against the other employees in a work group. It requires an appraiser to arrange employees in rank order from the best to the poorest (or highest to lowest). Although overall rankings are commonly made, employees can also be ranked on a number of separate factors such as "ability to work with others" or "ability to grasp new ideas." Two or more appraisers may be asked to make independent rankings of the same group of employees, and their lists are averaged to help reduce biases.

Since it is practically easier to distinguish between the best and worst employees than to simply rank all of them in descending order, an "alternation" ranking method is commonly used. It is a very elementary variation of the order-of-merit ranking. It places a group of comparable employees in simple rank order in terms of their overall work performance, future potential, or other characteristics. This method is illustrated by the following example.

Example: Assume that an appraiser wants to rank ten employees: A, B, C, D, E, F, G, H, I, and J on the basis of their overall work performance. Looking at a list of these employees' names, the appraiser eliminates those whose work is so different that they cannot be compared to the other members of the group (e.g., H and J). Then the appraiser looks over the remaining names (i.e., A, B, C, D, E, F, G, and I) and decides which one he thinks is the best on the list (e.g., C). He draws a line through this name (i.e., C) and writes it in the blank space labeled "1 - Highest" at the top of the page (see the figure). He then looks over the remaining names (i.e., A, B, D, E, F, G, and I) and decides which person is not as effective as any of the others on the list (e.g., G). He draws a line through this name (i.e., G) and writes it in the blank space marked "1 - Lowest" at the bottom of the page. He then examines the remainder of the names (i.e., A, B, D, E, F, and I), selects the best (e.g., A), draws a line through his name, and places the name in the box labeled "2 - Next Highest." Thus, the appraiser can "alternate" between thinking of the best and the poorest employee on an increasingly smaller list. He continues this procedure until he has drawn a line through each name on the list; the middle position in the rank order is the last to be filled.
Employees to be Ranked          Ranking

A                               1) Highest ......................... C
B                               2) Next Highest .................... A
C                               3) Next Highest ....................
D                               4) Next Highest ....................
E
F                               4) Next Lowest .....................
G                               3) Next Lowest .....................
H                               2) Next Lowest .....................
I                               1) Lowest .......................... G
J
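The alternation procedure can be sketched as follows; since the appraiser's judgment cannot be computed, it is modeled here as a merit score supplied per employee, which is an assumption made purely for illustration.

    # Alternation ranking: repeatedly pick the best remaining employee,
    # then the poorest remaining, filling the order from both ends toward
    # the middle.
    def alternation_rank(employees, better_key):
        """better_key: callable giving a comparable merit value per employee."""
        remaining = list(employees)
        top, bottom = [], []
        pick_best = True
        while remaining:
            if pick_best:
                pick = max(remaining, key=better_key)
            else:
                pick = min(remaining, key=better_key)
            remaining.remove(pick)
            (top if pick_best else bottom).append(pick)
            pick_best = not pick_best
        return top + bottom[::-1]   # highest ... lowest

    merit = {"A": 9, "B": 4, "C": 10, "D": 6, "E": 5, "F": 7, "G": 1, "I": 3}
    print(alternation_rank(merit, merit.get))
    # ['C', 'A', 'F', 'D', 'E', 'B', 'I', 'G']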
K. THE FORCED CHOICE TECHNIQUE OF PERFORMANCE APPRAISAL

The forced choice technique was developed between 1940 and 1945 in an effort to improve performance appraisal in the U.S. Army. It is based on the assumption that any real differences that exist among workers in competence or efficiency can be described in terms of objective, observable behavior. The technique was intended to prevent the appraiser from indicating how much or how little of each characteristic an officer possessed. Instead, raters were instructed to choose, from several tetrads (a set of four adjectives, two of a favorable nature and two of an unfavorable nature), the items which would best and least describe the appraisee. The technique was also intended, through its method of construction, to reduce the appraiser's ability to produce a desired outcome; thus, favoritism and personal bias are diminished.

Construction of the Forced-Choice Tetrads

Forced choice rating elements are sets of four phrases, or adjectives, pertaining to job performance or personal qualifications. Generally, a six-step procedure is used in constructing the tetrads:

(1) Instruct a first group of appraisers who are familiar with the appraisees to write brief essays which describe successful and unsuccessful fellow workers. These essays serve as the source of the behavioral items relevant to the job (i.e., critical tasks).

(2) Behavioral items are extracted from the essays and put into list form. These items should cover all important aspects of the job, and the number of items covering each aspect should be related in some rational way to the importance of that aspect.

(3) The list is distributed to a second group. Each person in this second group is asked to select, from among his/her peers, one person s/he knows well enough to confidently rate. For each item, the rater assigns one of the following scores: "This item describes the appraisee (A) to an exceedingly high or to the highest possible degree; (B) to an unusual or outstanding degree; (C) to a typical degree; (D) to a limited degree; (E) to a slight degree; or (F) not at all." The evaluator is then asked to rate the person being appraised on a scale showing his/her position, with respect to overall competence, in a representative group of 20 workers of the same grade.

(4) The lists are collected, arranged in order of the rating of overall competency, and separated into upper, middle, and lower thirds. An analysis is conducted to determine, in each of the three groups, the frequency with which each of the rating alternatives was chosen for each item.

(5) Based on the above analysis, two values are statistically computed for each item:
1. The Preference Value: Indicates the degree to which raters tend to rate others too high or too low on a particular characteristic.

2. The Discrimination Value: Indicates those items which differentiate between a good and a poor worker. In other words, these adjectives are truly indicative of the degree to which the items measure the characteristic they are intended to measure.

(6) Each tetrad consists of two pairs of adjectives or phrases; each pair consists of two items which are equal in preference value, but differ in discrimination value. The rater is not aware which adjective or phrase is the preference word and which is the discrimination word. Each tetrad consists of a pair of favorable words with similar preference, but dissimilar discrimination, indices, and a pair of unfavorable words with similar preference, but dissimilar discrimination, indices (see the example below).

Scoring: The ratee receives a positive score if:

1. The item which is most descriptive of him/her is a discriminating desirable characteristic; or

2. The item which is least descriptive of him/her is the undesirable discriminating item (i.e., the one that indicates poor job performance).

Read the instruction sheet carefully before marking this section.

Section IV. JOB PROFICIENCY                          MOST   LEAST
A. Cannot assume responsibility
B. Knows how and when to delegate authority
C. Offers suggestions
D. Too easily changes his/her ideas
Section V. PERSONAL QUALIFICATIONS                   MOST   LEAST
A. Coolheaded
B. Commands respect by his/her actions
C. Overbearing
D. Indifferent

L. FORCED DISTRIBUTION RANKING

Ranking techniques compare ratees' performance to that of others on the job or in similar positions, as opposed to comparison against an absolute standard of performance. Forced distribution ranking is a comparative performance appraisal technique in which the rater places specified portions of the group of ratees into various categories depicting different degrees of performance. The performance categories may be: excellent, good, fair, poor, and unacceptable. The rater is instructed, for example, to allocate 10% of the ratees to the excellent category, 20% to good, 40% to fair, 20% to poor, and 10% to unacceptable. The rankings are the result of the rater's subjective opinion.
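A minimal sketch of the allocation follows, using the 10/20/40/20/10 split from the text; the rounding rule for groups that do not divide evenly is our assumption.

    # Forced distribution: cut a best-to-worst ranking into quota categories.
    QUOTAS = [("excellent", 0.10), ("good", 0.20), ("fair", 0.40),
              ("poor", 0.20), ("unacceptable", 0.10)]

    def force_distribution(ranked_ratees):
        """ranked_ratees: names ordered best to worst."""
        n, out, start = len(ranked_ratees), {}, 0
        for i, (category, share) in enumerate(QUOTAS):
            # The final category absorbs any rounding remainder.
            end = n if i == len(QUOTAS) - 1 else start + round(share * n)
            out[category] = ranked_ratees[start:end]
            start = end
        return out

    ranked = [f"officer{i:02d}" for i in range(1, 11)]
    print(force_distribution(ranked))   # 1 / 2 / 4 / 2 / 1 across categories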
M. PAIRED COMPARISON

Paired comparison is an appraisal technique in which each employee is compared to every other employee to produce a ranking of employees on a particular trait. The steps for developing the paired comparison technique include the following:

1. A chart is made of all possible pairs of employees to be evaluated. The names of the employees are placed on the chart in a predetermined order, such that each employee is compared with every other employee in the group.

2. A separate chart is constructed for each trait. The traits include such things as quality of work, cooperation, creativity, quantity of work, etc.

3. For each comparison of a pair, the evaluator judges one employee as being better than the other on the particular trait. If an employee is better than the other, a (+) is placed in the appropriate box; if an employee is worse than the other, a (-) is placed in the appropriate box.

4. The number of times an employee is judged as being better than the others is tallied. For each chart, the evaluator totals the number of +'s in each column to find the highest-ranked employee.

5. Then, based on the number of better evaluations (+) received, a ranking of employees can be formulated. The employee with the greatest number of +'s is ranked the highest on the particular trait, followed by the next highest, and so on down to the employee with the fewest +'s, who is ranked the lowest.
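The tally logic of steps 3 through 5 can be sketched as follows; the pairwise judgment is modeled as a function supplied by the evaluator, and the invented data reproduce the worked table example that follows.

    # Paired comparison: record each pair's judgment once, count a win as
    # the '+' in that employee's column, and rank by win totals.
    from itertools import combinations

    def paired_comparison_rank(employees, prefers):
        """prefers(a, b) -> the employee judged better of the pair."""
        wins = {e: 0 for e in employees}
        for a, b in combinations(employees, 2):   # every pair exactly once
            wins[prefers(a, b)] += 1
        return sorted(employees, key=lambda e: wins[e], reverse=True), wins

    # Invented accuracy judgments matching the table below.
    order = ["ADAMS", "BAKER", "COOPER", "DALTON", "EMORY"]
    strength = {"COOPER": 5, "ADAMS": 4, "DALTON": 3, "BAKER": 2, "EMORY": 1}
    ranking, wins = paired_comparison_rank(
        order, lambda a, b: max(a, b, key=strength.get))
    print(ranking)   # ['COOPER', 'ADAMS', 'DALTON', 'BAKER', 'EMORY']
    print(wins)      # {'ADAMS': 3, 'BAKER': 1, 'COOPER': 4, 'DALTON': 2, 'EMORY': 0}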
Example of a paired comparison rating for tabulating machine operators. Trait: ACCURACY. Which employee produces more consistently accurate work? Which do you feel you do not have to check on much?

AS COMPARED TO       ADAMS   BAKER   COOPER   DALTON   EMORY
ADAMS                          -       +        -        -
BAKER                  +               +        +        -
COOPER                 -       -                -        -
DALTON                 +       -       +                 -
EMORY                  +       +       +        +

The employees listed across the top row are compared, one by one, with each employee in the left column. The appropriate mark is placed in each square to indicate the better employee of the pair. For example, ADAMS is compared to BAKER; ADAMS is chosen as the better employee, so a (+) is placed in BAKER's row under ADAMS's column. The number of +'s is then added up for each column, with the following results:

COOPER 4 (ranked the highest)
ADAMS 3
DALTON 2
BAKER 1
EMORY 0

According to the ranking, COOPER is the most accurate employee and EMORY the least accurate.

N. MIXED STANDARD RATING SCALES

(Blanz, F. and Ghiselli, E. E. "The Mixed Standard Scale: A New Rating System." Personnel Psychology, 1972, 25, pp. 185-200.)
  • 209.
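A short Python sketch of the tally logic, reproducing the accuracy example above; the encoding of the judgments as (pair -> winner) is our own.

    from itertools import combinations

    # Sketch: paired-comparison ranking on one trait. The judgments below
    # reproduce the worked example: each entry records the evaluator's
    # choice for one of the n(n-1)/2 = 10 pairs.
    employees = ["ADAMS", "BAKER", "COOPER", "DALTON", "EMORY"]
    judgments = {
        ("ADAMS", "BAKER"): "ADAMS",   ("ADAMS", "COOPER"): "COOPER",
        ("ADAMS", "DALTON"): "ADAMS",  ("ADAMS", "EMORY"): "ADAMS",
        ("BAKER", "COOPER"): "COOPER", ("BAKER", "DALTON"): "DALTON",
        ("BAKER", "EMORY"): "BAKER",   ("COOPER", "DALTON"): "COOPER",
        ("COOPER", "EMORY"): "COOPER", ("DALTON", "EMORY"): "DALTON",
    }

    wins = {e: 0 for e in employees}
    for pair in combinations(employees, 2):
        wins[judgments[pair]] += 1      # tally a "+" for the better employee

    for name, w in sorted(wins.items(), key=lambda kv: -kv[1]):
        print(name, w)  # COOPER 4, ADAMS 3, DALTON 2, BAKER 1, EMORY 0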
N. MIXED STANDARD RATING SCALES

(Blanz, F., and Ghiselli, E. E. The mixed standard scale: A new rating system. Personnel Psychology, 1972, 25, 185-200.)

Items representing good, average, and poor performance on a given dimension are mixed randomly with items representing good, average, and poor performance on other dimensions. Each item is rated as follows: (+) the ratee is better than the statement; (0) the statement fits the ratee; (-) the ratee is worse than the statement. The rater is not told the dimension being measured by the statement, nor the level of performance represented.

Performance Dimension                                                Rating

Job Knowledge       1. The officer could be expected to misinform the
                       public on legal matters through lack of
                       knowledge. (poor)                                +

Relations w/Others  2. Officer carefully answers rookie's questions.
                       (good)                                           0

Job Knowledge       3. This officer never has to ask others about
                       points of law. (good)                            -

Job Knowledge       4. This officer follows correct procedures for
                       evidence preservation at the scene of a crime.
                       (average)                                        0

MIXED STANDARD RATING SCALE SCORING

        Statements
Good    Average    Poor      Points
 +         +         +          7
 0         +         +          6
 -         +         +          5
 -         0         +          4
 -         -         +          3
 -         -         0          2
 -         -         -          1

The officer in our example received: Good (-), Average (0), Poor (+) for the job knowledge dimension, or a score of 4.

B-23
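The scoring table reduces to a lookup from the ratings given to the good, average, and poor statements of a dimension to a point value. The sketch below encodes it; note that the bottom row (worse than even the poor statement scoring 1 point) is inferred from the table's pattern, as that line is not legible in the source.

    # Sketch: mixed standard scale scoring for one dimension. Keys are the
    # ratings given to the good, average, and poor statements: '+' means the
    # ratee is better than the statement, '0' fits, '-' is worse.
    SCORES = {
        ("+", "+", "+"): 7,
        ("0", "+", "+"): 6,
        ("-", "+", "+"): 5,
        ("-", "0", "+"): 4,
        ("-", "-", "+"): 3,
        ("-", "-", "0"): 2,
        ("-", "-", "-"): 1,  # inferred from the pattern of the table
    }

    def dimension_score(good: str, average: str, poor: str) -> int:
        return SCORES[(good, average, poor)]

    # The job-knowledge example: good "-", average "0", poor "+"
    print(dimension_score("-", "0", "+"))  # -> 4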
O. MANAGEMENT BY OBJECTIVES (MBO)

MBO is a process whereby the superior and subordinate members of an organization jointly identify its common goals, define each individual's major areas of responsibility in terms of the results expected of him/her, and use these measures as guides for operating the organization and assessing the contributions of each of its members. MBO is a human system: a communication vehicle among the people involved in it.

STRUCTURE

Roles and Mission --> Key Result Areas --> Indicators --> Objectives --> Action Plans --> Controls

Roles and Missions are stated by higher management; subordinates' goals reflect their contribution toward the role and mission (sometimes stated in the annual plan or 5-year plan).

Cascade of the Goal-Setting Process:

Board of Directors and the Chief Executive
Division Vice-Presidents
Department Managers
Unit Managers
Individuals

The superior and subordinate meet and discuss objectives which, if met, would contribute to the overall goals of the organization. They jointly establish objectives for the subordinate.

Key Result Areas are major aspects of the job where there are results significant enough to warrant specific attention. Examples:

staff development          management communication
cost control               unit production
client contacts            contract negotiations

B-24
Indicators are measurable factors within a key result area on which it is worthwhile to set objectives or performance standards. Examples:

output per workhour           turnover
cost per unit of output       actual vs. budget
absenteeism                   training participation

Objectives are statements of results to be achieved. Four elements make up each objective:

1. action or accomplishment verb
2. single measurable key result
3. date or time period within which the result is to be accomplished
4. maximum investment in money, workhours, or both that we are willing to commit toward accomplishment of the objective

Sample Objective: To reduce by 10% the cost of operation A by January 1 at an implementation cost not to exceed 50 workhours.

Action Plans are the sequence of actions to be carried out in order to achieve the objective. Action plans fix accountability.

Controls are the means by which the accountable manager will keep informed of progress; the way of ensuring accomplishment. Controls should be visual (charts, graphs) and should provide adequate visibility in a timely fashion so that required action can be taken as soon as it is seen to be needed.

SAMPLE

Roles and Mission:  To produce competitive products
Key Result Area:    Cost control
Indicators:         Cost per unit of output
Objective:          To reduce by 5% the cost per unit of output of product A by July 1 at an implementation cost not to exceed 50 workhours.
Action Plan:        1. Reduce waste 5% per unit output (Production Manager)
                    2. Implement pre-production quality checks to screen out a minimum of 1% unusable base units (Quality Control Supervisor)

B-25
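Since the text specifies exactly four required elements for a well-formed objective, a small data-structure sketch can make the checklist concrete. The field names are our own illustration, not taken from any Air Force or Hay Group form.

    # Sketch: the four required elements of an MBO objective as a checklist.
    from dataclasses import dataclass

    @dataclass
    class Objective:
        action_verb: str     # 1. action or accomplishment verb
        key_result: str      # 2. single measurable key result
        deadline: str        # 3. date or time period for accomplishment
        max_investment: str  # 4. maximum money/workhours to be committed

        def as_statement(self) -> str:
            return (f"To {self.action_verb} {self.key_result} by {self.deadline} "
                    f"at a cost not to exceed {self.max_investment}.")

    sample = Objective("reduce by 5%", "the cost per unit of output of product A",
                       "July 1", "50 workhours")
    print(sample.as_statement())
    # -> To reduce by 5% the cost per unit of output of product A by July 1
    #    at a cost not to exceed 50 workhours.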
APPENDIX C

PRIVATE SECTOR PERFORMANCE APPRAISAL INTERVIEWS

A telephone interview survey was conducted with representatives of a sample of large, well-known industrial organizations. The purpose of these interviews was to gather information about the performance appraisal systems in use in each of these firms. Enclosure 1 is the interview guide used to conduct the interviews.

C-1
ENCLOSURE 1 TO APPENDIX C

PRIVATE SECTOR INTERVIEW GUIDE

Company:                    Contact:                    Date:

1. Type and purposes of performance evaluation system

2. Process
   - who (rater: supervisor, peers, committee)
   - what (behaviors, outputs, performance, bottom line)
   - when (timing)

3. Instruments/Forms

4. Feedback

5. Rater Training

6. Review Process

7. Controls

8. Additional information

C-2
APPENDIX D

INITIAL AIR FORCE INTERVIEWS

Early in the project, the Air Force OER study team conducted two series of interviews with Air Force officers. The first of these series was with officers having major responsibilities for the functioning of the OER system. The purpose of this series was for the study team to learn more about how the Air Force conducts performance appraisals and what issues are in the minds of the major players in the system. The information received during the course of these interviews has been incorporated into the body of this report in Section IV, Findings: Air Force Officer Evaluation System. A list of those persons interviewed is at Enclosure 1, page D-2. The interview guide is displayed at Enclosure 2, beginning on page D-3.

The second series of interviews consisted of nine focus groups conducted with small groups (6-8) of officers of varying skills and grades. The purpose of these interviews was to learn what attitudes about the OER system are characteristic of a larger spectrum of the Air Force officer corps. The identity of these focus groups is displayed in the text of this report at Table II-1, page II-3. A summary of the comments made in the course of these focus groups is at Enclosure 3, beginning at page D-5. This summary is organized into fourteen topics. These topics were not restricted to those identified in the interview guide, but rather are those that developed during the interactions among the focus group members. A copy of the focus group discussion guide is at Enclosure 4, beginning on page D-25.

D-1
ENCLOSURE 1 TO APPENDIX D

AIR FORCE OFFICERS INTERVIEWED

Name                              Organization

Lt. Gen. Thomas J. Hickey         Deputy Chief of Staff for Personnel, HQ, USAF
Lt. Gen. John A. Shaud            Commander, Air Force Training Command
Maj. Gen. Ralph Havens            Commander, Military Personnel Center
Maj. Gen. Donald D. Lambertson    Assistant DCS, Research, Development and Acquisition, HQ, USAF
Colonel Gary Clark                DCS, Personnel, Air Force Training Command
Colonel Charles Curran            Military Executive to Assistant Secretary of Defense (FM&P)
Colonel Lee Forbes                Deputy Director, Secretary of the Air Force Personnel Council
Colonel Vincent J. McDonald       DCS, Personnel, Air Force Systems Command
Colonel Donald Peterson           Chief, Operations Officer Assignments, Military Personnel Center
Colonel Paul E. Stein             DCS, Personnel, Tactical Air Command
Colonel Michael Wright            Chief, Mission Support Officer Assignments, Military Personnel Center
Lt. Col. Donald R. Davie          Chief, Officer Force Structure, Office of the DCS, Personnel, HQ, USAF

D-2
ENCLOSURE 2 TO APPENDIX D

AIR FORCE (OER) PROJECT INTERVIEW GUIDE

A. INTRODUCTION

1. PERSONAL INTRODUCTION

2. OVERVIEW OF HAY/SYLLOGISTICS BACKGROUND AND CAPABILITIES

3. BRIEF DESCRIPTION OF PROJECT
   a. Review and conceptual redesign of officer performance evaluation system.
   b. Three parallel efforts.

4. EXPLAIN FORMAT AND PURPOSE OF INTERVIEW
   a. Unstructured, flexible format.
   b. This interview has two major purposes:
      1. Collect data about problems with and potential improvements for the officer evaluation system.
      2. Obtain information that will assist the project team in conducting focus groups.

5. OBTAIN PERSONAL INFORMATION FROM INTERVIEWEE
   a. Name, rank, pertinent demographics, and other relevant information.
   b. Primary mission/responsibilities.
   c. OER-related functions or accountabilities.

B. TARGETED INFORMATION (data we would like to obtain)

1. INTERVIEWEE'S KNOWLEDGE OF OER SYSTEM
   a. How long have you been in a position of accountability in relation to the OER system?
   b. What is your overall experience as a rater, additional rater, indorser, etc.?

D-3
2. EFFECTIVENESS OF CURRENT OER SYSTEM
   a. Is the OER system achieving its purposes as stated in Air Force policy and regulations? If not, why?

3. ADVANTAGES OF CURRENT SYSTEM
   a. What are some of the advantages offered by the evaluation system currently in use?

4. DRAWBACKS
   a. What are the main drawbacks of the officer evaluation system?

5. DIFFERENTIAL EFFECTS OF OER SYSTEM
   a. Is the OER system more or less effective depending on rank?
   b. Can any differences in OER system effectiveness be attributed to the nature of the "job" within the Air Force? (e.g., pilots, staff positions, scientific/technical occupations)
   c. Are there any other factors which affect the effectiveness of the OER system?

6. OER IMPACT ON THE INDIVIDUAL
   a. Does the individual receive a "fair shake" from the current evaluation system?

7. OER IMPACT ON AIR FORCE ORGANIZATION
   a. What is the overall impact of the OER system on the Air Force organization?

8. IMPROVEMENT OF OER SYSTEM/PROCESS
   a. What are your suggestions for improving the OER process?

9. IDENTIFICATION OF ISSUES
   a. What are the key issues that need to be addressed in a project of this nature?
   b. Are there any other pertinent issues we have not covered in the interview?

D-4
ENCLOSURE 3 TO APPENDIX D

SUMMARY OF FOCUS GROUP INTERVIEWS, BY TOPIC

TOPIC 1: Focus on Job Performance

GRADE:                          COMMENTS:

LT/CAPTAIN (OPERATIONS)
The OER should include specific flying-related items which directly reflect a pilot's duty performance. The job description box is important, and it should be expanded. Perhaps the job description could be written in bullet form reflecting major duties and responsibilities. The OER should have two sections: one section would evaluate specific duty performance (e.g., flying) and another section would evaluate "other things."

LT/CAPTAIN (SUPPORT)

SENIOR CAPTAIN/MAJORS (OPERATIONS)
The job description section is one of the more meaningful items in the OER form.

SENIOR CAPTAIN/MAJORS (SUPPORT)

MAJOR/LT. COLONEL (OPERATIONS)
The human relations block is useless. Actual performance of the job - flying, time in the vault - doesn't count on the OER. People are learning that flying is not important to the Air Force. Categories (on the OER) are not appropriate to people in operations, so we look for additional roles but often exclude primary duties.

MAJOR/LT. COLONEL (SUPPORT)
It is especially difficult to create "facts" for page one in the case of young rated officers whose job consists solely of flying-related tasks. Conversely, it is easy for junior support officers to provide facts to document performance factor scores. A solution is to eliminate the narrative on page 1 of the form that pertains to performance factors.

D-5
TOPIC 1: Focus on Job Performance (Cont.)

GRADE:                          COMMENTS:

LT. COLONEL (OPERATIONS AND SUPPORT)
Define officership; management vs. technical skills. The emphasis on current performance vs. management potential should be defined. Develop better performance standards for rating.

COLONEL (OPERATIONS & SUPPORT)
Most of the front side of the OER is not useful, although the job description may be somewhat useful and may be worth retaining. "Credit for attendance" at PME or a Master's program does not reward what is best for the Air Force; the rating should be on performance improvements resulting from the education. There are difficulties in doing so, however, including the time required to observe performance change. PME and Master's degrees are used as discriminators by boards because they are easy to see; few other discriminators can be found. It is difficult to find culturally acceptable ways to measure job performance; the need is to measure in terms of output (performance) rather than input (PME, etc.).

GENERAL

D-6
TOPIC 2: Potential Rating

GRADE:                          COMMENTS:

LT/CAPTAIN (OPERATIONS)

LT/CAPTAIN (SUPPORT)

SENIOR CAPTAIN/MAJORS (OPERATIONS)

SENIOR CAPTAIN/MAJORS (SUPPORT)

MAJOR/LT. COLONEL (OPERATIONS)

MAJOR/LT. COLONEL (SUPPORT)
The traits which should be measured in identifying future leaders are: initiative - the ability to make things happen; situational awareness; integrity; decisiveness; and knowledge.

LT. COLONEL (OPERATIONS AND SUPPORT)
Define officership; management vs. technical skills. The emphasis on current performance vs. management potential should be defined.

COLONEL (OPERATIONS AND SUPPORT)

GENERAL

D-7
TOPIC 3: Differences Across Grades, Rated/Non-Rated

GRADE:                          COMMENTS:

LT/CAPTAIN (OPERATIONS)
It is somewhat unfair to be rated with the same form that is used to evaluate administrative duty officers. The OER should be de-emphasized at the lieutenant level.

LT/CAPTAIN (SUPPORT)

SENIOR CAPTAIN/MAJORS (OPERATIONS)
Some things, e.g., PME and a Master's degree, are very important, and this perception is supported by promotion board statistics. Rated officers do not have the opportunity to pursue these degrees. There should be separate OER forms for rated and non-rated officers. One officer suggested that they also need separate promotion boards.

SENIOR CAPTAIN/MAJORS (SUPPORT)

MAJOR/LT. COLONEL (OPERATIONS)
We need different forms for different grades, with more general language for field grades. Possibly there should be a form for rated/operations as compared to support - maybe not, for that would be tough on a board.

MAJOR/LT. COLONEL (SUPPORT)
Junior officers do not necessarily need to be evaluated on the same form as seniors. Also, semi-annual reports are not necessary.

LT. COLONEL (OPERATIONS AND SUPPORT)
There is an ongoing debate about the performance evaluation issue for rated vs. support officers. Raters/supervisors feel that they are forced to create acceptable additional duties as assignments for rated subordinates for the sake of the OER when these people should be devoting all their time to flying. They do not like a form-driven system. There should be two forms - rated and non-rated.

D-8
TOPIC 4: Administrative Burden

GRADE:                          COMMENTS:

LT/CAPTAIN (OPERATIONS)

LT/CAPTAIN (SUPPORT)

SENIOR CAPTAIN/MAJOR (OPERATIONS)
Inefficiency - the OER requires too much effort for what you get out of it. A lot of time is wasted writing and proofing the OER, only to have the promotion boards look at the bottom line (indorser).

SENIOR CAPTAIN/MAJOR (SUPPORT)

MAJOR/LT. COLONEL (SUPPORT)
The front page of the form is useless apart from the job description. (However, the numbers can be used to eliminate sub-marginal officers.) Yet providing the narratives takes hours of work and some creative writing to prepare. Preparation of the OER form is an administrative burden on units and raters. On average, each form is retyped more than four times, and raters spend endless hours preparing narratives, both for substance and for form. In addition, preparing the supporting documentation required to secure the proper level of indorsement adds substantially to the administrative burden.

LT. COLONEL (OPERATIONS & SUPPORT)
The form takes too many hours to process for the amount of time it is evaluated.

COLONEL (OPERATIONS & SUPPORT)
The OER requires too much effort and time to complete for the benefits it provides; the burden is too great. The system is probably okay, if only the administrative burden were reduced.

GENERAL

D-9
TOPIC 5: Contents of Promotion Folder

GRADE:                          COMMENTS:

LT/CAPTAIN (OPERATIONS)

LT/CAPTAIN (SUPPORT)
Remove the photograph from the file.

SENIOR CAPTAIN/MAJOR (OPERATIONS)

SENIOR CAPTAIN/MAJOR (SUPPORT)
Recommendations about the promotion and selection system include placing a limit on how far back promotion boards can look through the folder. Also recommend removal of the photograph from the file.

MAJOR/LT. COLONEL (OPERATIONS)

MAJOR/LT. COLONEL (SUPPORT)

LT. COLONEL (OPERATIONS & SUPPORT)
Remove the picture from the folder.

COLONEL (OPERATIONS & SUPPORT)

GENERAL

D-10
TOPIC 6: Integrity and Honesty

GRADE:                          COMMENTS:

LT/CAPTAIN (OPERATIONS)
There is a lot of competition between MAJCOMs to promote their own people. This problem is compounded by the differences in numbers of grades in the MAJCOMs.

LT/CAPTAIN (SUPPORT)
There are many questions about the integrity of the system from a rater's viewpoint. Raters are hesitant to rate less than 1 at any time; average performance is the most difficult to rate, and there is concern over gaming the system. From the rater viewpoint, it is the rater's personal policy about the system that determines how an OER is written. If the immediate supervisor cannot be relied upon to write a good OER or to obtain good indorsements, then the ratee must be visible to the supervisor's supervisor and get his/her support. Five of the eight officers had written their own OERs.

SENIOR CAPTAINS/MAJORS (OPERATIONS)

SENIOR CAPTAINS/MAJORS (SUPPORT)
There is a feeling that personal integrity is not supported, and neither is the integrity of the promotion system. There is a need to reward and recognize leadership and willingness to stand up for convictions. A simple personality conflict can ruin a career. To protect the integrity of the system, there is a need for guidance from higher levels, such as a self-policing system that would include periodic review, reinforcement, and reemphasis of policies and procedures.

MAJOR/LT. COLONEL (OPERATIONS)
OERs talk around the issues; one learns the words, but they are not truthful - none of it is truthful. Inflation is unreasonable. You are reading lies; the OER is almost useless (as a way to understand an officer's performance level). Senior leadership doesn't get an accurate word picture. Nobody reads all the lies which are written.

D-11
TOPIC 6: Integrity and Honesty (Cont.)

GRADE:                          COMMENTS:

MAJOR/LT. COLONEL (SUPPORT)
Marginal performance is not documented. To get less than the maximum (in numerical scores) an officer has had to do something bad. However, the report is coded so that marginal performance can be indicated indirectly -- usually by saying "good but not superlative."

LT. COLONEL (OPERATIONS & SUPPORT)
Some officers have had to write their own OERs, while others feel that they have had to lie to maintain careers or avoid hurting others. Many believe that "the ratee is at the mercy of the rater's eloquence" and that we are assessing the writing ability of the rater, not the person being reviewed. There is common knowledge of "the code" and how to use it.

COLONEL (OPERATIONS & SUPPORT)

GENERAL
There is subtlety and "gaming" on the OERs that are directed to the board, but the generals feel that they recognize and see through the word picture to the facts.

D-12
TOPIC 7: Careerism

GRADE:                          COMMENTS:

LT/CAPTAIN (OPERATIONS)

LT/CAPTAIN (SUPPORT)
Young officers feel it is necessary to learn the unwritten guidelines of the OER and promotion system. They also feel that it is extremely important to "please your supervisor." The OER is a vehicle for going up the promotion ladder, but young officers must guide their own careers.

SENIOR CAPTAIN/MAJOR (OPERATIONS)
Some things, e.g., PME and a Master's degree, are very important, and this perception is supported by promotion board statistics. Rated officers do not have the opportunity to pursue these degrees.

SENIOR CAPTAIN/MAJOR (SUPPORT)

MAJOR/LT. COLONEL (OPERATIONS)

MAJOR/LT. COLONEL (SUPPORT)
Can't focus (the OER words) on actual performance. The front side is hard to use (to describe performance). Officers write their own; they often don't know their rater. We make up jobs for junior officers (in order to have something to say about) communications - oral and written. The OER has a powerful impact on careers; it encourages careerism, and I'm concerned about our ability to fight a war. Everything is careerism, not an effort to do (a job) well now; it's all related to promotion. Careerism is not a function of the OER; other things are promoting that, and it's not all that bad. To get promoted you need to work hard, have a sponsor, get a good job. A good personality gets a better rating. You need PME and a Master's (to get promoted). It's a discriminator. One needs to continue growing, (but a) Master's diverts from the real job. Advanced education should help you do your job. You can't get a Master's in an operational job. The (Master's) programs are easy because we couldn't otherwise get them (on a part-time basis). PME in residence is more valuable for promotion (than by correspondence), but all of

D-13
TOPIC 7: Careerism (Cont.)

GRADE:                          COMMENTS:

these schools and deployments, alert duty, etc., create family problems. There is enough time to do these things -- a few exceptions, but most people can do these things.

LT. COLONEL (OPERATIONS & SUPPORT)

COLONEL (OPERATIONS & SUPPORT)
"Credit for attendance" at PME or a Master's program does not reward what is best for the Air Force; the rating should be on performance improvements resulting from the education. There are difficulties in doing so, however, including the time required to observe performance change. PME and Master's degrees are used as discriminators by boards because they are easy to see; few other discriminators can be found.

GENERAL

D-14
TOPIC 8: Indorsement System

GRADE:                          COMMENTS:

LT/CAPTAIN (OPERATIONS)

LT/CAPTAIN (SUPPORT)
The word picture and level of indorsement are the most important parts of the OER as it is used by promotion boards. They believe there is a hidden quota system for indorsements and that commands control the systems.

SENIOR CAPTAIN/MAJOR (OPERATIONS)
The indorsement process is the controlling system in the OER/promotion board process.

SENIOR CAPTAIN/MAJOR (SUPPORT)

MAJOR/LT. COLONEL (SUPPORT)
The level of indorsement and the last sentence are all that is important. The whole emphasis is potential. Preparation of the OER form is an administrative burden on units and raters. On average, each form is retyped more than four times, and raters spend endless hours preparing narratives, both for substance and for form. In addition, preparing the supporting documentation required to secure the proper level of indorsement adds substantially to the administrative burden. There is a highly developed system for determining indorsement levels, including printed justification forms with the discrimination factors used. In the form we observed, the factors included: PME, civilian education (attained or in process), promotion eligibility, and previous OER indorsement history. Standards are specified for which reports will be evaluated for higher-level indorsement. These standards are not uniform within a MAJCOM or within the Air Force. Wing commanders have the chance to identify higher performers through indorsement level. However, they also can "game" the system, inter alia. The problem with indorsements as discriminators is not that higher performers don't get tagged but that the system doesn't discriminate well at the margin.

MAJOR/LT. COLONEL (OPERATIONS)

D-15
TOPIC 8: Indorsement System (Cont.)

GRADE:                          COMMENTS:

LT. COLONEL (OPERATIONS & SUPPORT)
Since a hidden quota system is used, bring this system out into the open.

COLONEL (OPERATIONS & SUPPORT)
The major information-bearing sections are the indorsements and the promotion recommendation. The current indorsement system is equivalent to a quota or control system, except that ratees don't know the rules.

GENERAL

D-16
TOPIC 9: Feedback to Ratee

GRADE:                          COMMENTS:

LT/CAPTAIN (OPERATIONS)
More feedback to the ratee is necessary.

LT/CAPTAIN (SUPPORT)

SENIOR CAPTAIN/MAJOR (OPERATIONS)
More feedback about performance should be provided to officers.

SENIOR CAPTAIN/MAJOR (SUPPORT)
The OER is not used as a feedback tool. This is considered a weakness because they feel that there is a need for some type of feedback and/or counselling system.

MAJOR/LT. COLONEL (SUPPORT)

MAJOR/LT. COLONEL (OPERATIONS)
The OER is not effective as feedback (to the individual officer). Can't provide (accurate) feedback because it will kill him on assignments and promotions. It is a morale boost (to read how well you are doing), but it has nothing to do with improvement of performance. We don't need the OER for counselling; the people we have are told all the time. Forget the OER, we tell them. Not much career guidance. The civilian feedback system (in the Air Force) is not very good either; it doesn't change performance. Low ratings don't get rid of (Air Force) civilians.

LT. COLONEL (OPERATIONS & SUPPORT)

COLONEL (OPERATIONS & SUPPORT)

GENERAL
There was agreement that the OER is not a good feedback tool.

D-17
TOPIC 10: Promotion Issues

GRADE:                          COMMENTS:

LT/CAPTAIN (OPERATIONS)

LT/CAPTAIN (SUPPORT)

SENIOR CAPTAIN/MAJOR (OPERATIONS)

SENIOR CAPTAIN/MAJOR (SUPPORT)

MAJOR/LT. COLONEL (OPERATIONS)

MAJOR/LT. COLONEL (SUPPORT)

LT. COLONEL (OPERATIONS & SUPPORT)
There was discussion and consideration that the up-or-out system may not be right for everyone in the Air Force.

COLONEL (OPERATIONS & SUPPORT)
The point was made that the AF promotion system makes it too clear to an officer whether he is a "success" or a "failure" each time he meets a board; those passed over feel they have clearly failed. The Canadian system, with "fuzzy" promotion zones, encourages people to keep trying; being passed over does not destroy an officer's morale, because he has several chances for promotion. The up-or-out system was seen as part of the problem, but the group unanimously rejected changing that system.

GENERAL
It really doesn't matter how long you look at the file - 60 seconds or 5 minutes - usually there is no difference in the final result. They feel that the "up or out" system should remain in place because it is a motivating force and drives competition within the service. The unfortunate side is that it drives away quality people at the same time that it polices the system.

D-18
TOPIC 11: Suggested Changes in OER Form

GRADE:                          COMMENTS:

LT/CAPTAIN (OPERATIONS)
A standard OER should be used for every non-promotion year; when an officer is in the zone, a "promotion" OER, which could be more specific and detailed, would be written. OERs should be simpler, shorter, and less burdensome. The OER should have two sections: one section would evaluate specific duty performance (e.g., flying) and another section would evaluate "other things."

LT/CAPTAIN (SUPPORT)
The recommendation for the form was to remove the block ratings from the front of the form.

SENIOR CAPTAIN/MAJOR (OPERATIONS)
The first part of the OER - except for demographics and the job description - should be eliminated. (However, higher-ranking officers in other focus groups indicated that the rating blocks are necessary because they allow them to "kill" unfit officers.)

SENIOR CAPTAIN/MAJOR (SUPPORT)

MAJOR/LT. COLONEL (OPERATIONS)

MAJOR/LT. COLONEL (SUPPORT)
It is especially difficult to create "facts" for page one in the case of young rated officers whose job consists solely of flying-related tasks. Conversely, it is easy for junior support officers to provide facts to document performance factor scores. A solution is to eliminate the narrative on page 1 of the form that pertains to performance factors.

LT. COLONEL (OPERATIONS & SUPPORT)
Remove the front part of the form (after the job description section).

COLONEL (OPERATIONS & SUPPORT)
Most of the front side of the OER is not useful, although the job description may be somewhat useful and may be worth retaining. Should use narrative assessment by the supervisor only; difficulties were discussed briefly.

D-19
TOPIC 11: Suggested Changes in OER Form (Cont.)

GRADE:                          COMMENTS:

GENERAL
Rework the front side of the OER forms, but maintain discriminating factors for the board.

D-20
TOPIC 12: Purpose of the OER

GRADE:                          COMMENTS:

LT/CAPTAIN (OPERATIONS)

LT/CAPTAIN (SUPPORT)
Purpose of the OER -- the OER does not adequately accomplish the task of school or assignment selection, but it does work for evaluation.

SENIOR CAPTAIN/MAJOR (OPERATIONS)
Keep the large organizational picture - retention, morale, productivity - in mind when evaluating the OER system.

SENIOR CAPTAIN/MAJOR (SUPPORT)
The purpose of the OER is questioned. There is a need to clarify that purpose and then redesign the OER form to accomplish that task. Purposes of the OER - the OER is not fully accomplishing its objectives, particularly as it refers to identifying individuals for promotion.

MAJOR/LT. COLONEL (OPERATIONS)

MAJOR/LT. COLONEL (SUPPORT)

LT. COLONEL (OPERATIONS & SUPPORT)

COLONEL (OPERATIONS & SUPPORT)
Two major goals of the OER could be: 1) to provide information helpful for promotion decisions; 2) to curb careerism by focusing the OER on assessment of current job performance.

GENERAL

D-21
TOPIC 13: Controlled System

GRADE:                          COMMENTS:

LT/CAPTAIN (OPERATIONS)

LT/CAPTAIN (SUPPORT)

SENIOR CAPTAIN/MAJOR (OPERATIONS)

SENIOR CAPTAIN/MAJOR (SUPPORT)

MAJOR/LT. COLONEL (OPERATIONS)
If any controls are introduced, they should be for new lieutenants. Be careful not to shift the dissatisfaction, making unhappy the people who are good rather than those who are weak. The rumors about a new OER are already hurting retention. Everyone is so critical of the system, but a new system would be worse. We don't adapt readily to new things.

MAJOR/LT. COLONEL (SUPPORT)
The quota of "potential" scores under the controlled OER was a disaster; however, that system might have worked if the percentages had not been so restrictive.

LT. COLONEL (OPERATIONS & SUPPORT)

COLONEL (OPERATIONS & SUPPORT)
No clear answer to the question of whether a new control or quota system could be workable. Suggestion that quotas be matched to promotion opportunities at each grade.

GENERAL

D-22
TOPIC 14: Other Issues

GRADE:                          COMMENTS:

LT/CAPTAIN (OPERATIONS)
Approximately 90% of all flyers are good, solid pilots, which makes differentiation even more difficult. There is a lot of competition between MAJCOMs to promote their own people. This problem is compounded by the differences in the number of generals in the MAJCOMs.

LT/CAPTAIN (SUPPORT)

SENIOR CAPTAIN/MAJOR (OPERATIONS)
The Canadian AF system - in which the ratee cannot see his/her scores but can see the comments - is a good system. The Army OER is a good system, given that senior officers' indorsements are tracked. (This system can also be "gamed," however.) Keep the large organizational picture - retention, morale, productivity - in mind when evaluating the OER system. There seems to be a conflict between what is good for the individual and what is good for the AF organization as a whole.

SENIOR CAPTAIN/MAJOR (SUPPORT)

MAJOR/LT. COLONEL (OPERATIONS)
The system is good but highly inflated. It doesn't allow for a single mistake or a personality conflict between the rater and the rated officer.

MAJOR/LT. COLONEL (SUPPORT)
There is a price to pay in designing a system that identifies the best people explicitly. That price is dissatisfaction and attrition among those not so identified.

D-23
TOPIC 14: Other Issues (Cont.)

GRADE:                          COMMENTS:

LT. COLONEL (OPERATIONS & SUPPORT)
Though they feel that the OER is a good tool for promotion to the major level, and that the right people are being promoted, there is skepticism about the system because of gaming. The unwritten code has existed through the last three types of OERs. There is an awareness that corporate culture drives the promotion process. The Air Force culture, and the possibility of changing that culture, is questioned.

LT. COLONEL (OPERATIONS & SUPPORT)
There is a question as to whether the OER itself is not effective or whether the OER is a product of a system that is not effective. Provide training and guidance to the raters from higher-level officers, and reinforce it.

COLONEL (OPERATIONS & SUPPORT)
A significant change in the OER system would require a major cultural change in the Air Force. Current problems with the OER are culture-driven.

GENERAL
The total needs of the Air Force are taken into consideration. Half of the generals thought that the OER is a good tool for communication about the individual. They recognize that there are many officers who do not understand the system. It is the responsibility of supervisors to teach "the system" to subordinates.

D-24
ENCLOSURE 4 TO APPENDIX D

AIR FORCE (OER) PROJECT FOCUS GROUP GUIDE

A. INTRODUCTION

1. PERSONAL INTRODUCTION

2. OVERVIEW OF HAY/SYLLOGISTICS BACKGROUND AND CAPABILITIES

3. BRIEF DESCRIPTION OF PROJECT
   a. Review and conceptual redesign of officer performance evaluation system.
   b. Three parallel efforts.
   c. HAY's private sector expertise.

4. GROUP MEMBERS INTRODUCTION
   a. Allow everybody to briefly introduce themselves.

5. EXPLAIN FORMAT AND PURPOSE OF FOCUS GROUP
   a. Format
      1. Unstructured, flexible format.
      2. Generate and discuss concepts and ideas.
   b. Purpose
      1. Explore the issues surrounding the OER process, in order to gain a better understanding of the OER process.

B. GENERAL ISSUES

1. EFFECTIVENESS OF CURRENT OER SYSTEM
   a. Is the OER system achieving its purposes as stated in Air Force policy and regulations? If not, why?
      1. Promotion.
      2. Assignment.
      3. Augmentation.

D-25
      4. School selection.
      5. Separation.
      6. Feedback.
   b. What purpose can an OER system legitimately fulfill?

2. STRENGTHS OF CURRENT SYSTEM
   a. What are some of the strengths of the evaluation system currently in use?

3. DRAWBACKS
   a. What are the main drawbacks of the officer evaluation system?

4. DIFFERENTIAL EFFECTS OF OER SYSTEM
   a. Does the OER system fit some groups more than others? Probes - rank, job, time in grade?

5. OER IMPACT ON THE INDIVIDUAL
   a. Does the individual receive a "fair shake" from the current evaluation system? Why or why not?

6. WHAT PROBLEMS DO YOU FACE AS A RATER? HOW DO YOU COPE WITH THEM?

7. IMPROVEMENT OF OER SYSTEM/PROCESS
   a. How can the OER process be improved? Probes - rating/writing, review process, training, roll out.

D-26
8. IDENTIFICATION OF ISSUES
   a. What are the key issues that need to be addressed in a project of this nature?
   b. Are there any bases we may not have covered that we should? Probes - in the focus group, in the project.

D-27
APPENDIX E

FEEDBACK INTERVIEW SUMMARY

Following the completion of the data collection phase of the study, the team developed a preliminary set of OER conceptual designs. These designs were tested for feasibility and desirability, in part, through a series of interviews with Air Force officers of various grades representing the major commands. Enclosure 1 displays the units of assignment and positions of the individuals interviewed; the names of these officers have not been included in order to preserve the confidential context in which the interviews were conducted. Enclosure 2, page E-3, shows the interview guide used. The results of these interviews were used in refining the preliminary designs into the recommended conceptual designs discussed in Section V. A summary of the interview results is displayed at Enclosure 3, page E-5.

E-1
ENCLOSURE 1 TO APPENDIX E

INDIVIDUALS INTERVIEWED

COMMAND/AGENCY                         POSITION

Air Force Communications Command       Deputy DCS/Personnel (0-6)
                                       Staff Division Chief (0-6)

Air Force Logistics Command            DCS/Personnel (0-6)
                                       Manpower Staff Officer (0-3)

Air Force Systems Command              Deputy DCS/Personnel (0-6)
                                       Logistics Staff Officer (0-5)

Air Force Training Command             DCS/Personnel (0-6)

Military Airlift Command               DCS/Personnel (0-6)
                                       Squadron Commander (0-5)
                                       Personnel Staff Officer (0-3)

Strategic Air Command                  DCS/Personnel (0-6)
                                       Vice Wing Commander (0-6)
                                       Squadron Commander (0-5)
                                       Electronic Warfare Officer (0-3)

Tactical Air Command                   Wing Commander (0-6)
                                       Executive to Wing Commander (0-3)

Military Personnel Center              Director (0-6)
                                       Director (0-6)
                                       Personnel Staff Officer (0-4)
                                       Personnel Staff Officer (0-3)

E-2
ENCLOSURE 2 TO APPENDIX E

FEEDBACK INTERVIEW GUIDE

I. Explain the background of the study and the fact that we are considering various alternatives.

II. For each element presented, determine the respondent's reactions:
   A. Positive, neutral, or negative.
   B. If negative, reasons why.
   C. Whether positive or negative, any problems anticipated in implementation.

III. Elements to be presented:
   A. Having OER preparation set up as a computer-interactive process with certain information computer-supplied to cut down on the administrative process.
   B. Having pre-developed generic job descriptions to which modifications are made by the rater.
   C. Having an OER work sheet that is used to set future goals and review past performance but does not become part of the OER record. Its objectives would be to help in coaching a junior officer and to develop a mutual understanding of performance expectations.
   D. Having a section on the OER form which requires the rater to indicate one area in which a plan has been developed to enhance the officer's effectiveness over the coming year. This would include measurable objectives for the plan.
   E. Having the rating officer identify the single strongest area of performance for an individual.
   F. Having an indorsing official indicate the ranking of the officer against others in the same grade (for those rated at the highest potential level).
   G. Having the wing commanders or equivalent indicate the 10% of each grade who are judged to be highest in potential.
   H. Having performance factors rated for only the extremes.
   I. Having a rater's rating history become part of his/her own personnel file for consideration by his/her own commander in rating the officer on "The Exercise of Leadership."

E-3
   J. Having raters' total distributions of ratings for that grade appear on all OERs that are part of the selection folder.
   K. Having an indorser's rating history become part of his/her own personnel file for consideration by his/her own commander in rating the officer on "The Exercise of Leadership."
   L. Having indorsers' total distributions of ratings for that grade appear on all OERs that are part of the selection folder.
   M. Eliminating all numerical ratings of performance, requiring comments to document what the officer has actually done (accomplished) in his/her job during the rating period.
   N. Eliminating all numerical ratings of potential, retaining the current system to assure that better performers receive higher levels of indorsement.
   O. Retaining a system which produces highly favorable ratings for almost all officers so as to enhance morale and commitment.
   P. Having separate OERs for company and field grade officers which cover the same general factors, but provide different criteria against which they are judged.

IV. Any other suggestions the individual might have for improving the OER process.

E-4
ENCLOSURE 3 TO APPENDIX E

SUMMARY OF FEEDBACK INTERVIEW RESULTS

[The four-page summary matrix (pages E-5 through E-8) is illegible in this copy. As described in the appendix text and the interview guide, it tabulated respondents' positive, neutral, and negative reactions, with comments, to each of the elements (A through P) presented in the feedback interviews.]

E-5
APPENDIX F

OER FORMS USED IN THE SERVICES

This appendix displays the forms used by the U.S. armed services, the U.S. Coast Guard, the Foreign Service of the Department of State, and the Canadian Defense Forces.

FORM                                                               PAGE

U.S. Air Force
    Air Force Form 707, Officer Effectiveness Report ............. F-2

U.S. Army
    DA Form 67-8-1, OER Support Form ............................. F-4
    DA Form 67-8, Officer Evaluation Report ...................... F-6

U.S. Navy
    NAVPERS 1611/1, Report on the Fitness of Officers ............ F-8

U.S. Marine Corps
    NAVMC 10835, USMC Fitness Report ............................. F-10

U.S. Coast Guard
    CG-5312, Lieutenant Commander Officer Evaluation Report ...... F-12

Foreign Service
    Form DS-1829, U.S. Foreign Service Employee Evaluation Report  F-16

Canadian Defense Forces
    CF 1417, Personnel Evaluation Report: Officers ............... F-21

F-1
[AF Form 707, Officer Effectiveness Report -- sample reproduction from AFR 36-10, Attachment 1, 26 October 1982 (effective 1 November 1982). The two-page form, partially legible in this copy, contains the following sections:]

SECTION I. RATEE IDENTIFICATION DATA (read AFR 36-10 carefully before filling in any item): name, SSAN, grade, DAFSC, organization/command/location, PAS code, period of report, number of days of supervision, and reason for report. The sample ratee is Captain Jack Smith II, 345 Tac Ftr Wg (TAC), Mt Home AFB, ID; an annual report for 13 Jul 81 through 31 Oct 82.

SECTION II. JOB DESCRIPTION. Item 1, Duty Title: enter the officially approved duty title as of the closeout date of the report (paragraph 2a of the attachment). Item 2, Key Duties, Tasks and Responsibilities: describe the type and level of responsibility, the impact, the number of people supervised, the dollar value of projects managed, and any other facts which describe the job of this particular ratee.

SECTION III. PERFORMANCE FACTORS, each factor rated against duty performance standards (a "not observed" option is provided), with guidance printed in each block:

1. Job Knowledge (depth, currency, breadth): What has the ratee done to actually demonstrate depth, currency, or breadth of job knowledge? Consider both quality and quantity of work.
2. Judgment and Decisions (consistent, accurate, effective): Does the ratee think clearly and develop correct and logical conclusions? Does the ratee grasp, analyze, and present workable solutions to problems?
3. Plan and Organize Work (timely): Does the ratee look beyond immediate job requirements? How has the ratee anticipated critical events?
4. Management of Resources (manpower, material): Does the ratee get maximum return for personnel, material, and energy expended? Consider the balance between minimizing cost and mission accomplishment.
5. Leadership (initiative, acceptance): How has the ratee demonstrated initiative, acceptance of responsibility, and the ability to direct and motivate group effort toward a goal?
6. Adaptability to Stress (stable, flexible, dependable): How has the ratee handled pressure? Does quality of work drop off? Improve?
7. Oral Communication (clear, concise, confident): How has the ratee demonstrated the ability to present ideas orally?
8. Written Communication (clear, concise, organized): How has the ratee demonstrated the ability to present ideas in writing?
9. Professional Qualities (attitude, dress, composure, bearing): How well does the officer meet and enforce Air Force standards of bearing, dress, grooming, and courtesy? Is the image projected by the ratee an asset to the Air Force?
10. Human Relations (equal opportunity, sensitivity): How has the ratee demonstrated support for the AF Equal Opportunity Program and sensitivity to the human needs of others? Evaluation of this factor is MANDATORY.

F-2

SECTION IV. ASSIGNMENT RECOMMENDATION: strongest qualification, suggested job (including AFSC), and organization level.

SECTION V. EVALUATION OF POTENTIAL: "Compare the ratee's capability to assume increased responsibility with that of other officers whom you know in the same grade; indicate your rating by placing an 'X' in the designated portion of the most appropriate block." Separate marking blocks are provided for the rater, additional rater, and indorser.

SECTION VI. RATER COMMENTS: Organize comments within the standards of good writing. Do not use headings, underline, or capitalize merely to add emphasis. Include those comments required by paragraph 3-15. Add any other comments not covered elsewhere and not excluded by paragraph 3-14 which will increase the value and meaning of the report. Amplify those positive aspects of the ratee's performance deserving special note.

SECTION VII. ADDITIONAL RATER COMMENTS (concur/nonconcur): Review the ratings and comments of the rater for completeness and impartiality. If the additional rater does not concur with any rating in Section III or V, or any comments, check the nonconcur block. To reflect disagreement, initial the appropriate blocks (Section III) and mark the additional rater block (Section V). Significant disagreement (paragraph 2-26) requires justification.

SECTION VIII. INDORSER COMMENTS (concur/nonconcur): Review the ratings and comments of the rater and additional rater for completeness and impartiality. If the indorser does not concur with the additional rater's comments or ratings, check the nonconcur block. To reflect disagreement, initial the appropriate block (Section III) and mark the indorser block (Section V). Significant disagreement (paragraph 2-26) requires justification.

F-3
[DA Form 67-8-1, Officer Evaluation Report Support Form -- reproduction, partially legible in this copy. The form contains the following parts:]

PART I - RATED OFFICER IDENTIFICATION: name (last, first, MI), grade, organization. The sample shows CPT Leslie R. Lang, B Btry, 3d Bn, 55th Arty.

PART II - RATING CHAIN for the evaluation period: name, grade, and position of the rater (LTC Thomas A. Rainey, Bn Commander, in the sample), intermediate rater, and senior rater (COL Larry R. Fox, Bde Commander).

PART III - VERIFICATION OF INITIAL FACE-TO-FACE DISCUSSION: "An initial face-to-face discussion of duties, responsibilities, and performance objectives for the current rating period took place on ____," with the initials of the rated officer (see paragraph 4-6) and the rater (see paragraph 4-7).

PART IV - RATED OFFICER (complete a, b, and c below for this rating period):
   a. State your significant duties and responsibilities (duty title, position code).
   b. Indicate your major performance objectives.
   c. List your significant contributions. (Signature and date.)

PART V - RATER AND/OR INTERMEDIATE RATER (review and comment on Part IV a, b, and c above):
   a. Rater comments (signature and date).
   b. Intermediate rater comments (signature and date).

F-4

DATA REQUIRED BY THE PRIVACY ACT OF 1974 (5 USC 552a)

1. AUTHORITY: Sec 301, Title 5, USC; Sec 3012, Title 10, USC.

2. PURPOSE: DA Form 67-8, Officer Evaluation Report, serves as the primary source of information for officer personnel management decisions. DA Form 67-8-1, Officer Evaluation Support Form, serves as a guide for the rated officer's performance, aids the development of the rated officer, enhances the accomplishment of the organization mission, and provides additional performance information to the rating chain.

3. ROUTINE USE: DA Form 67-8 will be maintained in the rated officer's Official Military Personnel File (OMPF) and Career Management Individual File (CMIF). A copy will be provided to the rated officer either directly or sent to the forwarding address shown in Part I, DA Form 67-8. DA Form 67-8-1 is for organizational use only and will be returned to the rated officer after review by the rating chain.

4. DISCLOSURE: Disclosure of the rated officer's SSAN (Part I, DA Form 67-8) is voluntary. However, failure to verify the SSAN may result in delayed or erroneous processing of the officer's OER. Disclosure of the information in Part IV, DA Form 67-8-1, is voluntary. However, failure to provide the information requested will result in an evaluation of the rated officer without the benefit of that officer's comments. Should the rated officer use the Privacy Act as a basis not to provide the information requested in Part IV, the Support Form will contain the rated officer's statement to that effect and be forwarded through the rating chain in accordance with the governing regulation.

F-5
[DA Form 67-8, U.S. Army Officer Evaluation Report -- reproduction, partially legible in this copy. The form contains the following parts:]

PART I - ADMINISTRATIVE DATA: name, SSN, grade, date of rank, branch, station, organization, major command, reason for submission, period covered, rated months, MILPO entries, rated officer copy, forwarding address, and explanation of nonrated periods.

PART II - AUTHENTICATION (rated officer signs in Part II only after rating officials): name, grade, branch, organization, duty assignment, signature, and date for the rater, intermediate rater, and senior rater; signature of the rated officer; and MILPO processing entries.

PART III - DUTY DESCRIPTION: principal duty title, SSI/MOS, and significant duties and responsibilities (refer to Part III, DA Form 67-8-1).

PART IV - PERFORMANCE EVALUATION - PROFESSIONALISM:
   a. Professional competence, rated from high degree to low degree on items including: possesses capacity to acquire knowledge and grasp concepts; demonstrates appropriate knowledge and expertise in assigned tasks; is adaptable to changing situations; seeks self-improvement; performs under physical and mental stress; possesses military bearing and appearance; encourages candor and frankness in subordinates; is clear and concise in written communication; supports EO/EEO; is clear and concise in oral communication; and motivates, challenges, and develops subordinates.
   b. Professional ethics (comment on any specific aspect noted): dedication, responsibility, loyalty, discipline, integrity, moral courage, selflessness, moral standards.

F-6

PART V - PERFORMANCE AND POTENTIAL EVALUATION:
   a. Whether the rated officer is assigned in one of his/her designated specialties.
   b. Performance during this rating period (refer to Part III, DA Form 67-8, and Part IV a, b, and c, DA Form 67-8-1).
   c. Comment on specific aspects of the performance.
   d. This officer's potential for promotion to the next higher grade (e.g., promote ahead of contemporaries).
   e. Comment on potential.

PART VI - INTERMEDIATE RATER: comments.

PART VII - SENIOR RATER: potential evaluation and comments, with verification that a completed DA Form 67-8-1 was received with this report and considered in the evaluation.

F-7
[NAVPERS 1611/1, Report on the Fitness of Officers -- reproduction, largely illegible in this copy. The legible portions show blocks for: name (last, first, middle), grade, designator, SSN, ship or station, occasion for report (detachment of officer or reporting senior, periodic, special), period of the report, type of duty, employment of command, identification of the reporting senior, and duties assigned (continued on the reverse side of the record copy). Rating blocks cover dimensions such as goal setting and working relations, followed by an evaluation summary and signature blocks: the officer evaluated signs to acknowledge "I have seen this report, have been apprised of my performance, and have the right to make a statement," and the reporting senior signs the report.

The reverse side provides space for continuation of duties assigned and for the reporting senior's narrative comments, which amplify the marks, address the officer's skills and qualifications, and may be important to selection boards and future assignment.]

F-8 / F-9
    PROGRAM i. OaImNLZATIOcl DFis st£ . Iruc 1 9 DESCRt~nol TITLE(Alshc eal" es q-- 2. MARINE REPORTED 04 6. LASI NAMA b POSTNAMI M t. it 0 eA. * W rdNTtcA )tONNO. U.PM"O 9 tAT.Arv tI - 1 I I1 23. OCCAoNsIO AND PERIOD COVERED 0 • OCC 6. NOW• FRlOM10 4. TYP it 061100SO• NOe1AVAAA&rlY C w awme ý,,,N #vHil.AI " "DUTY 1GNMFNT S. SPECIAL INFORMATION a *DIIReT7IVE MUL .6 10OWNS1 /0 04. Is.LO NO. *.o * GUALIFICAt1ost b clVteWiING ORsCR*s ID NO. i NO . d.*4 1 6. RESERVED FOR FUTURE USE 7. RFWRVED FOR FUTURE USE I ORGANIZED RESERVE DRILLS '0 SI I I I/ •. , 9. DEPENDENTS REQUIRING TRANSPORTATION 0.NO 6 tOCAtICI . c. ADOMI& 2 =1 lto. Du n PEFERENCE (Code). 1ob. DUTY PREFERENCE (Descriptive Title) (Aw s. e vio e se.li .... 2 2d 2d "d 11. REPOITING SENIOR S T IS 12. SPECIAL CASE (Mark i1 oppIhcoblo) led ATTENTION TO DUTY 1So. YOUR ESTI TEOF THISMARINES "GENERAL VALUE TO THE SERVICE' EREPORT E REPFOR T e p9 Eq ED Eq @0909D0]E3[3E 90SC3E i3.PERFMANCI 14l. COOPERATION '~b. DISTRIBUTION OF MARKS FOEALL MARINES OF THISGRADE c [] . . N , • • • [] R LjI L__JLJL •I IC__ _Ll_j L_.J -zo. 13b. ADDITIONAL DUTIES l4f. INITIATIVE 15. FILLBOXES SO THAT THEtUM Of EACHCOLUMNCORRESPONDS TO ITEMISb I 13C ADMINISTRATIVE DUTIES 14g JUDGMENT 0.>C) o0 :3d. HANDLING OFFICERS IWAR NCO, Wo-) )4h. PRESENCE OF MIND Z ED E EDR R FE9 B 9 EAB 9 D RE 8 1 04 CD (D 8 D l D E E ID k,z0 0113. HANDLING ENLISTED PERSONNEL 4, FORCE S13 TRAINING PERSONNEL 141. LEADERSHIP 16. CONSIEORING THERE•UIRMENT$S OF SERVICE IN WAR. IN•n:ATI YOURATI1TUDI 0 t • F IIOlARD MAVIN<. THIS .1A0IN1UNDERYOURI COMAANP z 0 ER EA 9 9 9 NDT EDPREFER 06E OR E EPARTICULARLY 'K 1 3 g. TACTICAL HANDLING QF TROOPS 14k. LOYALTY OBSERVED NOT WILLING GLAD DESIRE < Z E. e E E3 e) E1 21 17. HSMRINE SEENTHESUBJECT OF ANY OF THEFOLLOWING REPORTS' C) IF YES REF-RENCE IN SECTION C. AD L E 1 QUALITIEr$S 141. PERSONAL RELATIONS o, COMMENDAIOR I b. ADVERSE .DISCIPLINARY ACTION i^ l.ENURANgE x, E 93 E] 9 ED B 9 Ell] E R E ] ED YES ENOI EYES eNO EYE -40. PERSONAL APPEARANCE •M- ECONOMY OF MANAGEMENT 18.REPORT BASED OH OBSERVATION 19.OUALIFIED FOR PROMOTION Z --_ -,z -- ~k YES ED) E-' C9] e R E _E] ED LE RE DAILY I3OFREOUEN U1 ENT .A-.p.,CA 3YES .:,T, IAC. M•lILITARY PRESENCE I4n. GROWTH POTENTIAL 20 RECOMMENDATION FORNEXTPUtY 21. RESERVED FOR FUTURE USE 0 RECORD A CONCISE APPRAISAL 01- THE PROFESSIONAL CHARACTER OF MARINE REPORTED ON THIS SPACE MUST NOT BE LEFTBLANK. ED, 22.I CERTIFY tIW Inlfomotion in se*ction A is correwi to the best of my 23. I CERTIFY Itha to the best of my knowledge and belief all entries mades hereon are knowledge, true and without prejudice or pan-olity. z S (SIPo tVro of Morine ripored on) (Dole) (Sanoiure of Reporting Sonor) (Dole) z 24. (Check ,ne whlen requietd) I HAVE SEEN THIS COMPLETED REPOR. AND 25. REVIEWING OFFICER (Nose. Grode. Service. Duty Assignment) J25o INITIALS 1 i HAVE NO STATEMENT TO MAKE Z I HAVE ATTACHED A STATEMENT. 25b. DATE (Signolurt of Marine reported on) (Dole)2 -- STAPLE ADDITIONAL PAGES HERE F-IO
USMC FITNESS REPORT, Page 2 (1610). Marine reported on (last name, first name, M.I.), grade, identification number, period, occasion.

REPORTING SENIOR'S CERTIFICATION - I certify that on the terminal date shown in Item 3 of Section A, I was the Reporting Senior for only those Marines of the same grade as shown in Item 15b of Section B. Those Marines are ALPHABETICALLY LISTED below. I rank this Marine ___ as of ___ (only rank Marines marked Outstanding in 15a and b; mark NA if not applicable). [Columns for NAME (last, first, M.I.) and MOS.]

REVIEWING OFFICER'S CERTIFICATION (check one):
1. I have not had sufficient opportunity to observe this Marine, so I have no comment.
2. I have had only limited opportunity to observe this Marine, but from what I have observed I generally concur with the Reporting Senior's marks in Items 15a and b.
3. I have had sufficient opportunity to observe this Marine, and concur with the Reporting Senior's marks in Items 15a and b.
4. I have had sufficient opportunity to observe this Marine, and do not concur with the Reporting Senior's marks in Items 15a and b. I would evaluate this Marine as ___ (Item 15a) and rank this Marine as ___ of ___ (only rank those evaluated as Outstanding (OS)).

REMARKS (mandatory if Item 4, above, is checked). SIGNATURE ___ DATE ___

NOTE: The information above WILL NOT be entered into any computer program.
F-11
[Facsimile: Department of Transportation, U.S. Coast Guard, OFFICER EVALUATION REPORT (OER), Form CG-5312, page 1, prepared here for a Lieutenant Commander. The reported-on officer completes Section 1, Administrative Data: name, SSN, grade, date of rank, unit name, OPFAC, OBC, status indicator, date submitted, date reported to present unit, type of report (regular, special, concurrent), occasion for regular report (semi-annual, detachment of officer, detachment of reporting officer, promotion), period of report, days not observed, and signature. The supervisor completes Sections 2 through 7: Section 2 describes the officer's duties, including primary and collateral responsibilities and their importance to unit or Coast Guard missions; for each rating scale in Sections 3 through 6 the supervisor evaluates the officer's performance during the reporting period by filling in the appropriate circle and describes the basis for the marks in the comments area following each section, using only the allotted space. Section 3, Performance of Duties ("measures an officer's ability to get things done"), rates on seven-point behaviorally anchored scales: (a) Being Prepared; (b) Using Resources (demonstrated ability to utilize people, money, material, and time efficiently); (c) Getting Results; and (d) a fourth dimension concerning timely response to requests, deadlines, and changing situations (label illegible in this reproduction); followed by (f) Comments (Performance of Duties). Most of the printed anchor text is illegible.]
F-12
[Facsimile: CG-5312, continued. Comments (Performance of Duties, continued). Section 4, Interpersonal Relations ("measures how an officer affects or is affected by others"): (a) Working with Others; (b) Human Relations (the degree to which the officer treats others fairly and with dignity regardless of religion, sex, age, race, or ethnic background and carries out the Commandant's Human Relations Policy); (c) Comments. Section 5, Leadership Skills ("measures an officer's ability to guide, direct, develop, influence, and support others in their performance of work"): (a) Looking Out for Others (the officer's sensitivity and responsiveness to the needs, problems, goals, and achievements of others); (b) Developing Subordinates (the extent to which an officer uses coaching and provides opportunities for growth to increase the skills and proficiency of subordinates); (c) Directing Others (the officer's effectiveness in influencing or directing others in the accomplishment of tasks or missions); (d) Evaluating Subordinates (the extent to which the officer prepares timely, fair, accurate evaluations, for enlisted, civilian, and officer personnel, that measure performance against the standards); (e) Comments. Each dimension is rated on the same seven-point anchored scale; most anchor text is illegible.]
F-13
[Facsimile: CG-5312, page 2 (revision date partly illegible). Section 6, Communication Skills ("measures an officer's ability to communicate in a positive, clear, and convincing manner"): (a) Speaking and Listening (how well an officer speaks and listens in individual, group, or public situations); (b) Writing (how well an officer communicates through written material); (c) Articulating Ideas (ability to contribute ideas to discussions and express thoughts clearly, coherently, and extemporaneously); (d) Comments. Section 7, Supervisor Authentication (signature, SSN, title of position, date). The reporting officer completes Sections 8 through 13: in Section 8, commenting on the supervisor's evaluation of the officer; in Sections 9 and 10, comparing the officer against the standards shown, assigning a mark by filling in the appropriate circle, and describing the basis for each mark in the area following each section, citing specifics where possible. Section 9, Personal Qualities ("measures selected qualities which illustrate the character of the individual"): (a) Initiative (demonstrated ability to move forward, make changes, and accept responsibility without awaiting guidance and supervision); (b) Judgment (demonstrated ability to size up a situation and make sound recommendations or decisions using experience, common sense, and analytical thought); (c) Responsibility (demonstrated commitment to getting the job done, to holding one's self and subordinates accountable, and to supporting decisions contrary to one's own views and making them work); (d) Stamina (the officer's ability to think and act effectively under conditions that are mentally or physically fatiguing); (e) Sobriety (the extent to which an officer exercises moderation in the use of alcohol). Each dimension uses the seven-point anchored scale; most anchor text is illegible.]
[Facsimile: CG-5312, continued. Comments (Personal Qualities). Section 10, Representing the Coast Guard ("measures an officer's ability to bring credit to the Coast Guard through looks and actions"): (a) Appearance; (b) Customs and Courtesies (the degree to which an officer conforms to military traditions, customs, courtesies, and protocol); (c) Professionalism (how an officer applies knowledge and skills in providing service to the public, and the manner in which an officer represents the Coast Guard); (d) Dealing with the Public (how an individual acts when dealing with other services, business, the media, or the public); (e) Comments. Section 11, Leadership and Potential: a narrative describing the officer's demonstrated leadership ability and overall potential for greater responsibility, promotion, special assignment, and command; comments should be related to those areas for which the reporting officer has the appropriate background. Section 12, Comparison Scale and Distribution (considering the comments above, the reporting officer compares this lieutenant commander with others of the same grade known over the rater's career): Unsatisfactory; A qualified officer; One of the many competent professionals who form the majority of this grade; An exceptional officer; A distinguished officer; with a distribution block marked "For Headquarters use only." Section 13, Reporting Officer Authentication (signature, grade, SSN, title of position, date). Section 14, Reviewer Authentication (comments attached; signature, grade, SSN, title of position).]
F-15
U.S. FOREIGN SERVICE EMPLOYEE EVALUATION REPORT, Form DS-1829 (September 1985), page 1. See instructions before completing. May be reproduced; two-sided copies must be head-to-foot as original form.

Name of employee being rated (surname first); grade; SSN; position title; post or organization; period covered (from, to). Type of report: regular (career candidate, voluntary) or interim (change of rater, duties, assignment). Rater (name, title, grade); reviewer (name, title, grade).

I. EMPLOYEE'S JOB AND WORK REQUIREMENTS (established by rater, reviewer, and employee)
A. Describe the position and where it fits in the staffing pattern; indicate the number and kind of employees supervised.
B. Divide work requirements into two categories, continuing responsibilities and specific objectives (including, as appropriate, professional development activities); delineate in descending priority order. Include specific requirements relating to needs of other agencies.
C. Describe any special circumstances influencing the work program.

When completed on Foreign Service personnel, this is an efficiency report which shall be subject to inspection only by those persons authorized by Sec. 604 of the Foreign Service Act of 1980.
F-16
FORM DS-1829, page 2

II. EVALUATION OF OVERALL PERFORMANCE AND ACCOMPLISHMENT (completed by rater)
A. General appraisal: SFS member, adjustment of salary level - performance was excellent or better (yes/no). All classes - performance was satisfactory or better (yes/no; if no, see instructions for documenting unsatisfactory performance).
B. Discussion: performance, strengths and weaknesses, is evaluated in terms of the five competency groups listed below (see instructions for definitions). All groups must be discussed, with at least one competency from each group. Support assessment with examples of what and how work was done.
1. Substantive Knowledge (degree and level of functional and/or area skills and knowledge, including, where appropriate, technical career skills)
2. Leadership (presence, effectiveness in oral communication, foresight, positiveness, and negotiating skill)
3. Managerial Skills (interest in improving systems, concern for influence, objectivity of purpose, self-control, achievement orientation, and operational effectiveness)
4. Intellectual Skills (conceptual ability, logical thinking, understanding of authority relationships, skill in written communication, language skills, and cultural sensitivity)
5. Interpersonal Skills (EEO leadership and sensitivity, social sensitivity, teaching skill, counseling skill)
F-17
FORM DS-1829, page 3

III. EVALUATION OF POTENTIAL (completed by rater)
A. General appraisal (check block that best describes overall potential):
1. For career candidates only, assessment of career potential as a Foreign Service Officer or Foreign Service Specialist:
- Unable to assess potential from observations to date
- Candidate is unlikely to serve effectively even with additional experience
- Candidate is likely to serve effectively but judgment is contingent on additional evaluated experience
- Candidate is recommended for tenure and can be expected to serve successfully across a normal career span
2. For other Foreign Service employees:
- Shows minimal potential to assume greater responsibilities
- Has performed strongly at current level but is not ready for positions of significantly greater responsibility at this time
- Has demonstrated the potential to perform effectively at next higher level
- Has demonstrated potential to perform effectively at higher levels
- Has demonstrated exceptional potential for much greater responsibilities now
B. Discussion:
1. Potential is evaluated in terms of the competency groups listed in Section II. Cite examples illustrating strengths and weaknesses in competencies most important to your judgment.
2. For career candidates, discuss potential for successful service across a normal career span; for Senior Foreign Service, discuss potential for highest and broadest responsibilities; for all others, discuss potential for advancement.
C. Areas for Improvement: the following must be completed for all employees. Employees should be made aware of areas where they should concentrate their efforts to improve. Based on your observation of the employee in his/her present position, specify at least one area in which he/she might best direct such efforts. Justify your choice. (The response is not to be directed to need for formal training.)
F-18
FORM DS-1829, page 4

IV. RATING OFFICER'S COMPLIANCE STATEMENT
Requirements were established by rater, reviewer, and employee on ___. If applicable, requirements were revised on ___. Employee's performance was discussed (candidate was counseled) on the following dates: 1. ___ 2. ___ 3. ___ 4. ___. In the case of an unsatisfactory performance rating, this is also to certify that the requirements of 3 FAM 521.2e (tenured employees), 3 FAM 557.5b(2) (employees subject to administrative promotion), 3 FAM 577 (FSO career candidates), or 3 FAM 587 (Specialist career candidates) have been met.
Date rating completed ___ (Rater's signature)

V. REVIEW STATEMENT (completed by reviewer)
A. Discussion: Give your assessment of the employee's performance and potential (if a career candidate, overall potential to serve effectively at all levels across a normal career span, including FS-1 if an FSO candidate). If possible, support your evaluation by providing additional examples of performance observed this rating period. Note differences with the rater's appraisal or recommendations. Comment on relations between rater and employee.
B. Reviewing Officer's Compliance Statement: After reviewing this report carefully, I consider it to be complete, in conformance with the instructions, and adequately documented by specific examples of performance.
Date Section V completed ___ (Reviewer's signature)
F-19
FORM DS-1829, page 5

VI. STATEMENT BY RATED EMPLOYEE
A. Discussion: This section is intended to provide the rated employee's views on the period of performance appraised and on career goals and objectives. You must comment on your most significant achievements during the period. You also may wish to address activities or problems which may not have been adequately covered in the report, or aspects of the appraisal which may need clarification or correction. You are encouraged to state your current career goals, including training and assignments desired over the next 5 years. (Continuation sheets may be used.)
B. I acknowledge receipt of a copy of this report. Date Section VI completed ___ (Employee's signature)

VII. REVIEW PANEL STATEMENT (completed by review panel)
A. Examples of Performance: Specific examples have been provided to support the ratings given the employee. Yes ___ (If not, return to rater for rewrite.)
B. Certification: This report has been prepared according to the regulations and contains no inadmissible material. (Date) (Panel signature)
C. Comments: (If submitted late, indicate who is responsible for delay.)

VIII. SUBMISSION CONTROL
Received in post/bureau (date); received in PER/PE (date); released to department files.
F-20
CONFIDENTIAL (when any part completed)

National Defence - PERSONNEL EVALUATION REPORT - Officers
(Bilingual English/French form, "Rapport d'appréciation du personnel - Officiers"; the parallel French text is omitted here.) Surname, initials; SIN; rank; MOC.

General
1. The Personnel Evaluation Report (PER) - Officers is designed to provide information for use at NDHQ in selecting officers for promotion, development, training, employment, retention and release. It consists of two parts to be used as follows:
a. CF 1417 for reporting on all officers; and
b. CF 1418 for additional reporting on all officers of Colonel rank and below (see Annex A to CFAO 26-6 for special procedures for officers in a foreign establishment, international staff, or seconded positions).
2. Detailed orders and instructions for completing the PER are contained in the following references:
a. CFAO 26-6, Personnel Evaluation Reports - Regular and Reserve Force Officers - which prescribes the policy and orders with respect to general reporting responsibilities, reporting channels, occasions for completing PERs, and other administrative orders pertaining to the submission of PERs.
b. A-PC-268-000/IS-000, Personnel Evaluating and Reporting - Officers - which provides detailed instructions for completing the PER.
To be a valid career document the PER must be completed accurately. It is imperative, therefore, that reporting and reviewing officers read and understand the detailed instructions in A-PC-268-000/IS-000 before commencing an evaluation.
F-21
[Facsimile: PER Section 1 - Personal Information: (a) marital status; (b) dependent children (number/age/school grade/language of instruction); (c) location of dependents; (d) date moved; (e) factors affecting future postings, including geographical location and type of employment desired for the next posting. The remainder of the page, including the parallel French text, is illegible in this reproduction. Marked CONFIDENTIAL when any part is completed.]
[Facsimile: PER machine-scored answer sheet (bilingual), marked CONFIDENTIAL when any part completed. Directions for marking response spaces: use black lead pencil only, make dark marks that cover the circle, and erase errors completely. Section 2 - Identification of Officer Reported On (surname, initials, SIN, MOC, UIC, rank, with mark-sense grids). Section 3 - Details of Report (type of report). Section 4 - Identification of Reporting Officer (UIC, SIN, rank, period covered by the report). Section 6 - Identification of Reviewing Officer (UIC, SIN, date). Most of the mark-sense grid content is illegible in this reproduction.]
F-23
[Facsimile: PER mark-sense pages, continued. Job data: (a) descriptive title of primary job; (b) secondary duties (by descriptive title only); (c) rank for position; (d) rated officer's rank (Lt, Capt, Maj, LCol, Col); (e) time in job (months); (f) period observed (months). Section 7 - Comparative Assessment, completed separately by the reporting officer (7-1) and the reviewing officer (7-2), rating performance factors against a "Normal" anchor, among them: accepted responsibilities and duties; applied job knowledge and skills; analysed problems or situations; made decisions and took action; made plans and preparations; delegated; communicated; and supervised. Personal attributes follow, among them appearance, conduct, intellect, integrity, loyalty, dedication, and courage. Section 8 - Potential, rated by both officers against the same "Normal" anchor. Section 9 - Promotion Recommendation: No / Not Yet / Yes, by both officers, with a block marked "For NDHQ use."]
CONFIDENTIAL (when any part completed)

SECTION 10 - DETAILS OF JOB: (a) unit; (b) official appointment; (c) date of posting; (d) unusual circumstances (if any).

SECTION 11 - NARRATIVE BY REPORTING OFFICER (the narrative normally should be limited to the space above the dotted line). Certification statement (garbled in this reproduction); signature; date.

(Bilingual form; the parallel French text is omitted here.)
F-25
CONFIDENTIAL (when any part completed)

SECTION 12 - RECOMMENDATIONS FOR TRAINING AND EMPLOYMENT: (a) training; (b) employment. Rank, name and appointment; signature; date.

SECTION 13 - COMMENTS BY REVIEWING OFFICER: I do not know this officer / I know this officer slightly / I know this officer well. Rank, name and appointment; signature; date.

SECTION 14 - COMMENTS BY NEXT SENIOR OFFICER: I do not know this officer / I know this officer slightly / I know this officer well. Rank, name, appointment and unit; signature; date.

SECTION 15 - ADDITIONAL REVIEW.

(Bilingual form; the parallel French text is omitted here.)
F-26