Final Research Project:
I. Annotated Bibliography (100 points, Due 4/7 by 11:59 PM)
An annotated bibliography is a collection of your research on a particular issue. It includes a citation and a summary for each source. Your annotated bibliography will contain at least five (5) sources. All sources MUST be located through the library; they may be print or electronic, but they must be accessed through library resources.
Read: Chapter 24 on Doing Research, Chapter 27 on using MLA citation formats, and Chapter 26 on using sources, paying close attention to the discussion of summary on page 496.
An example of an annotated bibliography is posted on eLearn.
I will go over the databases I recommend for this assignment in class. I suggest you use them; they will make your work here less difficult.
II. Argumentative Research Paper (100 points, Draft due 4/16, Final due 4/30 by 11:59 PM)
An argumentative essay takes a position on an issue and argues for that position using evidence from sources and logical reasoning. It is one of the most common forms of writing anywhere, and you will use it extensively in college.
Read: Chapter 10 on Argument, and review Chapter 26 on using
sources in writing.
The first thing to do is pick a topic and develop a research plan. A research plan worksheet is available on eLearn. Once you pick a topic that interests you, list the questions about the issue that you are interested in answering. From that list, pick a single question to respond to in your paper. Your paper is the answer to that question.
A research question must have at least two reasonable, supportable answers. In other words, someone must be able to reasonably disagree with you.
Once you develop your research question, do the research (the annotated bibliography) to gather information to answer your question and support your position on the topic.
We will discuss paper organization and source use in class after
the annotated bibliography is complete.
III. Presentation (50 points, 4/28 or 4/30)
You will give a brief presentation of your paper on one of the presentation days during the last week of class (we will sign up for these times in the coming weeks). Your presentation should be brief (3 to 5 minutes) and should include a PowerPoint-style visual aid that illustrates the topic you have chosen, your position on the topic, why the topic is important, and your evidence to support your position. Your goal is to convince your classmates that your position on the issue is the best one.
Presentation tip: Do not read your PowerPoint slides aloud. This makes for a very boring presentation, since everyone can read. Slides should provide interesting points, lists, or images that you talk about with the class.
Schedule of Due Dates:

Week 10 (3/23-3/26): Tuesday: meet in Library Lab 317. Thursday: meet in Library Lab 317.
Week 11 (3/30-4/2): Tuesday: meet in Library Lab 317. Thursday: Research Day, no class; I will be in my office until 1 PM if you need assistance.
Week 12 (4/6-4/9): Annotated bibliography due Tuesday 4/7 by 11:59 PM.
Week 13 (4/13-4/16): Drafts of papers due Thursday 4/16 by 11:59 PM.
Week 14 (4/20-4/23): Extra Credit Day; conferences on final drafts (REQUIRED).
Week 15 (4/27-4/30): Presentations both days. Final papers due Thursday 4/30 by 11:59 PM in the dropbox.
3.1 Analysis, Design, Development, Implementation, Evaluation
Effectively designed training programs need to improve employee performance, but they must also align with the organization's business and performance needs (Jones, 1993; Kirkpatrick, 2009; Noe, 2012; Piskurich, 2010; Robinson & Robinson, 1996; Rummler & Brache, 1990). Programs must also align with specific work processes and tasks, as well as with employees' understanding of the big picture.
Aligning training programs with the company's strategies is more difficult than you may think. Consider this: According to Kaplan and Norton (2001), only 7% of employees fully understand their company's business strategies and know what the company expects of them to achieve its goals. Therefore, not only is a well-thought-out, well-designed training program essential, it can also limit these risks:
· incorrectly assuming that training is always the solution to a performance problem;
· needlessly expending funds on training programs that do not align with the business strategy;
· having training programs with incorrect or irrelevant content; and
· adopting training programs just because of what Clark (2010) called "training fads and fiction" (p. 7).
Fortunately, a systematic process is available to ensure that a
training program not only improves workplace performance but
also aligns with organizational goals—this training or
instructional design process is known as ADDIE.
ADDIE is an acronym that stands for "analysis, design,
development, implementation, and evaluation." This model has
been used in workplace training and development for decades. It
was originally created in 1975 by the Center for Educational
Technology at Florida State University and ultimately was
adapted by all the U.S. armed forces (Mayo & DuBois, 1987).
Today, however, ADDIE is not without its critics. (See Figure 3.1.) Some practitioners argue that the ADDIE process can be very time-consuming, cumbersome, and expensive (Hodell, 2011). Because ADDIE tends to be a linear process, others assert that it limits creativity and what Piskurich (2010) called design artistry, with its flexibility to think outside the instructional design box. As a result, a proliferation of alternative instructional development models has emerged (Molenda, Pershing, & Reigeluth, 1996), including the successive approximation model (Allen & Sites, 2012) and Dick, Carey, and Carey's systematic instructional design model (Dick, Carey, & Carey, 2009).
However, these other models tend to be distinctions without a real difference from the ADDIE methodology, and more often than not, failed training programs are due to malpractice in the use of the ADDIE model, not the model itself.
Figure 3.1: ADDIE model: Analyze
Though not without its critics, the ADDIE training design model
has been the principal framework used in the training and
development field for decades.
Where Is ADDIE in the HRD System?
As Section 1.1 discussed, the framework for human resource
development Gilley, Eggland, and Gilley (2002) proposed
considered two important dimensions: time frame (short term
versus long term) and focus (individual versus organizational).
As we detail the ADDIE process here and in subsequent
chapters, we will explore the process starting from the training
domain, or the short term, individual domain. In subsequent
chapters, we will see how the other domains of HRD, those of
performance management, career development, and
organizational development, are impacted by and linked to
ADDIE, as well.
The ADDIE process can be represented as a subsystem of the HRD open systems model of input-process-output, as first proposed by Swanson (1995) and as depicted in Figure 3.2 (adapted from Quang & Dung, 1998).
Here, inputs are the triggers or performance gaps—the analysis being the assessment of the differences between expected performance and actual performance—as seen, for example, in the performance level of unskilled employees or of those attempting new skill sets the job now requires.
The process becomes the design, development, and implementation of the new training.
Finally, the output includes evaluating the success of the new training outcomes, as reflected in willing and able employees who, in theory, will now have improved or new skills that increase the likelihood that the organization can meet its goals.
Figure 3.2: ADDIE and HRD processes
ADDIE is considered a subsystem within the HRD's open
system of input-process-output. Collapsing ADDIE into HRD,
we regard the input as the analysis of the training needs, the
processes as the design, development, and implementation of
the training, and the output as the outcomes of the training to be
evaluated.
Source: Adapted from Quang, T., & Dung, H. K. (1998). Human
resource development in state-owned enterprises in Vietnam.
Research and Practice in Human Resource Management, 6(1),
85–103.
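To make this mapping concrete, the grouping of ADDIE phases into the input-process-output subsystem described above can be sketched as a small data structure. This is a minimal illustration; the dictionary and function names are our own, not from the text.

```python
# ADDIE phases grouped into the HRD input-process-output subsystem,
# following the mapping described above (analysis as input; design,
# development, and implementation as process; evaluation as output).
ADDIE_AS_IPO = {
    "input": ["analysis"],
    "process": ["design", "development", "implementation"],
    "output": ["evaluation"],
}

def ipo_stage(phase: str) -> str:
    """Return the input-process-output stage a given ADDIE phase belongs to."""
    for stage, phases in ADDIE_AS_IPO.items():
        if phase in phases:
            return stage
    raise ValueError(f"unknown ADDIE phase: {phase!r}")

print(ipo_stage("analysis"))    # input
print(ipo_stage("evaluation"))  # output
```

Collapsing the three middle phases into a single "process" stage mirrors the figure: the assessment work feeds in, the built training flows through, and the evaluated outcomes come out.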
3.2 Analysis—Defining the Needs
In many ways the analysis phase of ADDIE is a deeper variation
of root cause analysis (Basarab, 2011; Fee, 2011; Rothwell,
2005a; Watkins, Meiers, & Visser, 2012; Wysocki, 2004),
discussed in Chapter 2. As you will remember, in root cause
analysis, we try to pinpoint the reasons for performance
problems to see if the job performance gaps are due to:
1. the employee (for example, lack of willingness or ability);
2. the work environment (for example, roadblocks, such as a
supervisor's poor communication skills); or
3. a systemic organizational practice (for example, weak human
resources recruiting practices).
Specifically, using the performance formula of P = f (KSAs × motivation × environment) from Chapter 2, a performance gap can be conceptually broken down by asking where the performance breakdown is, as shown in Figure 3.3.
Figure 3.3: Breakdown of a performance gap
Typically, performance gaps occur due to deficiencies in the
employee's ability, willingness, or something in the work
environment that is impeding employee performance.
Source: Adapted from McArdle, G. (2010). Instructional design
for action learning. New York: AMACOM.
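Because the performance formula is multiplicative, a near-zero score on any one factor drags overall performance down no matter how strong the others are, which is exactly why the breakdown asks where the weak factor lies. A minimal sketch follows; the 0-to-1 rating scale and the function names are our own illustration, since the text gives only the conceptual form of the formula.

```python
def performance(ksa: float, motivation: float, environment: float) -> float:
    """P = f(KSAs x motivation x environment), each factor rated 0.0-1.0.

    The 0-to-1 scale is an illustrative convention, not from the text.
    """
    return ksa * motivation * environment

def weakest_factor(ksa: float, motivation: float, environment: float) -> str:
    """Name the factor most responsible for a performance gap."""
    factors = {"KSAs": ksa, "motivation": motivation, "environment": environment}
    return min(factors, key=factors.get)

# A skilled, motivated employee blocked by the work environment
# (for example, a supervisor's poor communication) still performs poorly:
print(round(performance(0.9, 0.8, 0.3), 3))  # 0.216
print(weakest_factor(0.9, 0.8, 0.3))         # environment
```

The point of the sketch is the multiplication itself: training (raising the KSA factor) cannot close a gap whose weakest link is motivation or the environment.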
In the analysis phase of ADDIE, we perform what is known as a
needs assessment, a systematic and detailed analysis to
determine what is needed to resolve a performance gap (Brown,
2012; Kaufman, Rojas, & Mayer, 1993). In particular, a needs
assessment can answer some important questions, such as:
· Why is there a difference between the expected performance
and the actual performance, what Blanchard and Thacker (2010)
call the trigger?
· What are the reasons for the performance gap, and what is the
risk of ignoring it?
· At what level of the organization does the performance gap
occur?
· How do we prioritize any training interventions?
· What data do we need to collect, and how do we collect it?
· Do we create or purchase the training?
Perhaps the most valuable question a needs assessment can answer is whether to continue with the ADDIE process at all. The needs assessment will also answer the question of whether training is the most effective remedy for the performance discrepancy. Additionally, a needs assessment may uncover organizational inefficiencies that will remain even after the ADDIE process is complete. As a result, there are times when training may be necessary but not sufficient to meet organizational goals. Other nontraining systems also influence organizational outcomes. Kaplan and Norton (2001), for example, developed the balanced scorecard to illustrate how, in addition to human resource systems, organizational effectiveness is also influenced by the overall effectiveness of the organization's financial systems, customer service, and internal business processes. Although effective training and development is necessary to meet and sustain organizational needs and goals, training alone is not sufficient and should never be thought of as a panacea.
3.3 Levels of Needs Assessment
The trigger of a performance gap is why you perform a needs
assessment; however, we also need to know where, what, and
who needs training. Therefore, keeping in mind our systemic
view of HRD, we assess the organizational, task, and individual
levels of performance processes; these different perspectives
enable us to consider both the strategic and tactical performance
levels.
To get a frame of reference, we can first see where each level of
assessment would be located in our performance formula
(Figure 3.4), discussed in Chapter 2. For example, as we will
discuss, an organizational assessment analyzes the
environmental variables that affect performance, individual
analysis investigates the status of the employee's current KSAs,
and task analysis evaluates the performance that is expected
from the job.
Figure 3.4: Pinpointing the performance gap
Using the performance formula, we can frame our needs
assessment to evaluate the tasks, individuals, and organization
to determine the area where the performance gap resides.
Let us begin by detailing each level of needs assessment.
Organizational Analysis
An organizational analysis evaluates current or projected
organizational performance gaps. In an organizational
assessment, we would ask strategic questions such as:
How important is training to achieve business objectives?
Do we want to spend money on training and, if so, how much?
Do we have a legal duty to train?
Are we growing our workforce's skill sets to meet upcoming
changes?
Do we want to outsource or design the training in-house?
Is it more efficient to hire workers who already possess the needed skills than to train current employees who do not?
Here we focus on performance not only in relation to alignment
with the organization's culture and strategic goals, but also in
relation to how performance can be affected by external
variables such as available resources, changing workforce
demographics, consumer preferences, political trends, obsolete
technology, or the economy. Chiu, Thompson, Mak, and Lo
(1999) called this a demand-led training needs assessment. In
this context the organization modifies performance standards
due to new legal mandates, changes in industry standards, or
what the competition is doing. For example, consider that upon
the arrival of the Ford Model T in 1908, it no longer mattered if
a horse buggy manufacturer had the best-trained employees
(Levitt, 2008).
A modern example of a demand-led training need is the federal mandate called the Health Information Technology for Economic and Clinical Health (HITECH) Act, which requires all Medicare-eligible health care providers to convert to electronic medical records by 2015 or risk financial penalties. Because of this mandate, health care staff must be trained on new electronic medical record software. Until this training is complete, these new requirements might result in a performance gap.
Table 3.1 provides recommendations for traditional sources of data for an organizational needs assessment, modified from an approach first suggested by Moore and Dutton (1978). In a more recent study, Md.Som and Nam (2009) found that, in addition to an organization's goals and objectives, competitors' training practices such as e-learning and shared accountability for applying the training also ranked high among the data sources that companies use for organizational needs assessments. In addition, many companies use a SWOT analysis specifically as an organizational assessment data-gathering technique. SWOT stands for "strengths, weaknesses, opportunities, and threats" (Chiu et al., 1999; Dealtry, 1992). Strengths and weaknesses typically assess the organization's internal capability; opportunities and threats refer to how the external environment affects the organization and its competitive environment.
Table 3.1: Sources of data for organizational needs assessment

Data source: Organizational goals and objectives
Training need implication: Where training emphasis can and should be placed; these provide normative standards of both direction and expected impact, which can highlight deviations from objectives and performance problems.

Data source: Manpower and labor inventory
Training need implication: Where training is needed to fill gaps caused by retirement, turnover, age, etc.; this provides an important database regarding the possible scope of training needs.

Data source: Skills inventory
Training need implication: Number of employees in each skill group, knowledge and skill levels, training time per job, etc.; this provides an estimate of the magnitude of specific training needs and is useful in cost-benefit analysis of training projects.

Data source: Organizational climate indices (e.g., labor management data, grievances, turnover, absenteeism, suggestions, productivity, accidents, short-term sickness, observation of employee behavior, attitude surveys, and customer complaints)
Training need implication: These quality-of-working-life indicators at the organization level may help focus on problems that have training components.

Data source: Analysis of efficiency indices (e.g., cost of labor, cost of materials, quality of product, late deliveries, and repairs)
Training need implication: Cost-accounting concepts may represent a ratio between actual performance and desired or standard performance.

Data source: Changes in system or subsystem
Training need implication: New or changed equipment may present a training problem.

Data source: Management requests or management interrogation
Training need implication: This is one of the most common techniques of training needs determination.

Source: Moore & Dutton, 1978.
HRD in Practice: SWOT Analysis
"I conduct a SWOT analysis in my business annually to assess
our situation. From time to time, I have asked a valued client to
spend half an hour with me identifying what he or she feels are
the strengths and weaknesses of our business, as well."
Here is Kristina's example of the SWOT analysis she used for
an organizational needs assessment.
Strengths
Our brand and reputation in our markets are strong. We are
recognized as being professional, reliable, and quality driven.
We have excellent employees who are currently well trained,
customer oriented, and efficient.
Weaknesses
We are not the low-cost or low-price supplier in the market.
We need to build stronger relationships with our top five customers, perhaps by introducing a new customer service program.
Opportunities
We have opportunities for growth in new locations.
The cost of marketing is less in this digital age; we could
capitalize on the lowered cost with a stronger program.
Threats
The impact of the global economy on local businesses is a threat to us.
Foreign currency exchange rate variations are problematic. For example, U.S.-Canadian dollar exchange rate fluctuations can impact our business.
Source: Reprinted with permission from Bovay, K. (n.d.). More for small business owners and managers. Retrieved from More for Small Business.com website: www.more-for-small-business.com
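A SWOT inventory like Kristina's above lends itself to a simple structure if you want to record assessments and compare them over time. This is a hypothetical sketch; the class and method names are our own illustration, not part of any SWOT methodology.

```python
from dataclasses import dataclass, field

@dataclass
class SWOTAnalysis:
    """A SWOT inventory for an organizational needs assessment."""
    strengths: list[str] = field(default_factory=list)      # internal
    weaknesses: list[str] = field(default_factory=list)     # internal
    opportunities: list[str] = field(default_factory=list)  # external
    threats: list[str] = field(default_factory=list)        # external

    def internal(self) -> list[str]:
        """Strengths and weaknesses assess internal capability."""
        return self.strengths + self.weaknesses

    def external(self) -> list[str]:
        """Opportunities and threats reflect the external environment."""
        return self.opportunities + self.threats

# Kristina's example, condensed:
swot = SWOTAnalysis(
    strengths=["Strong brand and reputation", "Well-trained employees"],
    weaknesses=["Not the low-cost supplier"],
    opportunities=["Growth in new locations", "Cheaper digital marketing"],
    threats=["Global economy", "Exchange rate fluctuations"],
)
print(len(swot.internal()), len(swot.external()))  # 3 4
```

Splitting the four lists along the internal/external line mirrors the distinction the chapter draws: strengths and weaknesses look inward, opportunities and threats look outward.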
Consider This
How would this organizational assessment impact training and
development decisions?
Specifically, how might the implementation of a new
organizational strategy be linked to the strengths and
weaknesses of the workforce? Training delivery mechanisms?
New training policies?
Food for Thought: SWOT Analysis
Conduct a SWOT analysis of your organization; if you do not
work for an organization, do a personal SWOT inventory for
yourself as a career development tool. Write down the strengths,
weaknesses, opportunities, and threats to the organization as
you see them. Keep in mind what the organization's vision is.
Where does the organization want to be in 3 to 5 years?
Consider This
What does this analysis say about the training that may be
necessary for your organization's employees?
As a training consultant, what training that reflects the
organization's strategic plan would you suggest?
Job-Task Analysis
Job-task analyses (JTAs), sometimes called operational assessments, are the most common type of assessment. They examine the knowledge, skills, and attitudes required to perform the job at optimal or expected levels. Through a JTA, employers can create a detailed list of required tasks and then evaluate them. As the name suggests, a JTA has two levels—the job analysis and the task analysis (Kandula, 2013; Stolovitch & Keeps, 2005)—as follows:
Level 1: job analysis (what work you do). This is used for job design, position advertising, and career planning.
Level 2: task analysis (how you do your work). This is used to determine what an employee must know, to specify the equipment used, and to establish the minimum performance standards.
Reviewing job descriptions is a good way to evaluate the tasks
of a job. Job descriptions memorialize the major activities of
the job, including those tasks that are necessary versus
desirable, as well as the work conditions in which the worker
operates.
Not surprisingly, well-written job descriptions follow the
premises of Bloom's taxonomy (Anderson, Krathwohl, &
Bloom, 2001; Bloom, 1956), discussed in Chapter 1. Bloom's
taxonomy classifies the depth and breadth of job-learning
objectives from cognitive (knowledge), psychomotor (skills),
and affective (attitudinal) perspectives—what we call today the
KSAs of the job. Table 3.2 outlines the action verbs of KSAs.
Table 3.2: KSA action verbs of job descriptions
Learning type
Related action words
Knowledge development
cite
compare
contrast
define
describe
detect
2009). In the tire-installing example, knowledge requirements might include a need to recall the appropriate tire brand or knowledge of how to inspect and confirm a proper seal.
Skills. The job analysis will also confirm the list of skills required to perform the job optimally. For example, someone who fills a customer service representative position may need higher-order skills such as conflict management (Blanchard & Thacker, 2010). In the tire-installing example, successful workers may need more basic psychomotor skills, like detecting the correct tread size, preparing the tire for installation, and measuring for excess rubber.
Attitudes. The job analysis also generates attitudinal outcomes. Specifically, it should confirm what attitudes or feelings might be present that would facilitate or inhibit an employee from doing any part of the job well. For example, does the job require a worker to be helpful, professional, and friendly, or just to be open to new learning? In the tire-installation example, an employee may have to be open to new ways of trimming excess rubber from a tire due to new technology.
Job Characteristics Model
Another way to analyze a job is to break it down into its
individual task behaviors. Using the job characteristics model
originally developed by Hackman and Oldham (1980), we can
break down any job function into its smaller parts; this enables
us not only to see the requirements needed to perform at
component levels, but also to identify which part of the job may
be responsible for less-than-expected performance. In other
words, the job characteristics model helps us find the weakest
performance link in the expected behaviors of performance.
Task behaviors of a job are broken down as follows:
Skill variety—the degree to which a job requires a variety of
different activities or the use of several different skills and
talents to carry out the work. For example, a car mechanic may
fix flat tires, rebuild carburetors, and check fluids, as well as
interact with customers.
Task identity—the degree to which the job requires completion of a whole and identifiable piece of work; that is, doing a job from beginning to end with a visible outcome. An example here could be a cabinetmaker who, prior to producing the finished product, must select and refine raw wood, stain panels, and install hardware.
Task significance—the degree to which the job has a substantial
impact on the lives or work of other people, either in the
immediate organization or the external environment. A health
care provider is an example of someone whose every job task
has an immediate impact on the recipients.
Autonomy—the degree to which the job provides the employee
substantial freedom, independence, and discretion in scheduling
the work and determining the procedures to be used to perform
it. University professors have a high degree of autonomy in
their jobs.
Feedback—the degree to which carrying out job-required work
activities causes the employee to obtain direct and clear
information about the effectiveness of his or her personal
performance. A massage therapist is a good example of a
worker who receives immediate feedback from clients after the
session.
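Hackman and Oldham also combine these five dimensions into a single index, the motivating potential score (MPS): the first three dimensions are averaged, and the result is multiplied by autonomy and feedback. The MPS formula comes from their model rather than from this chapter, so treat the sketch below as a supplementary illustration.

```python
def motivating_potential_score(skill_variety: float, task_identity: float,
                               task_significance: float, autonomy: float,
                               feedback: float) -> float:
    """Hackman & Oldham's motivating potential score (MPS).

    MPS = ((skill variety + task identity + task significance) / 3)
          * autonomy * feedback
    Dimensions are commonly rated on a 1-to-7 scale. Because autonomy and
    feedback multiply the score, a very low rating on either suppresses
    the job's motivating potential even when the other dimensions are high.
    """
    meaningfulness = (skill_variety + task_identity + task_significance) / 3
    return meaningfulness * autonomy * feedback

# The same job with poor versus rich feedback:
print(motivating_potential_score(6, 6, 6, 5, 1))  # 30.0
print(motivating_potential_score(6, 6, 6, 5, 6))  # 180.0
```

The multiplicative form parallels the performance formula earlier in the chapter: a single weak dimension (here, feedback) can be the weakest link that limits the whole job.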
3.4 Other Data Collection Methods
Other methods of data collection for job-task analysis—varying
in time and expense—include questionnaires, surveys,
interviews, focus groups, and archival records (Beebe, Mottet,
& Roach, 2012). Consultants in the field find the data collection
methods listed in Table 3.3 effective.
Observation is another data gathering technique. According to
Noe and Hollenbeck (2010), direct observation of the jobs
provides firsthand knowledge and data about the job. Job
observations are most appropriate for jobs that consist of
observable behaviors, such as fieldwork, manual labor, or
interpersonal communication.
Some examples of jobs where the observation method would be particularly successful are machine operators, construction workers, police officers, flight attendants, bus drivers, janitors, and tire installers.
Table 3.3: Pros and cons of each type of data collection method

Method: Observations
Description: Can be technical, functional, or behavioral; can yield qualitative or quantitative feedback; may be unstructured.
Advantages: Minimize interruption of routine work flow or group activity; generate real-life data.
Disadvantages: Require a highly skilled observer with process and content knowledge; allow data collection only in the work setting; may cause "spied on" feelings.

Method: Tests
Description: Can be functionally oriented to test a board, staff, or committee member's understanding; can be administered in a monitored setting or "take home."
Advantages: Can be helpful in determining deficiencies in knowledge, skills, or attitudes; easily quantifiable and comparable.
Disadvantages: Must be constructed for the audience, and validity can be questionable; do not indicate if measured knowledge and skills are actually being used on the job.

Method: Surveys and questionnaires
Description: May be in the form of surveys or polls of a random or stratified sample or an entire population; can use a variety of question formats: open ended, projective, forced choice, priority ranking.
Advantages: Can reach a large number of people in a short time; are inexpensive; give opportunity for response without fear of embarrassment; yield data easily summarized and reported.
Disadvantages: Make little provision for free response; require substantial time to develop an effective survey or questionnaire; do not effectively get at causes of problems or possible solutions.

Method: Interviews
Description: Can be formal or casual, structured or unstructured; may be used with a representative sample or whole group; can be done in person, by phone, at the work site, or away from it.
Advantages: Uncover attitudes, causes of problems, and possible solutions; yield rich data; allow for spontaneous feedback.
Disadvantages: Are usually time-consuming; results can be difficult to analyze and quantify; require a skillful interviewer who can generate data without making the interviewee self-conscious or suspicious.

Method: Assessment centers
Description: Used for management development; require participants to complete a battery of exercises to determine areas of strength that need development; assess potential by having people work in simulated management situations.
Advantages: Can provide early identification of people with potential for advancement; more accurate than "intuition"; reduce bias and increase objectivity in the selection process.
Disadvantages: Selecting people to include in the high-potential process is difficult with no hard criteria available; are time-consuming and costly to administer; may be used to diagnose developmental needs rather than high potential.

Method: Focus groups and group discussion
Description: Can be formal or informal; a widely used method; can be focused on a specific problem, goal, task, or theme.
Advantages: Allow interaction between viewpoints; enhance buy-in and focus on consensus; help group members become better listeners, analyzers, and problem solvers.
Disadvantages: Are time-consuming for both consultants and group members; can produce data that is difficult to quantify.

Method: Document reviews
Description: Organizational charts, planning documents, policy manuals, audits, and budget reports; include employee records (accidents, grievances, attendance, etc.); also include meeting minutes, program reports, and memos.
Advantages: Provide clues to trouble spots; provide objective evidence or results; can easily be collected and compiled.
Disadvantages: Often do not indicate causes of problems or solutions; reflect the past rather than the current situation; must be interpreted by skilled data analysts.

Source: Adapted from McArdle, G. (2010). Instructional design for action learning. New York: AMACOM.
Remember, all of these methods attempt to confirm the expected
performance of a given job. That is, we are trying to validate
the requisite knowledge, skills, and attitudes needed for optimal
performance.
In sum, the selection of a particular organizational data source
depends on many factors, including the following:
Cost-effectiveness. What benefits will these data hold? Will it
be worth the resources expended?
Persons to be involved. Will certain personnel need to be
available to collect or interpret data?
Confidentiality. Will you have permission to access all required
data?
22. Ease of use. Once the data is collected, will it be easily
interpretable?
Time required. As a practical matter, will it take too much time
to collect the needed data?
Top management's preference. Even with all the other factors
considered, leadership may still want you to collect certain
data in particular ways.
Individual or Person Analysis
The final assessment—the individual employee—delves into an
employee's actual performance (that is, the state of the
employee's willingness and ability). Individual assessment
pinpoints which employees need training and at what level,
including any remedial skills needed or an assessment of entry
behaviors (Hannum & Hansen, 1989).
Individual Performance Terminology
Let us take a moment to clarify the terms of performance. As
discussed in Chapter 2, when we assess an employee's ability,
we are describing the present state of an employee's knowledge,
skills, and attitudes in totality as they relate to performing a
specific job. This should not be confused with achievement,
which is an assessment of what an employee has accomplished
in the past, or aptitude, which is how quickly or easily an
employee will be able to learn and be trained in the future
(Salkind & Rasmussen, 2008). These terms are also
differentiated from the term competency, which typically is a
broader term that includes an employee's developmental and
motivational dimensions. Examples of competencies would be
teamwork, commitment, innovation, and customer orientation
(Blanchard & Thacker, 2010). One way to think about it is that
competencies belong to the worker, and KSAs are required for
the job (Blanchard & Thacker, 2010; Piskurich, 2010).
Performance Appraisals
A good place to start to evaluate actual performance is to look
to the performance appraisal to determine if and by how much
an employee's performance deviates from expected
performance; this would have been confirmed by our earlier
job-task analysis.
Many types of performance appraisal instruments exist. If you
type "performance appraisal forms" in Google, you will get well
over 174,000 hits. Today's organizations tend to use some
standard performance appraisal instruments. Here is a sample
listing, with brief descriptions from Murphy and Cleveland
(1995):
Critical incidents. The supervisor's attention is focused on
specific or critical behaviors that separate effective from
ineffective performance.
Graphic rating scale. This method lists a set of performance
factors, such as job knowledge, work quality, and cooperation;
the supervisor uses these to rate employee performance using an
incremental scale.
Behaviorally anchored rating scales. These combine elements of
the critical incident and graphic rating scale approaches: the
supervisor rates employees on a numerical scale whose points are
anchored by specific behavioral examples.
Management by objectives. This method evaluates how well an
employee has accomplished objectives determined to be critical
to job performance.
360-degree feedback. This multisource feedback method
provides a comprehensive perspective of employee performance
by using feedback from the full circle of people with whom the
employee interacts: supervisors, subordinates, and coworkers. It
is effective for career coaching and identifying strengths and
weaknesses.
Regardless of the actual performance appraisal tool used, what
is clear is that the performance appraisal should appraise the
actual behaviors of the expected performance. That is, you must
ensure that the instrument captures all the tasks and duties
necessary to make a judgment on the quality of the employee's
performance. Unfortunately, this is not always the case; a good
example of the performance appraisal instrument not being
properly aligned with the actual job duties required was found
in my study of certified diabetes educators (CDEs). (See the
HRD in Practice feature box titled "Effective CDE or
Ineffective CDE?")
HRD in Practice: Effective CDE or Ineffective CDE?
Currently, diabetes education lacks a performance appraisal
mechanism for the successful transfer of the requisite
knowledge, skills, and attitudes from the educator to the patient.
Effective diabetes education is vital to diabetes care. It offers
the promise of patient empowerment and independence by
teaching self-care behaviors that can contribute greatly to
maintaining the patient's overall long-term health.
Unfortunately, the field currently lacks a standard in the
performance appraisal instrument itself that contains line items
specific to the educator's success (or failure) in diabetes
education transfer. Performance appraisal instruments used for
CDEs are little more than adaptations of those used for other
jobs, such as for nurse practitioners or registered nurses. No
assessment for diabetes education transfer, arguably a CDE's
most important job task, is available. Yet a fundamental
component of any learning performance system is
transfer-of-learning accountability for those whose job
descriptions center on training and learning.
Because diabetes education ultimately empowers the patient by
helping promote effective self-care behaviors, the earlier the
successful transfer of diabetes education occurs, the earlier the
patient can begin those essential self-care behaviors.
Modifications to the performance appraisal instrument used for
diabetes educators may influence the rate of diabetes education
transfer to the patient.
Source: Kopp, D. M. (2005). Effective CDE, ineffective CDE:
What's the difference? Diabetes Educator, 31(5), 641–647.
Retrieved from http://tde.sagepub.com/content/31/5/641.extract
Consider This
What would be the biggest problem in having a performance
appraisal instrument that evaluates only some, but not all, of the
job tasks?
How can you better align the performance appraisal instruments
with the actual duties of the job?
On Rating Philosophy
Before analyzing employees' actual performance, learn the
organization's rating philosophy. Specifically, in your
organization, what is considered effective versus outstanding
performance? It is important to investigate this distinction
because in the absence of a forced rating system—where,
similar to a bell curve, only so many outstanding ratings are
given out—there may be inaccuracies due to manager rating
inflation or deflation or to ratings being lazily and uncritically
awarded (Murphy & Cleveland, 1995).
Consider, for example, in a Likert scale–type of performance
appraisal, what a rating of 5 out of 5 really means. Likewise,
what does a rating of 3 out of 5 represent? A rater also must be
wary of leniency bias and severity bias in performance
evaluations. (See Figure 3.6.) For instance, a rater may be too
soft or too generous when rating the employee's performance,
due to manager discomfort with giving an honest rating
(Armstrong & Appelbaum, 2003); this is known as the leniency
bias. In contrast, the severity bias occurs when a leader is too
harsh in rating an employee's performance (Armstrong &
Appelbaum, 2003).
Figure 3.6: Leniency bias and severity bias examples
Leniency and severity bias are two rater errors in employee
performance evaluation. Over time, without accurate ratings,
the employee can suffer by receiving no or ineffective
development because true performance gaps are concealed or
obscured.
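The leniency and severity biases described above can be illustrated with a short sketch. This is not a standard instrument from the text; the rater names, ratings, and the 0.75-point threshold are hypothetical, chosen only to show one simple way an analyst might flag raters whose averages drift from the organization-wide mean on a 1-to-5 Likert scale.

```python
def flag_rating_bias(ratings_by_rater, threshold=0.75):
    """Return {rater: flag} for raters whose mean rating deviates
    from the overall mean by more than `threshold` points.
    The threshold is an illustrative assumption, not a standard."""
    all_scores = [s for scores in ratings_by_rater.values() for s in scores]
    overall_mean = sum(all_scores) / len(all_scores)
    flags = {}
    for rater, scores in ratings_by_rater.items():
        mean = sum(scores) / len(scores)
        if mean - overall_mean > threshold:
            flags[rater] = "possible leniency bias"   # rates too generously
        elif overall_mean - mean > threshold:
            flags[rater] = "possible severity bias"   # rates too harshly
    return flags

# Hypothetical ratings on a 1-5 Likert scale
ratings = {
    "Manager A": [5, 5, 4, 5],   # consistently high
    "Manager B": [3, 2, 2, 3],   # consistently low
    "Manager C": [3, 4, 3, 4],   # near the middle
}
print(flag_rating_bias(ratings))
```

A flag here is only a starting point for conversation: a high average may reflect a genuinely strong team rather than leniency, which is why the rating philosophy must be investigated first.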
3.5 Prioritizing the Training Needs
As discussed in Chapter 1, the basis of every job's expected
performance begins with the job-specific knowledge, skills, and
attitudes, collectively known as the job ability. Many times, the
prioritization of KSAs within the job is found in a well-written
job description that first breaks down the necessary versus
desirable job functions. For example, in a given job, it may be
necessary to be proficient in Microsoft Word® and desirable to
be proficient in speaking Spanish. By the way, a good rule of
thumb is, "If it's not on the job description, you shouldn't have
to train on it."
Yet people sometimes do have to prioritize or evaluate the
criticality of tasks in a job, especially given limited
organizational resources and time. One mechanism that can be
used is to evaluate the frequency, difficulty, and importance
(FDI) (Piskurich, 2010; Romiszowski, 1984) of a job. To
illustrate how an FDI analysis is done, let us take a simple
example of an office receptionist. After reviewing the job
description and interviewing the current receptionist, we come
up with a list of job tasks, as follows:
Receptionist
1. Wear employee identification.
2. Punch in.
3. Communicate with supervisor.
4. Greet vendors, customers, and other guests.
5. Determine the needs of vendors, customers, and other guests.
6. Announce vendors, customers, and guests to the person they
are visiting.
7. Provide directions for vendors, customers, and guests.
8. Answer telephone.
9. Handle messages.
10. Sort incoming mail.
11. Collect employees' outgoing mail.
12. Complete log sheet.
13. Communicate with security personnel.
14. Punch out.
Next, from this list, let us say that because of limited time and
resources, we have to prioritize (for example, using a 1 to 5
Likert scale rating, with 5 being the highest) the key functions
of this job. The list might look something like the FDI analysis
shown in Table 3.5.
Table 3.5: Example of FDI analysis

Receptionist task                        F    D    I    Total
Communicate with supervisor              4    2    4    10
Greet vendors, customers, and guests     4    3    5    12
Answer telephone                         5    3    5    13
Handle messages                          5    2    5    12
Sort incoming mail                       5    2    2    9

F = Frequency (How often is the task done?)
D = Difficulty (How difficult is the task to perform and
therefore train for?)
I = Importance (How important is the task to the job or
organization as a whole?)
After performing this FDI analysis for the receptionist position,
we could correctly conclude that the three most critical
functions of this position (those with the highest totals) are
answering the phone, greeting visitors, and handling messages.
Tasks not critical to the job itself score lower. Training
dollars would go first to the highest-scoring functions. If some
FDI totals were tied and you had to eliminate one function from
training, the importance (I) subtotal breaks the tie.
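The FDI arithmetic just described can be sketched in a few lines of Python. The task names and ratings mirror Table 3.5, and the tie-break on the importance (I) subtotal follows the rule above; the function name is ours, not a standard tool.

```python
tasks = {
    # task: (frequency, difficulty, importance), each rated 1-5
    "Communicate with supervisor":      (4, 2, 4),
    "Greet vendors, customers, guests": (4, 3, 5),
    "Answer telephone":                 (5, 3, 5),
    "Handle messages":                  (5, 2, 5),
    "Sort incoming mail":               (5, 2, 2),
}

def fdi_priority(tasks):
    """Rank tasks by their F + D + I total, highest first,
    breaking ties on the importance (I) subtotal."""
    return sorted(
        tasks.items(),
        key=lambda item: (sum(item[1]), item[1][2]),  # (total, I)
        reverse=True,
    )

for task, (f, d, i) in fdi_priority(tasks):
    print(f"{task}: total {f + d + i} (I = {i})")
```

Running this puts "Answer telephone" (total 13) first, matching the conclusion drawn from Table 3.5, with "Sort incoming mail" (total 9) last.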
By evaluating the tasks within a job—specifically their
frequency, difficulty, and importance—organizations, especially
those with limited training budgets, are better able to focus
their training efforts between and among jobs to prioritize the
training that may be needed, thus saving training time and
expense.