7 (E)valuation of Training and Development
Learning Objectives
After reading this chapter, you should be able to:
• Differentiate between formative and summative
evaluations.
• Use Kirkpatrick’s four-level evaluation framework.
• Compute return on investment.
• Explain why evaluation is often neglected.
One of the great mistakes is to judge policies and programs by
their intentions rather
than their results.
—Milton Friedman, Economist
Introduction Chapter 7
Pretest
1. It is possible for organizations to try out trainings before they
are launched.
a. true
b. false
2. Assessing whether trainees enjoyed training is important only
as an evaluation of the
trainer’s competence.
a. true
b. false
3. Return on investment should be calculated after every
training session to determine
whether it was cost-effective and benefited the company as a
whole.
a. true
b. false
4. Fewer than 25% of organizations perform formal evaluations
of training effectiveness.
a. true
b. false
5. Failure to evaluate trainings may be not only unprofessional
but also unethical.
a. true
b. false
Answers can be found at the end of the chapter.
Introduction
We seek to answer one overarching question in the final,
evaluation phase of ADDIE: Was
the training effective? (See Figure 7.1.) In particular, we assess
whether we realized expected
training goals—as uncovered by our analysis phase—
specifically, whether the trainees’ post-
training KSAs improve not only their performance, but also the
organization’s performance.
As we will see, the process of training evaluation includes all of
these issues, as well as decid-
ing which data to use when evaluating training effectiveness,
determining whether further
training is needed, and assessing whether the current training
design needs improvement.
Ultimately, evaluation creates accountability, which is vital
given the significant amount
organizations spend on training and developing employees—
approximately $160 billion
annually (ASTD, 2013). This significant investment makes it
imperative that organizations
know whether their training efforts yield a positive financial
return on training invest-
ment (ROI).
Formative Evaluation Chapter 7
Figure 7.1: ADDIE model: Evaluate
In this final phase of ADDIE, we evaluate how effective the
training has been. From assessing
any improvement in the KSAs of the trainees to the financial
return on the training
investment, the evaluation phase appraises the effectiveness of
not only our prior analysis,
design, development, and implementation, but also of the
training in totality.
[Diagram: Analyze → Design → Develop → Implement → Evaluate]
7.1 Formative Evaluation
Although evaluation is the last phase of ADDIE, it is not the
first time aspects of the training
program are evaluated. When it comes to training evaluation,
we assess the training through-
out all phases of ADDIE, using first what is known as a
formative evaluation. Formative evalu-
ation is done while the training is forming; that is, prior to the
real-time implementation and
full-scale deployment of the training (Morrison, Ross, &
Kalman, 2012). Think of formative
evaluation as a “try it and fix it” stage, an assessment of the
internal processes of the training
to further refine the external training program before it is
launched.
Formative evaluations are valuable because they can reveal
deficiencies in the design, devel-
opment, and implementation phases of the training that may
need revision before real-time
execution (Neirotti & Paolucci, 2013; U.S. Department of
Health and Human Services, 2013;
Wan, 2013).
Recall from Chapter 6 that formative evaluations can range
from editorial reviews of the train-
ing and materials—which may include a routine proofread of
the training materials to check
for misspelled words, incomplete sentences, or inappropriate
images—to content reviews,
design reviews, and organizational reviews of the training
(Larson & Lockee, 2013; Noe,
2012; Piskurich, 2010; Wan, 2013). So, for example, we may
find in a content review that our
training is not properly linked to the original learning
objectives. Or we may conclude dur-
ing a design review that because e-learning is not a good fit
with the organizational culture,
instructor-led training is a more appropriate choice.
Formative evaluations also encompass pilot testing and beta
testing. With pilot tests and beta
tests, we are out to confirm the usability of the training, which
includes assessing the effec-
tiveness of the training materials and the quality of the
activities (ASTD, 2013; Stolovitch &
Keeps, 2011; Wan, 2013). Both beta tests and pilot tests are
considered types of formative
evaluation because they are performed as part of the prerelease
of the training. For pilot and beta testing, selected employees and SMEs test the training under normal,
everyday conditions; this approach is valuable because it allows
us to pinpoint any remain-
ing flaws and get feedback on particular training modules
(Duggan, 2013; Piskurich, 2010;
Wan, 2013).
Summative Evaluation Chapter 7
7.2 Summative Evaluation
Whereas formative evaluation focuses on the training processes,
summative evaluation focuses
on the training outcomes—for both the learning and the
performance results following the
training (ASTD, 2013; Piskurich, 2010; Wan, 2013). Summative
evaluation is the focus of the E
phase of ADDIE. According to Stake (2004), one way to look at
the difference between forma-
tive and summative evaluation is “when the cook tastes the
soup, that’s formative evaluation;
when the guests taste the soup, that’s summative” (p. 17).
In summative evaluation, we assess whether the expected
training goals were realized and,
specifically, whether the trainees’ posttraining KSAs improved
their individual performance
(and, ultimately, improved the organization’s overall
performance). As Figure 7.2 depicts,
in summative evaluation, we assess both the short-term
learning-based outcomes—such
as the trainees’ reactions to the training and opinions about
whether they actually learned
anything—and the long-term performance-based outcomes.
These long-term performance-
based outcomes include assessing whether a transfer of training
occurred—that is, applica-
tion to the workplace via behavior on the job—as well as
whether any positive organizational
changes resulted, including return on investment (Noe, 2012;
Phillips, 2003; Piskurich, 2010).
Figure 7.2: Summative evaluation’s short-term and long-term
outcomes
Training evaluation can be broken down into short-term and long-term assessments. Short-term
evaluations are usually trainee focused, whereas long-term assessments focus on behavior on
the job and on organizational results.
[Diagram: summative outcomes divide into short-term outcomes (reactions of learners, learning by participants) and long-term outcomes (behavior on the job, organizational impact and return on investment)]
As Figure 7.3 depicts, however, the most common assessments
organizations perform with
summative evaluation are ultimately the least valuable to them
(ASTD, 2013; Nadler & Nadler,
1990). The next section will discuss each level of evaluation.
Kirkpatrick’s Four-Level Evaluation Framework Chapter 7
Figure 7.3: Use versus value in evaluation
Although levels 1 and 2 are most used and usually easiest to
compile, levels 3, 4, and 5 (ROI)
are deemed to be the most valuable information in assessing
training effectiveness, but they
require complex calculations.
[Bar chart: for each level (reactions of participants, evaluation of learning, evaluation of behavior, evaluation of results, return on investment), the percentage of organizations who use that level to any extent versus the percentage who say it has high or very high value]
Source: Adapted from American Society for Training &
Development. (2013). State of the industry report. Alexandria,
VA: ASTD.
7.3 Kirkpatrick’s Four-Level Evaluation Framework
Perhaps the best known and most drawn-upon framework for
summative evaluation was
introduced by Donald Kirkpatrick (Neirotti & Paolucci, 2013;
Phillips, 2003; Piskurich, 2010;
Vijayasamundeeswari, 2013; Wan, 2013), a Professor Emeritus
at the University of Wisconsin
and past president of the ASTD. Kirkpatrick’s four-level
training evaluation taxonomy—
first published in 1959 in the US Training and Development
Journal (Kirkpatrick, 1959; Kirk-
patrick, 2009)—depicts both the short-term learning outcomes
and the long-term perfor-
mance outcomes (see Figure 7.4). Let us detail each level now.
Figure 7.4: Kirkpatrick’s four-level evaluation
Donald Kirkpatrick’s four-level evaluation is the widely used
standard to illustrate each level
of training’s impact on the trainee and the organization as a
whole. Kirkpatrick’s typology
is a good starting point to frame discussions regarding the
trainee’s reaction to the training
(level 1), if anything was learned from the training (level 2), if
the trainee applied the
training through new behavior (level 3), and ultimately, if the
training resulted in positive
organizational results (level 4).
[Diagram: four stacked levels, from level 1 reactions at the base, through level 2 learning and level 3 transfer, to level 4 results at the top]
Level 1—Reaction: Did They Like It?
A level 1 assessment attempts to measure the trainees’ reactions
to the training they have
just completed (Kirkpatrick, 2009; Wan, 2013; Werner &
DeSimone, 2011). Specifically, level
1 assessments ask participants questions such as:
• Did you enjoy the training?
• How was the instructor?
• Did you consider the training relevant?
• Was it a good use of your time?
• Did you feel you could contribute to your learning
experience?
• Did you like the venue, amenities, and so forth?
A level 1 assessment is important not only to assess whether the
trainees were satisfied with
the training session per se, but also—and perhaps more
significantly—to predict the effec-
tiveness of the next level of evaluation: level 2, learning
(ASTD, 2013; Kirkpatrick, 2009; Mor-
rison et al., 2012; Noe, 2012; Wan, 2013). That is, as level 1
reaction goes, so goes level 2
learning. According to a recent study (Kirkpatrick & Basarab,
2011), there was a meaningful
correlation between levels 1 and 2, in that positive learner
engagement led to a higher degree
of learning. This outcome specifically follows the idea of
attitudinal direction (Harvey, Reich,
& Wyer, 1968; Kruglanski & Higgins, 2007), whereby a
positive reaction (emotional intensity)
can lead to constructive conclusions, as depicted in the
following formula:
Attitudinal Direction
Perception + Judgment → Emotion (Level 1)
(Positive) Emotion → Learning (Level 2)
With attitudinal direction in mind, a level 1 evaluation is
attentive to the measurement of
attitudes, usually using a questionnaire. A level 1 survey
includes both rating scales and open-
ended narrative opportunities (Clark, 2013; Neirotti & Paolucci,
2013; Wan, 2013).
Typically, participants are not asked to put their names on the
survey, based on the assump-
tion that anonymity breeds honesty. Level 1 evaluation
instruments are part of the training
materials that would have been created in the development
phase of ADDIE.
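Because level 1 data usually arrive as a stack of anonymous rating-scale responses, the first analysis step is typically just tallying them. Here is a minimal sketch; the question labels and the 1-to-5 scale are invented for illustration and are not part of any standard instrument:

```python
# A sketch of tallying anonymous level 1 reaction surveys. The question
# labels and the 1-5 rating scale are invented for illustration.

def summarize_reactions(responses):
    """Average each question's ratings across all returned surveys."""
    totals = {}
    for survey in responses:  # one {question: rating} dict per trainee
        for question, rating in survey.items():
            totals.setdefault(question, []).append(rating)
    return {q: sum(r) / len(r) for q, r in totals.items()}

surveys = [
    {"Enjoyed the training": 5, "Instructor quality": 4, "Relevant to my job": 3},
    {"Enjoyed the training": 4, "Instructor quality": 4, "Relevant to my job": 2},
]
print(summarize_reactions(surveys))
# {'Enjoyed the training': 4.5, 'Instructor quality': 4.0, 'Relevant to my job': 2.5}
```

A low average on one item (here, relevance) can flag exactly where the training, rather than the trainer, needs attention.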
Level 2—Learning: Did They Learn It?
In a level 2 assessment, we attempt to measure the trainees’
learning following the training
that they just completed (Kirkpatrick, 2009; Wan, 2013; Werner
& DeSimone, 2011) and, spe-
cifically, in relation to the learning outcomes we established
during the analysis and design
phases of ADDIE. Remember, learning outcomes can include
cognitive outcomes (knowl-
edge), psychomotor outcomes (skills), and affective outcomes
(attitudes) (Noe, 2012;
Piskurich, 2010; Rothwell & Kazanas, 2011).
• With cognitive outcomes, we determine the degree to
which trainees acquired new
knowledge, such as principles, facts, techniques, procedures, or
processes (Noe, 2012;
Piskurich, 2010; Rothwell & Kazanas, 2011). For example, in a
new employee orienta-
tion, cognitive outcomes could include knowing the company
safety rules or product
line or learning the company mission.
• With skills-based or psychomotor learning outcomes, we
assess the level of new
skills as a function of the new learning, as seen, for example, in
newly learned
listening skills, conflict-handling skills, or motor or manual
skills such as com-
puter repair and replacing a power supply (Morrison et al.,
2012; Noe, 2012;
Piskurich, 2010).
• Affective learning outcomes focus on changes in attitudes
as a function of the new
learning (Noe, 2012; Piskurich, 2010). For example, trainees
who learned a different
attitude regarding other cultures following diversity training or
those who gained
a new attitude regarding the importance of safety prevention
after a back injury–
prevention training class have achieved learning outcomes.
As with level 1, evaluations for level 2 are done immediately
after the training event to deter-
mine if participants gained the knowledge, skills, or attitudes
expected (Morrison et al., 2012;
Noe, 2012; Piskurich, 2010). Measuring the learned KSA
outcomes of level 2 requires testing
to demonstrate improvement in any or all level 2 outcomes:
• Cognitive outcomes and new knowledge are typically
measured using trainer-
constructed achievement tests (such as tests designed to
measure the degree of
learning that has taken place) (Duggan, 2013; Noe, 2012;
Piskurich, 2010; Wan, 2013).
• For newly learned motor or manual skills, we can use
performance tests, which
require the trainee to create a product or demonstrate a process
(Duggan, 2013; Noe,
2012; Piskurich, 2010; Wan, 2013).
• Attitudes are measured with questionnaires similar to the
questionnaires described
for level 1 evaluation, with the participants giving their ratings
for various items (for
example, strongly agree, agree, neutral, disagree, or strongly
disagree). They also
include open-ended items to let trainees describe any changed
attitudes in their own
words (for example, “How do you feel about diversity in the
workplace?”) (Duggan,
2013; Kirkpatrick, 2009; Noe, 2012; Piskurich, 2010; Wan,
2013).
With a level 2 posttraining learning evaluation, Kirkpatrick
recommends first giving partici-
pants a pretest before the training and then giving them a
posttest after the training (Cohen,
2005; Kirkpatrick, 1959; Kirkpatrick, 2009; Phillips, 2003;
Piskurich, 2010) to determine if
the training had any effect, positive or negative. Creating valid
and reliable tests is not a casual
exercise; in fact, there is a credential one can attain to become
an expert in testing and evaluation (http://www.itea.org/professional-certification.html). Does the test
measure what it is intended to measure (validity)? If the same test is given 2 months apart,
will it yield the same result (reliability)?
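That second question is what psychometricians call test–retest reliability, and one simple check is to correlate the same trainees' scores across two sittings of the same test. A hedged sketch; the helper function and all scores below are invented for illustration:

```python
# An illustrative test-retest reliability check: correlate the same
# trainees' scores on two administrations of the same level 2 test.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

first_sitting = [72, 85, 90, 60, 78]    # same five trainees,
second_sitting = [70, 88, 91, 58, 80]   # retested 2 months later
print(round(pearson(first_sitting, second_sitting), 2))  # 0.99
```

A correlation near 1.0, as here, suggests the test ranks trainees stably over time; a low correlation would cast doubt on any learning gains the test reports.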
HRD in Practice: A U.S. Department Uses Level 2 Evaluation
The U.S. Department of Transportation uses oral quizzes or
tests for level 2 evaluation. Oral
quizzes or tests are most often given face-to-face and can be
conducted individually or in a
group setting. Here is a typical example of the department’s
level 2 oral quizzing:
1. When it comes to Highway Safety tell me two safety
challenges you are facing right now
in your state or region.
2. What are “special use” vehicles and what is special about
them?
3. What type of crossing is required for train speeds over 201
km/h (125 mph)?
4. Identify the following safety device? …
5. Define what a passive device is? Can anyone give me an
example of a passive device?
6. What are three types of light rail alignments?
7. Why is aiming of roundels so critical? (p. 4)
Source: US Department of Transportation. (2004). Level II evaluation. Washington, DC: Author. Retrieved from
https://www.nhi.fhwa.dot.gov/resources/docs/Level%20II%20Evaluation%20Document.pdf
Consider This
1. Do you think this is a good way to evaluate trainees’
knowledge? Why or why not?
2. Do you think it is better to conduct this oral quiz in a group or individually? Explain your
reasoning.
3. What suggestions could you provide to improve the level 2
oral quizzes for the U.S.
Department of Transportation?
Level 3—Behavior: Did They Apply It?
A level 3 evaluation assesses the transfer of training; that is, do
the participants of the train-
ing program apply their new learning, transferring their skills
from the training setting to the
workplace, and as a result, did the training have a positive
effect on job performance? Level 3
evaluations specifically focus on behavioral change via the
transfer of knowledge, skills, and
attitudes from the training context to the workplace.
However, before assessing skills transfer to the job, let us
consider a practicality to the trans-
fer of training evaluation: We must allow trainees a sufficient
amount of time and opportu-
nity to apply the training skills in the workplace (Piskurich,
2010). The amount of time will
depend on numerous factors, including (ASTD, 2013; Cohen,
2005; Morrison et al., 2012; Noe,
2012; Wan, 2013):
• the nature of the training,
• the opportunity available to implement the new KSAs, and
• the level of encouragement from line management.
Typically, we can confirm transfer by observing the posttrained
participants and conduct-
ing work sampling (Kirkpatrick, 2009; Noe, 2012; Wan, 2013);
evaluation can occur 90 days
to 6 months posttraining (Kirkpatrick, 2009; Tobias & Fletcher,
2000). Figure 7.5 shows an
example of level 3 training results.
Furthermore, as we will discuss in more detail in Chapter 8:
• positive transfer of training is demonstrated when we
observe positive changes in
KSAs, and
• negative transfer is evident when learning occurs, but we
observe that KSAs are at
less-than-pretraining levels (Noe, 2012; Roessingh, 2005;
Underwood, 1966).
As discussed in Chapter 2, a trainee may have learned from the
training but not be willing to
apply the training to the workplace for several reasons. It may
sound something like, “Oh, I
know how to do it, but I am not doing it for you.” This is known
as zero transfer of training,
in which learning occurs, but we observe no changes in trainee
KSAs. So, and perhaps not
surprisingly, there is not a strong positive correlation between
level 2 learning and level 3
behavior (Kirkpatrick & Basarab, 2011). That is, just because
trainees learn something does
not mean they will necessarily apply it. As discussed in
previous chapters, irrespective of
learning the new KSAs and being able to apply them to the
workplace, the trainee must also
be willing to apply them.
Level 4—Results: Did the Organization Benefit?
With a level 4 evaluation, the goal is to find out if the training
program led to improved bottom-
line organizational results (such as business profits). Similar to
the correlation between
levels 1 and 2, studies have shown a correlation between levels
3 and 4 (Kirkpatrick, 2009);
specifically, if employees consistently perform critical on-the-
job behaviors, individual and
overall productivity increase.
Level 4 outcomes can include other major results that contribute
to an organization’s effec-
tive functioning. Level 4 outcomes are either changes in
financial outcomes or changes in
other metrics (for example, excellent customer service) that
should indirectly affect financial
outcomes at some point in the future; these are known as
performance drivers (Swanson,
1995; Swanson & Holton, 2001). Here are some examples of
level 4 performance drivers and
outcomes (Cohen, 2005; Kirkpatrick, 2009; Phillips, 2003;
Piskurich, 2010):
• Improved quality of work
• Higher productivity
• Reduction in turnover
• Reduction in scrap rate
• Improved quality of work life
• Improved human relations
• Increased sales
• Fewer grievances
• Lower absenteeism
• Higher worker morale
• Fewer accidents
• Greater job satisfaction
• Increased profits
Isolating the Effects of Training
A major challenge in evaluating training’s effectiveness is isolating the effect of the training
itself on any subsequent performance improvement. That is, improved performance may coincide
with the timing of the training without being caused by it. Phillips (2003)
calls this the need for isolation. For example, Cohen (2005) described the following
scenario:
Let’s say training was focused on new selling techniques for an
organization’s
sales reps and the post-training assessment of sales and call
volume are found
to be significantly better than the pre-training amounts; this
change could be as
much due to an upward turn in the economy as it is to the
training itself. (p. 23)
In this case, crediting the training for the improvement would be incorrect, so we must guard
against erroneously ascribing to the training performance gains that actually stem from
nontraining causes. To mitigate
this possibility, along with using pretests and posttests in level
2, Kirkpatrick (1959, 2009)
also recommends using control groups to statistically manage
and separate the impact of
other variables. Control groups do not receive the training, or
they go through other train-
ing unrelated to the training of interest, so we can assess the
unique effect of the training
intervention. In Cohen’s example, a control group would
include sales reps not subjected
to the specific training program, and then the control group’s
performance would be com-
pared to the trained group (known as the experimental group) of
sales reps (Cohen, 2005;
Kirkpatrick, 1959; Kirkpatrick, 2009; Phillips, 2003; Piskurich,
2010).
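Numerically, the control-group logic amounts to comparing the two groups' gains over the same period. A sketch of Cohen's sales scenario with invented figures:

```python
# A sketch of Cohen's scenario with invented sales figures: a control
# group lets us separate the training's effect from background changes
# such as an upward turn in the economy.

def training_effect(pre_trained, post_trained, pre_control, post_control):
    """Difference-in-differences: the trained group's gain minus the
    untrained control group's gain over the same period."""
    return (post_trained - pre_trained) - (post_control - pre_control)

# Average monthly sales per rep before and after the training period
effect = training_effect(pre_trained=100_000, post_trained=130_000,
                         pre_control=100_000, post_control=112_000)
print(effect)  # 18000: the gain beyond what untrained reps also achieved
```

If the economy alone lifted everyone's sales by $12,000, only the remaining $18,000 of the trained reps' $30,000 gain can plausibly be attributed to the training.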
Level 4 outcomes in particular may be difficult to isolate to the
training program. This is
because in order to assess any of the level 4 outcomes, more
time must elapse to make a com-
plete assessment. For example, an organization might have to
wait 2 or 3 fiscal quarters to see
if decreased turnover or higher productivity follow training on
those topics. As a result, by the
time of assessment, other factors may have had a chance to
affect the level 4 outcomes. This is
what Sanders, Cogin, and Bainbridge (2013) called a
confounding variable, or another fac-
tor that obscures the effects or the impact of the training
(Guerra-López, 2012). In sum, not
unlike a 7-day weather forecast, a level 4 evaluation—although
still valuable data—is usually
more difficult to credit to the original training because it is the
most removed from the train-
ing event (Johnson & Christensen, 2010; Kirkpatrick, 2009;
Sonnentag, 2003).
Linking Kirkpatrick Outcome Levels to the Performance
Formula
Remember that in Chapter 2, we broke down workplace performance into its components;
specifically, job performance is a function of three variables:
• Ability—the employee’s capacity to perform the job;
collectively, their KSAs
• Motivation—the employee’s willingness to perform the
job voluntarily
• Environment—anything within the organizational
environment (such as the supervi-
sor, systems, and coworkers) that would affect the employee’s
job performance
The Performance Formula
Performance = f(KSAs × M × E)
KSAs = Ability; M = Motivation; E = Environment
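The chapter leaves the function f unspecified; one minimal way to make the multiplicative idea concrete is to treat each factor as a 0-to-1 score and take their product. The scales and all numbers below are illustrative assumptions, not part of the formula itself:

```python
# A minimal sketch of the performance formula, treating f as a plain
# product of 0-to-1 scores. The scales and values are illustrative
# assumptions only.

def performance(ksas, motivation, environment):
    """Performance = f(KSAs x M x E): a zero in any factor zeroes the
    result, mirroring zero transfer (able but unwilling trainees)."""
    return ksas * motivation * environment

print(round(performance(ksas=0.9, motivation=0.8, environment=0.7), 3))  # 0.504
print(performance(ksas=0.9, motivation=0.0, environment=0.7))            # 0.0
```

The multiplicative reading captures why learning alone is not enough: a trainee with strong KSAs but zero motivation still performs at zero.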
Using Kirkpatrick’s taxonomy (see Figure 7.5), we can see
where summative outcomes are
expressed within employee performance (Blanchard & Thacker,
2010; Mitchell, 1982).
Figure 7.5: Synthesizing Kirkpatrick and the performance
formula
By synthesizing Kirkpatrick and the performance formula, we
can illustrate a training’s
impact not only on employee performance, but also on
organizational performance in total.
[Diagram: for each trainee, Performance = f(KSAs × M × E), with level 2 learning and level 1 reaction feeding the formula and the prior state of level 4 supplying the environment; the summation (∑) of all trainees’ level 3 performance shapes the future state of level 4 results]
As Figure 7.5 shows, posttraining employee performance (level
3) is dependent on the effec-
tiveness of both levels 1 and 2, reaction and learning.
Specifically, the newly learned knowl-
edge and skills are in level 2, learning, and the attitudes and
motivation toward the new
learning are in level 1, reaction. Importantly, posttrained
performance is both contingent on
and subsequently affects the organizational environment level 4
outcomes. Specifically, post-
trained employee performance is subject to the antecedent state
of the organizational envi-
ronment (for example, the quality and state of the departmental
supervision would affect
the efficacy of the posttraining employee performance).
However, it is also expected that the
collective performance from the posttrained employee base
would ultimately influence and
affect the future state of the organizational environment and
organizational outcomes and
show itself in level 4 outcomes such as improved customer
service, more efficient systems,
and reduced error rates.
HRD in Practice: The Case of the $25,000 Hello
Adam did a double take at the final invoice the consultants had
faxed in.
$7,000—the bold digits jumped out at him. Adding this invoice
to their first two invoices, the
total for the customer service training was now close to
$25,000.
Man, this training was expensive! Adam thought.
It had all started because the receptionist had greeted a caller
with a dry hello instead of
giving a pleasant greeting and introducing herself, he
remembered. They had had a few
customer complaints about the receptionist’s lack of
pleasantness, but unfortunately, on this
day the caller was the owner, Mr. Lager. “What kind of message
of customer service are we
sending to folks, Adam?” Lager had asked. “I want those
receptionists to make the callers feel
like we are a likeable and friendly company. Take care of it,
and ASAP!”
Since Adam was in charge of administration, he contracted a
customer service training firm
immediately. And it seemed to be good training, too. It had
spanned 2 months, and all the
employees who dealt with customers were required to take it.
Adam received reports that
the trainers were very good; the sessions were said to be fun
and informative. The trainers
made sure the trainees learned new techniques about providing
excellent customer service
by requiring each attendee to pass a customer service test. All
the trainees had earned a
certificate to demonstrate the new learning.
In fact, now, after the training, anyone who called into the
company heard a pleasant and
happy greeting: “Hello, So-and-So speaking. How can I help
you?”
But, $25,000? Was it worth the expense? Adam pondered.
Would this be considered a
questionable return on the company’s training investment?
Consider This
1. What types of financial data could Adam review to establish
the monetary benefits of the
training to support the $25,000 expense and a positive return on
the training investment?
2. What could Adam point to as proof of successful level 1
evaluation?
3. Success in Kirkpatrick’s level 3 is demonstrated in which
part of the case?
Return on Investment Chapter 7
7.4 Return on Investment
As the case of the $25,000 hello illustrates, not only do we want
new learning to be applied
to the workplace and to impact organizational performance, we
also want to do that in the
most cost-effective and efficient way. Summative evaluation
should, in the end, lead to judg-
ments on the value and worthiness of a training program;
therefore, we also evaluate the cost
benefit of a training program and evaluate return on training
investment, the so-called level
5. What Donald Kirkpatrick was to levels 1 to 4, Jack Phillips is
to level 5.
Phillips is an internationally renowned expert on measuring the
return on investment of
human resource development activities. Over the past 20 years,
Phillips has produced more
than 30 books on the subject of ROI and has been a leading
figure in the debate about the
future role of human resources (Noe, 2012; Phillips, 2003;
Piskurich, 2010). ROI, or level 5,
evaluates the benefits of the training versus the costs.
Specifically, at this level we compare
the monetary benefits from the program with the costs to
conduct the training program (Noe,
2012; Phillips, 2003; Piskurich, 2010; Russ-Eft & Preskill,
2009).
According to Phillips (2003), the ROI process must be designed with a number of features in
mind. The ROI process must:
• be simple,
• be economical to implement,
• be theoretically sound without being overly complex,
• account for other factors that can influence the measured
outcomes after training,
• be appropriate in the context of other HRD programs,
• be flexible enough to be applied in pre- and posttraining,
• be applicable to all types of data collected, and
• include the costs of the training and measurement
program.
The two common ways to express training’s return on
investment are a benefit–cost ratio
(BCR) and a return on investment (ROI) percentage. To find the
BCR, we divide the total
dollar value of the benefits by the cost, as shown in the
following formula:
BCR = (Total Dollar Value of Benefits) ÷ (Cost of Training)
We determine ROI percentages by subtracting the costs from the
total dollar value of the ben-
efits to produce the dollar value of the net benefits; these are
then divided by the costs and
multiplied by 100 to develop a percentage:
Total Dollar Benefits – Costs of Training = Net Benefits
Net Benefits ÷ Costs × 100 = ROI
So, for example, if a traditionally delivered training program
produced total benefits of
$221,600 with a training cost of $48,200, the BCR would be
4.6. That is, for every dollar
invested, $4.60 in benefits is returned. The ROI, therefore,
would be 360%. According to
research conducted by SyberWorks, because e-learning
alleviates the need for trainee and
trainer travel, e-learning has ROIs that regularly outperform
traditionally delivered training
(Boggs, 2014).
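The chapter's two formulas are straightforward to compute; here they are applied to the worked example above ($221,600 in total benefits against $48,200 in training costs):

```python
# The chapter's benefit-cost ratio and ROI formulas, applied to its
# worked example: $221,600 in total benefits, $48,200 in training costs.

def bcr(total_benefits, training_cost):
    """BCR = total dollar value of benefits / cost of training."""
    return total_benefits / training_cost

def roi_percent(total_benefits, training_cost):
    """ROI % = (net benefits / costs) x 100."""
    net_benefits = total_benefits - training_cost
    return net_benefits / training_cost * 100

benefits, cost = 221_600, 48_200
print(round(bcr(benefits, cost), 1))       # 4.6 -> $4.60 returned per $1 invested
print(round(roi_percent(benefits, cost)))  # 360 -> a 360% return
```

Note that the two measures are related: an ROI percentage is simply (BCR − 1) × 100, so a BCR below 1.0 always means a negative return.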
Did You Know? Training ROI
Not all return on investment is created equal! Depending on the
industry and/or type of
training, the ROI (measured by the BCR) will vary by sector, as
shown in Table 7.1. As a
result, it is difficult to formulate a rule of thumb about what an
appropriate or fair ROI
should be for a given training intervention. ROI will necessarily
differ from organization to
organization, based on variables such as required financial
margins, stakeholder preferences,
organizational culture, and overall corporate mission. In sum,
and according to training ROI
guru Jack Phillips, ROI sometimes is simply used qualitatively,
just to see if a program is
working or not.
Table 7.1 Examples of benefit–cost ratio per industry
Industry Training program BCR
Bottle company Management development 15:1
Commercial bank Sales training 12:1
Electric utility Soft skills 5:1
Oil company Customer service 5:1
Health care firm Team training 14:1
Source: Based on Phillips, J. J. (2003). Return on investment in
training and performance improvement programs. Oxford,
England: Butterworth-Heinemann.
In context, the significance of ROI—and training itself—means
different things to different
people; that is, different constituencies have different
perceptions of ROI evaluation. For
example, a board of directors may see a big picture of how the
training affects the company’s
ability to achieve its corporate goals; the finance department
may be looking to see how
training stacks up financially against other ways to invest the
company’s money; the depart-
ment manager may be solely concerned with the impact on
performance and productivity in
achieving department goals; and the training and development
manager may be concerned
with how training programs affect the credibility and status of
the company’s training func-
tion (Hewlett-Packard, 2004; Phillips & Phillips, 2012; Russ-Eft
& Preskill, 2009).
While ROI is seen as beneficial, determining ROI can be a time-
consuming endeavor; in fact,
for that reason, Phillips (2003) asserts that evaluating the ROI
of a learning event is not appro-
priate in every situation. Specifically, Phillips and Phillips
(2012) suggest that calculating ROI
does not add value in the following situations:
• If activities are very short, it is unlikely that any
significant change in behavior will
have resulted.
• If activities are required by legislation or regulation,
evaluators will have little power
to initiate changes because of their findings.
• If activities are used to provide learners with the basic technical know-how to perform their role, ROI data will be meaningless; because such training is not optional, Phillips argues that evaluating to level 3 is more appropriate in these situations.
Hard Data Versus Soft Data
Part of the overall challenge in computing returns on investment
in training concerns how
we determine costs and benefits with regard to tangible and
intangible data. For example,
intangible or indirect training benefits such as customer
satisfaction, improved work rela-
tionships, and organizational morale are more difficult to put a
dollar amount on than are
tangible or direct benefits such as lower turnover, fewer
workplace injuries, and decreased
workers’ compensation premium costs. Training costs, too, can
be direct or indirect. Direct
costs include all expenses related to facilitating the training;
examples are the cost of hiring
a consultant, conference room fees, equipment rental, and
employee travel costs (Piskurich,
2010). Indirect costs of training may include such personnel
expenses as salary costs and the
costs of lost sales while employees are at training (Piskurich,
2010).
Tangible, direct data are also easier to record and list. Training expense, for example, comes directly off an organization's income statement. Well-trained workers, although an asset and a good predictor of tangible outcomes, are considered off-balance-sheet assets and are not as easily tracked in organizational accounting systems (Brimson, 2002; Weatherly, 2003).
Data Gathering Methods
We need data to compute ROI, and we can choose from a
variety of data gathering methods.
As Figure 7.6 depicts, a review of data gathering methods shows
that follow-up surveys of
participants, action planning—such as “asking participants to
isolate the impact of the train-
ing” (Phillips & Phillips, 2012, p. 95)—performance records
monitoring, and job observation
were the preferred data collection methods.
Figure 7.6: Data gathering methods
In gathering data to compute ROI, each method has its pros and
cons. Methods vary from
surveys (the most popular method in a recent survey) to
interviews and focus groups, which
are more complex and take more time.
[Bar chart: percentage who use each approach to a high or very high extent, in descending order: follow-up surveys of participants, action planning, performance records monitoring, observation on the job, program follow-up session, follow-up surveys of participants' supervisors, interviews with participants, interviews with participants' supervisors, follow-up focus groups. Horizontal axis: 0 to 60%.]
Source: American Society for Training & Development, 2013;
Phillips & Phillips, 2012.
Each data gathering method has unique advantages and disadvantages, including trade-offs between data collection time and the cost of collecting the data, as well as the fact that some methods require a special skill set (for example, knowing how to conduct a focus group). Additionally, each method yields soft and/or hard cost–benefit data, so subsequent analyses may be more complex.
As the next section will discuss, for these and other reasons, evaluation is often postponed or neglected outright.
Evaluation: Essential, but Often Neglected Chapter 7
7.5 Evaluation: Essential, but Often Neglected
Perhaps not surprisingly, many organizations neglect or overlook the higher levels of evaluation. Some surveys show that only about 20% of
organizations conduct a formal evalu-
ation of training’s effectiveness (ASTD, 2013; Brown &
Gerhardt, 2002; Noe, 2012; Russ-Eft &
Preskill, 2009; Wang & Wilcox, 2006; Werner & DeSimone,
2011).
The reasons for not conducting training evaluation are varied.
Russ-Eft and Preskill (2009) researched the prevailing reasons why evaluation is not done more often within organizations; notably, their findings include the view that
organizations do not value evaluation
in general. This may be a function of many things, including the
organization lacking expertise
in performing evaluations, a fear as to what the evaluation may
yield, and even the practical
rationale that no one has asked for it!
In the final analysis, neglecting evaluation is not only
unprofessional, it may also be unethical (see the Food for Thought feature box titled "Application of
Evaluation”). We will look
further into the ethics of training in Chapter 10.
Food for Thought: Application of Evaluation
There are organizations that prioritize quality evaluations to
maintain the integrity of the
business. For example, the American Evaluation Association (http://www.eval.org) includes high-quality evaluation among the value statements in its code of ethics, guiding organizations toward socially responsible evaluation practices. Specifically, the
association’s value statements in the practice of evaluation are
as follows:
• We value high quality, ethically defensible, culturally
responsive evaluation practices
that lead to effective and humane organizations and ultimately
to the enhancement of
the public good.
• We value high quality, ethically defensible, culturally
responsive evaluation practices
that contribute to decision-making processes, program
improvement, and policy
formulation.
• We value a global and international evaluation community
and understanding of
evaluation practices.
• We value the continual development of evaluation
professionals and the development
of evaluators from under-represented groups.
• We value inclusiveness and diversity, welcoming members
at any point in their career,
from any context, and representing a range of thought and
approaches.
• We value efficient, effective, responsive, transparent, and
socially responsible
association operations. (American Evaluation Association,
2013)
Consider This
1. What does the American Evaluation Association mean by
culturally responsive evaluation
practices?
2. How would ethical evaluation within an organization impact
the public good?
Beyond its ethical dimension, at its core evaluation's objective is not only to ascertain whether an organization's training programs are effective but also, when training is ineffective, to produce data that hold those responsible for the training accountable.
Sampling of Evaluation Models
Besides Kirkpatrick’s and Phillips’s, there are, of course, other
evaluation models. However—
and perhaps not surprisingly—many of the evaluation models
are variations on the same
themes. That is, evaluation models tend to assess the individual,
process, and organizational
levels, as well as consider the environment or context in which
the training takes place. Let us
look at some other popular evaluation models used.
Stufflebeam’s CIPP
The CIPP model of evaluation was developed by Daniel
Stufflebeam and colleagues in the
1960s. CIPP is an acronym for “context, input, process, and
product.” This evaluation model
requires the evaluation of context, input, process, and product
in judging a program’s value.
CIPP is a decision-focused approach to evaluation; it
emphasizes the systematic provision of
information for program management and operation. As shown
in Table 7.2, the CIPP model
is an attempt to make evaluation directly relevant to the needs
of decision makers during a
program’s different phases and activities.
Table 7.2: The CIPP model of evaluation
Aspect of evaluation Type of decision Kind of question
answered
Context evaluation Planning decisions What should we do?
Input evaluation Structuring decisions How should we do it?
Process evaluation Implementing decisions Are we doing it as
planned? And if
not, why not?
Product evaluation Recycling decisions Did it work?
Source: Stufflebeam, D. L., & Shinkfield, A. J. (2007).
Evaluation theory, models, and applications. New York: Wiley.
Reprinted with permission.
Kaufman’s Five Levels of Evaluation
Roger Kaufman (Kaufman, 1999) originally created a four-level
assessment strategy called
the organizational elements model; a modification to the model
resulted in the addition of
a fifth level, which assesses how the performance improvement
program contributes to the
good of society in general, as well as satisfying the client.
Kaufman’s evaluation levels are
shown in Table 7.3.
Table 7.3: Kaufman’s five levels of evaluation
Level Evaluation Focus
5 Societal outcomes Societal and client responsiveness,
consequences and payoffs.
4 Organizational output Organizational contributions and
payoffs.
3 Application Individual and small group (products) utilization
within the organization.
2 Acquisition Individual and small group mastery and
competency.
1b Reaction Methods’, means’, and processes’ acceptability and
efficiency.
1a Enabling Availability and quality of human, financial, and
physical resources input.
Source: Kaufman, R. (1999). Mega planning: Practical tools for organizational success. SAGE Publications. Excerpted from p. 6, Table 1.1, of Kaufman, R. (2008). The assessment book. HRD Press. ISBN 9781599961286. Reprinted with permission.
CIRO: Context, Input, Reaction, and Outcome
The CIRO (context, input, reaction, and outcome) four-level
approach was developed by Peter
Warr, Michael Bird, and Neil Rackham (Warr, Bird, &
Rackham, 1971). Adopting the CIRO
approach to evaluation gives employers a model to follow when
conducting training and
development assessments. Employers should conduct their
evaluation in the following areas:
• C—Context or environment within which the training took
place. Evaluation here
goes back to the reasons for the training or development event
or strategy. Employers
should look at the methods used to decide on the original
training or development
specification. Employers need to look at how the information
was analyzed and how
the needs were identified.
• I—Inputs to the training event. Evaluation here looks at
the planning and design
processes, which led to the selection of trainers, programs,
employees, and materials.
Determining the appropriateness and accuracy of the inputs is
crucial to the success
of the training or development initiative.
• R—Reactions to the training event. Evaluation methods
here should be appropriate
to the nature of the training undertaken. Employers may want to
measure the reac-
tion from learners to the training and to assess the relevance of
the training course to
the learner’s roles. Assessment might also look at the content
and presentation of the
training event to evaluate its quality.
• O—Outcomes of the training event. Employers may want
to measure the levels at
which the learning has been transferred to the workplace. This
measurement is easier
when the training involves hard and specific skills—as would be
the case for a train
driver or signal operator—but is harder for softer and less
quantifiable competencies,
including behavioral skills. If performance is expected to
change because of training,
then the evaluation needs to establish the learner’s initial
performance level.
It is fair to say that, although many of the evaluation models
may vary around the same
themes, certain evaluation models may be more appropriate to
use than others, depending on
the context and focus. For example, whereas the Kirkpatrick and
CIPP models focus on train-
ing evaluation, they do not underscore the evaluation of the
financial returns on investment
like Phillips’s model. Likewise, unlike other tactical evaluation
models, Kaufmann’s model,
because of its focus on societal outcomes, is not limited to
training initiatives and may be
used more broadly in other evaluative contexts such as
consumer marketing or evaluating an
organization’s corporate citizenship efforts.
HRD in Practice: Back to the Case of the $25,000 Hello
When we last left Adam, he was pondering whether the $25,000
expense for the customer
service training was worth it. Adam wondered, “Would this be
considered a questionable
return on the company’s training investment?” After performing
a return on investment for
the training program, Adam realized that, in fact, the training
was not cost effective, with a
–1.6% ROI. Tables 7.4 and 7.5 show some of Adam’s analysis,
in which he found the benefits
of the training were $24,615 but the direct costs were $25,000:
Table 7.4: Adam’s ROI analysis
Task Result
1. Focus on a unit of measure. Reduction in number of
complaints.
2. Determine a value of
each unit.
Take an average cost per complaint; include direct and indirect
costs—in this case $547.
3. Calculate the change in
performance data.
Six months after the program, there were 50 fewer complaints,
with 30 of those directly attributed to supervisors as a result of
techniques taught in the training program.
4. Determine an annual
amount for the change.
It was decided an annual reduction of 45 complaints was conser-
vative and realistic.
5. Calculate the total value of
the improvement.
Total value of improvement attributable to training was 45 ×
$547
= $24,615.
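Adam's five steps boil down to the same arithmetic shown earlier in the chapter. A minimal Python check, using only the figures from Table 7.4 (45 complaints avoided per year at an average of $547 each, against $25,000 in direct training costs), confirms the negative return, which works out to roughly –1.5%:

```python
value_per_complaint = 547   # average direct and indirect cost per complaint
complaints_avoided = 45     # conservative annual reduction attributed to training
training_cost = 25_000      # direct cost of the program

annual_benefit = complaints_avoided * value_per_complaint   # $24,615
net_benefit = annual_benefit - training_cost                # -$385
roi = net_benefit / training_cost * 100                     # roughly -1.5%

print(annual_benefit, net_benefit, round(roi, 2))  # 24615 -385 -1.54
```

The benefit–cost ratio tells the same story: 24,615 / 25,000 is about 0.98, that is, less than a dollar returned per dollar invested.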
Table 7.5: Other organizations’ training ROI that Adam
researched
Study or setting | Target group | Program description | Business measures | ROI
Verizon Communications | Training staff, customer service | Customer service skills training | Reduced call escalations | –85%
Retail Merchandise Company | Sales associate | Retail sales skills | Increased sales revenues | 118%
U.S. Department of Veterans Affairs | Managers, supervisors | Leadership competencies | Cost and time savings, reduced staff requirements | 159%
Source: Phillips, J. J., & Phillips, P. P. (2006). The ROI
fieldbook. Copyright © 2006 International Society for
Performance Improvement. New York: Wiley.
Reprinted with permission of John Wiley and Sons.
During his research on training evaluation, Adam saw that the
results could have been much
worse; in fact, he read that Verizon had a more extensive
customer service training that had
an astounding –85% ROI! “Wow!” Adam uttered aloud,
“Evaluation cannot be overlooked!”
Summary and Resources Chapter 7
Consider This
1. Adam agreed that the skills outcome for the customer service
training was a success.
Specifically, after the training, anyone who now called the
company heard a pleasant and
happy greeting: "Hello, So-and-so speaking. How can I help
you?” In the final analysis,
does it really matter if the ROI was approximately –1.5%?
2. What measures could Adam have taken to ensure a positive
ROI?
3. Do you think the training company that Adam contracted had
an ethical obligation to
ensure a positive ROI? Specifically, could they have charged
less and gotten the same result?
Summary and Resources
Chapter Summary
• The focus of formative evaluation is the evaluation of the
process, as the training is
forming; summative evaluation, however, focuses on the outcomes and specific training results—both the learning and performance.
• For summative evaluation, we used Kirkpatrick’s four-
level taxonomy, which is
depicted as a pyramid showing the four stages of evaluation:
reaction, learning,
behavior, and results.
• The chapter also discussed return on investment,
sometimes known as level 5. With
ROI, we can check to see how cost-effective and efficient the
training program is,
which in turn can lead to judgments on the value of training. A
particular challenge in
computing returns on investment in training concerns tangible
data versus intangible
data, also known as hard versus soft data.
• Finally, we discussed why organizations often neglect
evaluation. The number one
reason is that organization members do not value evaluation. In
sum, neglecting train-
ing evaluation may be not only unprofessional, but also
unethical.
Posttest
1. Summative evaluation of training assesses both __________.
a. learning-based and performance-based outcomes
b. training processes and training outcomes
c. readability and usability of the training materials
d. beta testing and pilot testing results
2. __________ occur(s) when employees apply new information learned in training to their jobs.
a. Summative evaluation
b. Accountability
c. Transfer of training
d. Organizational results
3. Trainees who gain a new attitude after diversity training have
achieved which type of
learning outcome?
a. a cognitive outcome
b. a psychomotor outcome
c. an affective outcome
d. a performance-based outcome
4. A trainee who learned from a training but does not
demonstrate any resulting change
in knowledge, skills, or attitudes is exhibiting __________.
a. negative transfer
b. passive transfer
c. null transfer
d. zero transfer
5. Which possible outcomes of training, in Kirkpatrick’s model,
are the hardest to isolate
to a particular training program?
a. level 1
b. level 2
c. level 3
d. level 4
6. A manager translates a safety training’s results into a dollar
amount, determining
how much money has been saved by reducing workplace
accidents. She next divides
this amount by the total amount the company paid to hold the
training. Which calcu-
lation is the manager using?
a. a net benefit indicator
b. the benefit–cost ratio
c. the return on investment percentage
d. a monetization equation
7. Which of the following is considered an indirect or intangible
benefit of training?
a. reduced job turnover
b. decreased injuries in the workplace
c. improved customer satisfaction
d. lower costs of workers’ compensation
8. The number one reason more organizations do NOT conduct
formal evaluations of
trainings is that __________.
a. organization members do not believe evaluation is valuable
b. organization members lack understanding of the evaluation’s
purpose
c. the costs of an evaluation outweigh the
benefits
d. the organization has had previous negative experiences with
evaluation
9. Which model of program evaluation looks at how a program
not only satisfies a client
but also contributes to society?
a. Kirkpatrick’s four-level taxonomy
b. The CIPP model
c. Kaufman’s five levels of evaluation
d. The CIRO approach
10. Which evaluation model focuses on evaluation as an
approach to decision making?
a. Kirkpatrick’s four-level taxonomy
b. The CIPP model
c. Kaufman’s five levels of evaluation
d. The CIRO approach
Assess Your Learning: Critical Reflection
1. Explain how formative evaluation is linked to summative
evaluation in the training
evaluation process.
2. How dependent is level 1, reaction, on level 2, learning? How
might a trainee learn
something from a training workshop he or she thought was
awful?
3. Could you make a case for continuing with a training
program that is yielding a nega-
tive ROI?
4. If a training program is found to have a positive ROI, does
this measure indicate that
the training should be renewed? If not, why?
5. Describe some ethical problems that might occur if training
evaluation is neglected.
6. As it relates to levels 2 and 3, learning and behavior, what is
meant by the statement
"not everything learned is observable"?
Additional Resources
Web Resources
Jack Phillips’s ROI Institute: http://www.roiinstitute.net
The Bottom Line on ROI: The Jack Phillips Approach. Canadian
Learning Journal, 7(1),
Spring 2003:
http://www.learning-designs.com/page_images/LDOArticleBottomLineonROI.pdf
Evaluation of Training Effectiveness:
http://www.youtube.com/watch?v=5HqEfxz5YNU
For information on outcome evaluation:
http://www.tc.umn.edu/~rkrueger/evaluation_oe.html
For more on Kirkpatrick’s four levels of evaluation model:
http://www.businessballs.com/kirkpatricklearningevaluationmodel.htm
A government website on training and development policy:
http://www.opm.gov/wiki/training/Training-Evaluation.ashx
More information on how to measure training effectiveness:
http://www.sentricocompetencymanagement.com/page11405617.aspx
Summary and Resources Chapter 7
More on formative and summative evaluation:
http://www.nwlink.com/~donclark/hrd/isd/types_of_evaluations.html
More on ROI in Training and Development:
http://www.shrm.org/education/hreducation/documents/09-0168%20kaminski%20roi%20tnd%20im_final.pdf and
http://www.shrm.org/Education/hreducation/Pages/ReturnonInvestmentTrainingandDevelopment.aspx
Measuring ROI on learning and development:
http://www.astd.org/Publications/Books/Measuring-ROI
Further Reading
American Society for Training & Development. (2013). State of
the industry report. Alexan-
dria, VA: ASTD.
Boggs, D. (2014). E-learning benefits and ROI comparison of e-
learning vs. traditional train-
ing. Retrieved from SyberWorks website:
http://www.syberworks.com/articles/e-learningROI.htm
Clark, D. (2013). Introduction to instructional system design.
Retrieved from Big Dog & Little
Dog’s Performance Juxtaposition website:
http://www.nwlink.com/~donclark/hrd/sat1.html
Kirkpatrick, D. L. (2009). Evaluating training programs: The
four levels. Berrett-Koehler.
Phillips, J. J., & Phillips, P. P. (2012). Proving the value of HR:
How and why to measure ROI.
Alexandria, VA: Society for Human Resource Management.
Piskurich, G. M. (2010). Rapid training development:
Developing training courses fast and
right. New York: Wiley.
US Department of Health and Human Services. (2013). Tips and
recommendations for suc-
cessfully pilot testing your program. Retrieved from
http://www.hhs.gov/ash/oah/oah-initiatives/teen_pregnancy/training/tip_sheets/pilot-testing-508.pdf
Answers and Rejoinders to Chapter Pretest
1. true. Formative evaluation can be seen as a “try it and fix it”
process, since it takes
place while the training is still being developed. Ideally, any
deficiencies are uncov-
ered before the program is offered to an external audience.
2. false. Although trainees’ feedback can reveal a lot about
trainers’ strengths and weak-
nesses, this is not typically the main reason for evaluating
whether trainees found a
session interesting and useful. More significantly, employees’
satisfaction with a train-
ing session predicts how much they learn from it.
3. false. Although useful in many situations, return on
investment is time-consuming to
calculate and is NOT valuable in all situations. For example,
trainings that are very
short, are required by legislation, or are necessary for learners
to gain basic skills for
their roles will not benefit from having return on investment
calculated.
4. true. According to some surveys, only about 20% of
organizations conduct formal
evaluations of the effectiveness of their trainings, despite the
fact that many experts
consider this unprofessional.
5. true. The American Evaluation Association holds that high-
quality evaluation is
an essential part of organizations’ social responsibility. High-
quality evaluation is
included in the association’s code of ethics for organizations.
Answers and Rejoinders to Chapter Posttest
1. a. Summative evaluation looks at both the short-term
learning-based outcomes and
the long-term performance-based outcomes of a training.
Learning-based outcomes
include employees’ assessments of whether or not they learned
anything, whereas
performance-based outcomes address how the training
influenced the employees’
behavior or the organization’s return on investment.
2. c. Transfer of training describes the extent to which trainees
apply what they learned
in the training to the workplace, transferring their new learning.
It is a longer-term,
performance-based outcome measured in summative evaluation.
3. c. As opposed to cognitive outcomes, which link to fact- or
procedure-based knowl-
edge, and psychomotor outcomes, which link to skills, affective
learning outcomes
describe changes in attitude as a result of the new learning.
4. d. In zero transfer of training, evaluations show that learning
has occurred, but no
changes in KSAs are observed. This tends to occur when a
trainee is able to apply
learning but is not willing to apply it.
5. d. Level 4 outcomes include improvements to the overall
organization’s functioning
and bottom line. These are particularly difficult to isolate or
attribute to a training,
because time must elapse before they can be evaluated. During
that time, other fac-
tors may have influenced the overall organization, making it
hard to know whether
the training was responsible for the results.
6. b. The benefit–cost ratio (BCR) divides the total dollar value
of a training’s benefits by
the cost of the training. BCR is one common way of expressing
a training’s return on
investment.
7. c. Lower job turnover, reduced workers’ compensation
premiums, and decreased
workplace injuries are all examples of direct or tangible
benefits. Improved customer
satisfaction, on the other hand, is considered an indirect or
intangible benefit, along
with others such as improved work relationships and
organizational morale. It is
harder to assign a dollar amount to these intangible benefits.
8. a. The most common reason organizations do not conduct
evaluations is that they do
not yet understand the benefits that evaluation can bring. The
explanations for this
are varied and may include a lack of understanding of how
evaluation is used.
9. c. Kaufman’s original four-level organizational elements
model was modified to add
a fifth level that addresses societal outcomes. This level looks
at how a performance
improvement program benefits clients and society as a whole.
10. b. The CIPP model is a decision-focused approach that
attempts to directly relate
evaluation to the needs of program decision makers. It
emphasizes systematically
providing the information needed for program operation and
management.
Key Terms
accountability The willingness to accept
responsibility or to account for one’s actions.
achievement tests Tests designed to mea-
sure the degree of learning that has taken
place.
affective outcomes Attitudes; focuses on
changes in attitudes as a function of the new
learning.
antecedent state The organizational
environment prior to the training, on which
posttrained performance depends; for
example, how effective and efficient the
performance is.
cognitive outcomes Knowledge; outcomes
that show the degree to which trainees
acquired new knowledge, such as principles,
facts, techniques, procedures or processes.
confounding variable Any factor that
obscures the effects or the impact of the
training.
control group A group used in order to
statistically manage and separate the impact
of other variables so that the unique effect of
the training intervention can be assessed.
cost benefit The relationship between the
cost of an action and the value of the results.
experimental group A group of subjects
exposed to an experimental study.
four-level training evaluation tax-
onomy A theory developed by Donald
Kirkpatrick and used to determine the
effectiveness of the training and develop-
ment process, depicting both the short-term
learning outcomes and the long-term per-
formance outcomes at four levels: reaction,
learning, transfer, and results.
future state The posttraining organiza-
tional environment, on which performance
from a well-trained employee base has an
effect; for example, a more effective and
efficient environment.
isolation Isolating any subsequent perfor-
mance improvement to the training itself.
learning outcomes Results that are estab-
lished during the analysis and design phases
of ADDIE; cognitive outcomes (knowledge),
psychomotor outcomes (skills), and affective
outcomes (attitudes).
negative transfer A transfer demonstrated
when KSAs are at less-than-pretraining
levels.
organizational results Outcomes or
results that contribute to the functioning of
an organization, such as business profits.
performance drivers Changes in financial
outcomes or other metrics that should indi-
rectly affect financial outcomes in the future.
performance tests Tests that require the
trainee to create a product or demonstrate a
process.
positive transfer A transfer demonstrated
when positive changes in KSAs are observed.
posttest A test administered after a pro-
gram to assess the level of a learner’s knowl-
edge or skill.
psychomotor outcomes Skills; assess-
ment is based on the level of new skills as
a function of the new learning, as seen, for
example, in newly learned listening skills,
conflict-handling skills, or new motor or
manual skills.
questionnaires A set of evaluation ques-
tions asked of participants, who give their
ratings for various items (for example,
Strongly Agree, Agree, Neutral, Disagree, or
Strongly Disagree); or open-ended items
that allow participants to respond to any
changed attitudes in their own words (for
example, “How do you feel about diversity in
the workplace?”).
reaction The first level in Kirkpatrick’s
four-level training evaluation, in which the
evaluation assesses whether the trainees
liked the training session per se; it is also a
good predictor of the effectiveness of the
next two levels of evaluation.
return on investment (ROI) percentage
A percentage calculated by subtracting the
costs from the total dollar value of the ben-
efits to produce the dollar value of the net
benefits, and then dividing this amount by
the costs and multiplying the result by 100
to produce a percentage.
return on training investment (ROI) An
analysis that evaluates the cost benefit
of a training program via evaluation of
the benefits of the training versus the
costs; sometimes called level 5 on top of
Kirkpatrick’s four-level training evaluation
taxonomy.
transfer of training An evaluation that
assesses whether the participants of the
training program applied their new learning
from the training setting to the workplace;
the ability of trainees to apply to the job the
knowledge and skills they gain in training.
zero transfer A transfer of training that
is demonstrated if learning occurs but no
changes are observed in KSAs.
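As a minimal sketch of the ROI percentage defined in the glossary above (the function name and dollar figures are hypothetical, chosen only for illustration):

```python
def roi_percentage(benefits, costs):
    """ROI % = net benefits (benefits - costs) divided by costs, times 100."""
    net_benefits = benefits - costs
    return net_benefits / costs * 100

# Hypothetical program: $50,000 in measured benefits against $20,000 in costs
print(roi_percentage(50_000, 20_000))  # 150.0
```

A result of 150% means the program returned $1.50 in net benefits for every dollar spent; a negative result means the training cost more than the dollar value of its benefits.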
8Transfer of Training
Lisa F. Young/iStock/Thinkstock
Learning Objectives
After reading this chapter, you should be able to:
• Explain the framework for training transfer.
• Describe the accountability for transfer of training.
• Summarize the barriers to transfer.
• Understand how the learning organization supports
transfer.
While Mark Twain once said, “Everybody talks about the
weather, but nobody
does anything about it,” the same could be said about training
transfer:
“Everybody talks about training transfer, but nobody does
anything about it.”
—Anonymous
Introduction Chapter 8
Pretest
1. Supervisors’ support is essential for helping employees
transfer what they learned in
training to their job tasks.
a. true
b. false
2. Trainees who have more responsibility for their own learning
are more likely to trans-
fer that learning from one situation to another.
a. true
b. false
3. The trainer is generally considered the party most responsible
for whether trainees
apply the new learning to their work.
a. true
b. false
4. Research has found that most barriers to trainees’ application
of new skills are caused
by the trainees themselves.
a. true
b. false
5. In a learning organization, team and group learning take
precedence over personal
mastery.
a. true
b. false
Answers can be found at the end of the chapter.
Introduction
As early as 1957 James Mosél, a professor of psychology at
George Washington University
and the founding director of the university’s industrial
psychology program, observed that
training often seemed to make little or no difference in job
behavior (Broad, 2005; Mosél,
1957). Since that time, training transfer (Kirkpatrick’s level
3)—the degree to which train-
ees demonstrate new behaviors by effectively applying to the
job the KSAs gained in a train-
ing context—has been what Dennis Coates (2008), the CEO of
Performance Support Systems,
calls the Holy Grail of workplace training programs. In fact,
more than half a century later, two
separate longitudinal research studies that aggregated individual
studies of training transfer
estimated that still as little as 10 to 20% of the knowledge or
skills taught in training pro-
grams is effectively transferred to the workplace (Arthur,
Bennett, Edens, & Bell, 2003; Van
Wijk, Jansen, & Lyles, 2008).
As this chapter will discuss, training transfer not only depends
on the trainee’s willingness
and ability, but also on an organizational climate that
encourages transfer—both tactically
and strategically. The importance of the organizational climate
is seen, for example, in a learn-
ing organization (Senge, 1990), an organization that, through
sharing and dialogue, promotes
positive training transfer. This chapter will also discuss whether
supervisors, trainees, or
trainers are responsible for the transfer of training (Broad,
2005; Kopp, 2006).
8.1 A Framework for Training Transfer
As Figure 8.1 shows, Baldwin and Ford (1988) first illustrated
the process of training trans-
fer by showing how, in addition to learning (level 2) from the
training, training transfer was
linked to three factors or dimensions, namely: trainee
characteristics, training design, and
work environment. The premise here is that each factor
contributes to the success of training
transfer and therefore to workplace performance. Let us break
down each factor.
Figure 8.1: Training transfer model
There are key dimensions linked to the transfer of training
including trainee characteristics,
the training design, and the work environment itself.
Learning → Transfer → Performance
Trainee characteristics
• Ability
• Motivation
Training design
• Principles of learning
• Training content
Work environment
• Support
• Opportunity to use
Source: Adapted from Baldwin, T. T., & Ford, J. K. (1988).
Transfer of training: A review and directions for future
research. Personnel Psychology, 41, 63–105.
Trainee Characteristics
Trainee characteristics include how willing and able the trainee
is to apply the training.
Therefore, although other factors will influence whether the
training is transferred, transfer
depends in no small part on the states of ability and willingness,
as Table 8.1 summarizes.
The desired posttraining state is one in which the trainee is able
and willing to apply the
new learning to the job. As Chapter 2 discussed, specific
leadership styles, per Hersey and
Blanchard’s situational leadership theory, can influence or act
upon a follower’s willing-
ness and ability (Daft, 2014; Hersey & Blanchard, 1977). For
example, with a willing and
able (R4) trainee, the transfer is voluntary, and following
training, a supervisor might merely
monitor the trainee to ensure that workplace barriers are
limited.
Table 8.1: Trainee ability and willingness to transfer
Trainee type Ability Willingness Transfer potential
R1 – – None
R2 – + Low; stimulated
R3 + – Low; stimulated
R4 + + High; voluntary
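The combinations in Table 8.1 can be expressed as a simple lookup. This is an illustrative sketch only: the readiness labels come from the table, but the dictionary and function names are hypothetical, not part of the Hersey and Blanchard model.

```python
# Hypothetical lookup based on Table 8.1: (able, willing) -> (type, transfer potential)
TRANSFER_POTENTIAL = {
    (False, False): ("R1", "None"),
    (False, True): ("R2", "Low; stimulated"),
    (True, False): ("R3", "Low; stimulated"),
    (True, True): ("R4", "High; voluntary"),
}

def assess_trainee(able, willing):
    """Return the trainee type and transfer potential for an ability/willingness pair."""
    return TRANSFER_POTENTIAL[(able, willing)]

print(assess_trainee(able=True, willing=False))  # ('R3', 'Low; stimulated')
```

The lookup makes the pattern explicit: only the R4 combination yields voluntary transfer; every other combination calls for some supervisory intervention.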
For a trainee who remained not able but willing (R2) following
a training, a supervisor
might spend more time explaining and clarifying the training to
the trainee. Doing so might
uncover not only a need for additional training, but also perhaps
a learning style or disability
issue the employer needs to accommodate. For example, in the
United Kingdom, new legislation requires all workplaces to be dyslexia friendly (Dyslexia Action, n.d.).
Trainees who are able but not willing (R3) to apply the new
learning to the workplace may
need an attitudinal intervention; in these cases the supervisor
intervenes with the trainee
to address aspects of self-efficacy, commitment, or
interpersonal skills (James, 1890; Noe,
2012). The goal of these interventions with R2 and R3 trainees
is for the supervisor to stimu-
late the transfer that does not happen voluntarily (Broad, 2000;
Broad, 2005).
Did You Know? Transfer of Learning Versus
Transfer of Training
Semantically, although some assert that the terms transfer of
learning and transfer of
training are synonymous (Cormier & Hagman, 1987), sometimes
distinctions are made. One
distinction is when the focus is on cognition and knowledge
acquisition—underscoring that
not all that is learned is observable. For example, when a new
customer service agent tries
out the newly memorized sales script on a caller, the term
transfer of learning may be more
appropriate. When there is a focus on the transfer of particular
motor skills and outcome-
based behavior, such as when an employee from a cable
company is trying for the first time
to hook up a DVR to a television, then transfer of training
would be used.
Finally, if trainees routinely leave the training programs unable
and unwilling (R1) to
apply the new learning, this outcome suggests a systemic
problem; perhaps management
should review recruiting practices with the human resources
department (Alagaraja, 2012;
Blanchard & Thacker, 2010).
Training Design
Training design is the dimension of the transfer framework that
refers to factors built into
the training program to increase the chances that transfer of
training will occur (Baldwin &
Ford, 1988; Ford, 2014; Noe, 2012; Werner & DeSimone,
2011). Two particular theories of
transfer have implications for training design: theory of
identical elements and cognitive
theory, first proposed by Edward Thorndike in 1928.
Theory of Identical Elements
The theory of identical elements uses the idea that the amount
of transfer between the famil-
iar situation and the unfamiliar one is determined by the number
of elements that the two
situations have in common (Thorndike & Woodworth, 1901).
That is, transfer of training is
enhanced when what trainees learn in the training session
matches what they will be doing
on the job (Orata, 2013; Thorndike & Woodworth, 1901). In his
experiment to underscore the
importance of identical elements, Thorndike had participants
judge the area of rectangles,
and then he tested participants on the related task of estimating
the areas of circles and triangles. Transfer was assessed by the degree to which learning skill A (estimating the area of rectangles) influenced skill B (estimating the area of circles or triangles). Thorndike found little
evidence of transfer and, from this finding, concluded that
“transfer of a skill was directly
related to the similarity between two situations” (Thorndike &
Woodworth, 1901, p. 15).
As a result, transfer is based on making the training environment similar to the job environment; this is known as near transfer—metaphorically, a short transfer distance between the training environment and the application to the job environment
(Ford, 2014; Holton & Bald-
win, 2003; Wan, 2013). An example of near transfer would be a
training for a department
store cashier in which new employees train on a cash register
that matches the registers the
department actually uses.
An extension of the theory of identical elements is the concept
of stimulus generalization,
which emphasizes the transfer of general principles and
maintenance of skills. This concept
is known as far transfer, the application of learned behavior,
content knowledge, concepts,
or skills in a situation that is dissimilar to the original learning
context (Ford, 2014; Holton &
Baldwin, 2003). Suppose that a trainee had learned from a
workshop to use conflict-handling
skills not only at work, but also at home with his spouse; this
situation would be an example
of far transfer. Table 8.2 gives some everyday examples of near
and far transfer.
Table 8.2: Examples of near and far transfer
Near: Transfer from using one type of coffee mug to another type of mug
Far: Transfer from drinking hot coffee using a mug to drinking hot coffee using a thermos (rule: do not burn yourself)
Near: Transfer from using one shuttle bus to another
Far: Transfer from reading the shuttle bus schedule to reading an airline schedule
Near: Transfer from using a knife and fork to using a different-size knife and fork
Far: Transfer from using a knife and fork to using chopsticks
Source: Adapted from Svinicki, M. D. (2004). Learning and
motivation in the postsecondary classroom. New York: Wiley.
If we consider near and far transfers as transfer outcomes, then
the processes of transfer linked
to near and far are known as low-road transfer and high-road
transfer (Doyle, McDonald, &
Leberman, 2012; Perkins & Salomon, 1988; Salomon & Perkins,
1989). Specifically, low-road
transfer, which facilitates near transfer, occurs when the context
is so familiar or perceptually
similar (Ford, 2014; Svinicki, 2004) to what the trainee already
knows that a reflexive or auto-
matic triggering of transfer occurs without conscious
contemplation; this unconscious com-
petence is known as automaticity (Bargh, 2013). For example, a
trainee hired as a stockroom
forklift operator who has experience driving Caterpillar™
forklifts would most likely have a
low-road near transfer, even though the hiring company uses
Komatsu™ brand forklifts.
In high-road transfer, linked to far transfer, the trainee must
consciously draw on previous
knowledge, skills, or attitudes. The trainee now applies
conscious competence of previous
KSAs to perceptually different, but conceptually similar,
contexts (Ford, 2014; Perkins & Salo-
mon, 1988; Svinicki, 2004). An example of high-road far
transfer is a new marketing depart-
ment employee drawing on the concepts of game theory learned
in college to analyze the
competition and the interactions between manufacturers and
retailers (Chatterjee & Samu-
elson, 2013).
HRD in Practice: High-Road, Far Transfer
Justin Moore is the CEO of Axcient, a rapidly growing cloud
services provider. Moore, now
31, is also a former star of the youth chess circuit. Moore does
not play much competitively
anymore, but even so, the kinds of thinking learned from his
days as a chess prodigy have
deeply informed the way he runs a successful start-up. In a
sense, Moore does still play chess
every day—by running Axcient.
“Of course, it’s a business commonplace to recommend
forethought. But, in chess, the
metaphor is literalized. You’re constantly looking two, three,
four moves ahead,” explains
Moore. “If you do this move, what’s the countermove? What are
all the countermoves? And
then, for all of those, what are all of my potential
countermoves? Chess is constantly teaching
you to think about what comes next, and what comes after that,
and what the repercussions
could be.” In a chess game your mind is constantly running
permutations of decision trees. In
a business your mind should be doing the same.
A chess match is a war of attrition. If a soccer match is
egregiously lopsided at halftime, the
game still progresses. But, if White accidentally loses his queen
a few moves into the game, it
is likely he will resign. A properly matched chess game is often
fought to the point that only a
few pawns, pieces, and the opposing kings remain—a bare-
board state known as endgame.
The entirety of a chess game is all a prelude to endgame.
“Chess is about getting to endgame,” says Moore. “What
happens between the start and then
doesn’t necessarily matter. You could lose more pieces or a
more valuable piece, and at the
end of the day, if you capture the opponent’s king, you win the
game.”
Pattern recognition. Playing chess teaches you to recognize
patterns: the tempting bishop
sacrifice that actually led you into a trap, the queen swap that
looked favorable but prevented
you from castling. You play; you learn. Moore tells a story
about how pattern recognition
helped his business. In 2011 Moore and his team were trying to
improve customer
satisfaction. They worked from the assumption that one metric
in particular—case
backlog—was the best predictor of customer satisfaction. It
seemed reasonable to assume
that if you had low or zero backlog, your customers would be
happy. “It turned out we were
wrong,” says Moore. After 3 months of wandering through the
weeds, Moore’s team realized
that a better predictor of customer satisfaction was the time it
took to respond to a customer
request, combined with frequency of updates.
A great chess player has a deep awareness of each piece’s role
on the board. A bishop has
different abilities than a knight has, and its powers are
expanded or limited by a board’s
pawn structure. In some ways chess is a laboratory for human
resources problems. “You have
to understand the strengths and weaknesses of the team, of your
employees,” says Moore.
“You have to understand that the pawn has its role, and it’s a
very important one, just as
important as the queen, rook, or bishop. Every piece is critical,
and the only way to win is to
leverage all those pieces’ skill sets together.”
Source: Zax, D. (2013, February 19). Six strategy lessons from a former chess prodigy who's now a CEO. Fast Company. Retrieved from http://www.fastcompany.com/3005989/innovation-agents/6-strategy-lessons-former-chess-prodigy-whos-now-ceo
Consider This
1. How did Moore draw on the pattern recognition in chess to
solve his customer
service issue?
2. In what ways did the game of chess condition Moore to be
proactive versus reactive?
3. What was the significance of Moore’s example of
differentiating between soccer and chess?
Cognitive Theory of Transfer
The cognitive theory of transfer is based on trainees’ ability to
retrieve, manage, and deploy
learned capabilities. For training design, the richer the
connections between the skill and
real-world knowledge, the better the chance of retrieval, and
therefore, the better the likeli-
hood of transfer (Baldwin & Ford, 1988; Noe, 2012; Stolovitch
& Keeps, 2011). Specifically,
transfer is more probable if the trainees can see the potential
applications of the training con-
tent to their jobs; this idea is consistent with adult-learning
principles set forth by Malcolm
Knowles (Hafler, 2011; Knowles, 1973):
• Adults bring life experiences and knowledge to learning
experiences.
• Adults are goal oriented.
• Adults are relevancy oriented.
• Adults are practical.
As it relates to the cognitive methods of knowledge recall, the
late educational psychologist
Robert Gagné’s classic nine events of instruction (Gagné, 1965)
is still used today (Gagné,
Wager, Golas, & Keller, 2005; Romiszowski, 2013) in
instructional design.
Table 8.3 summarizes how—after gaining the trainee’s attention
(for example, level 1, reac-
tion) and ensuring that the trainee is aware of the training
objectives—stimulating recall of
prerequisite learning is reinforced by subsequent events that
ultimately lead to enhanced
retention and transfer; learning processes include semantic
encoding (learning in context),
opportunities for reinforcement, and providing cues to assist in
retrieval. As discussed in
Chapter 2, cues can include job aids, which can enhance
transfer. Job aids can be used during
actual performance of tasks; they give information that helps
the trainee know what actions
and decisions a specific task requires (Stolovitch & Keeps,
2011; Willmore, 2006).
Table 8.3: Gagné’s nine events of instruction
Instructional event: Relation to learning process
1. Gaining attention: Reception of patterns of neural impulses
2. Informing learner of the objective: Activating a process of executive control
3. Stimulating recall of the prerequisite knowledge: Retrieval of prior memory to working memory
4. Presenting the stimulus material: Emphasizing features for selective perception
5. Providing learning guidance: Semantic encoding; cues for retrieval
6. Eliciting the performance: Activating response organization
7. Providing feedback about performance: Establishing reinforcement
8. Assessing performance: Activating retrieval; making reinforcement possible
9. Enhancing retention and transfer: Providing cues and strategies for retrieval
Source: Adapted from Gagné, R. M. (1965). The conditions of
learning. New York: Holt, Rinehart & Winston.
Self-Directed Learning
Part of training design should include aspects of self-management, designing the training to use a trainee's propensity and level for self-direction (Broad, 2005; Guglielmino, 2001; Noe, 2012; Rothwell & Sensenig, 1999; Saks, Haccoun, & Belcourt, 2010).
Self-directed learning is the level of initiative in the trainee’s
motivation to acquire the new
ability and is linked to a trainee’s self-efficacy (Bijker, Van der
Klink, & Boshuizen, 2010). Self-
directed trainees are empowered to take more responsibility in
their learning endeavors; as
a result, self-directed trainees are more apt to transfer learning,
in terms of both knowledge
and skill, from one situation to another (Baldwin & Ford, 1988;
Guglielmino, 2001; Knowles
et al., 2012). As described in Chapter 5, self-direction does not
always equate to self-teaching;
for example, for purposes of reinforcing transfer, a self-directed
trainee may choose to be
shown again how to do a task rather than self-teaching.
Work Environment
Training transfer has also been linked to the trainees’
perceptions about the work environ-
ment (E). As discussed earlier and depicted here in Figure 8.2,
this idea is consistent with
the performance formula, whereby not only must E remain positive (+), but perceptions about E must remain positive (+) as well.
For transfer to occur, the trainee must perceive that the work
environment has a climate
for transfer. A climate for training transfer includes factors such
as level of supervisor sup-
port, opportunities to practice trained tasks, and openness to
change (Baldwin & Ford, 1988;
Blume, Ford, Baldwin, & Huang, 2010; Rouiller & Goldstein,
1993; Salas, Tannenbaum, Cohen,
& Latham, 2013). Holton, Bates, and Ruona (2000) found
specific variables that influenced
the transfer climate; these include supervisor support or
sanctions, resistance or openness to
change, levels of coaching or mentoring, and positive or
negative personal outcomes (Holton,
Bates, & Ruona, 2000). Peer support, too, was seen as a
determinant of trainee transfer
(Broad, 2000; Broad, 2005; Burke & Hutchins, 2008; Holton &
Baldwin, 2003; Holton et al.,
2000; Rouiller & Goldstein, 1993), although not any stronger
than supervisor support (Van
den Bossche, Segers, & Jansen, 2010). Table 8.4 lists
frequencies of transfer categories.
Figure 8.2: Environment includes trainee perceptions
The organizational environment not only has to have an actual
climate for training transfer,
but the transfer-friendly environment must also be perceived by
the employees.
Performance = f(KSAs × M × E), where the environment (E) comprises both the actual climate for transfer and the trainees' perceived climate, alongside their ability and willingness.
Source: Adapted from Blanchard, P. N., & Thacker, J. W.
(2010). Effective training: Systems, strategies, and practices
(4th ed.). Upper Saddle River, NJ: Pearson.
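Because the performance formula is multiplicative, a weakness in any single factor suppresses performance overall. A minimal sketch illustrates this; the 0-to-1 scoring scale is a hypothetical simplification, not part of the formula as presented:

```python
def performance(ksas, motivation, environment):
    """P = f(KSAs x M x E): each factor scored on a hypothetical 0-1 scale."""
    return ksas * motivation * environment

# Strong KSAs and motivation cannot offset an unsupportive environment:
print(performance(0.9, 0.9, 0.0))  # 0.0
```

The multiplication captures the chapter's point: even well-trained, motivated employees will not transfer learning when the environment (actual or perceived) offers no climate for transfer.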
Table 8.4: Frequencies of transfer categories
Transfer factor: Frequency (%)
Transfer influences
Learner characteristics: 3 (2%)
Trainer characteristics: 8 (4%)
Design and development: 104 (46%)
Work environment: 112 (49%)
Time period
Before: 28 (12%)
During: 70 (31%)
After: 74 (32%)
Not time bound: 56 (25%)
Stakeholder support
Trainee: 53 (23%)
Trainer: 109 (48%)
Supervisor: 57 (25%)
Peer: 2 (1%)
Organization: 7 (3%)
Source: Burke & Hutchins, 2008.
Note. Emergent factors are in italics. Transfer influences were coded as 1 = learner characteristics, 2 = trainer characteristics, 3 = design and development, 4 = work environment; time period was coded as 1 = before, 2 = during, 3 = after, 4 = not time bound; stakeholder support was coded as 1 = trainee, 2 = trainer, 3 = supervisor, 4 = peer, 5 = organization.
Food for Thought: Apply Transfer of Training
Practice some key ideas in transfer of training with Baylor
University’s E-Learning Module:
http://business.baylor.edu/knue/3345TOT.
Consider This
1. Using the cognitive theory of transfer, what would be some
techniques that would
enhance transfer?
2. What would be examples of peer support in training transfer?
3. What is meant by the term intellectual capital as it relates to
training transfer?
Trainees are also more motivated to transfer training when it is
part of pursuing desirable
outcomes or rewards (or to avoid undesirable outcomes). The
value trainees place on such
outcomes is known as valence, and the trainee’s belief that he
or she will actually receive that
outcome or reward when the performance expectation is met is
known as instrumentality.
This is part of expectancy theory, a process theory of motivation (Vroom & Yetton, 1973) that influences certain decisions that employees will make—in this case, transfer of the training.
Positive outcomes include not only extrinsic rewards such as
salary increases and bonuses,
but also intrinsic rewards such as opportunities for advancement
and recognition (Broad,
2005; Holton & Baldwin, 2003; Holton et al., 2000, Vroom &
Yetton, 1973).
On the Quality of Transfer: Negative and Positive
Not all transfer is equal, and when managing transfer, we need
to consider two states:
• Positive transfer. Near and far transfer both enable what is known as positive transfer.
Positive transfer is when workplace performance improves due
to the training. Posi-
tive transfer is more likely when the trainee’s prior learning
facilitates the trainee’s
acquisition of the new learning or skills. For example, a
trainee’s prior experience
in learning an older inventory package expedites his or her
learning procedures for
using the newer package. This concept is consistent with
Knowles’s principles of
adult learning, where prior experience informs new learning
(Knowles, 1973).
• Negative transfer. When the trainee's performance worsens following the training, this is considered negative transfer. Specifically, negative
transfer can happen when
a trainee’s prior learning interferes with the acquisition of the
new learning or skills.
For example, users who switch from a BlackBerry phone, with
its physical keyboard,
to an iPhone, with its virtual keyboard, find it more difficult to
type and text than
users who are switching from a Samsung phone, which also has
a virtual keyboard.
This idea is consistent with Hedberg’s (1981) assertion that
there are times, in fact,
when adults have to unlearn ideas before new learning can
occur.
Training transfer is not just a binary proposition. That is, we do
not just evaluate whether
or not transfer occurred (zero transfer is when we observe no
change in the trainee’s KSAs).
Specifically, we also must be mindful that new training may
negatively affect the trainee, and
the resulting performance not only may fail to improve but, in
fact, may become worse than
it was before the training.
8.2 Accountability for Training Transfer
At the end of the (training) day, who is responsible for level 3’s
training transfer? Is it the
trainer, the trainee, or the trainee’s supervisor?
While there is no clear HRD-policy answer (Burke & Hutchins,
2008), many training scholars
and practitioners have suggested a transfer trinity, or triad,
consisting of the trainer, the
trainee, and the manager (Blume et al., 2010; Haskell, 2001;
Rummler & Brache, 1990); each
one plays a role to ensure transfer success. (See Figure 8.3.)
Others propose that management
is ultimately responsible for ensuring transfer (Esque &
McCausland, 1997), and still others
place more on the trainer’s shoulders (Broad & Newstrom,
1992; Broad, 2005; Kopp, 2006).
Here, trainers not only lead their training toward voluntary
transfer, but also stimulate the
transfer after the training event, including having trainers
partner with supervisors and man-
agers to support trainees in their new learning.
Figure 8.3: Training transfer trinity
Although there has been no consensus about who is ultimately
accountable for the transfer
of training (or, “where the transfer buck stops”), many in the
field agree that a shared
accountability exists between the trainer, trainee, and direct
manager—the so-called
training trinity.
The direct manager, trainee, and trainer form a triangle linked by assessment, training, and reinforcement.
Source: Adapted from Coates, D. (2008). Enhance the transfer
of training. Alexandria, VA: ASTD, p. 7.
Using diabetes education and training as a backdrop, Kopp
(2006) specifically suggested that
the trainer be primarily accountable for training transfer; he
argued that trainers should take
ownership of level 3, so that a distinction could be made
between effective trainers and inef-
fective ones. He viewed the trainer as individually necessary
and jointly sufficient in training
transfer. That is, although the trainer alone is not sufficient for
(and does not guarantee) trans-
fer, the trainer was fundamentally necessary—and it follows,
therefore, that the trainer can-
not be absolved of primary accountability. Burke and Saks
(2009) seek commonality rather
than a single-minded construct; they conclude that many
stakeholders can (and should) be
held accountable for transfer and the transfer-related activities
that they can affect.
The Who, What, and When of Transfer
Broad’s and Newstrom’s (1992) extensive research on training
transfer included assembling
a panel of experts and—using a Delphi method in which the
rankings from the experts are col-
lated—the perceptions of roles in transfer strategies where
given a final rank in every phase
of transfer: before, during, and after (see Table 8.5). (Also see
the Food for Thought feature
box titled “Transfer Strategies,” which provides a link to a
summary of Broad and Newstrom’s
work.) One of their findings was that the most frequently used
roles in transfer differed from
the most influential roles in transfer during a given phase of
transfer. For example, although
the panel thought the manager had the most influential role
before transfer (first), managers
were actually ranked fifth in frequency of use before transfer.
Table 8.5: Frequency versus influence
Ranking—most frequently used roles in transfer
Before During After
Trainer (facilitator) 2 1 7
Manager 5 6 9
Learner 8 3 4
Ranking—most influential roles in transfer
Before During After
Trainer (facilitator) 2 4 8
Manager 1 9 3
Learner 7 5 6
Source: Adapted from Broad, M., & Newstrom, J. (1992).
Transfer of training. Philadelphia, PA: Perseus Books.
Overall, whereas the trainer was most frequently used in
the total transfer process, the
manager was thought to be the most influential in the transfer
process, even given the man-
ager’s limited role during the training. This finding was
consistent with Burke and Hutchins’s
(2008) more recent research, which confirmed that the role of
trainers (48%) was more influ-
ential than the role of supervisors (25%) during training
transfer. In their study, Burke and
Hutchins selected training professionals and practitioners who
were members of a large met-
ropolitan chapter of ASTD and asked about the suggested best
practices for enhancing and
bolstering training transfer.
Table 8.6 outlines recommended strategies and action items for
each transfer agent.
Table 8.6: Action items of transfer agents
Manager
Before: Communicate that learning is a prime organizational objective.
During: Encourage full participation by ensuring the trainee's job is covered during the learning program.
After: Provide opportunities to practice and demonstrate new skills.
Trainer
Before: Provide clear description and precourse information to trainee and manager.
During: Ensure good delivery.
After: Provide follow-up consultation to maximize application.
Trainee
Before: Clear up daily activities prior to the learning program.
During: Participate actively and ask questions.
After: Discuss performance objectives and action plans with manager.
Source: Broad, 2000; Broad & Newstrom, 1992; Broad, 2005;
Burke & Hutchins, 2008.
Food for Thought: Transfer Strategies
Listen to the Center for Corporate and Professional
Development describe transfer
strategies in every phase of transfer (before, during, and after): http://www.youtube.com/watch?v=cf2DoL4TDF4.
Consider This
1. How formalized should the responsibilities of manager,
trainer, and trainee be prior to
the training?
2. Is there a case to be made that the process of transfer should
be organic and not hard
coded? Why or why not?
3. Would the roles during transfer vary when it comes to
informal or incidental learning?
Explain your reasoning.
Manager or supervisor support for applying new skills has
consistently been found to be a
key factor affecting the success of the transfer process (Broad,
2000; Broad, 2005; Rouiller &
Goldstein, 1993). Specifically, a manager’s support and positive
attitudes toward the trainee
may result in opportunities to practice newly learned skills,
whereas negative attitudes
toward the trainee may cause the manager to assign
unchallenging tasks that fail to allow the
employee to practice newly learned skills.
In sum, a trainee’s manager may provide either more or fewer
opportunities to perform newly
learned skills (Broad & Newstrom, 1992; Ford, 2014; Hafler,
2011; Holton & Baldwin, 2003;
Noe, 2012). Table 8.7 summarizes transfer support
responsibility among the training transfer
triad of manager, trainer, or trainee.
Table 8.7: Support per transfer agent
Support method | Implementing agent
Establish explicit objectives | Manager
Repetition of learning | Trainee
Evaluation and feedback | Manager
Use multiple examples | Trainee
Trainee selection | Manager
Supervisory support | Manager
Cultivation of meaning in material | Trainer and trainee
Source: Adapted from Cresswell, S. (2006). Practitioner guide to transfer of learning and training. Albany, NY: Rockefeller College of Public Affairs & Policy; Haskell, R. E. (2001). Transfer of learning: Cognition, instruction, and reasoning. Waltham, MA: Academic Press.
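Table 8.7’s two-column mapping lends itself to a simple lookup structure. A minimal sketch in Python (the rows come straight from the table; the dictionary and function names are illustrative choices, not from the text):

```python
# Table 8.7 encoded as a mapping from each support method to the transfer
# agent(s) responsible for implementing it.
SUPPORT_AGENTS = {
    "Establish explicit objectives": {"Manager"},
    "Repetition of learning": {"Trainee"},
    "Evaluation and feedback": {"Manager"},
    "Use multiple examples": {"Trainee"},
    "Trainee selection": {"Manager"},
    "Supervisory support": {"Manager"},
    "Cultivation of meaning in material": {"Trainer", "Trainee"},
}

def methods_for(agent: str) -> list[str]:
    """Return the support methods a given transfer agent is responsible for,
    in the order the table lists them."""
    return [method for method, agents in SUPPORT_AGENTS.items()
            if agent in agents]
```

For example, `methods_for("Trainer")` returns only the cultivation of meaning in material, which matches the table’s single trainer row.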
8.3 Barriers to Training Transfer
Many potential barriers affect training transfer, and these barriers are more likely to be situational than dispositional; that is, they affect the trainee but are not caused by the individual trainee (Broad & Newstrom, 1992; Burke & Hutchins, 2008; Noe, 2012). As part of their extensive research, Broad and Newstrom (1992) not only surveyed trainers and trainees from a range of organizations to rank barriers to training transfer, but also evaluated a collection of organizational case studies—including their own at Saturn Corporation, an automaker subsidiary of General Motors—that described how transfer was obstructed or enhanced (see Table 8.8).
Table 8.8: Barriers to training transfer
Rank (highest to lowest) | Organizational barrier
1 | Lack of reinforcement on the job
2 | Interference in the work environment
3 | Nonsupportive organizational structure
approximately $160 billion annually (ASTD, 2013). This significant investment makes it imperative that organizations know whether their training efforts yield a positive financial return on training investment (ROI).
Figure 7.1: ADDIE model: Evaluate
In this final phase of ADDIE, we evaluate how effective the training has been. From assessing any improvement in the KSAs of the trainees to the financial return on the training investment, the evaluation phase appraises the effectiveness of not only our prior analysis, design, development, and implementation, but also the training in totality.
[Figure: the five ADDIE phases (Analyze, Design, Develop, Implement, Evaluate), with Evaluate highlighted.]
7.1 Formative Evaluation
Although evaluation is the last phase of ADDIE, it is not the first time aspects of the training program are evaluated. We assess the training throughout all phases of ADDIE, using first what is known as a formative evaluation. Formative evaluation is done while the training is forming; that is, prior to the real-time implementation and full-scale deployment of the training (Morrison, Ross, & Kalman, 2012). Think of formative evaluation as a “try it and fix it” stage: an assessment of the internal processes of the training to further refine the training program before it is launched.
Formative evaluations are valuable because they can reveal deficiencies in the design, development, and implementation phases of the training that may need revision before real-time execution (Neirotti & Paolucci, 2013; U.S. Department of Health and Human Services, 2013; Wan, 2013).
Recall from Chapter 6 that formative evaluations can range from editorial reviews of the training and materials—which may include a routine proofread of the training materials to check for misspelled words, incomplete sentences, or inappropriate images—to content reviews, design reviews, and organizational reviews of the training (Larson & Lockee, 2013; Noe, 2012; Piskurich, 2010; Wan, 2013). For example, we may find in a content review that our training is not properly linked to the original learning objectives. Or we may conclude during a design review that because e-learning is not a good fit with the organizational culture, instructor-led training is a more appropriate choice.
Formative evaluations also encompass pilot testing and beta testing. With pilot tests and beta tests, we aim to confirm the usability of the training, which includes assessing the effectiveness of the training materials and the quality of the activities (ASTD, 2013; Stolovitch & Keeps, 2011; Wan, 2013). Both are considered types of formative evaluation because they are performed as part of the prerelease of the training. For pilot and beta testing, selected employees and SMEs are chosen to test the training under normal, everyday conditions; this approach is valuable because it allows us to pinpoint any remaining flaws and get feedback on particular training modules (Duggan, 2013; Piskurich, 2010; Wan, 2013).
7.2 Summative Evaluation
Whereas formative evaluation focuses on the training processes, summative evaluation focuses on the training outcomes—both the learning and the performance results following the training (ASTD, 2013; Piskurich, 2010; Wan, 2013). Summative evaluation is the focus of the E phase of ADDIE. According to Stake (2004), one way to look at the difference between formative and summative evaluation is “when the cook tastes the soup, that’s formative evaluation; when the guests taste the soup, that’s summative” (p. 17).
In summative evaluation, we assess whether the expected training goals were realized and, specifically, whether the trainees’ posttraining KSAs improved their individual performance (and, ultimately, the organization’s overall performance). As Figure 7.2 depicts, we assess both the short-term, learning-based outcomes—such as the trainees’ reactions to the training and opinions about whether they actually learned anything—and the long-term, performance-based outcomes. These long-term outcomes include assessing whether a transfer of training occurred—that is, application to the workplace via behavior on the job—as well as whether any positive organizational changes resulted, including return on investment (Noe, 2012; Phillips, 2003; Piskurich, 2010).
Figure 7.2: Summative evaluation’s short-term and long-term outcomes
Training evaluation can be broken down into short-term and long-term assessments. Short-term evaluations are usually trainee focused, whereas long-term assessments are focused on the training itself.
[Figure: summative outcomes split into short-term outcomes (reactions of learners, learning by participants) and long-term outcomes (behavior on the job, organizational impact and return on investment).]
As Figure 7.3 depicts, however, the most common assessments organizations perform with summative evaluation are ultimately the least valuable to them (ASTD, 2013; Nadler & Nadler, 1990). The next section discusses each level of evaluation.
Figure 7.3: Use versus value in evaluation
Although levels 1 and 2 are most used and usually easiest to compile, levels 3, 4, and 5 (ROI) are deemed to yield the most valuable information in assessing training effectiveness, but they require complex calculations.
[Figure: bar chart comparing, for each level (reactions of participants, evaluation of learning, evaluation of behavior, evaluation of results, return on investment), the percentage who use that level to any extent with the percentage who say it has high or very high value.]
Source: Adapted from American Society for Training & Development. (2013). State of the industry report. Alexandria, VA: ASTD.
7.3 Kirkpatrick’s Four-Level Evaluation Framework
Perhaps the best-known and most drawn-upon framework for summative evaluation was introduced by Donald Kirkpatrick (Neirotti & Paolucci, 2013; Phillips, 2003; Piskurich, 2010; Vijayasamundeeswari, 2013; Wan, 2013), a professor emeritus at the University of Wisconsin and past president of the ASTD. Kirkpatrick’s four-level training evaluation taxonomy—first published in 1959 in the US Training and Development Journal (Kirkpatrick, 1959; Kirkpatrick, 2009)—depicts both the short-term learning outcomes and the long-term performance outcomes (see Figure 7.4). Let us detail each level now.
Figure 7.4: Kirkpatrick’s four-level evaluation
Donald Kirkpatrick’s four-level evaluation is the widely used standard to illustrate each level of training’s impact on the trainee and the organization as a whole. Kirkpatrick’s typology is a good starting point to frame discussions regarding the trainee’s reaction to the training (level 1), whether anything was learned from the training (level 2), whether the trainee applied the training through new behavior (level 3), and, ultimately, whether the training resulted in positive organizational results (level 4).
[Figure 7.4 depicts the four levels as a pyramid: 1 Reactions, 2 Learning, 3 Transfer, 4 Results.]

Level 1—Reaction: Did They Like It?
A level 1 assessment attempts to measure the trainees' reactions to the training they have just completed (Kirkpatrick, 2009; Wan, 2013; Werner & DeSimone, 2011). Specifically, level 1 assessments ask participants questions such as:
• Did you enjoy the training?
• How was the instructor?
• Did you consider the training relevant?
• Was it a good use of your time?
• Did you feel you could contribute to your learning experience?
• Did you like the venue, amenities, and so forth?
A level 1 assessment is important not only to assess whether the trainees were satisfied with the training session per se, but also—and perhaps more significantly—to predict the effectiveness of the next level of evaluation: level 2, learning (ASTD, 2013; Kirkpatrick, 2009; Morrison et al., 2012; Noe, 2012; Wan, 2013). That is, as level 1 reaction goes, so goes level 2 learning. According to a recent study (Kirkpatrick & Basarab, 2011), there was a meaningful correlation between levels 1 and 2, in that positive learner engagement led to a higher degree of learning. This outcome specifically follows the idea of attitudinal direction (Harvey, Reich, & Wyer, 1968; Kruglanski & Higgins, 2007), whereby a positive reaction (emotional intensity) can lead to constructive conclusions, as depicted in the following formula:

Attitudinal Direction
Perception + Judgment → Emotion (Level 1)
(Positive) Emotion → Learning (Level 2)

With attitudinal direction in mind, a level 1 evaluation is attentive to the measurement of attitudes, usually using a questionnaire. A level 1 survey includes both rating scales and open-ended narrative opportunities (Clark, 2013; Neirotti & Paolucci, 2013; Wan, 2013). Typically, participants are not asked to put their names on the survey, based on the assumption that anonymity breeds honesty. Level 1 evaluation instruments are part of the training materials that would have been created in the development phase of ADDIE.

Level 2—Learning: Did They Learn It?
In a level 2 assessment, we attempt to measure the trainees' learning following the training that they just completed (Kirkpatrick, 2009; Wan, 2013; Werner & DeSimone, 2011) and, specifically, in relation to the learning outcomes we established during the analysis and design phases of ADDIE. Remember, learning outcomes can include cognitive outcomes (knowledge), psychomotor outcomes (skills), and affective outcomes (attitudes) (Noe, 2012; Piskurich, 2010; Rothwell & Kazanas, 2011).
• With cognitive outcomes, we determine the degree to which trainees acquired new knowledge, such as principles, facts, techniques, procedures, or processes (Noe, 2012; Piskurich, 2010; Rothwell & Kazanas, 2011). For example, in a new employee orientation, cognitive outcomes could include knowing the company safety rules or product line or learning the company mission.
• With skills-based or psychomotor learning outcomes, we assess the level of new skills as a function of the new learning, as seen, for example, in newly learned listening skills, conflict-handling skills, or motor or manual skills such as computer repair and replacing a power supply (Morrison et al., 2012; Noe, 2012; Piskurich, 2010).
• Affective learning outcomes focus on changes in attitudes as a function of the new learning (Noe, 2012; Piskurich, 2010). For example, trainees who learned a different attitude regarding other cultures following diversity training or those who gained a new attitude regarding the importance of safety prevention after a back injury–prevention training class have achieved learning outcomes.
As with level 1, evaluations for level 2 are done immediately after the training event to determine if participants gained the knowledge, skills, or attitudes expected (Morrison et al., 2012; Noe, 2012; Piskurich, 2010). Measuring the learned KSA outcomes of level 2 requires testing to demonstrate improvement in any or all level 2 outcomes:
• Cognitive outcomes and new knowledge are typically measured using trainer-constructed achievement tests (such as tests designed to measure the degree of learning that has taken place) (Duggan, 2013; Noe, 2012; Piskurich, 2010; Wan, 2013).
• For newly learned motor or manual skills, we can use performance tests, which require the trainee to create a product or demonstrate a process (Duggan, 2013; Noe, 2012; Piskurich, 2010; Wan, 2013).
• Attitudes are measured with questionnaires similar to the questionnaires described for level 1 evaluation, with the participants giving their ratings for various items (for example, strongly agree, agree, neutral, disagree, or strongly disagree). They also include open-ended items to let trainees describe any changed attitudes in their own words (for example, "How do you feel about diversity in the workplace?") (Duggan, 2013; Kirkpatrick, 2009; Noe, 2012; Piskurich, 2010; Wan, 2013).
With a level 2 posttraining learning evaluation, Kirkpatrick recommends first giving participants a pretest before the training and then giving them a posttest after the training (Cohen, 2005; Kirkpatrick, 1959; Kirkpatrick, 2009; Phillips, 2003; Piskurich, 2010) to determine if the training had any effect, positive or negative. Creating valid and reliable tests is not a casual exercise; in fact, there is a credential one can attain to become an expert in testing and evaluation (http://www.itea.org/professional-certification.html). Does the test measure what it is intended to measure? If the same test is given 2 months apart, will it yield the same result?

HRD in Practice: A U.S. Department Uses Level 2 Evaluation
The U.S. Department of Transportation uses oral quizzes or tests for level 2 evaluation. Oral quizzes or tests are most often given face-to-face and can be conducted individually or in a group setting. Here is a typical example of the department's level 2 oral quizzing:
1. When it comes to Highway Safety tell me two safety challenges you are facing right now in your state or region.
2. What are "special use" vehicles and what is special about them?
3. What type of crossing is required for train speeds over 201 km/h (125 mph)?
4. Identify the following safety device? …
5. Define what a passive device is? Can anyone give me an example of a passive device?
6. What are three types of light rail alignments?
7. Why is aiming of roundels so critical? (p. 4)
Source: US Department of Transportation. (2004). Level II evaluation. Washington, DC: Author. Retrieved from https://www.nhi.fhwa.dot.gov/resources/docs/Level%20II%20Evaluation%20Document.pdf

Consider This
1. Do you think this is a good way to evaluate trainees' knowledge? Why or why not?
2. Do you think it is better to conduct this oral quiz in a group or individually? Explain your reasoning.
3. What suggestions could you provide to improve the level 2 oral quizzes for the U.S. Department of Transportation?
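The pretest/posttest comparison Kirkpatrick recommends for level 2, combined with the control-group design discussed later under "Isolating the Effects of Training," can be sketched in a few lines of code. This is a hypothetical illustration: the scores and group sizes are invented, not drawn from the chapter.

```python
# Hypothetical level 2 evaluation: pretest/posttest gains for a trained
# (experimental) group and an untrained control group. All scores are invented.

def average_gain(pre, post):
    """Mean per-trainee improvement from pretest to posttest."""
    gains = [after - before for before, after in zip(pre, post)]
    return sum(gains) / len(gains)

# Test scores out of 100 for five trainees in each group.
experimental_pre  = [55, 60, 48, 70, 62]
experimental_post = [78, 82, 65, 88, 80]
control_pre       = [58, 61, 50, 69, 64]
control_post      = [60, 63, 49, 71, 66]

trained_gain = average_gain(experimental_pre, experimental_post)
control_gain = average_gain(control_pre, control_post)

# The control group's gain estimates improvement that would have happened
# anyway; the difference approximates the effect attributable to the training.
training_effect = trained_gain - control_gain

print(f"Trained gain: {trained_gain:.1f} points")
print(f"Control gain: {control_gain:.1f} points")
print(f"Estimated training effect: {training_effect:.1f} points")
```

A positive difference suggests learning beyond what nontraining factors would explain; in practice a statistical test would be applied before drawing conclusions.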
Level 3—Behavior: Did They Apply It?
A level 3 evaluation assesses the transfer of training; that is, do the participants of the training program apply their new learning, transferring their skills from the training setting to the workplace, and as a result, did the training have a positive effect on job performance? Level 3 evaluations specifically focus on behavioral change via the transfer of knowledge, skills, and attitudes from the training context to the workplace.
However, before assessing skills transfer to the job, let us consider a practicality to the transfer of training evaluation: We must allow trainees a sufficient amount of time and opportunity to apply the training skills in the workplace (Piskurich, 2010). The amount of time will depend on numerous factors, including (ASTD, 2013; Cohen, 2005; Morrison et al., 2012; Noe, 2012; Wan, 2013):
• the nature of the training,
• the opportunity available to implement the new KSAs, and
• the level of encouragement from line management.
Typically, we can confirm transfer by observing the posttrained participants and conducting work sampling (Kirkpatrick, 2009; Noe, 2012; Wan, 2013); evaluation can occur 90 days to 6 months posttraining (Kirkpatrick, 2009; Tobias & Fletcher, 2000). Figure 7.5 shows an example of level 3 training results. Furthermore, as we will discuss in more detail in Chapter 8:
• positive transfer of training is demonstrated when we observe positive changes in KSAs, and
• negative transfer is evident when learning occurs, but we observe that KSAs are at less-than-pretraining levels (Noe, 2012; Roessingh, 2005; Underwood, 1966).
As discussed in Chapter 2, a trainee may have learned from the training but not be willing to apply the training to the workplace for several reasons. It may sound something like, "Oh, I know how to do it, but I am not doing it for you." This is known as zero transfer of training, in which learning occurs, but we observe no changes in trainee KSAs. So, and perhaps not surprisingly, there is not a strong positive correlation between level 2 learning and level 3 behavior (Kirkpatrick & Basarab, 2011). That is, just because trainees learn something does not mean they will necessarily apply it. As discussed in previous chapters, irrespective of learning the new KSAs and being able to apply them to the workplace, the trainee must also be willing to apply them.

Level 4—Results: Did the Organization Benefit?
With a level 4 evaluation, the goal is to find out if the training program led to improved bottom-line organizational results (such as business profits). Similar to the correlation between levels 1 and 2, studies have shown a correlation between levels 3 and 4 (Kirkpatrick, 2009); specifically, if employees consistently perform critical on-the-job behaviors, individual and overall productivity increase.
Level 4 outcomes can include other major results that contribute to an organization's effective functioning. Level 4 outcomes are either changes in financial outcomes or changes in other metrics (for example, excellent customer service) that should indirectly affect financial outcomes at some point in the future; these are known as performance drivers (Swanson, 1995; Swanson & Holton, 2001). Here are some examples of level 4 performance drivers and outcomes (Cohen, 2005; Kirkpatrick, 2009; Phillips, 2003; Piskurich, 2010):
• Improved quality of work
• Higher productivity
• Reduction in turnover
• Reduction in scrap rate
• Improved quality of work life
• Improved human relations
• Increased sales
• Fewer grievances
• Lower absenteeism
• Higher worker morale
• Fewer accidents
• Greater job satisfaction
• Increased profits

Isolating the Effects of Training
A major challenge to evaluating training's effectiveness is isolating any subsequent performance improvement to the training itself. That is, improved performance may correspond to the timing of the training but may not be linked to the new training itself. Phillips (2003) attributes this to the need for isolation. For example, Cohen (2005) described the following scenario:

Let's say training was focused on new selling techniques for an organization's sales reps and the post-training assessment of sales and call volume are found to be significantly better than the pre-training amounts; this change could be as much due to an upward turn in the economy as it is to the training itself. (p. 23)

In this case linking the improvement to training would be incorrect, so we must protect against erroneously ascribing performance improvement to nontraining reasons. To mitigate this possibility, along with using pretests and posttests in level 2, Kirkpatrick (1959, 2009) also recommends using control groups to statistically manage and separate the impact of other variables. Control groups do not receive the training, or they go through other training unrelated to the training of interest, so we can assess the unique effect of the training intervention. In Cohen's example, a control group would include sales reps not subjected to the specific training program, and then the control group's performance would be compared to the trained group (known as the experimental group) of sales reps (Cohen, 2005; Kirkpatrick, 1959; Kirkpatrick, 2009; Phillips, 2003; Piskurich, 2010).
Level 4 outcomes in particular may be difficult to isolate to the training program. This is because in order to assess any of the level 4 outcomes, more time must elapse to make a complete assessment. For example, an organization might have to wait 2 or 3 fiscal quarters to see if decreased turnover or higher productivity follow training on those topics. As a result, by the time of assessment, other factors may have had a chance to affect the level 4 outcomes. This is what Sanders, Cogin, and Bainbridge (2013) called a confounding variable, or another factor that obscures the effects or the impact of the training (Guerra-López, 2012). In sum, not unlike a 7-day weather forecast, a level 4 evaluation—although still valuable data—is usually more difficult to credit to the original training because it is the most removed from the training event (Johnson & Christensen, 2010; Kirkpatrick, 2009; Sonnentag, 2003).

Linking Kirkpatrick Outcome Levels to the Performance Formula
Remember that in Chapter 2, we broke down workplace performance by understanding what components make up job performance; specifically, an outcome of three variables:
• Ability—the employee's capacity to perform the job; collectively, their KSAs
• Motivation—the employee's willingness to perform the job voluntarily
• Environment—anything within the organizational environment (such as the supervisor, systems, and coworkers) that would affect the employee's job performance

The Performance Formula
Performance = f(KSAs × M × E)
KSAs = Ability; M = Motivation; E = Environment

Using Kirkpatrick's taxonomy (see Figure 7.5), we can see where summative outcomes are expressed within employee performance (Blanchard & Thacker, 2010; Mitchell, 1982).

Figure 7.5: Synthesizing Kirkpatrick and the performance formula
By synthesizing Kirkpatrick and the performance formula, we can illustrate a training's impact not only on employee performance, but also on organizational performance in total.
[Figure 7.5 maps the four levels (1 Reactions, 2 Learning, 3 Transfer, 4 Results) onto Performance = f(KSAs × M × E): level 2 learning feeds the KSAs term, level 1 reaction feeds the motivation term, and the prior state of level 4 feeds the environment term; level 3 is the resulting posttraining performance, and the summation of all trainees' level 3 performance shapes the future state of level 4.]
As Figure 7.5 shows, posttraining employee performance (level 3) is dependent on the effectiveness of both levels 1 and 2, reaction and learning. Specifically, the newly learned knowledge and skills are in level 2, learning, and the attitudes and motivation toward the new learning are in level 1, reaction. Importantly, posttrained performance is both contingent on and subsequently affects the organizational environment level 4 outcomes. Specifically, posttrained employee performance is subject to the antecedent state of the organizational environment (for example, the quality and state of the departmental supervision would affect the efficacy of the posttraining employee performance). However, it is also expected that the collective performance from the posttrained employee base would ultimately influence and affect the future state of the organizational environment and organizational outcomes and show itself in level 4 outcomes such as improved customer service, more efficient systems, and reduced error rates.

HRD in Practice: The Case of the $25,000 Hello
Adam did a double take at the final invoice the consultants had faxed in. $7,000—the bold digits jumped out at him. Adding this invoice to their first two invoices, the total for the customer service training was now close to $25,000.
Man, this training was expensive! Adam thought. It had all started because the receptionist had greeted a caller with a dry hello instead of giving a pleasant greeting and introducing herself, he remembered. They had had a few customer complaints about the receptionist's lack of pleasantness, but unfortunately, on this day the caller was the owner, Mr. Lager. "What kind of message of customer service are we sending to folks, Adam?" Lager had asked. "I want those receptionists to make the callers feel like we are a likeable and friendly company. Take care of it, and ASAP!"
Since Adam was in charge of administration, he contracted a customer service training firm immediately. And it seemed to be good training, too. It had spanned 2 months, and all the employees who dealt with customers were required to take it. Adam received reports that the trainers were very good; the sessions were said to be fun and informative. The trainers made sure the trainees learned new techniques about providing excellent customer service by requiring each attendee to pass a customer service test. All the trainees had earned a certificate to demonstrate the new learning. In fact, now, after the training, anyone who called into the company heard a pleasant and happy greeting: "Hello, So-and-So speaking. How can I help you?"
But, $25,000? Was it worth the expense? Adam pondered.
Would this be considered a questionable return on the company's training investment?

Consider This
1. What types of financial data could Adam review to establish the monetary benefits of the training to support the $25,000 expense and a positive return on the training investment?
2. What could Adam point to as proof of successful level 1 evaluation?
3. Success in Kirkpatrick's level 3 is demonstrated in which part of the case?

7.4 Return on Investment

As the case of the $25,000 hello illustrates, not only do we want new learning to be applied to the workplace and to impact organizational performance, we also want to do that in the most cost-effective and efficient way. Summative evaluation should, in the end, lead to judgments on the value and worthiness of a training program; therefore, we also evaluate the cost benefit of a training program and evaluate return on training investment, the so-called level 5. What Donald Kirkpatrick was to levels 1 to 4, Jack Phillips is to level 5.
Phillips is an internationally renowned expert on measuring the return on investment of human resource development activities. Over the past 20 years, Phillips has produced more than 30 books on the subject of ROI and has been a leading figure in the debate about the future role of human resources (Noe, 2012; Phillips, 2003; Piskurich, 2010).
ROI, or level 5, evaluates the benefits of the training versus the costs. Specifically, at this level we compare the monetary benefits from the program with the costs to conduct the training program (Noe, 2012; Phillips, 2003; Piskurich, 2010; Russ-Eft & Preskill, 2009). According to Phillips (2003), the ROI measurement must be simple, and the process must be designed with a number of features in mind. The ROI process must:
• be simple,
• be economical to implement,
• be theoretically sound without being overly complex,
• account for other factors that can influence the measured outcomes after training,
• be appropriate in the context of other HRD programs,
• be flexible enough to be applied in pre- and posttraining,
• be applicable to all types of data collected, and
• include the costs of the training and measurement program.
The two common ways to express training's return on investment are a benefit–cost ratio (BCR) and a return on investment (ROI) percentage. To find the BCR, we divide the total dollar value of the benefits by the cost, as shown in the following formula:

BCR = (Total Dollar Value of Benefits) ÷ (Cost of Training)
We determine ROI percentages by subtracting the costs from the total dollar value of the benefits to produce the dollar value of the net benefits; these are then divided by the costs and multiplied by 100 to develop a percentage:

Total Dollar Benefits − Costs of Training = Net Benefits
Net Benefits ÷ Costs × 100 = ROI

So, for example, if a traditionally delivered training program produced total benefits of $221,600 with a training cost of $48,200, the BCR would be 4.6. That is, for every dollar invested, $4.60 in benefits is returned. The ROI, therefore, would be 360%. According to research conducted by SyberWorks, because e-learning alleviates the need for trainee and trainer travel, e-learning has ROIs that regularly outperform traditionally delivered training (Boggs, 2014).
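The worked example above can be checked with a short calculation. This is a minimal sketch of the two formulas; the function names are ours, not Phillips's:

```python
# Benefit–cost ratio and ROI percentage, using the formulas above.

def benefit_cost_ratio(total_benefits, training_cost):
    """BCR = total dollar value of benefits ÷ cost of training."""
    return total_benefits / training_cost

def roi_percentage(total_benefits, training_cost):
    """ROI = (net benefits ÷ costs) × 100."""
    net_benefits = total_benefits - training_cost
    return net_benefits / training_cost * 100

# The chapter's example: $221,600 in benefits against $48,200 in costs.
bcr = benefit_cost_ratio(221_600, 48_200)
roi = roi_percentage(221_600, 48_200)

print(f"BCR: {bcr:.1f}")   # about 4.6, i.e., $4.60 returned per dollar invested
print(f"ROI: {roi:.0f}%")  # about 360%
```

Note that the two measures carry the same information in different forms: ROI% is simply (BCR − 1) × 100, so a BCR of 4.6 and an ROI of roughly 360% describe the same result.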
  • 28. organization, based on variables such as required financial margins, stakeholder preferences, organizational culture, and overall corporate mission. In sum, and according to training ROI guru Jack Phillips, ROI sometimes is simply used qualitatively, just to see if a program is working or not. Table 7.1 Examples of benefit–cost ratio per industry Industry Training program BCR Bottle company Management development 15:1 Commercial bank Sales training 12:1 Electric utility Soft skills 5:1 Oil company Customer service 5:1 Health care firm Team training 14:1 Source: Based on Phillips, J. J. (2003). Return on investment in training and performance improvement programs. Oxford, England: Butterworth-Heinemann. In context, the significance of ROI—and training itself—means different things to different people; that is, different constituencies have different perceptions of ROI evaluation. For example, a board of directors may see a big picture of how the training affects the company’s ability to achieve its corporate goals: The finance department may be looking to see how training stacks up financially against other ways to invest the company’s money; the depart-
  • 29. ment manager may be solely concerned with the impact on performance and productivity in achieving department goals; and the training and development manager may be concerned with how training programs affect the credibility and status of the company’s training func- tion (Hewlett-Packard, 2004; Phillips & Phillips, 2012; Russ-Eft & Preskill, 2009). While ROI is seen as beneficial, determining ROI can be a time- consuming endeavor; in fact, for that reason, Phillips (2003) asserts that evaluating the ROI of a learning event is not appro- priate in every situation. Specifically, Phillips and Phillips (2012) suggest that calculating ROI does not add value in the following situations: Return on Investment Chapter 7 • If activities are very short, it is unlikely that any significant change in behavior will have resulted. • If activities are required by legislation or regulation, evaluators will have little power to initiate changes because of their findings. • If activities are used to provide learners with the basic technical know-how to per- form their role, ROI data will be meaningless. Here, Phillips argues that evaluating to level 3 is more appropriate in these situations because the training is not optional.
  • 30. Hard Data Versus Soft Data Part of the overall challenge in computing returns on investment in training concerns how we determine costs and benefits with regard to tangible and intangible data. For example, intangible or indirect training benefits such as customer satisfaction, improved work rela- tionships, and organizational morale are more difficult to put a dollar amount on than are tangible or direct benefits such as lower turnover, fewer workplace injuries, and decreased workers’ compensation premium costs. Training costs, too, can be direct or indirect. Direct costs include all expenses related to facilitating the training; examples are the cost of hiring a consultant, conference room fees, equipment rental, and employee travel costs (Piskurich, 2010). Indirect costs of training may include such personnel expenses as salary costs and the costs of lost sales while employees are at training (Piskurich, 2010). Tangible and direct data is easier to memorialize and list, as well. Training expense, for exam- ple, comes directly off an organization’s income statement. Well-trained workers, although an asset that serves as a good predictor of the tangible outcomes, are considered off-balance- sheet assets and are not as easily tracked on the organizational accounting systems (Brimson, 2002; Weatherly, 2003). Data Gathering Methods We need data to compute ROI, and we can choose from a
  • 31. variety of data gathering methods. As Figure 7.6 depicts, a review of data gathering methods shows that follow-up surveys of participants, action planning—such as “asking participants to isolate the impact of the train- ing” (Phillips & Phillips, 2012, p. 95)—performance records monitoring, and job observation were the preferred data collection methods. Return on Investment Chapter 7 Figure 7.6: Data gathering methods In gathering data to compute ROI, each method has its pros and cons. Methods vary from surveys (the most popular method in a recent survey) to interviews and focus groups, which are more complex and take more time. f07.06_BUS375.ai Percentage who use these approaches to a high or very high extent Follow-up surveys of participants Action planning Performance records monitoring Observation on the job
  • 32. Program follow-up session Follow-up surveys of participants’ supervisors Interviews with participants Interviews with participants’ supervisors Follow-up focus groups 0 2010 4030 50 60 Source: American Society for Training & Development, 2013; Phillips & Phillips, 2012. Each data gathering method has its unique advantages and disadvantages; this includes spe- cific consideration to and trade-offs between data collection time and the cost of collecting the data, as well as the fact that some data gathering methods may require a special skills set (for example, how to conduct a focus group). Additionally, each method offers aspects of soft and/or hard cost–benefit data and, as a result, subsequent analyses may be more complex. As the next section will discuss, because of these and other reasons, evaluation is many times postponed or neglected outright.
  • 33. Evaluation: Essential, but Often Neglected Chapter 7 7.5 Evaluation: Essential, but Often Neglected Perhaps, and not surprisingly so, many organizations neglect or overlook the higher levels of evaluation. Some surveys show that only about 20% of organizations conduct a formal evalu- ation of training’s effectiveness (ASTD, 2013; Brown & Gerhardt, 2002; Noe, 2012; Russ-Eft & Preskill, 2009; Wang & Wilcox, 2006; Werner & DeSimone, 2011). The reasons for not conducting training evaluation are varied. Recently, Russ-Eft & Preskill (2009) researched the prevailing reasons why evaluation is not done more often within orga- nizations; notably, their findings include the view that organizations do not value evaluation in general. This may be a function of many things, including the organization lacking expertise in performing evaluations, a fear as to what the evaluation may yield, and even the practical rationale that no one has asked for it! In the final analysis, neglecting evaluation is not only unprofessional, it is may also be unethi- cal (see the Food for Thought feature box titled “Application of Evaluation”). We will look further into the ethics of training in Chapter 10. Food for Thought: Application of Evaluation There are organizations that prioritize quality evaluations to maintain the integrity of the business. For example, the American Evaluation Association
  • 34. (http://www.eval.org) includes high-quality evaluation as part of its code of ethics value statements for organizations that would be socially responsible as it relates to evaluation practices. Specifically, the association’s value statements in the practice of evaluation are as follows: • We value high quality, ethically defensible, culturally responsive evaluation practices that lead to effective and humane organizations and ultimately to the enhancement of the public good. • We value high quality, ethically defensible, culturally responsive evaluation practices that contribute to decision-making processes, program improvement, and policy formulation. • We value a global and international evaluation community and understanding of evaluation practices. • We value the continual development of evaluation professionals and the development of evaluators from under-represented groups. • We value inclusiveness and diversity, welcoming members at any point in their career, from any context, and representing a range of thought and approaches. • We value efficient, effective, responsive, transparent, and socially responsible association operations. (American Evaluation Association,
  • 35. 2013) Consider This 1. What does the American Evaluation Association mean by culturally responsive evaluation practices? 2. How would ethical evaluation within an organization impact the public good? http://www.eval.org Evaluation: Essential, but Often Neglected Chapter 7 Even with its ethical obligations, at its core, evaluation’s objective is not only to ascertain if organizational training with its respective programs are effective, but also, if training is inef- fective, to produce data so as to hold those responsible for training accountable as well. Sampling of Evaluation Models Besides Kirkpatrick’s and Phillips’s, there are, of course, other evaluation models. However— and perhaps not surprisingly—many of the evaluation models are variations on the same themes. That is, evaluation models tend to assess the individual, process, and organizational levels, as well as consider the environment or context in which the training takes place. Let us look at some other popular evaluation models used. Stufflebeam’s CIPP The CIPP model of evaluation was developed by Daniel
Stufflebeam and colleagues in the 1960s. CIPP is an acronym for "context, input, process, and product"; the model requires evaluating each of these four aspects in judging a program's value. CIPP is a decision-focused approach to evaluation; it emphasizes the systematic provision of information for program management and operation. As shown in Table 7.2, the CIPP model is an attempt to make evaluation directly relevant to the needs of decision makers during a program's different phases and activities.

Table 7.2: The CIPP model of evaluation
Aspect of evaluation | Type of decision | Kind of question answered
Context evaluation | Planning decisions | What should we do?
Input evaluation | Structuring decisions | How should we do it?
Process evaluation | Implementing decisions | Are we doing it as planned? And if not, why not?
Product evaluation | Recycling decisions | Did it work?
Source: Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models, and applications. New York: Wiley. Reprinted with permission.

Kaufman's Five Levels of Evaluation
Roger Kaufman (1999) originally created a four-level assessment strategy called the organizational elements model; a modification to the model
resulted in the addition of a fifth level, which assesses how the performance improvement program contributes to the good of society in general, as well as satisfying the client. Kaufman's evaluation levels are shown in Table 7.3.

Table 7.3: Kaufman's five levels of evaluation
Level | Evaluation | Focus
5 | Societal outcomes | Societal and client responsiveness; consequences and payoffs.
4 | Organizational output | Organizational contributions and payoffs.
3 | Application | Individual and small group (products) utilization within the organization.
2 | Acquisition | Individual and small group mastery and competency.
1b | Reaction | Acceptability and efficiency of methods, means, and processes.
1a | Enabling | Availability and quality of human, financial, and physical resources input.
Source: Kaufman, R. (1999). Mega planning: Practical tools for organizational success. SAGE Publications. Excerpted from p. 6, Table 1.1, of Kaufman, R. (2008). The
Assessment Book. HRD Press. ISBN 9781599961286. Reprinted with permission.

CIRO: Context, Input, Reaction, and Outcome
The CIRO (context, input, reaction, and outcome) four-level approach was developed by Peter Warr, Michael Bird, and Neil Rackham (Warr, Bird, & Rackham, 1971). Adopting the CIRO approach gives employers a model to follow when conducting training and development assessments. Employers should conduct their evaluation in the following areas:
• C—Context, or the environment within which the training took place. Evaluation here goes back to the reasons for the training or development event or strategy. Employers should look at the methods used to decide on the original training or development specification, how the information was analyzed, and how the needs were identified.
• I—Inputs to the training event. Evaluation here looks at the planning and design processes that led to the selection of trainers, programs, employees, and materials. Determining the appropriateness and accuracy of the inputs is crucial to the success of the training or development initiative.
• R—Reactions to the training event. Evaluation methods here should be appropriate to the nature of the training undertaken. Employers may want to measure the reaction from learners to the training and to assess the relevance of
the training course to the learners' roles. Assessment might also look at the content and presentation of the training event to evaluate its quality.
• O—Outcomes of the training event. Employers may want to measure the levels at which the learning has been transferred to the workplace. This measurement is easier when the training involves hard, specific skills—as would be the case for a train driver or signal operator—but harder for softer, less quantifiable competencies, including behavioral skills. If performance is expected to change because of training, then the evaluation needs to establish the learner's initial performance level.

It is fair to say that, although many of the evaluation models vary around the same themes, certain models may be more appropriate than others, depending on the context and focus. For example, whereas the Kirkpatrick and CIPP models focus on training evaluation, they do not underscore the evaluation of financial returns on investment as Phillips's model does. Likewise, unlike other tactical evaluation models, Kaufman's model, because of its focus on societal outcomes, is not limited to training initiatives and may be used more broadly in other evaluative contexts such as
consumer marketing or evaluating an organization's corporate citizenship efforts.

HRD in Practice: Back to the Case of the $25,000 Hello
When we last left Adam, he was pondering whether the $25,000 expense for the customer service training was worth it. Adam wondered, "Would this be considered a questionable return on the company's training investment?" After performing a return on investment analysis for the training program, Adam realized that, in fact, the training was not cost effective, with a –1.6% ROI. Tables 7.4 and 7.5 show some of Adam's analysis, in which he found the benefits of the training were $24,615 but the direct costs were $25,000.

Table 7.4: Adam's ROI analysis
Task | Result
1. Focus on a unit of measure. | Reduction in number of complaints.
2. Determine a value of each unit. | Take an average cost per complaint, including direct and indirect costs—in this case, $547.
3. Calculate the change in performance data. | Six months after the program, there were 50 fewer complaints, with 30 of those directly attributed to supervisors as a result of techniques taught in the training program.
4. Determine an annual amount for the change. | It was decided an annual reduction of 45 complaints was conservative and realistic.
5. Calculate the total value of the improvement. | Total value of improvement attributable to training was 45 × $547 = $24,615.

Table 7.5: Other organizations' training ROI that Adam researched
Study or setting | Target group | Program description | Business measures | ROI
Verizon Communications | Training staff,
customer service | Customer service skills training | Reduced call escalations | –85%
Retail Merchandise Company | Sales associates | Retail sales skills | Increased sales revenues | 118%
U.S. Department of Veterans Affairs | Managers, supervisors | Leadership competencies | Cost and time savings; reduced staff requirements | 159%
Source: Phillips, J. J., & Phillips, P. P. (2006). The ROI fieldbook. Copyright © 2006 International Society for Performance Improvement. New York: Wiley.
Reprinted with permission of John Wiley and Sons.

During his research on training evaluation, Adam saw that the results could have been much worse; in fact, he read that Verizon had a more extensive customer service training that had an astounding –85% ROI! "Wow!" Adam uttered aloud. "Evaluation cannot be overlooked!"

Consider This
1. Adam agreed that the skills outcome for the customer service training was a success. Specifically, after the training, anyone who now called the company heard a pleasant and happy greeting: "Hello, So-and-so speaking. How can I help you?" In the final analysis, does it really matter if the ROI was –1.6%?
2. What measures could Adam have taken to ensure a positive ROI?
3. Do you think the training company that Adam contracted had an ethical obligation to ensure a positive ROI? Specifically, could they have charged less and gotten the same result?

Summary and Resources
Chapter Summary
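Adam's five-step calculation from Table 7.4 can be reproduced as a short sketch. The dollar figures come from the chapter; the function names (`roi_percentage`, `benefit_cost_ratio`) are illustrative, not from the text. Note that with these rounded inputs the computed ROI is about -1.5%, slightly different from the -1.6% the case reports, which presumably reflects additional costs not itemized in the table.

```python
def roi_percentage(benefits, costs):
    """Net benefits (benefits minus costs) divided by costs, times 100."""
    return (benefits - costs) / costs * 100

def benefit_cost_ratio(benefits, costs):
    """BCR: total dollar value of benefits divided by total costs."""
    return benefits / costs

complaints_avoided_per_year = 45   # step 4: conservative annual reduction
cost_per_complaint = 547           # step 2: average direct + indirect cost per complaint

# Step 5: total value of improvement attributable to training.
benefits = complaints_avoided_per_year * cost_per_complaint  # 45 x $547 = $24,615
costs = 25_000                     # direct cost of the training program

print(f"Benefits: ${benefits:,}")                            # Benefits: $24,615
print(f"BCR: {benefit_cost_ratio(benefits, costs):.2f}")     # BCR: 0.98
print(f"ROI: {roi_percentage(benefits, costs):.1f}%")        # ROI: -1.5%
```

A BCR below 1.0 and a negative ROI percentage say the same thing from two angles: the training returned slightly less than it cost.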
• The focus of formative evaluation is the evaluation of the process as the training is forming; summative evaluation, in contrast, focuses on the outcomes and specific training results—both the learning and the performance.
• For summative evaluation, we used Kirkpatrick's four-level taxonomy, which is depicted as a pyramid showing the four stages of evaluation: reaction, learning, behavior, and results.
• The chapter also discussed return on investment, sometimes known as level 5. With ROI, we can check how cost-effective and efficient the training program is, which in turn can lead to judgments on the value of training. A particular challenge in computing returns on investment in training concerns tangible versus intangible data, also known as hard versus soft data.
• Finally, we discussed why organizations often neglect evaluation. The number one reason is that organization members do not value evaluation. In sum, neglecting training evaluation may be not only unprofessional, but also unethical.

Posttest
1. Summative evaluation of training assesses both ________.
a. learning-based and performance-based outcomes
b. training processes and training outcomes
c. readability and usability of the training materials
d. beta testing and pilot testing results
2. ________ occur(s) when employees apply new information learned in training to their jobs.
a. Summative evaluation
b. Accountability
c. Transfer of training
d. Organizational results
3. Trainees who gain a new attitude after diversity training have achieved which type of learning outcome?
a. a cognitive outcome
b. a psychomotor outcome
c. an affective outcome
d. a performance-based outcome
4. A trainee who learned from a training but does not demonstrate any resulting change in knowledge, skills, or attitudes is exhibiting ________.
a. negative transfer
b. passive transfer
c. null transfer
d. zero transfer
5. Which possible outcomes of training, in Kirkpatrick's model, are the hardest to isolate to a particular training program?
a. level 1
b. level 2
c. level 3
d. level 4
6. A manager translates a safety training's results into a dollar amount, determining how much money has been saved by reducing workplace accidents. She next divides this amount by the total amount the company paid to hold the training. Which calculation is the manager using?
a. a net benefit indicator
b. the benefit–cost ratio
c. the return on investment percentage
d. a monetization equation
7. Which of the following is considered an indirect or intangible benefit of training?
a. reduced job turnover
b. decreased injuries in the workplace
c. improved customer satisfaction
d. lower costs of workers' compensation
8. The number one reason more organizations do NOT conduct formal evaluations of trainings is that ________.
a. organization members do not believe evaluation is valuable
b. organization members lack understanding of the evaluation's purpose
c. the costs of an evaluation outweigh the benefits
d. the organization has had previous negative experiences with evaluation
9. Which model of program evaluation looks at how a program
not only satisfies a client but also contributes to society?
a. Kirkpatrick's four-level taxonomy
b. The CIPP model
c. Kaufman's five levels of evaluation
d. The CIRO approach
10. Which evaluation model focuses on evaluation as an approach to decision making?
a. Kirkpatrick's four-level taxonomy
b. The CIPP model
c. Kaufman's five levels of evaluation
d. The CIRO approach

Assess Your Learning: Critical Reflection
1. Explain how formative evaluation is linked to summative evaluation in the training evaluation process.
2. How dependent is level 1, reaction, on level 2, learning? How might a trainee learn something from a training workshop he or she thought was awful?
3. Could you make a case for continuing with a training program that is yielding a negative ROI?
4. If a training program is found to have a positive ROI, does this measure indicate that the training should be renewed? If not, why?
5. Describe some ethical problems that might occur if training evaluation is neglected.
6. As it relates to levels 2 and 3, learning and behavior, what is meant by the statement
"not everything learned is observable"?

Additional Resources
Web Resources
Jack Phillips's ROI Institute: http://www.roiinstitute.net
The Bottom Line on ROI: The Jack Phillips Approach. Canadian Learning Journal, 7(1), Spring 2003: http://www.learning-designs.com/page_images/LDOArticleBottomLineonROI.pdf
Evaluation of Training Effectiveness: http://www.youtube.com/watch?v=5HqEfxz5YNU
For information on outcome evaluation: http://www.tc.umn.edu/~rkrueger/evaluation_oe.html
For more on Kirkpatrick's four levels of evaluation model: http://www.businessballs.com/kirkpatricklearningevaluationmodel.htm
A government website on training and development policy: http://www.opm.gov/wiki/training/Training-Evaluation.ashx
More information on how to measure training effectiveness: http://www.sentricocompetencymanagement.com/page11405617.aspx
More on formative and summative evaluation: http://www.nwlink.com/~donclark/hrd/isd/types_of_evaluations.html
More on ROI in training and development: http://www.shrm.org/education/hreducation/documents/09-0168%20kaminski%20roi%20tnd%20im_final.pdf and http://www.shrm.org/Education/hreducation/Pages/ReturnonInvestmentTrainingandDevelopment.aspx
Measuring ROI on learning and development: http://www.astd.org/Publications/Books/Measuring-ROI

Further Reading
American Society for Training & Development. (2013). State of the industry report. Alexandria, VA: ASTD.
Boggs, D. (2014). E-learning benefits and ROI comparison of e-learning vs. traditional training. Retrieved from SyberWorks website: http://www.syberworks.com/articles/e-learningROI.htm
Clark, D. (2013). Introduction to instructional system design. Retrieved from Big Dog & Little Dog's Performance Juxtaposition website: http://www.nwlink.com/~donclark/hrd/sat1.html
Kirkpatrick, D. L. (2009). Evaluating training programs: The four levels. Berrett-Koehler.
Phillips, J. J., & Phillips, P. P. (2012). Proving the value of HR: How and why to measure ROI. Alexandria, VA: Society for Human Resource Management.
Piskurich, G. M. (2010). Rapid training development: Developing training courses fast and right. New York: Wiley.
US Department of Health and Human Services. (2013). Tips and recommendations for successfully pilot testing your program. Retrieved from http://www.hhs.gov/ash/oah/oah-initiatives/teen_pregnancy/training/tip_sheets/pilot-testing-508.pdf

Answers and Rejoinders to Chapter Pretest
1. true. Formative evaluation can be seen as a "try it and fix it" process, since it takes place while the training is still being developed. Ideally, any deficiencies are uncovered before the program is offered to an external audience.
2. false. Although trainees' feedback can reveal a lot about trainers' strengths and weaknesses, this is not typically the main reason for evaluating whether trainees found a
session interesting and useful. More significantly, employees' satisfaction with a training session predicts how much they learn from it.
3. false. Although useful in many situations, return on investment is time-consuming to calculate and is NOT valuable in all situations. For example, trainings that are very short, are required by legislation, or are necessary for learners to gain basic skills for their roles will not benefit from having return on investment calculated.
4. true. According to some surveys, only about 20% of organizations conduct formal evaluations of the effectiveness of their trainings, despite the fact that many experts consider this unprofessional.
5. true. The American Evaluation Association holds that high-quality evaluation is an essential part of organizations' social responsibility. High-quality evaluation is included in the association's code of ethics for organizations.

Answers and Rejoinders to Chapter Posttest
1. a. Summative evaluation looks at both the short-term learning-based outcomes and the long-term performance-based outcomes of a training. Learning-based outcomes include employees' assessments of whether or not they learned anything, whereas performance-based outcomes address how the training influenced the employees' behavior or the organization's return on investment.
2. c. Transfer of training describes the extent to which trainees apply what they learned in the training to the workplace, transferring their new learning. It is a longer-term, performance-based outcome measured in summative evaluation.
3. c. As opposed to cognitive outcomes, which link to fact- or procedure-based knowledge, and psychomotor outcomes, which link to skills, affective learning outcomes describe changes in attitude as a result of the new learning.
4. d. In zero transfer of training, evaluations show that learning has occurred, but no changes in KSAs are observed. This tends to occur when a trainee is able to apply learning but is not willing to apply it.
5. d. Level 4 outcomes include improvements to the overall organization's functioning and bottom line. These are particularly difficult to isolate or attribute to a training, because time must elapse before they can be evaluated. During that time, other factors may have influenced the overall organization, making it hard to know whether the training was responsible for the results.
6. b. The benefit–cost ratio (BCR) divides the total dollar value of a training's benefits by the cost of the training. BCR is one common way of expressing a training's return on investment.
7. c. Lower job turnover, reduced workers' compensation premiums, and decreased workplace injuries are all examples of direct or tangible benefits. Improved customer satisfaction, on the other hand, is considered an indirect or intangible benefit, along with others such as improved work relationships and organizational morale. It is harder to assign a dollar amount to these intangible benefits.
8. a. The most common reason organizations do not conduct evaluations is that they do
  • 54. 8. a. The most common reason organizations do not conduct evaluations is that they do not yet understand the benefits that evaluation can bring. The explanations for this are varied and may include a lack of understanding of how evaluation is used. 9. c. Kaufman’s original four-level organizational elements model was modified to add a fifth level that addresses societal outcomes. This level looks at how a performance improvement program benefits clients and society as a whole. 10. b. The CIPP model is a decision-focused approach that attempts to directly relate evaluation to the needs of program decision makers. It emphasizes systematically providing the information needed for program operation and management. Summary and Resources Chapter 7 Key Terms accountability The willingness to accept responsibility or to account for one’s actions. achievement tests Tests designed to mea- sure the degree of learning that has taken place. affective outcomes Attitudes; focuses on changes in attitudes as a function of the new learning.
  • 55. antecedent state The organizational environment prior to the training, on which posttrained performance depends; for example, how effective and efficient the performance is. cognitive outcomes Knowledge; outcomes that show the degree to which trainees acquired new knowledge, such as principles, facts, techniques, procedures or processes. confounding variable Any factor that obscures the effects or the impact of the training. control group A group used in order to statistically manage and separate the impact of other variables so that the unique effect of the training intervention can be assessed. cost benefit The relationship between the cost of an action and the value of the results. experimental group A group of subjects exposed to an experimental study. four-level training evaluation tax- onomy A theory developed by Donald Kirkpatrick and used to determine the effectiveness of the training and develop- ment process, depicting both the short-term learning outcomes and the long-term per- formance outcomes at four levels: reaction, learning, transfer, and results. future state The posttraining organiza-
  • 56. tional environment, on which performance from a well-trained employee base has an effect; for example, a more effective and efficient environment. isolation Isolating any subsequent perfor- mance improvement to the training itself. learning outcomes Results that are estab- lished during the analysis and design phases of ADDIE; cognitive outcomes (knowledge), psychomotor outcomes (skills), and affective outcomes (attitudes). negative transfer A transfer demonstrated when KSAs are at less-than-pretraining levels. organizational results Outcomes or results that contribute to the functioning of an organization, such as business profits. performance drivers Changes in financial outcomes or other metrics that should indi- rectly affect financial outcomes in the future. performance tests Tests that require the trainee to create a product or demonstrate a process. positive transfer A transfer demonstrated when positive changes in KSAs are observed. posttest A test administered after a pro- gram to assess the level of a learner’s knowl- edge or skill.
  • 57. psychomotor outcomes Skills; assess- ment is based on the level of new skills as a function of the new learning, as seen, for example, in newly learned listening skills, conflict-handling skills, or new motor or manual skills. questionnaires A set of evaluation ques- tions asked of participants, who give their ratings for various items (for example, Strongly Agree, Agree, Neutral, Disagree, or Strongly Disagree); or open-ended items that allow participants to respond to any changed attitudes in their own words (for example, “How do you feel about diversity in the workplace?”). reaction The first level in Kirkpatrick’s four-level training evaluation, in which the evaluation assesses whether the trainees liked the training session per se; it is also a Summary and Resources Chapter 7 good predictor of the effectiveness of the next two levels of evaluation. return on investment (ROI) percentage A percentage calculated by subtracting the costs from the total dollar value of the ben- efits to produce the dollar value of the net benefits, and then dividing this amount by the costs and multiplying the result by 100
  • 58. to produce a percentage. return on training investment (ROI) An analysis that evaluates the cost benefit of a training program via evaluation of the benefits of the training versus the costs; sometimes called level 5 on top of Kirkpatrick’s four-level training evaluation taxonomy. transfer of training An evaluation that assesses whether the participants of the training program applied their new learning from the training setting to the workplace; the ability of trainees to apply to the job the knowledge and skills they gain in training. zero transfer A transfer of training that is demonstrated if learning occurs but no changes are observed in KSAs. 8Transfer of Training Lisa F. Young/iStock/Thinkstock Learning Objectives After reading this chapter, you should be able to: • Explain the framework for training transfer.
  • 59. • Describe the accountability for transfer of training. • Summarize the barriers to transfer. • Understand how the learning organization supports transfer. While Mark Twain once said, “Everybody talks about the weather, but nobody does anything about it,” the same could be said about training transfer: “Everybody talks about training transfer, but nobody does anything about it.” —Anonymous Introduction Chapter 8 Pretest 1. Supervisors’ support is essential for helping employees transfer what they learned in training to their job tasks. a. true b. false 2. Trainees who have more responsibility for their own learning are more likely to trans- fer that learning from one situation to another. a. true b. false 3. The trainer is generally considered the party most responsible
  • 60. for whether trainees apply the new learning to their work. a. true b. false 4. Research has found that most barriers to trainees’ application of new skills are caused by the trainees themselves. a. true b. false 5. In a learning organization, team and group learning take precedence over personal mastery. a. true b. false Answers can be found at the end of the chapter. Introduction As early as 1957 James Mosél, a professor of psychology at George Washington University and the founding director of the university’s industrial psychology program, observed that training often seemed to make little or no difference in job behavior (Broad, 2005; Mosél, 1957). Since that time, training transfer (Kirkpatrick’s level 3)—the degree to which train- ees demonstrate new behaviors by effectively applying to the job the KSAs gained in a train- ing context—has been what Dennis Coates (2008), the CEO of Performance Support Systems, calls the Holy Grail of workplace training programs. In fact, more than half a century later, two separate longitudinal research studies that aggregated individual studies of training transfer
  • 61. estimated that still as little as 10 to 20% of the knowledge or skills taught in training pro- grams is effectively transferred to the workplace (Arthur, Bennett, Edens, & Bell, 2003; Van Wijk, Jansen, & Lyles, 2008). As this chapter will discuss, training transfer not only depends on the trainee’s willingness and ability, but also on an organizational climate that encourages transfer—both tactically and strategically. The importance of the organizational climate is seen, for example, in a learn- ing organization (Senge, 1990), an organization that, through sharing and dialogue, promotes A Framework for Training Transfer Chapter 8 positive training transfer. This chapter will also discuss whether supervisors, trainees, or trainers are responsible for the transfer of training (Broad, 2005; Kopp, 2006). 8.1 A Framework for Training Transfer As Figure 8.1 shows, Baldwin and Ford (1988) first illustrated the process of training trans- fer by showing how, in addition to learning (level 2) from the training, training transfer was linked to three factors or dimensions, namely: trainee characteristics, training design, and work environment. The premise here is that each factor contributes to the success of training transfer and therefore to workplace performance. Let us break down each factor.
  • 62. Figure 8.1: Training transfer model There are key dimensions linked to the transfer of training including trainee characteristics, the training design, and the work environment itself. f08.01_BUS375.ai Learning PERFORMANCETRANSFER Trainee characteristics • Ability • Motivation Training design • Principles of learning • Training content Work environment • Support • Opportunity to use Source: Adapted from Baldwin, T. T., & Ford, J. K. (1988). Transfer of training: A review and directions for future research. Personnel Psychology, 41, 63–105. Trainee Characteristics Trainee characteristics include how willing and able the trainee is to apply the training. Therefore, although other factors will influence whether the training is transferred, transfer depends in no small part on the states of ability and willingness, as Table 8.1 summarizes. The desired posttraining state is one in which the trainee is able
  • 63. and willing to apply the new learning to the job. As Chapter 2 discussed, specific leadership styles, per Hersey and Blanchard’s situational leadership theory, can influence or act upon a follower’s willing- ness and ability (Daft, 2014; Hersey & Blanchard, 1977). For example, with a willing and able (R4) trainee, the transfer is voluntary, and following training, a supervisor might merely monitor the trainee to ensure that workplace barriers are limited. A Framework for Training Transfer Chapter 8 Table 8.1: Trainee ability and willingness to transfer Trainee type Ability Willingness Transfer potential R1 – – None R2 – + Low; stimulated R3 + – Low; stimulated R4 + + High; voluntary For a trainee who remained not able but willing (R2) following a training, a supervisor might spend more time explaining and clarifying the training to the trainee. Doing so might uncover not only a need for additional training, but also perhaps a learning style or disability issue the employer needs to accommodate. For example, in the United Kingdom, new legisla-
  • 64. tion makes all workplaces dyslexia-friendly workplaces (Dyslexia Action, n.d.). Trainees who are able but not willing (R3) to apply the new learning to the workplace may need an attitudinal intervention; in these cases the supervisor intervenes with the trainee to address aspects of self-efficacy, commitment, or interpersonal skills (James, 1890; Noe, 2012). The goal of these interventions with R2 and R3 trainees is for the supervisor to stimu- late the transfer that does not happen voluntarily (Broad, 2000; Broad, 2005). Did You Know? Transfer of Learning Versus Transfer of Training Semantically, although some assert that the terms transfer of learning and transfer of training are synonymous (Cormier & Hagman, 1987), sometimes distinctions are made. One distinction is when the focus is on cognition and knowledge acquisition—underscoring that not all that is learned is observable. For example, when a new customer service agent tries out the newly memorized sales script on a caller, the term transfer of learning may be more appropriate. When there is a focus on the transfer of particular motor skills and outcome- based behavior, such as when an employee from a cable company is trying for the first time to hook up a DVR to a television, then transfer of training would be used. Finally, if trainees routinely leave the training programs unable and unwilling (R1) to
  • 65. apply the new learning, this outcome suggests a systemic problem; perhaps management should review recruiting practices with the human resources department (Alagaraja, 2012; Blanchard & Thacker, 2010). A Framework for Training Transfer Chapter 8 Training Design Training design is the dimension of the transfer framework that refers to factors built into the training program to increase the chances that transfer of training will occur (Baldwin & Ford, 1988; Ford, 2014; Noe, 2012; Werner & DeSimone, 2011). Two particular theories of transfer have implications for training design: theory of identical elements and cognitive theory, first proposed by Edward Thorndike in 1928. Theory of Identical Elements The theory of identical elements uses the idea that the amount of transfer between the famil- iar situation and the unfamiliar one is determined by the number of elements that the two situations have in common (Thorndike & Woodworth, 1901). That is, transfer of training is enhanced when what trainees learn in the training session matches what they will be doing on the job (Orata, 2013; Thorndike & Woodworth, 1901). In his experiment to underscore the importance of identical elements, Thorndike had participants judge the area of rectangles, and then he tested participants on the related task of estimating
  • 66. the areas of circles and tri- angles. Transfer was assessed by the degree to which learning skill A (estimating the area of squares) influenced skill B (estimating the area of circles or triangles). Thorndike found little evidence of transfer and, from this finding, concluded that “transfer of a skill was directly related to the similarity between two situations” (Thorndike & Woodworth, 1901, p. 15). As a result, transfer is based on making the training environment similar to the job environ- ment; this is known as near transfer—metaphorically, the transfer distance between the training environment and the application to the job environment (Ford, 2014; Holton & Bald- win, 2003; Wan, 2013). An example of near transfer would be a training for a department store cashier in which new employees train on a cash register that matches the registers the department actually uses. An extension of the theory of identical elements is the concept of stimulus generalization, which emphasizes the transfer of general principles and maintenance of skills. This concept is known as far transfer, the application of learned behavior, content knowledge, concepts, or skills in a situation that is dissimilar to the original learning context (Ford, 2014; Holton & Baldwin, 2003). Suppose that a trainee had learned from a workshop to use conflict-handling skills not only at work, but also at home with his spouse; this situation would be an example of far transfer. Table 8.2 gives some everyday examples of near and far transfer.
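The identical-elements idea lends itself to a simple sketch: count how many task elements the training context shares with the job context, and treat the overlap as a rough predictor of whether near or far transfer is at stake. The function, the element sets, and the cashier scenario below are illustrative assumptions for this chapter's example, not part of Thorndike's original work.

```python
def shared_element_ratio(training_elements: set, job_elements: set) -> float:
    """Fraction of the job's task elements that also appeared in training:
    a rough proxy for Thorndike's 'identical elements' between two situations."""
    if not job_elements:
        return 0.0
    return len(training_elements & job_elements) / len(job_elements)

# Hypothetical task elements for the department store cashier example.
training = {"scan item", "open drawer", "count change", "print receipt"}
matching_register = {"scan item", "open drawer", "count change", "print receipt"}
self_checkout_kiosk = {"scan item", "tap screen", "email receipt"}

print(shared_element_ratio(training, matching_register))              # 1.0 (near transfer likely)
print(round(shared_element_ratio(training, self_checkout_kiosk), 2))  # 0.33 (far transfer needed)
```

A high ratio corresponds to the matched-register case (near transfer); a low ratio signals that the trainee must generalize principles rather than repeat identical elements.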
• 67. Table 8.2: Examples of near and far transfer
Near: Transfer from using one type of coffee mug to another type of mug | Far: Transfer from drinking hot coffee using a mug to drinking hot coffee using a thermos (rule: do not burn yourself)
Near: Transfer from using one shuttle bus to another | Far: Transfer from reading the shuttle bus schedule to reading an airline schedule
Near: Transfer from using a knife and fork to using a different size knife and fork | Far: Transfer from using a knife and fork to using chopsticks
Source: Adapted from Svinicki, M. D. (2004). Learning and motivation in the postsecondary classroom. New York: Wiley.

If we consider near and far transfers as transfer outcomes, then the processes of transfer linked to near and far are known as low-road transfer and high-road transfer (Doyle, McDonald, & Leberman, 2012; Perkins & Salomon, 1988; Salomon & Perkins, 1989). Specifically, low-road transfer, which facilitates near transfer, occurs when the context is so familiar or perceptually similar (Ford, 2014; Svinicki, 2004) to what the trainee already knows that a reflexive or automatic triggering of transfer occurs without conscious contemplation; this unconscious competence is known as automaticity (Bargh, 2013). For example, a trainee hired as a stockroom forklift operator who has experience driving Caterpillar™ forklifts would most likely have a low-road near transfer, even though the hiring company uses Komatsu™ brand forklifts.

In high-road transfer, linked to far transfer, the trainee must consciously draw on previous knowledge, skills, or attitudes. The trainee now applies conscious competence of previous KSAs to perceptually different, but conceptually similar, contexts (Ford, 2014; Perkins & Salomon, 1988; Svinicki, 2004). An example of high-road far transfer is a new marketing department employee drawing on the concepts of game theory learned in college to analyze the competition and the interactions between manufacturers and retailers (Chatterjee & Samuelson, 2013).

HRD in Practice: High-Road, Far Transfer
Justin Moore is the CEO of Axcient, a rapidly growing cloud services provider. Moore, now 31, is also a former star of the youth chess circuit. Moore does not play much competitively anymore, but even so, the kinds of thinking learned from his days as a chess prodigy have deeply informed the way he runs a successful start-up. In a sense, Moore does still play chess
• 69. every day—by running Axcient. "Of course, it's a business commonplace to recommend forethought. But, in chess, the metaphor is literalized. You're constantly looking two, three, four moves ahead," explains Moore. "If you do this move, what's the countermove? What are all the countermoves? And then, for all of those, what are all of my potential countermoves? Chess is constantly teaching you to think about what comes next, and what comes after that, and what the repercussions could be." In a chess game your mind is constantly running permutations of decision trees. In a business your mind should be doing the same. A chess match is a war of attrition. If a soccer match is egregiously lopsided at halftime, the game still progresses. But, if White accidentally loses his queen a few moves into the game, it is likely he will resign. A properly matched chess game is often fought to the point that only a few pawns, pieces, and the opposing kings remain—a bare-board state known as endgame. The entirety of a chess game is all a prelude to endgame. "Chess is about getting to endgame," says Moore. "What happens between the start and then doesn't necessarily matter. You could lose more pieces or a more valuable piece, and at the end of the day, if you capture the opponent's king, you win the game." Pattern recognition. Playing chess teaches you to recognize patterns: the tempting bishop sacrifice that actually led you into a trap, the queen swap that
• 70. looked favorable but prevented you from castling. You play; you learn. Moore tells a story about how pattern recognition helped his business. In 2011 Moore and his team were trying to improve customer satisfaction. They worked from the assumption that one metric in particular—case backlog—was the best predictor of customer satisfaction. It seemed reasonable to assume that if you had low or zero backlog, your customers would be happy. "It turned out we were wrong," says Moore. After 3 months of wandering through the weeds, Moore's team realized that a better predictor of customer satisfaction was the time it took to respond to a customer request, combined with frequency of updates. A great chess player has a deep awareness of each piece's role on the board. A bishop has different abilities than a knight has, and its powers are expanded or limited by a board's pawn structure. In some ways chess is a laboratory for human resources problems. "You have to understand the strengths and weaknesses of the team, of your employees," says Moore. "You have to understand that the pawn has its role, and it's a very important one, just as important as the queen, rook, or bishop. Every piece is critical, and the only way to win is to
• 71. leverage all those pieces' skill sets together."
Source: Zax, D. (2013, February 19). Six strategy lessons from a former chess prodigy who's now a CEO. Fast Company. Retrieved from http://www.fastcompany.com/3005989/innovation-agents/6-strategy-lessons-former-chess-prodigy-whos-now-ceo

Consider This
1. How did Moore draw on the pattern recognition in chess to solve his customer service issue?
2. In what ways did the game of chess condition Moore to be proactive versus reactive?
3. What was the significance of Moore's example of differentiating between soccer and chess?

Cognitive Theory of Transfer
The cognitive theory of transfer is based on trainees' ability to retrieve, manage, and deploy learned capabilities. For training design, the richer the connections between the skill and real-world knowledge, the better the chance of retrieval, and therefore, the better the likelihood of transfer (Baldwin & Ford, 1988; Noe, 2012; Stolovitch & Keeps, 2011). Specifically, transfer is more probable if the trainees can see the potential applications of the training content to their jobs; this idea is consistent with adult-learning principles set forth by Malcolm Knowles (Hafler, 2011; Knowles, 1973):
• Adults bring life experiences and knowledge to learning experiences.
• Adults are goal oriented.
• 72. • Adults are relevancy oriented.
• Adults are practical.

As it relates to the cognitive methods of knowledge recall, the late educational psychologist Robert Gagné's classic nine events of instruction (Gagné, 1965) is still used today (Gagné, Wager, Golas, & Keller, 2005; Romiszowski, 2013) in instructional design. Table 8.3 summarizes how—after gaining the trainee's attention (for example, level 1, reaction) and ensuring that the trainee is aware of the training objectives—stimulating recall of prerequisite learning is reinforced by subsequent events that ultimately lead to enhanced retention and transfer; learning processes include semantic encoding (learning in context), opportunities for reinforcement, and providing cues to assist in retrieval. As discussed in
• 73. Chapter 2, cues can include job aids, which can enhance transfer. Job aids can be used during actual performance of tasks; they give information that helps the trainee know what actions and decisions a specific task requires (Stolovitch & Keeps, 2011; Willmore, 2006).

Table 8.3: Gagné's nine events of instruction
Instructional event | Relation to learning process
1. Gaining attention | Reception of patterns of neural impulses
2. Informing learner of the objective | Activating a process of executive control
3. Stimulating recall of the prerequisite knowledge | Retrieval of prior memory to working memory
4. Presenting the stimulus material | Emphasizing features for selective perception
5. Providing learning guidance | Semantic encoding; cues for retrieval
6. Eliciting the performance | Activating response organization
7. Providing feedback about performance | Establishing reinforcement
8. Assessing performance | Activating retrieval; making reinforcement possible
9. Enhancing retention and transfer | Providing cues and strategies for retrieval
Source: Adapted from Gagné, R. M. (1965). The conditions of learning. New York: Holt, Rinehart & Winston.

Self-Directed Learning
Part of training design should include aspects of self-management, designing the training to use a trainee's propensity and level for self-direction (Broad, 2005; Guglielmino, 2001; Noe, 2012; Rothwell & Sensenig, 1999; Saks, Haccoun, & Belcourt, 2010). Self-directed learning is the level of initiative in the trainee's motivation to acquire the new ability and is linked to a trainee's self-efficacy (Bijker, Van der Klink, & Boshuizen, 2010). Self-directed trainees are empowered to take more responsibility in their learning endeavors; as a result, self-directed trainees are more apt to transfer learning, in terms of both knowledge and skill, from one situation to another (Baldwin & Ford, 1988; Guglielmino, 2001; Knowles et al., 2012). As described in Chapter 5, self-direction does not always equate to self-teaching; for example, for purposes of reinforcing transfer, a self-directed trainee may choose to be shown again how to do a task rather than self-teaching.

Work Environment
Training transfer has also been linked to the trainees' perceptions about the work environment (E). As discussed earlier and depicted here in Figure 8.2, this idea is consistent with the performance formula, whereby not only must E be positive (+), but perceptions about E must remain positive (+) as well.

For transfer to occur, the trainee must perceive that the work environment has a climate for transfer. A climate for training transfer includes factors such as level of supervisor support, opportunities to practice trained tasks, and openness to change (Baldwin & Ford, 1988; Blume, Ford, Baldwin, & Huang, 2010; Rouiller & Goldstein, 1993; Salas, Tannenbaum, Cohen,
• 75. & Latham, 2013). Holton, Bates, and Ruona (2000) found specific variables that influenced the transfer climate; these include supervisor support or sanctions, resistance or openness to change, levels of coaching or mentoring, and positive or negative personal outcomes. Peer support, too, was seen as a determinant of trainee transfer (Broad, 2000; Broad, 2005; Burke & Hutchins, 2008; Holton & Baldwin, 2003; Holton et al., 2000; Rouiller & Goldstein, 1993), although not any stronger than supervisor support (Van den Bossche, Segers, & Jansen, 2010). Table 8.4 lists frequencies of transfer categories.

Figure 8.2: Environment includes trainee perceptions
The organizational environment not only has to have an actual climate for training transfer, but the transfer-friendly environment must also be perceived by the employees. [The figure depicts the performance formula, Performance = f(KSAs × M × E), with the environment (E) term split into the actual climate for transfer and the climate as perceived by the trainee, alongside the trainee's ability and willingness.]
Source: Adapted from Blanchard, P. N., & Thacker, J. W. (2010). Effective training: Systems, strategies, and practices (4th ed.). Upper Saddle River, NJ: Pearson.

Table 8.4: Frequencies of transfer categories
Transfer influences: Learner characteristics 3 (2%) | Trainer characteristics 8 (4%) | Design and development 104 (46%) | Work environment 112 (49%)
Time period: Before 28 (12%) | During 70 (31%) | After 74 (32%) | Not time bound 56 (25%)
Stakeholder support: Trainee 53 (23%) | Trainer 109 (48%) | Supervisor 57 (25%) | Peer 2 (1%) | Organization 7 (3%)
Source: Burke & Hutchins, 2008.
Note. Emergent factors are in italics. Transfer influences were coded as 1 = learner characteristics, 2 = trainer characteristics, 3 = design and development, 4 = work environment; time period was coded as 1 = before, 2 = during, 3 = after, 4 = not time bound; stakeholder support was coded as 1 = trainee, 2 = trainer, 3 = supervisor, 4 = peer, 5 = organization.

Food for Thought: Apply Transfer of Training
Practice some key ideas in transfer of training with Baylor University's E-Learning Module: http://business.baylor.edu/knue/3345TOT.
Consider This
1. Using the cognitive theory of transfer, what would be some techniques that would
• 78. enhance transfer?
2. What would be examples of peer support in training transfer?
3. What is meant by the term intellectual capital as it relates to training transfer?

Trainees are also more motivated to transfer training when it is part of pursuing desirable outcomes or rewards (or to avoid undesirable outcomes). The value trainees place on such outcomes is known as valence, and the trainee's belief that he or she will actually receive that outcome or reward when the performance expectation is met is known as instrumentality. This is part of the expectancy process theory of motivation (Vroom & Yetton, 1973) that influences certain decisions that employees will make—in this case, transfer of the training. Positive outcomes include not only extrinsic rewards such as salary increases and bonuses, but also intrinsic rewards such as opportunities for advancement and recognition (Broad, 2005; Holton & Baldwin, 2003; Holton et al., 2000; Vroom & Yetton, 1973).

On the Quality of Transfer: Negative and Positive
Not all transfer is equal, and when managing transfer, we need to consider two states:
• Positive transfer. Near and far transfer enables what is known as positive transfer, which occurs when workplace performance improves because of the training. Positive transfer is more likely when the trainee's prior learning facilitates the trainee's acquisition of the new learning or skills. For example, a trainee's prior experience in learning an older inventory package expedites his or her learning procedures for using the newer package. This concept is consistent with Knowles's principles of adult learning, where prior experience informs new learning (Knowles, 1973).
• Negative transfer. When trainee performance worsens following the training, this is considered negative transfer. Specifically, negative transfer can happen when a trainee's prior learning interferes with the acquisition of the new learning or skills. For example, users who switch from a BlackBerry phone, with its physical keyboard, to an iPhone, with its virtual keyboard, find it more difficult to type and text than users who are switching from a Samsung phone, which also has a virtual keyboard. This idea is consistent with Hedberg's (1981) assertion that there are times, in fact, when adults have to unlearn ideas before new learning can occur.

Training transfer is not just a binary proposition. That is, we do not just evaluate whether or not transfer occurred (zero transfer is when we observe no change in the trainee's KSAs).
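The three outcome states, positive, negative, and zero transfer, can be sketched as a simple comparison of performance measured before and after training. The function name, the tolerance parameter, and the typing-speed figures below are illustrative assumptions, not a standard evaluation instrument.

```python
def classify_transfer(pre_score: float, post_score: float, tolerance: float = 0.0) -> str:
    """Classify a trainee's transfer outcome by comparing on-the-job
    performance before training with performance after training."""
    delta = post_score - pre_score
    if delta > tolerance:
        return "positive"  # performance improved after the training
    if delta < -tolerance:
        return "negative"  # prior learning interfered; performance worsened
    return "zero"          # no observable change in the trainee's KSAs

# Hypothetical typing speeds (words per minute) for the keyboard-switch example.
print(classify_transfer(pre_score=52.0, post_score=44.0))  # negative
print(classify_transfer(pre_score=40.0, post_score=55.0))  # positive
print(classify_transfer(pre_score=50.0, post_score=50.0))  # zero
```

The tolerance parameter acknowledges measurement noise: small fluctuations around the pre-training baseline are treated as zero transfer rather than as evidence of improvement or decline.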
• 80. Specifically, we also must be mindful that new training may negatively affect the trainee, and the resulting performance not only may fail to improve but, in fact, may become worse than it was before the training.

8.2 Accountability for Training Transfer
At the end of the (training) day, who is responsible for level 3's training transfer? Is it the trainer, the trainee, or the trainee's supervisor? While there is no clear HRD-policy answer (Burke & Hutchins, 2008), many training scholars and practitioners have suggested a transfer trinity, or triad, consisting of the trainer, the trainee, and the manager (Blume et al., 2010; Haskell, 2001; Rummler & Brache, 1990); each one plays a role to ensure transfer success. (See Figure 8.3.) Others propose that management is ultimately responsible for ensuring transfer (Esque & McCausland, 1997), and still others place more on the trainer's shoulders (Broad & Newstrom, 1992; Broad, 2005; Kopp, 2006). Here, trainers not only lead their training toward voluntary transfer, but also stimulate the transfer after the training event, including having trainers partner with supervisors and managers to support trainees in their new learning.

Figure 8.3: Training transfer trinity
Although there has been no consensus about who is ultimately accountable for the transfer of training (or, "where the transfer buck stops"), many in the field agree that a shared
• 81. accountability exists between the trainer, trainee, and direct manager—the so-called training trinity. [The figure depicts a triangle connecting the direct manager, the trainer, and the trainee, linked by assessment, training, and reinforcement.]
Source: Adapted from Coates, D. (2008). Enhance the transfer of training. Alexandria, VA: ASTD, p. 7.

Using diabetes education and training as a backdrop, Kopp (2006) specifically suggested that the trainer be primarily accountable for training transfer; he argued that trainers should take ownership of level 3, so that a distinction could be made between effective trainers and ineffective ones. He viewed the trainer as individually necessary and jointly sufficient in training transfer. That is, although the trainer alone is not sufficient for (and does not guarantee) transfer, the trainer was fundamentally necessary—and it follows, therefore, that the trainer cannot be absolved of primary accountability. Burke and Saks (2009) seek commonality rather than a single-minded construct; they conclude that many stakeholders can (and should) be held accountable for transfer and the transfer-related activities that they can affect.

The Who, What, and When of Transfer
Broad and Newstrom's (1992) extensive research on training transfer included assembling a panel of experts and—using a Delphi method in which the rankings from the experts are collated—the perceptions of roles in transfer strategies were given a final rank in every phase of transfer: before, during, and after (see Table 8.5). (Also see the Food for Thought feature box titled "Transfer Strategies," which provides a link to a summary of Broad and Newstrom's work.) One of their findings was that the most frequently used roles in transfer differed from the most influential roles in transfer during a given phase of transfer. For example, although the panel thought the manager had the most influential role before transfer (first), managers were actually ranked fifth in frequency of use before transfer.

Table 8.5: Frequency versus influence
Ranking—most frequently used roles in transfer (Before | During | After):
Trainer (facilitator) 2 | 1 | 7
Manager 5 | 6 | 9
Learner 8 | 3 | 4
Ranking—most influential roles in transfer (Before | During | After):
Trainer (facilitator) 2 | 4 | 8
Manager 1 | 9 | 3
Learner 7 | 5 | 6
Source: Adapted from Broad, M., & Newstrom, J. (1992). Transfer of training. Philadelphia, PA: Perseus Books.

For example, whereas the trainer was most frequently used in the total transfer process, the manager was thought to be the most influential in the transfer process, even given the manager's limited role during the training. This finding was consistent with Burke and Hutchins's (2008) more recent research, which confirmed that the role of trainers (48%) was more influential than the role of supervisors (25%) during training transfer. In their study, Burke and Hutchins selected training professionals and practitioners who were members of a large metropolitan chapter of ASTD and asked about the suggested best practices for enhancing and bolstering training transfer. Table 8.6 outlines recommended strategies and action items for each transfer agent.

Table 8.6: Action items of transfer agents
Manager | Before: Communicate that learning is a prime organizational objective. | During: Encourage full participation by ensuring the trainee's job is covered during the learning program. | After: Provide opportunities to practice and demonstrate new skills.
Trainer | Before: Provide clear description and precourse information to trainee and manager. | During: Ensure good delivery. | After: Provide follow-up consultation to maximize application.
Trainee | Before: Clear up daily activities prior to the learning program. | During: Participate actively and ask questions. | After: Discuss performance objectives and action plans with manager.
Source: Broad, 2000; Broad & Newstrom, 1992; Broad, 2005; Burke & Hutchins, 2008.

Food for Thought: Transfer Strategies
Listen to the Center for Corporate and Professional Development describe transfer strategies in every phase of transfer (before, during, and after): http://www.youtube.com/watch?v=cf2DoL4TDF4.
Consider This
1. How formalized should the responsibilities of manager, trainer, and trainee be prior to the training?
2. Is there a case to be made that the process of transfer should be organic and not hard coded? Why or why not?
3. Would the roles during transfer vary when it comes to informal or incidental learning? Explain your reasoning.

Manager or supervisor support for applying new skills has consistently been found to be a key factor affecting the success of the transfer process (Broad, 2000; Broad, 2005; Rouiller & Goldstein, 1993). Specifically, a manager's support and positive attitudes toward the trainee may result in opportunities to practice newly learned skills, whereas negative attitudes toward the trainee may cause the manager to assign unchallenging tasks that fail to allow the employee to practice newly learned skills. In sum, a trainee's manager may provide either more or fewer opportunities to perform newly learned skills (Broad & Newstrom, 1992; Ford, 2014; Hafler, 2011; Holton & Baldwin, 2003; Noe, 2012). Table 8.7 summarizes transfer support responsibility among the training transfer triad of manager, trainer, or trainee.
• 87. Table 8.7: Support per transfer agent
Support method | Implementing agent
Establish explicit objectives | Manager
Repetition of learning | Trainee
Evaluation and feedback | Manager
Use multiple examples | Trainee
Trainee selection | Manager
Supervisory support | Manager
Cultivation of meaning in material | Trainer and trainee
Source: Adapted from Cresswell, S. (2006). Practitioner guide to transfer of learning and training. Albany, NY: Rockefeller College of Public Affairs & Policy; Haskell, R. E. (2001). Transfer of learning: Cognition, instruction, and reasoning. Waltham, MA: Academic Press.

8.3 Barriers to Training Transfer
Many potential barriers affect training transfer, and these barriers are more likely to be situational, not dispositional; that is, these barriers affect the trainee but are not caused by the individual trainee (Broad & Newstrom, 1992; Burke & Hutchins, 2008; Noe, 2012). As part of their extensive research, Broad and Newstrom (1992) not only surveyed trainers and trainees from a range of organizations to rank barriers to training transfer, they also evaluated a collection of organizational case studies—including their own at Saturn Corporation, an automaker subsidiary of General Motors, that described how transfer was obstructed or enhanced (see Table 8.8).

Table 8.8: Barriers to training transfer
Rank (highest to lowest) | Organizational barrier
1 | Lack of reinforcement on the job
2 | Interference in the work environment
3 | Nonsupportive organizational structure