TRAINING IMPACT QUESTIONNAIRE
DeWine, S. (1987). Evaluation of organizational communication competency: The development of the communication training impact questionnaire. Journal of Applied Communication Research, 15(1-2), 113-127.
Purpose: The Training Impact Questionnaire (Training IQ) is a post-training instrument that measures employees' perceptions of their ability to use a tool taught in a training program.
Theory/Background: DeWine found that training and development programs conducted in-house by organizations lacked an effective process for evaluating the training. Other evaluation tools focus mostly on employee reaction to training and do not look at the long-term impact that training has on job performance. This instrument was created to fill the need for a training evaluation form that also looks at the benefits of training, the perceived skills, and the appropriate application of those skills in the workplace.
Description: The Training IQ is a 20-item questionnaire that presents declarative statements and asks respondents to rate each on a 5-point Likert scale ranging from (5) strongly agree to (1) strongly disagree. Items 2, 3, 8, 9 and 11-19 are reverse-coded. The questionnaire is meant to be administered two to four weeks after a training session; this delay gives employees enough time to use the new skill on the job.
There are two factors in this questionnaire. The first, "Relationship of training to job," measures the association between an employee's job requirements and the information taught during the training session. The second, "Skilled performance," measures the extent to which the employee uses the new skill on the job.
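The scoring rule just described (20 items rated 1-5, with items 2, 3, 8, 9 and 11-19 reverse-coded) can be sketched in code. The function below is a hypothetical illustration, not part of DeWine's instrument: it assumes responses arrive as a list of twenty integers and reverse-codes an item by mapping a response r to 6 - r before summing.

```python
def score_training_iq(responses):
    """Score a 20-item Training IQ response set (values 1-5).

    Items 2, 3, 8, 9 and 11-19 are reverse-coded per the instrument
    description: a reverse-coded response r contributes 6 - r.
    Returns the total score; higher totals indicate a more favorable
    perceived training impact.
    """
    # Item numbers (1-based) that are reverse-coded.
    REVERSE_CODED = {2, 3, 8, 9} | set(range(11, 20))
    if len(responses) != 20:
        raise ValueError("expected 20 item responses")
    total = 0
    for item_number, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError(f"item {item_number}: response {r} outside 1-5")
        total += (6 - r) if item_number in REVERSE_CODED else r
    return total
```

With all-neutral responses (twenty 3s) the total is 60, since 6 - 3 = 3 leaves neutral answers unchanged.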
TRAINING IMPACT QUESTIONNAIRE
The following statements are possible perceptions of an employee regarding previously conducted training and its impact on his or her ability to apply skills taught during training to the job. Please respond to each statement by placing the appropriate number in the blank to the left of each item.
5 = strongly agree
4 = agree
3 = neutral
2 = disagree
1 = strongly disagree
1. After attending this training program, I am interested in
attending other training
programs.
2. I don’t perform the skill on the job because the skill is too
difficult for me.
3. I use this skill regularly on the job.
4. Because of learning this skill I feel more comfortable about
doing my job.
5. Because of attending this training program, I feel better
about the company.
6. I learned to perform the tasks well in the training program,
but I could have
learned it just as easily from a manual or an instruction
sheet.
7. I think my participation in this training program will help
me to advance in the
company.
8. I didn’t learn this skill in the training program, so I had to
learn it on the job.
9. Work conditions don’t allow me to perform the skill the
way I learned it in
training, so I do the task differently on the job.
10. After training I would perform this skill with practice.
11. I don’t perform the skill on the job because the skill comes
up so rarely that I
forget how to do it.
12. I don’t perform the skill on the job because I didn’t learn
the skill in the
training program, so I get help to do the skill.
13. I had trouble learning the skill because the training program
was confusing.
14. I never perform this skill on the job.
15. The skill isn’t part of my job.
16. I don’t perform the skill because I was assigned a different
job.
17. I had trouble learning the skill in the training program
because there wasn’t
enough reference material.
18. I perform the skill differently on the job because the skill
doesn’t work the way
I learned it in training.
19. I perform the skill differently on the job because my
supervisor told me to do it
differently.
20. I learned to perform the task well in the training program
because the program
was effective.
Stakeholder Training Evaluation Activity
Fred Nickols provides a perspective that considers multiple stakeholders in training evaluation. Reflecting upon his perspective, complete the activity by following the steps below and using the form/table on the following page.
STEP 1:
Think of a training with which you have been involved.
STEP 2:
Write a brief description of the training.
STEP 3:
Consider all of the possible stakeholders involved in the
training and
list those in the center column labeled “Stakeholders” in the
table
below. Give consideration to all those in the organization (and
outside
of it) who could benefit in some way from the training.
STEP 4:
List the contributions made to the training by each group of
stakeholders.
STEP 5:
List the inducements taken away by each group of stakeholders.
Type of Training Evaluated (Provide a Brief but Specific Description Below)

Stakeholder Model of Evaluation
Contributions (Put In) | Stakeholders | Inducements (Take Out)
www.nickols.us
[email protected]
A Stakeholder Approach to Evaluating Training
Fred Nickols
© Fred Nickols 2003 (DRAFT: For Review Only – Do Not Cite without Permission)
Introduction
There are probably no more widely accepted "realities" or truisms in the world of training than the following:

• When it comes to evaluating training, the dominant model is "The Kirkpatrick Model" (TKM).
• TKM is rarely implemented in its entirety and training evaluations are usually confined to the "smiles test" (TKM Level 1: Trainee Reactions).
• There is considerable interest in evaluating training, particularly at the higher levels of TKM (i.e., on-the-job behavior change and business results) and in going beyond TKM (e.g., in determining the ROI of training or even its societal impact).
• There is abundant knowledge and an available supply of viable tools for evaluating training at all levels of TKM (and beyond).
• Despite the above-mentioned interest in and availability of tools for more robust efforts, evaluations of training remain mired in TKM Level 1.
Why is this? If evaluation is so important and if the means of
carrying it out exist, why do evaluations typically consist of little more than the famous "smiles test"? Is it
because the interest in evaluating training is
feigned? Is it because the costs of evaluating training outweigh
the benefits? Is it a case of diminishing
returns, that is, the higher up TKM an evaluation goes, the more
costly the evaluation and the less valuable
the information? Or is it perhaps the case that trainers are the
only ones interested in TKM – and in going
beyond it?
It is my view that the training community is committed to an
approach to evaluating training that, after more
than 40 years, has failed to capture the commitment and support
of other important constituencies, most
especially, that of the trainees, their managers and the senior
managers of the organizations in and for
which training is conducted. If this is true, then the issue isn’t
one of figuring out how to apply TKM – or
even of extending it – instead, the issue is one of finding some
other approach to evaluating training.
It is also my view that there is a better approach to evaluating
training – a stakeholder-based approach. Although the focus of this paper is on evaluating training, a
stakeholder approach can be applied to evaluating
HRD and other functional areas as well, especially those
considered as having “internal customers” or con-
stituencies to be satisfied.
The basic premise of the stakeholder approach is that several
groups within an organization have a stake in
training conducted for organization members and any effort to
design, develop, deliver and evaluate training
must factor in the needs and requirements of these stakeholder
groups or the results of any subsequent
evaluation are bound to fall short of expectations. The
approach proposed here has two theoretical roots:
stakeholder theory (Donaldson & Preston, 1995; Freeman, 1984)
and the contributions-inducements view of
organizational membership (Barnard, 1947; March & Simon,
1958).
This Article’s Key Points
• Training, whether a single course or the entire function, must satisfy multiple constituencies known as "stakeholders."
• A stakeholder is a person or group with an interest in seeing a particular endeavor succeed.
• A stakeholder's relationship to the endeavor in question is rooted in a quid pro quo (i.e., a stakeholder puts something into the endeavor with the expectation of getting something out of it).
• What stakeholders put in are known as "contributions" and what they take out are known as "inducements."
• Although stakeholders might readily agree in general about the kinds of results expected from training, they hold very different views about what is important when it comes to evaluating training. Their inducements are different.
• To evaluate training (or any other endeavor having multiple constituencies), it is necessary to assess the extent to which all stakeholder groups are satisfied with what they receive from the training.
• The best way to ensure that all stakeholder groups are satisfied is to factor in their various requirements during the design, development and delivery of the training.
Finally, it should be noted that this is a proposed approach, a
new approach; it speaks to what could and
should be done, not what is currently being done. There are,
then, no cases to point to, no testimonials to
present, no data to manipulate. There is simply a proposal to go
about evaluating training in a very different
way and some suggestions as to how to do that. But first, some
measurement and evaluation basics.
Measurement & Evaluation
There is a difference between measurement and evaluation.
Measurement focuses on obtaining information
as a result of comparing a given against a standard (e.g.,
information about the length of a board can be
determined by comparing it against the standard provided by a
tape measure). Evaluation concerns itself
with making judgments based on the information provided by
measurement (e.g., the board in question is
too long or too short or just right). Judgments are usually about
value and can be couched in terms of utility
or economics or even aesthetics. In organizations, the “givens”
typically consist of information about actual
performance and the “standards” consist of the goals and
objectives established for performance. Value
judgments come into play in deciding whether the performance
is “good enough” or whether improvement is
required.
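The board example can be made concrete in a few lines of code. The function names and the tolerance value below are illustrative assumptions, not part of the paper: measurement produces information by comparison against a standard, and evaluation renders a judgment on that information.

```python
def measure(actual_cm, standard_cm):
    """Measurement: compare a given against a standard, yielding information."""
    return actual_cm - standard_cm  # signed deviation from the standard


def evaluate(deviation_cm, tolerance_cm=0.5):
    """Evaluation: a value judgment based on the measured information."""
    if abs(deviation_cm) <= tolerance_cm:
        return "just right"
    return "too long" if deviation_cm > 0 else "too short"
```

For a 120.2 cm board against a 120.0 cm standard, measure() reports the deviation and evaluate() judges the board "just right" under the assumed tolerance; the same information judged against a tighter tolerance could yield "too long."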
To evaluate anything is to determine its value. From a
transaction perspective, the value of anything derives
from its importance or worth in an exchange. Whether you are
bartering or using money as a medium of
exchange, value is measured by the amount of one thing that can
be exchanged for another. Ultimately,
value is a highly individual matter; it boils down to how much
of one thing a person is willing to exchange for
another. I might be willing to give up time with my family to
put in long hours at work in return for the chance
of advancing my career. You might not. You might be willing
to pay $45,000 for an automobile; I might not.
You might be willing to burn the midnight oil to acquire an
advanced degree; I might not. I might be willing to
travel extensively as part of my work; you might not. In
ascertaining the value or worth of anything, including
training, one must always ask, “Ascertain its value to whom?”
To evaluate training, then, is to ascertain its value or
importance or worth; however, and this is extremely
important, the question that usually goes begging is, “To
whom?” It is one thing to ascertain the value of
training to the trainees. It is something else to determine its
value to management. And, it is yet a third matter to fix the value of training to trainers, be they instructors or developers. Trainees, trainers and management, these are just three of several groups with a stake in
training. Other stakeholders include training
vendors (whether selling off-the-shelf or custom-developed
materials) and, of course, the managers of the
trainees. Let us return now to TKM and the added notion of
ROI.
TKM & ROI
As noted at the outset of this article, current thinking about the
evaluation of training is dominated by what
most call “The Kirkpatrick Model” (TKM). TKM focuses on
four “levels” of evaluation: Reactions, Learning,
Behavior and Results (Kirkpatrick, 1975a, 1975b, 1975c,
1975d). TKM is widely known and widely ac-
cepted, even if it is rarely fully implemented. Another, more
recent addition to TKM, what some call a fifth
level, is the notion of determining the financial return on
investment (ROI) of training (Phillips, 1997). And,
there are those who suggest that it is possible and desirable to
go beyond TKM and ROI to societal impact
(Watkins, Leigh, Foshay & Kaufman, 1998).
It is not the intent in this paper to engage in lengthy critiques of
TKM or efforts to determine the ROI of training. That has been done elsewhere (Alliger & Janak, 1989;
Holton, 1996, Kaufman & Keller, 1994; Nickols,
2000). Instead, this paper uses TKM as a point of departure, a
launch pad for introducing a stakeholder-
based approach to the evaluation of training. We will, however,
take a brief look at what typically happens in
evaluating training.
Evaluating Training: What Typically Happens
What typically happens is that the interests of most of the
stakeholders are subordinated to the interests of
the trainers and their managers.
Trainers and their managers are understandably anxious to
demonstrate the value of what they do. While it
is entirely conceivable that a funding manager will want to
know something about the ROI of the training, it is
equally conceivable that the trainees couldn't care less. The
instructors and the developers are probably very
interested in the nature and extent of learning that has taken
place and, perhaps, in the degree of transfer to
the work place. However, unless they’re hoping for a
promotion into management or a transfer to a performance consulting unit, their interest in the ROI of the training
is apt to take a back seat. The trainees are
likely to care mainly about two things: the applicability or
relevance of the subject matter (concepts, principles, methods, tools, techniques, etc.) and the extent to which the training makes good use of their time.
Training vendors want to know if their client, the training
department, is happy with the training they bought.
Everyone wants to know what the trainees think – and for good
reason. Why? Because if the trainees are
sharply and uniformly critical of the training, very little else
matters.
So, most of the time, efforts to evaluate training take the form of the required "smiles test," a measure of
trainee reaction, perhaps some assessment of the learning that
has taken place, occasionally an attempt to
determine the extent of transfer of training or behavior change
on the job and job performance impact, and a
rare effort to quantify the bottom-line impact of training and
use it to establish the ROI of the training.
An interesting and useful question to ask about the four (or
five) levels of training evaluation is this: “Who is
interested in this particular evaluation?” In other words, who is
the audience for the information obtained at
each level? Further: What judgments are to be based on this
information? Who will make them?
As one considers the various audiences for training evaluations
and the judgments these audiences will
make about training, it becomes apparent that there are many
constituencies with an interest in training.
Trainee reactions, TKM Level 1, are obtained from the trainees
but they are of interest to many in the organization, not the least of which are the trainers and the trainees' managers. Learning (i.e., skills or competencies acquired) is clearly of interest to the trainees and trainers
and perhaps of importance to others as well.
Behavior change on the job is no doubt of interest to the
trainees’ managers – and to trainers as well, especially if they are interested in demonstrating the impact of
training. Results, too, are of interest to trainers
and to management, albeit for different purposes. Managers
want results from training for the sake of the
results themselves; trainers are more likely to want results more
for the purpose of demonstrating the value
of training than for the value of the result itself. As for the ROI
of training, the only ones likely to be interested in that are those who are under pressure to demonstrate it or
those who have a need for it. If such pres-
sure exists, it most likely focuses on trainers, not the trainees or
their management.
There are, then, several constituencies implied by TKM:
trainers, trainees, the trainees’ managers, managers of the training function or department and, perhaps, senior
managers throughout the organization.
These constituencies all have a vested interest in having things
go well in training; none of them want it to
be a waste; all want it to add value. In short, they have a stake
in the training, an interest in having it succeed, and that makes them stakeholders.
Stakeholder Defined
Freeman (1984, p.46) defined a stakeholder as “any group or
individual who can affect or is affected by the
achievement of an organization’s objectives.” This is a very
broad definition; too broad, perhaps, because it
would include competitors as stakeholders. Neely and Adams
(2003), in developing their “Performance
Prism,” took care to point out that any look at stakeholders must
include stakeholder contributions as well as
stakeholder satisfaction. In their view, stakeholders put in
something and they take out something. This
transaction view of a stakeholder is quite similar to the
contributions-inducements theory of organizational
membership articulated over a period of several decades by the
likes of Chester Barnard, James March and
Herbert Simon (more on contributions and inducements in a
moment).
For the purposes of this paper, a stakeholder is defined as a
person or group with an interest in seeing an
endeavor succeed. For example, most employees have an
interest in seeing their companies succeed. So
do that company’s suppliers, its customers and the community
in which the company is embedded. Similarly, most trainers have an interest in seeing that the training they
develop and deliver is successful. There
are others who want training to be successful, too. Chief among
them are the managers who sponsor or
fund the training, the managers who manage the training
department and, last but not least, the trainees. Typical training stakeholders include trainers, training managers, funding managers, using managers, trainees, vendors and developers.
In cases wherein the training is expected to have a fairly direct
and substantial impact on some critical aspect of the organization’s performance, senior managers and
executives are also important stakeholders.
There are even situations in which the community as well as
state and federal regulators become stakeholders (e.g., as is likely the case when training nuclear power
plant operators).
Stakeholder Contributions and Inducements
As the definition of stakeholder provided earlier implies,
stakeholders are people with an interest in seeing
an endeavor succeed; they expect to get something out of the
endeavor or effort in question. That something might be a return on their investment, as is the case with investors. But, and this is extremely important, stakeholders must also put something into the endeavor.
Stakeholders put something in and they take something out.
Investors put their money at risk in hopes of a
return just as the managers who fund training do so in hopes of
a positive impact on performance or costs or
productivity or some other payoff. Trainees contribute their
time, attention, energy and other forms of input
(e.g., participating in discussions and exercises) and they hope
to take out useful knowledge and skills, methods, techniques and tools. Instructors put in their time and
energy, too, along with their skills at leading or
facilitating discussions, presenting subject matter in interesting,
relevant ways and handling the occasionally
difficult trainee. They hope to walk away with a return in the
form of a sense of accomplishment, a reputation
maintained or enhanced and high marks from the trainees.
Developers invest a great deal of time and
energy in designing, developing and field-testing instructional
materials and most of them hope to receive in
return a decent paycheck, a modicum of recognition and a sense
of satisfaction with a job well done. In the
formal language of organizational theory, stakeholders
exchange contributions in return for inducements.
The contributions-inducements schema has a long history and
has been observed and commented upon by
noted management and organizational theorists starting with
Chester Barnard (1947) and continuing through
James March and Herbert Simon (1958). Its essence is that the
various participants or stakeholders must
perceive value in the exchange. Generally speaking,
inducements must be seen as having equal or greater
value than contributions. From the stakeholders’ perspective,
what they receive is of equal or greater value
to them than what they contribute. That is why they are in the
relationship. And if that relationship does not
offer them inducements of equal or greater value to them than
the contributions expected of them, they
leave the relationship. That is why employees, customers and
suppliers go elsewhere and it is also why
training departments are periodically cut back or even
eliminated. They are not perceived as contributing or
adding value that is equal to or greater than their cost.
The importance of this contributions-inducements relationship
cannot be overstated. As James Burke, CEO
of Johnson & Johnson during its Tylenol crisis, once remarked,
“The ultimate measure of an organization’s
success is the extent to which it serves all of its constituencies
better than its competition” (PBS Video,
1995). It falls to management, then, to manage stakeholder or
constituent relationships. This is as true for
the training department and its management as it is for the
larger organization.
To meaningfully evaluate training one must assess the nature of
the contributions-inducements relationship
between each of the stakeholder groups and the training. What
are they putting in? What are they getting
out? Are they putting in what they should? Are they getting out
of it what they want or need? Do they view
the transaction as balanced or unbalanced (i.e., are they putting
in more than they’re getting out)?
Typical Training Stakeholder Contributions & Inducements
The table that follows identifies some of the typical
contributions and inducements that could be involved for
the various stakeholder groups with respect to a particular
training course or training in general. It does not
and cannot represent all such contributions and inducements.
These will vary with the course and the
people involved. A stakeholder “scorecard” must be
constructed to fit the situation. However, the table below does serve as a model and a starting point.
Other groups who might be stakeholders and who might have to
be added include senior managers and
executives, the community and government regulators.
Table of Stakeholder Contributions and Inducements

Trainers
  Put in: Their time, energy, skills and knowledge, manifested in individual training events.
  Take out: Pay, recognition, personal satisfaction in accomplishment, new insight and knowledge, professional development, continued employment.

Training Managers
  Put in: Resource commitments, direction, support, leadership.
  Take out: Pay, pride in accomplishment, status or standing in the organization, and influence (e.g., a seat at the table), both for themselves and their unit.

Funding Managers
  Put in: Money, sanction, support.
  Take out: Operational and financial impact of greater value.

Using Managers
  Put in: Opportunity costs of releasing the employee for the training, sanction, support.
  Take out: Improved performance on the job.

Trainees
  Put in: Their time, attention, energy and knowledge, participation.
  Take out: Useful information and knowledge, tools and job aids, good use of their time, improved skills, improved standing.

Vendors
  Put in: Courses and course materials, development costs and their reputation.
  Take out: Money, repeat business, enhanced reputation, referrals.

Developers
  Put in: The courses, materials and their time, energy, skills and knowledge.
  Take out: Pay, recognition, personal satisfaction in accomplishment, new insight and knowledge, growth and development, improved standing.
A Process for Applying a Stakeholder Approach
At this point it is probably prudent to remind the reader that this paper presents a proposed approach to evaluating training. So far as the author knows, no one has yet done so. Stakeholder-based approaches, evaluations and scorecards have been developed for general business use but not for evaluating training. Consequently, the process outlined below is a conceptual view of how one might go about evaluating training using a stakeholder-based approach. It is not a detailed plan. Conceptually, at least, the process is very simple:

• Identify the contributions and inducements, trimming them to a short list for each stakeholder
• Match the various stakeholders with their inducements
• Identify the contributions made by the various stakeholders
• Incorporate them into a Stakeholder Contributions-Inducements Scorecard
• Use the scorecard in evaluation conversations and post-mortems

Practically and politically, however, it will likely prove to involve a lot of hard work.
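The conceptual process above can be sketched as a simple data structure. The class and field names below are assumptions made for illustration, not part of Nickols's proposal: each stakeholder group gets a row pairing its contributions with its inducements, plus a slot for the judgment reached in evaluation conversations.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ScorecardRow:
    """One stakeholder row of a hypothetical Contributions-Inducements Scorecard."""
    group: str                                         # e.g. "Trainees"
    contributions: list = field(default_factory=list)  # what they put in
    inducements: list = field(default_factory=list)    # what they expect to take out
    satisfied: Optional[bool] = None                   # judgment recorded after evaluation conversations


def build_scorecard(rows):
    """Index rows by stakeholder group for use in conversations and post-mortems."""
    return {row.group: row for row in rows}
```

A scorecard built this way starts with every group's `satisfied` flag unset; filling those flags in, group by group, is the evaluation conversation the text describes.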
There are those who will duck accountability and shirk
responsibility. Some of them are trainers. The last
thing many managers want is someone else to whom they have
to be accountable, especially when they
see little coming their way in return. So, the place to begin is
always with the value expected from training,
be it a single offering or the entire training function. If that value proposition cannot be made clear and compelling, there is little hope for the training, let alone a stakeholder or any other approach to evaluating it.
Mutual Accountability and Shared Responsibility
A stakeholder approach leads to mutual accountability and
shared responsibility. Trainers are not and cannot be solely responsible for the success of training. The
trainees have something to do with that, too. So do
developers and vendors and managers and clients. The ROI of
training is neither the sole nor the paramount
measure of training. The “smiles test” provides some useful
information but it also allows trainees to criticize
the training without any accompanying assessment of their
behavior and performance as trainees. There is
such a thing as “a responsible trainee” and the evaluation of
training rarely takes stock of that ingredient, yet
it is essential to the success of training. The managers who fund
the training have a right to expect something for the money they spend but they also have an obligation
to contribute to the success of that training
(even if it’s only to sit still and be interviewed regarding their
expectations of the training or to explain the
rationale that led them to conclude that training is the solution
to some problem of performance). There is,
then, the notion of a “responsible client” as well. Under a
stakeholder approach, the various stakeholders
are accountable to one another and they share the responsibility
for success.
What Value Is Added by A Stakeholder Approach?
• It focuses attention on the value to be provided by training.
• It shifts the focus in training from transforming trainees to providing value to stakeholders.
• It puts first things first – the value expected from the training by the various stakeholders.
• It focuses the training function on the requirements of its many constituencies.
• It offers a balanced view.
• It incorporates the levels of TKM and ROI – when and as they are relevant to the stakeholder groups.
• It comes to grips naturally with the politics of evaluation.
• It requires no specialized expertise.
• It places evaluation “up front” where it belongs.
• It encourages and supports mutual accountability and shared responsibility.
• It introduces new notions of responsibility (e.g., the “responsible trainee” and the “responsible client”).
Implications & Conclusion
If one accepts the notion that training has multiple
constituencies or stakeholders whose needs, wants, requirements and preferences must be taken into account, one
must also accept that the only effective way of
doing so is take them into account during the design,
development and delivery of the training. Anything
else is bound to come up short at evaluation time. Moreover, it
is well to keep in mind that, although training
providers and their constituencies might agree in general about
the results to be obtained from training, they
also hold very different perceptions regarding the criteria to be
used in evaluating training programs (Michalski, 1997). For this reason, evaluation issues belong on
the front-end of training endeavors as well
as on the back-end. The real question, then, is how does one
design, develop and deliver training so as to
meet all the stakeholders’ needs and requirements? Do this and
do it well and any subsequent evaluation is
certain to be favorable. How does one do that? Well, that’s
beyond the purview of this paper, but a few principles to keep in mind are listed below:
• Be prepared to make trade-offs and take shortcuts.
• Evaluation is as much or more art as it is science: Trust your gut.
The stakeholder view, though not without its flaws (Key, 1999)
and critics (Jennings, 1999), is gathering
momentum in management thinking (Donaldson & Preston,
1995) and is increasingly reflected in managerial
tools and actions aimed at assessing organizational and
managerial performance (Atkinson, Waterhouse &
Wells, 1997; Fraser & Zarkada-Fraser, 2003; Neely, Adams &
Crowe, 2003). As one group of observers
writes, “The days when companies could survive and prosper by
focusing on the wants and needs of one
stakeholder – the shareholder – are long gone” (Neely, Adams
& Kennerly, 2002). Trainers, too, must satisfy multiple constituencies. Adopting a stakeholder approach to evaluating training is a step in the right direction.
References
1. Alliger, G.M. & Janak, E.A. (1989). Kirkpatrick’s levels of training criteria: Thirty years later. Personnel Psychology, 42(2), 331-342.
2. Atkinson, A. A., Waterhouse, J. H., and Wells, R. B. (1997). A stakeholder approach to strategic performance measurement. Sloan Management Review (Spring).
3. Barnard, C. A. (1947). The functions of the executive. Cambridge, MA: Harvard University Press.
4. Burke, J. (1995). Remarks made during a PBS video [author’s notes].
5. Donaldson, T. and Preston, L. (1995). The stakeholder theory of the corporation: concepts, evidence, and implications. Academy of Management Review, 20(1), 65-91.
6. Fraser, C. and Zarkada-Fraser, A. (2003). Investigating the effectiveness of managers through an analysis of stakeholder perceptions. Journal of Management Development, 22(9), 762-783.
7. Freeman, R. (1984). Strategic management: a stakeholder approach. Boston, MA: Ballinger.
8. Holton, E.F. (1996). The flawed four-level evaluation model. Human Resource Development Quarterly, 7(1), 5-21.
9. Jennings, M. (1999, April). Stakeholder theory: letting anyone who’s interested run the business – no investment required. Paper presented at a conference titled Corporate Governance: Ethics Across the Board, hosted by the Center for Business Ethics at the University of St. Thomas, Houston, TX. Retrieved December 6, 2003 from http://www.stthom.edu/cbes/conferences/marianne_jennings.html
10. Kaufman, R., and Keller, J. M. (1994). Levels of evaluation: beyond Kirkpatrick. Human Resource Development Quarterly, 5, 371-380.
11. Key, S. (1999). Toward a new theory of the firm: a critique of stakeholder “theory.” Management Decision, 37(4), 317-328.
12. Kirkpatrick, D. L. (1975a). Techniques for evaluating training programs, part 1: reaction. In Evaluating Training Programs, compiled by D. L. Kirkpatrick, 1-5. Madison, WI: ASTD.
13. Kirkpatrick, D. L. (1975b). Techniques for evaluating training programs, part 2: learning. In Evaluating Training Programs, compiled by D. L. Kirkpatrick, 6-9. Madison, WI: ASTD.
14. Kirkpatrick, D. L. (1975c). Techniques for evaluating training programs, part 3: behavior. In Evaluating Training Programs, compiled by D. L. Kirkpatrick, 10-13. Madison, WI: ASTD.
15. Kirkpatrick, D. L. (1975d). Techniques for evaluating training programs, part 4: results. In Evaluating Training Programs, compiled by D. L. Kirkpatrick, 14-17. Madison, WI: ASTD.
16. March, J. G. and Simon, H. A. (1958). Organizations. New York, NY: John Wiley & Sons.
17. Michalski, G. V. (1997, November). Stakeholder variation in perceptions about training program results and evaluation: a concept mapping investigation. Paper presented at the American Evaluation Association Conference, San Diego, CA. Retrieved December 6, 2003 from http://www.conceptsystems.com/papers/paperusr/michalsk/aea51.htm
18. Neely, A., Adams, C. and Kennerly, M. (2002). The performance prism: the scorecard for measuring and managing success, 1. London: Financial Times Prentice-Hall.
19. Neely, A., Adams, C. and Crowe, P. (2003). The performance prism in practice. Retrieved December 6, 2003 from http://www.som.cranfield.ac.uk/som/cbp/PrismInPractice.pdf
20. Nickols, F. W. (2000). Evaluating training: there is no “cookbook” approach. In J. Woods & J. Cortada (Eds.), The 2001 ASTD Training & Performance Yearbook (pp. 322-333). New York, NY: McGraw-Hill.
21. Phillips, J. (1997). Return on investment in training and performance improvement programs. Houston, TX: Gulf Publishing Company.
22. Watkins, R., Leigh, D., Foshay, R., & Kaufman, R. (1998).
Kirkpatrick plus: evaluation and conti-
nuous improvement with a community focus. In Educational
Technology Research and Develop-
ment, 46:4 90-96
Author Bio & Contact Information
Fred Nickols is a senior management consultant and executive with almost 50 years of experience in the workplace, much of it associated with training and development and with other efforts to improve performance and productivity in organizational settings. For many years he was an executive director with Educational Testing Service. His career began in the United States Navy, where he served on active duty for 20 years, retiring in 1974 with the rank of Chief Petty Officer. While in the Navy he received his training and early experience as an instructor, a writer of programmed instructional materials, an instructional systems specialist and an internal management and organizational development (OD) consultant. His consulting career spans more than 30 years and his clients include many well-known corporations, non-profit organizations and government agencies. He has published dozens of articles in a wide variety of professional journals and trade magazines. Currently, he is the managing partner of Distance Consulting LLC.
Fred Nickols
www.nickols.us
[email protected]
Author’s Note
A revised version of this article with a slightly different title appears in Advances in Developing Human Resources. Citation information is as follows:

Nickols, F. W. (2005). Why a stakeholder approach to evaluating training. Advances in Developing Human Resources, 7(1), 121-134.
Evaluation Designs
Unit 9
There are several related topics in this unit…
Training as an Independent Variable
Causal Relationships
Control Groups
Matched Groups
Full Evaluation Designs
Time Series Evaluation Designs
Partial Evaluation Designs
Training—an Independent Variable
Evaluation of Training
In research terms, training is an independent variable: something thought to affect an outcome, or dependent variable. The dependent variable, in terms of training, could be any number of things, ranging from better customer service to better employee communication to better adherence to the mission of the company. Whatever workplace issue we have arranged the training to address ends up as the dependent variable.
In very simple terms, then, we want to evaluate whether or not the training had an impact on a particular outcome; that is, whether or not, or to what degree, the independent variable affected the dependent variable.

But how can we show that training was successful in bringing about a particular outcome? To answer this question we need an evaluation design.
Establishing a Causal Relationship
In order to establish a causal relationship, one in which we can say "this caused that to happen" or "the training led to this specific outcome," three conditions must be met:

1. The training must precede the observed outcome in time. We cannot attribute to training behavior that existed before the training occurred.
2. The training must relate to the expected behavior in some meaningful way. Training that does not relate to the expected behavior will not produce the intended results.
3. Changes attributed to training must be the result of the training and not of some other factor (known as a confounding variable).
Control Groups
Control refers to the need to hold as many factors as possible constant so that we can isolate the causal relationship between training and the expected outcomes.

With regard to training, we exercise control by having a control group: a group that does not receive training, against which the trained group can be compared.
Matched Groups
When comparing groups we want to begin with groups that are as evenly matched as possible. If we train a group that already has some proclivity to behave in a particular way (say, being polite to customers), then we cannot be sure that training produced any notable differences between the trained group and the control group.

There are two primary strategies for creating matched groups:

1. Assign people randomly to either the training group or the control group, so that individual differences end up spread out and dispersed among the groups.
2. Use pretests to determine where people stand with regard to the outcomes of interest, and then assign them to groups so that the groups contain equally matched participants (that is, people who are better and/or worse in the area of training end up in both the training group and the control group).
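The two strategies can be sketched in a few lines of Python. This is a minimal illustration only: the function names, the even split, and the alternating assignment scheme are assumptions for the sake of the example, not part of the unit.

```python
import random

def random_assignment(employees):
    """Strategy 1: shuffle the pool, then split it in half so that
    individual differences end up dispersed across both groups."""
    pool = list(employees)
    random.shuffle(pool)
    mid = len(pool) // 2
    return pool[:mid], pool[mid:]  # (training group, control group)

def pretest_matched_assignment(scores):
    """Strategy 2: rank people by pretest score and alternate
    assignment, so each group gets a similar mix of higher and
    lower scorers. `scores` is a list of (name, pretest) pairs."""
    ranked = sorted(scores, key=lambda pair: pair[1], reverse=True)
    training = [name for i, (name, _) in enumerate(ranked) if i % 2 == 0]
    control = [name for i, (name, _) in enumerate(ranked) if i % 2 == 1]
    return training, control
```

With pretest scores of 90, 80, 70, and 60, the alternating scheme puts the 90 and 70 scorers in one group and the 80 and 60 scorers in the other, so both group averages land close to the overall mean.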
Evaluation Designs
Evaluation designs vary in their degree of sophistication. More sophisticated designs demonstrate the greatest degree of control through the use of pretests, control groups, and random assignment of people to groups. Weaker designs lack some of these features.

Understand, though, that we cannot always set up an evaluation design that is completely robust. Thus, it is helpful to know and understand the range of possibilities available and to choose the best design given the circumstances and possible constraints.
Full Designs
Full designs include random assignment, control and training groups, and at times a pretest. There are several full designs from which to choose.

The Pretest-Posttest Control Group Design
Training group: Random Assignment → Pretest → Training → Posttest
Control group: Random Assignment → Pretest → Posttest

This is the basic model for full designs. It includes both random assignment and pretests, as well as a control group and a training group. Outcomes are assessed with a posttest after training.
Full Designs
The pretest, however, may not be necessary if we randomly assign people to groups. Thus, if it proves cumbersome or problematic to have a pretest, we can use the design below, which is essentially the same except that it has no pretest.

The Posttest-Only Control Group Design
Training group: Random Assignment → Training → Posttest
Control group: Random Assignment → Posttest
Full Designs
Similarly, random assignment may not be necessary if we use pretests and match the groups well. In this case we can use the design below.

The Pretest-Only Control Group Design
Training group: Pretest → Training → Posttest
Control group: Pretest → Posttest

This design is the same as the pretest-posttest design, except that it does not use random assignment. It does, though, have pretests, a control group, and a posttest.
Full Designs
It may be helpful to see the three full designs together to better understand how they compare to and differ from one another.

The Pretest-Posttest Control Group Design
Training group: Random Assignment → Pretest → Training → Posttest
Control group: Random Assignment → Pretest → Posttest

The Posttest-Only Control Group Design
Training group: Random Assignment → Training → Posttest
Control group: Random Assignment → Posttest

The Pretest-Only Control Group Design
Training group: Pretest → Training → Posttest
Control group: Pretest → Posttest

The first has both a pretest and random assignment, whereas the second and third have one or the other. All three have training and control groups and, of course, a posttest.
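For the pretest-posttest control group design, one simple way to analyze the results is to compare each group's average gain from pretest to posttest. The sketch below is illustrative only; the gain-score comparison is one of several defensible statistics, and the function names are assumptions for the example.

```python
def mean(values):
    """Average of a sequence of numbers."""
    values = list(values)
    return sum(values) / len(values)

def training_effect(train_pre, train_post, control_pre, control_post):
    """Average pretest-to-posttest gain in the trained group minus the
    same gain in the control group. A clearly positive result suggests
    the training, rather than some outside factor, produced the change."""
    train_gain = mean(post - pre for pre, post in zip(train_pre, train_post))
    control_gain = mean(post - pre for pre, post in zip(control_pre, control_post))
    return train_gain - control_gain
```

For example, training_effect([50, 60], [70, 80], [55, 65], [57, 66]) yields 18.5: the trained group gained 20 points on average while the control group gained only 1.5.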
Time Series Designs
Sometimes it is important to establish a baseline for training outcomes before training occurs; that is, to have evidence of the training outcome of interest over time rather than from one single pretest.

When this is the case, a time series design is appropriate. Time series designs rely on multiple pretests to establish a baseline score for training participants.
Time Series Designs
Time series designs rely on a set of pretests that help to establish a "baseline," or average, against which to compare the posttest score. Time series designs include the simple time series design and the multiple time series design.

The Simple Time-Series Design
Training group: Multiple Pretests Over Time → Training → Posttest

The simple time-series design lacks a control group, which can be added to create what is called a...

The Multiple Time-Series Design
Training group: Multiple Pretests Over Time → Training → Posttest
Control group: Multiple Pretests Over Time → Posttest
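The arithmetic behind both time series designs can be sketched as follows. The simple difference statistics and function names here are illustrative assumptions, not prescribed by the unit.

```python
def baseline(pretests):
    """Average several pretest observations into a single baseline."""
    return sum(pretests) / len(pretests)

def simple_time_series_change(pretests, posttest):
    """Simple time-series design: how far the posttest departs from the
    baseline (there is no control group to rule out outside factors)."""
    return posttest - baseline(pretests)

def multiple_time_series_effect(train_pretests, train_posttest,
                                control_pretests, control_posttest):
    """Multiple time-series design: the trained group's departure from
    its baseline minus the control group's departure from its own."""
    return (simple_time_series_change(train_pretests, train_posttest)
            - simple_time_series_change(control_pretests, control_posttest))
```

For instance, pretests of 58, 60, and 62 give a baseline of 60, so a posttest of 75 shows a change of 15; if a control group with the same baseline moves only to 62, the multiple time-series comparison credits training with an effect of 13.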
Partial Designs
The final set of evaluation designs exercises the least amount of control because these designs lack several key features. Partial designs are scaled-down versions of full designs; they have only the very basic structure of full designs.

Although not preferable, they are appropriate when organizational constraints limit the degree to which we can conduct fuller evaluation designs.
Partial Designs
There are three common partial designs:

One-Shot Case-Study Design
Training group: Training → Posttest

One-Group Pretest-Posttest Design
Training group: Pretest → Training → Posttest

Static Group Comparison Design
Training group: Training → Posttest
Control group: Posttest

As you can see, the simplest partial design of all is the one-shot case-study design, which simply involves a posttest of a training group. There are two possible ways to improve on this design without creating a full design: first, add a pretest (the one-group pretest-posttest design); second, add a control group (the static group comparison design). Adding both would lead to a full design.
Sanyam Choudhary Chemistry practical.pdf
 
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
 
Class 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfClass 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdf
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Student login on Anyboli platform.helpin
Student login on Anyboli platform.helpinStudent login on Anyboli platform.helpin
Student login on Anyboli platform.helpin
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdf
 
Advance Mobile Application Development class 07
Advance Mobile Application Development class 07Advance Mobile Application Development class 07
Advance Mobile Application Development class 07
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impact
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17
 
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
 

TRAINING IMPACT QUESTIONNAIRE DeWine, S. (1987). Evalua.docx

statement by placing the appropriate number in the blank to the left of each item.
5 = strongly agree
4 = agree
3 = neutral
2 = disagree
1 = strongly disagree

____ 1. After attending this training program, I am interested in attending other training programs.
____ 2. I don’t perform the skill on the job because the skill is too difficult for me.
____ 3. I use this skill regularly on the job.
____ 4. Because of learning this skill I feel more comfortable about doing my job.
____ 5. Because of attending this training program, I feel better about the company.
____ 6. I learned to perform the tasks well in the training program, but I could have learned it just as easily from a manual or an instruction sheet.
____ 7. I think my participation in this training program will help me to advance in the company.
____ 8. I didn’t learn this skill in the training program, so I had to learn it on the job.
____ 9. Work conditions don’t allow me to perform the skill the way I learned it in training, so I do the task differently on the job.
____ 10. After training I would perform this skill with practice.
____ 11. I don’t perform the skill on the job because the skill comes up so rarely that I forget how to do it.
____ 12. I don’t perform the skill on the job because I didn’t learn the skill in the training program, so I get help to do the skill.
____ 13. I had trouble learning the skill because the training program was confusing.
____ 14. I never perform this skill on the job.
____ 15. The skill isn’t part of my job.
____ 16. I don’t perform the skill because I was assigned a different job.
____ 17. I had trouble learning the skill in the training program because there wasn’t enough reference material.
____ 18. I perform the skill differently on the job because the skill doesn’t work the way I learned it in training.
____ 19. I perform the skill differently on the job because my supervisor told me to do it differently.
____ 20. I learned to perform the task well in the training program because the program was effective.

Stakeholder Training Evaluation Activity

Fred Nickols provides a perspective that considers multiple stakeholders in training evaluation. Reflecting upon his perspective, complete the following activity by following the steps below and the form/table on the following page.

STEP 1: Think of a training with which you have been involved.
STEP 2: Write a brief description of the training.
STEP 3: Consider all of the possible stakeholders involved in the training and list those in the center column labeled “Stakeholders” in the table below. Give consideration to all those in the organization (and outside of it) who could benefit in some way from the training.
STEP 4: List the contributions made to the training by each group of stakeholders.
STEP 5: List the inducements taken away by each group of stakeholders.

Type of Training Evaluated (Provide a Brief but Specific Description Below)

Stakeholder Model of Evaluation

Contributions (Put In)        Stakeholders        Inducements (Take Out)
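As a worked illustration of the scoring described earlier, the sketch below totals the 20 questionnaire responses, reversing items 2, 3, 8, 9 and 11-19. This is a hypothetical helper, not part of DeWine’s instrument: the function name and validation rules are assumptions of this example, and no factor-level subscores are computed because the item-to-factor assignments are not given in this excerpt.

```python
# Sketch: scoring the 20-item Training IQ described above.
# Reverse-coded items (2, 3, 8, 9, 11-19) are flipped with 6 - raw,
# so that a higher total always points in the same direction.

REVERSE_CODED = {2, 3, 8, 9} | set(range(11, 20))  # items 11-19 inclusive

def score_training_iq(responses: dict[int, int]) -> int:
    """Total score for responses keyed by item number (1-20), values 1-5."""
    if set(responses) != set(range(1, 21)):
        raise ValueError("expected responses for items 1 through 20")
    total = 0
    for item, raw in responses.items():
        if not 1 <= raw <= 5:
            raise ValueError(f"item {item}: response must be between 1 and 5")
        total += (6 - raw) if item in REVERSE_CODED else raw
    return total

# A respondent who marks 'agree' (4) on every item: the 7 regular items
# contribute 4 each and the 13 reverse-coded items contribute 2 each.
print(score_training_iq({i: 4 for i in range(1, 21)}))  # 54
```

Whether subscores should also be reported per factor (“Relationship of training to job” vs. “Skilled performance”) depends on the item assignments given in the original article.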
A Stakeholder Approach to Evaluating Training
Fred Nickols
www.nickols.us
A Stakeholder Approach to Evaluating Training
© Fred Nickols 2003 (DRAFT: For Review Only – Do Not Cite without Permission)

Introduction

There are probably no more widely accepted “realities” or truisms in the world of training than the following:

- When it comes to evaluating training, the dominant model is “The Kirkpatrick Model” (TKM).
- TKM is rarely implemented in its entirety, and training evaluations are usually confined to the “smiles test” (TKM Level 1: Trainee Reactions).
- There is considerable interest in evaluating training, particularly at the higher levels of TKM (i.e., on-the-job behavior change and business results) and in going beyond TKM (e.g., in determining the ROI of training or even its societal impact).
- There is abundant knowledge and an available supply of viable tools for evaluating training at all levels of TKM (and beyond).

Despite the aforementioned interest in and availability of tools for more robust efforts, evaluations of training remain mired in TKM Level 1. Why is this? If evaluation is so important and if the means of carrying it out exist, why do evaluations typically consist of little more than the famous “smiles test”? Is it because the interest in evaluating training is feigned? Is it because the costs of evaluating training outweigh the benefits? Is it a case of diminishing returns, that is, the higher up TKM an evaluation goes, the more costly the evaluation and the less valuable the information? Or is it perhaps the case that trainers are the only ones interested in TKM – and in going beyond it?

It is my view that the training community is committed to an approach to evaluating training that, after more than 40 years, has failed to capture the commitment and support of other important constituencies, most especially, that of the trainees, their managers and the senior managers of the organizations in and for which training is conducted. If this is true, then the issue isn’t one of figuring out how to apply TKM – or even of extending it – instead, the issue is one of finding some other approach to evaluating training.

It is also my view that there is a better approach to evaluating training – a stakeholder-based approach. Although the focus of this paper is on evaluating training, a stakeholder approach can be applied to evaluating HRD and other functional areas as well, especially those considered as having “internal customers” or constituencies to be satisfied. The basic premise of the stakeholder approach is that several groups within an organization have a stake in training conducted for organization members and any effort to design, develop, deliver and evaluate training must factor in the needs and requirements of these stakeholder groups or the results of any subsequent evaluation are bound to fall short of expectations. The approach proposed here has two theoretical roots: stakeholder theory (Donaldson & Preston, 1995; Freeman, 1984) and the contributions-inducements view of organizational membership (Barnard, 1947; March & Simon, 1958).
This Article’s Key Points

- Training, whether a single offering or the entire function, must satisfy multiple constituencies known as “stakeholders.”
- A stakeholder is a person or group with an interest in seeing a particular endeavor succeed.
- A stakeholder’s relationship to the endeavor in question is rooted in a quid pro quo (i.e., a stakeholder puts something into the endeavor with the expectation of getting something out of it).
- What stakeholders put in are known as “contributions” and what they take out are known as “inducements.”
- Although stakeholders may readily agree in general about the kinds of results expected from training, they hold very different views about what is important when it comes to evaluating training. Their inducements are different.
- To evaluate training (or any other endeavor having multiple constituencies), it is necessary to assess the extent to which all stakeholder groups are satisfied with what they receive from the training.
- The only way to ensure that all stakeholder groups are satisfied is to factor in their various requirements during the design, development and delivery of the training.

Finally, it should be noted that this is a proposed approach, a new approach; it speaks to what could and should be done, not what is currently being done. There are, then, no cases to point to, no testimonials to present, no data to manipulate. There is simply a proposal to go about evaluating training in a very different way and some suggestions as to how to do that. But first, some measurement and evaluation basics.

Measurement & Evaluation

There is a difference between measurement and evaluation. Measurement focuses on obtaining information as a result of comparing a given against a standard (e.g., information about the length of a board can be determined by comparing it against the standard provided by a tape measure). Evaluation concerns itself with making judgments based on the information provided by measurement (e.g., the board in question is too long or too short or just right). Judgments are usually about value and can be couched in terms of utility or economics or even aesthetics. In organizations, the “givens” typically consist of information about actual performance and the “standards” consist of the goals and objectives established for performance. Value judgments come into play in deciding whether the performance is “good enough” or whether improvement is required.

To evaluate anything is to determine its value. From a transaction perspective, the value of anything derives from its importance or worth in an exchange. Whether you are bartering or using money as a medium of exchange, value is measured by the amount of one thing that can be exchanged for another. Ultimately, value is a highly individual matter; it boils down to how much of one thing a person is willing to exchange for another. I might be willing to give up time with my family to put in long hours at work in return for the chance of advancing my career. You might not. You might be willing to pay $45,000 for an automobile; I might not. You might be willing to burn the midnight oil to acquire an advanced degree; I might not. I might be willing to travel extensively as part of my work; you might not. In ascertaining the value or worth of anything, including training, one must always ask, “Ascertain its value to whom?”

To evaluate training, then, is to ascertain its value or importance or worth; however, and this is extremely important, the question that usually goes begging is, “To whom?” It is one thing to ascertain the value of training to the trainees. It is something else to determine its value to management. And, it is yet a third matter to fix the value of training to trainers, be they instructors or developers. Trainees, trainers and management, these are just three of several groups with a stake in training. Other stakeholders include training vendors (whether selling off-the-shelf or custom-developed materials) and, of course, the managers of the trainees. Let us return now to TKM and the added notion of ROI.

TKM & ROI

As noted at the outset of this article, current thinking about the evaluation of training is dominated by what most call “The Kirkpatrick Model” (TKM). TKM focuses on four “levels” of evaluation: Reactions, Learning, Behavior and Results (Kirkpatrick, 1975a, 1975b, 1975c, 1975d). TKM is widely known and widely accepted, even if it is rarely fully implemented. Another, more recent addition to TKM, what some call a fifth level, is the notion of determining the financial return on investment (ROI) of training (Philips, 1997). And, there are those who suggest that it is possible and desirable to go beyond TKM and ROI to societal impact (Watkins, Leigh, Foshay & Kaufman, 1998).

It is not the intent in this paper to engage in lengthy critiques of TKM or efforts to determine the ROI of training. That has been done elsewhere (Alliger & Janak, 1989; Holton, 1996; Kaufman & Keller, 1994; Nickols, 2000). Instead, this paper uses TKM as a point of departure, a launch pad for introducing a stakeholder-based approach to the evaluation of training. We will, however, take a brief look at what typically happens in evaluating training.
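For quick reference, the TKM levels named above (plus ROI, the proposed fifth level) can be listed as a simple enumeration. This is an illustrative sketch only; neither Kirkpatrick nor Nickols defines any such code structure, and the names here merely restate the levels as given in the text.

```python
from enum import IntEnum

class TKMLevel(IntEnum):
    """The four Kirkpatrick levels as named in the text, plus ROI,
    which some treat as a fifth level."""
    REACTIONS = 1  # the "smiles test": trainee reactions
    LEARNING = 2   # skills or competencies acquired
    BEHAVIOR = 3   # behavior change on the job
    RESULTS = 4    # business results
    ROI = 5        # financial return on investment (a later addition)

# Most evaluations, the paper argues, never get past the first level.
print([level.name for level in TKMLevel])
```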
Evaluating Training: What Typically Happens

What typically happens is that the interests of most of the stakeholders are subordinated to the interests of the trainers and their managers. Trainers and their managers are understandably anxious to demonstrate the value of what they do. While it is entirely conceivable that a funding manager will want to know something about the ROI of the training, it is equally conceivable that the trainees could care less. The instructors and the developers are probably very interested in the nature and extent of learning that has taken place and, perhaps, in the degree of transfer to the work place. However, unless they’re hoping for a promotion into management or a transfer to a performance consulting unit, their interest in the ROI of the training is apt to take a back seat. The trainees are likely to care mainly about two things: the applicability or relevance of the subject matter (concepts, principles, methods, tools, techniques, etc.) and the extent to which the training makes good use of their time. Training vendors want to know if their client, the training department, is happy with the training they bought. Everyone wants to know what the trainees think – and for good reason. Why? Because if the trainees are sharply and uniformly critical of the training, very little else matters.

So, most of the time, efforts to evaluate training take the form of the required “smiles test,” a measure of trainee reaction, perhaps some assessment of the learning that has taken place, occasionally an attempt to determine the extent of transfer of training or behavior change on the job and job performance impact, and a rare effort to quantify the bottom-line impact of training and use it to establish the ROI of the training.

An interesting and useful question to ask about the four (or five) levels of training evaluation is this: “Who is interested in this particular evaluation?” In other words, who is the audience for the information obtained at each level? Further: What judgments are to be based on this information? Who will make them?

As one considers the various audiences for training evaluations and the judgments these audiences will make about training, it becomes apparent that there are many constituencies with an interest in training. Trainee reactions, TKM Level 1, are obtained from the trainees but they are of interest to many in the organization, not the least of which are the trainers and the trainees’ managers. Learning (i.e., skills or competencies acquired) is clearly of interest to the trainees and trainers and perhaps of importance to others as well. Behavior change on the job is no doubt of interest to the trainees’ managers – and to trainers as well, especially if they are interested in demonstrating the impact of training. Results, too, are of interest to trainers and to management, albeit for different purposes. Managers want results from training for the sake of the results themselves; trainers are more likely to want results for the purpose of demonstrating the value of training than for the value of the result itself. As for the ROI of training, the only ones likely to be interested in that are those who are under pressure to demonstrate it or those who have a need for it. If such pressure exists, it most likely focuses on trainers, not the trainees or their management.

There are, then, several constituencies implied by TKM: trainers, trainees, the trainees’ managers, managers of the training function or department and, perhaps, senior managers throughout the organization. These constituencies all have a vested interest in having things go well in training; none of them want it to be a waste; all want it to add value. In short, they have a stake in the training, an interest in having it succeed, and that makes them stakeholders.

Stakeholder Defined

Freeman (1984, p. 46) defined a stakeholder as “any group or individual who can affect or is affected by the achievement of an organization’s objectives.” This is a very broad definition; too broad, perhaps, because it would include competitors as stakeholders. Neely and Adams (2003), in developing their “Performance Prism,” took care to point out that any look at stakeholders must include stakeholder contributions as well as stakeholder satisfaction. In their view, stakeholders put in something and they take out something. This transaction view of a stakeholder is quite similar to the contributions-inducements theory of organizational membership articulated over a period of several decades by the likes of Chester Barnard, James March and Herbert Simon (more on contributions and inducements in a moment).
For the purposes of this paper, a stakeholder is defined as a person or group with an interest in seeing an endeavor succeed. For example, most employees have an interest in seeing their companies succeed. So do that company’s suppliers, its customers and the community in which the company is embedded. Similarly, most trainers have an interest in seeing that the training they develop and deliver is successful. There are others who want training to be successful, too. Chief among them are the managers who sponsor or fund the training, the managers who manage the training department and last, but not least, the trainees. A list of typical training stakeholders follows:

In cases wherein the training is expected to have a fairly direct and substantial impact on some critical aspect of the organization’s performance, senior managers and executives are also important stakeholders. There are even situations in which the community as well as state and federal regulators become stakeholders (e.g., as is likely the case when training nuclear power plant operators).

Stakeholder Contributions and Inducements

As the definition of stakeholder provided earlier implies, stakeholders are people with an interest in seeing an endeavor succeed; they expect to get something out of the endeavor or effort in question. That something might be a return on their investment, as is the case with investors. But, and this is extremely important, stakeholders must also put something into the endeavor. Stakeholders put something in and they take something out.

Investors put their money at risk in hopes of a return just as the managers who fund training do so in hopes of a positive impact on performance or costs or productivity or some other payoff. Trainees contribute their time, attention, energy and other forms of input (e.g., participating in discussions and exercises) and they hope to take out useful knowledge and skills, methods, techniques and tools. Instructors put in their time and energy, too, along with their skills at leading or facilitating discussions, presenting subject matter in interesting, relevant ways and handling the occasionally difficult trainee. They hope to walk away with a return in the form of a sense of accomplishment, a reputation maintained or enhanced and high marks from the trainees. Developers invest a great deal of time and energy in designing, developing and field-testing instructional materials and most of them hope to receive in return a decent paycheck, a modicum of recognition and a sense of satisfaction with a job well done. In the formal language of organizational theory, stakeholders exchange contributions in return for inducements.

The contributions-inducements schema has a long history and has been observed and commented upon by noted management and organizational theorists starting with Chester Barnard (1947) and continuing through James March and Herbert Simon (1958). Its essence is that the various participants or stakeholders must perceive value in the exchange. Generally speaking, inducements must be seen as having equal or greater value than contributions. From the stakeholders’ perspective, what they receive is of equal or greater value to them than what they contribute. That is why they are in the relationship. And if that relationship does not offer them inducements of equal or greater value to them than the contributions expected of them, they leave the relationship. That is why employees, customers and suppliers go elsewhere and it is also why training departments are periodically cut back or even eliminated. They are not perceived as contributing or adding value that is equal to or greater than their cost.

The importance of this contributions-inducements relationship cannot be overstated. As James Burke, CEO of Johnson & Johnson during its Tylenol crisis, once remarked, “The ultimate measure of an organization’s success is the extent to which it serves all of its constituencies better than its competition” (PBS Video, 1995). It falls to management, then, to manage stakeholder or constituent relationships. This is as true for the training department and its management as it is for the larger organization.
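The contributions-inducements exchange described above can be modeled as a small data structure. The sketch below is purely illustrative and not from the paper: the class name, the numeric “perceived value” fields and the balance test are assumptions of this example, since the paper itself quantifies nothing.

```python
from dataclasses import dataclass, field

@dataclass
class StakeholderGroup:
    """One stakeholder group's side of the contributions-inducements exchange."""
    name: str
    contributions: list = field(default_factory=list)  # what the group puts in
    inducements: list = field(default_factory=list)    # what the group takes out
    # Perceived values on an arbitrary scale; invented for illustration.
    contribution_value: float = 0.0
    inducement_value: float = 0.0

    def stays_in_relationship(self) -> bool:
        """Per the text: inducements must equal or exceed contributions."""
        return self.inducement_value >= self.contribution_value

trainees = StakeholderGroup(
    name="Trainees",
    contributions=["time", "attention", "energy", "participation"],
    inducements=["useful knowledge", "tools and job aids", "improved skills"],
    contribution_value=3.0,
    inducement_value=4.0,
)
print(trainees.stays_in_relationship())  # True: they perceive a fair exchange
```

A group whose perceived contributions outweigh its inducements would, per the text, leave the relationship; that is the condition the balance check captures.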
To meaningfully evaluate training one must assess the nature of the contributions-inducements relationship between each of the stakeholder groups and the training. What are they putting in? What are they getting out? Are they putting in what they should? Are they getting out of it what they want or need? Do they view the transaction as balanced or unbalanced (i.e., are they putting in more than they’re getting out)?

Typical Training Stakeholder Contributions & Inducements

The table that follows identifies some of the typical contributions and inducements that could be involved for the various stakeholder groups with respect to a particular training course or training in general. It does not and cannot represent all such contributions and inducements. These will vary with the course and the people involved. A stakeholder “scorecard” must be constructed to fit the situation. However, the table below does serve as a model and a starting point. Other groups who might be stakeholders and who might have to be added include senior managers and executives, the community and government regulators.

Table of Stakeholder Contributions and Inducements

Trainers
  Put in: Their time, energy, skills and knowledge, manifested in individual training events.
  Take out: Pay, recognition, personal satisfaction in accomplishment, new insight and knowledge, professional development, continued employment.

Training Managers
  Put in: Resource commitments, direction, support, leadership.
  Take out: Pay, pride in accomplishment and status or standing in the organization, influence (e.g., a seat at the table), both for themselves and their unit.

Funding Managers
  Put in: Money, sanction, support.
  Take out: Operational and financial impact of greater value.

Using Managers
  Put in: Opportunity costs of releasing the employee for the training, sanction, support.
  Take out: Improved performance on the job.

Trainees
  Put in: Their time, attention, energy and knowledge, participation.
  Take out: Useful information and knowledge, tools and job aids, good use of their time, improved skills, improved standing.

Vendors
  Put in: Courses and course materials, development costs and their reputation.
  Take out: Money, repeat business, enhanced reputation, referrals.

Developers
  Put in: The courses, materials and their time, energy, skills and knowledge.
  Take out: Pay, recognition, personal satisfaction in accomplishment, new insight and knowledge, growth and development, improved standing.

A Process for Applying a Stakeholder Approach

At this point it is probably prudent to remind the reader that this paper presents a proposed approach to evaluating training. So far as the author knows, no one has yet done so. Stakeholder-based approaches, evaluations and scorecards have been developed for general business use but not for evaluating training. Consequently, the process outlined below is a conceptual view of how one might go about evaluating training using a stakeholder-based approach. It is not a detailed plan. Conceptually, at least, the process is very simple:
  • 24. m to a short list for each stakeholder various stakeholders with their in- ducements contributions made by the various stakeholders orate them into a Stakeholder Contributions- Inducements Scorecard conversations post-mortems Practically and politically, however, it will likely prove to involve a lot of hard work. There are those who will duck accountability and shirk responsibility. Some of them are trainers. The last thing many managers want is someone else to whom they have to be accountable, especially when they see little coming their way in return. So, the place to begin is always with the value expected from training, A Stakeholder Approach to Evaluating Training © Fred Nickols 2003 (DRAFT: For Review Only – Do Not Cite without Permission) 6
• 25. be it a single offering or the entire training function. If that value proposition cannot be made clear and compelling, there is little hope for the training, let alone for a stakeholder or any other approach to evaluating it.

Mutual Accountability and Shared Responsibility

A stakeholder approach leads to mutual accountability and shared responsibility. Trainers are not and cannot be solely responsible for the success of training. The trainees have something to do with that, too. So do developers and vendors and managers and clients. The ROI of training is neither the sole nor the paramount measure of training. The “smiles test” provides some useful information, but it also allows trainees to criticize the training without any accompanying assessment of their own behavior and performance as trainees. There is such a thing as “a responsible trainee,” and the evaluation of training rarely takes stock of that ingredient, yet it is essential to the success of training. The managers who fund the training have a right to expect something for the money they spend, but they also have an obligation to contribute to the success of that training (even if it is only to sit still and be interviewed regarding their expectations of the training, or to explain the rationale that led them to conclude that training is the solution to some problem of performance). There is, then, the notion of a “responsible client” as well. Under a stakeholder approach, the various stakeholders are accountable to one another and they share the responsibility for success.

What Value Is Added by a Stakeholder Approach?
• 26. A stakeholder approach:
- Focuses attention on the value to be provided by training.
- Shifts the focus in training from transforming trainees to providing value to stakeholders.
- Clarifies the value expected from the training by the various stakeholders.
- Ties training to the requirements of its many constituencies.
- Offers a balanced view.
- Retains other measures when and as they are relevant to the stakeholder groups.
- Comes to grips naturally with the politics of evaluation.
- Requires no specialized expertise.
- Puts evaluation “up front” where it belongs.
- Encourages and supports mutual accountability and shared responsibility (the “responsible trainee” and the “responsible client”).

Implications & Conclusion

If one accepts the notion that training has multiple constituencies or stakeholders whose needs, wants, requirements and preferences must be taken into account, one must also accept that the only effective way of doing so is to take them into account during the design, development and delivery of the training. Anything else is bound to come up short at evaluation time. Moreover, it
• 27. is well to keep in mind that, although training providers and their constituencies might agree in general about the results to be obtained from training, they also hold very different perceptions regarding the criteria to be used in evaluating training programs (Michalski, 1997). For this reason, evaluation issues belong on the front end of training endeavors as well as on the back end. The real question, then, is how does one design, develop and deliver training so as to meet all the stakeholders’ needs and requirements? Do this and do it well and any subsequent evaluation is certain to be favorable. How does one do that? Well, that is beyond the purview of this paper, but a few principles to keep in mind are listed below:
- Be prepared to make trade-offs and take shortcuts.
- It is as much or more art as it is science: trust your gut.

The stakeholder view, though not without its flaws (Key, 1999) and critics (Jennings, 1999), is gathering momentum in management thinking (Donaldson & Preston, 1995) and is increasingly reflected in managerial tools and actions aimed at assessing organizational and managerial performance (Atkinson, Waterhouse & Wells, 1997; Fraser & Zarkada-Fraser, 2003; Neely, Adams & Crowe, 2003). As one group of observers writes, “The days when companies could survive and prosper by focusing on the wants and needs of one stakeholder – the shareholder – are long gone” (Neely, Adams & Kennerly, 2002). Trainers, too, must satisfy multiple constituencies. Adopting a stakeholder approach to evaluating training is a step in the right direction.

References

1. Alliger, G. M. & Janak, E. A. (1989). Kirkpatrick’s levels of training criteria: Thirty years later. Personnel Psychology, 42(2), 331-342.
2. Atkinson, A. A., Waterhouse, J. H., & Wells, R. B. (1997). A stakeholder approach to strategic performance measurement. Sloan Management Review (Spring).
3. Barnard, C. I. (1947). The functions of the executive. Cambridge, MA: Harvard University Press.
4. Burke, J. (1995). Remarks made during a PBS video [author’s notes].
5. Donaldson, T. & Preston, L. (1995). The stakeholder theory of the corporation: Concepts, evidence, and implications. Academy of Management Review, 20(1), 65-91.
6. Fraser, C. & Zarkada-Fraser, A. (2003). Investigating the effectiveness of managers through an analysis of stakeholder perceptions. Journal of Management
• 29. Development, 22(9), 762-783.
7. Freeman, R. (1984). Strategic management: A stakeholder approach. Boston, MA: Ballinger.
8. Holton, E. F. (1996). The flawed four-level evaluation model. Human Resource Development Quarterly, 7(1), 5-21.
9. Jennings, M. (1999, April). Stakeholder theory: Letting anyone who’s interested run the business – no investment required. Paper presented at a conference titled Corporate Governance: Ethics Across the Board, hosted by the Center for Business Ethics at the University of St. Thomas, Houston, TX. Retrieved December 6, 2003 from http://www.stthom.edu/cbes/conferences/marianne_jennings.html
10. Kaufman, R. & Keller, J. M. (1994). Levels of evaluation: Beyond Kirkpatrick. Human Resource Development Quarterly, 5, 371-380.
11. Key, S. (1999). Toward a new theory of the firm: A critique of stakeholder “theory.” Management Decision, 37(4), 317-328.
12. Kirkpatrick, D. L. (1975a). Techniques for evaluating training programs, part 1: Reaction. In Evaluating Training Programs, compiled by D. L. Kirkpatrick, 1-5. Madison, WI: ASTD.
13. Kirkpatrick, D. L. (1975b). Techniques for evaluating training programs, part 2: Learning. In Evaluating Training Programs, compiled by D. L. Kirkpatrick, 6-9. Madison, WI: ASTD.
• 30. 14. Kirkpatrick, D. L. (1975c). Techniques for evaluating training programs, part 3: Behavior. In Evaluating Training Programs, compiled by D. L. Kirkpatrick, 10-13. Madison, WI: ASTD.
15. Kirkpatrick, D. L. (1975d). Techniques for evaluating training programs, part 4: Results. In Evaluating Training Programs, compiled by D. L. Kirkpatrick, 14-17. Madison, WI: ASTD.
16. March, J. G. & Simon, H. A. (1958). Organizations. New York, NY: John Wiley & Sons.
17. Michalski, G. V. (1997, November). Stakeholder variation in perceptions about training program results and evaluation: A concept mapping investigation. Paper presented at the American Evaluation Association Conference, San Diego, CA. Retrieved December 6, 2003 from http://www.conceptsystems.com/papers/paperusr/michalsk/aea51.htm
18. Neely, A., Adams, C., & Kennerly, M. (2002). The performance prism: The scorecard for measuring and managing success. London: Financial Times Prentice-Hall.
19. Neely, A., Adams, C., & Crowe, P. (2003). The performance prism in practice. Retrieved December 6, 2003 from http://www.som.cranfield.ac.uk/som/cbp/PrismInPractice.pdf
20. Nickols, F. W. (2000). Evaluating training: There is no “cookbook” approach. In J. Woods & J. Cortada (Eds.), The 2001 ASTD Training & Performance Yearbook (pp. 322-333). New York, NY:
• 31. McGraw-Hill.
21. Phillips, J. (1997). Return on investment in training and performance improvement programs. Houston, TX: Gulf Publishing Company.
22. Watkins, R., Leigh, D., Foshay, R., & Kaufman, R. (1998). Kirkpatrick plus: Evaluation and continuous improvement with a community focus. Educational Technology Research and Development, 46(4), 90-96.

Author Bio & Contact Information

Fred Nickols is a senior management consultant and executive with almost 50 years of experience in the workplace, much of it associated with training and development and with other efforts to improve performance and productivity in organizational settings. For many years he was an executive director with Educational Testing Service. His career began in the United States Navy, where he served on active duty for 20 years, retiring in 1974 with the rank of Chief Petty Officer. While in the Navy he received his training and
• 32. early experiences as an instructor, a writer of programmed instructional materials, an instructional systems specialist and an internal management and organizational development (OD) consultant. His consulting career spans more than 30 years and his clients include many well-known corporations, non-profit organizations and government agencies. He has published dozens of articles in a wide variety of professional journals and trade magazines. Currently, he is the managing partner of Distance Consulting LLC.

Fred Nickols
www.nickols.us
[email protected]

Author’s Note

A revised version of this article with a slightly different title appears in Advances in Developing Human Resources. Citation information is as follows: Nickols, F. W. (2005). Why a stakeholder approach to evaluating training. Advances in Developing Human Resources, 7(1), 121-134.

Evaluation Designs Unit 9
• 33. There are several related topics in this unit:
Training as an Independent Variable
Causal Relationships
Control Groups
Matched Groups
Full Evaluation Designs
Time Series Evaluation Designs
Partial Evaluation Designs

Training—an Independent Variable

Evaluation of Training

In research terms, training is an independent variable, that is, something thought to affect an outcome, or dependent variable. The dependent variable, in terms of training, could be any number of things: better customer
• 34. service, better employee communication, or better adherence to the mission of the company. Whatever workplace issue we have arranged the training to address ends up as the dependent variable. In very simple terms, then, we want to evaluate whether or not the training had an impact on a particular outcome; that is, whether or to what degree the independent variable affected the dependent variable. But how can we show that training was successful in bringing about a particular outcome? To answer this question we need an evaluation design.

Establishing a Causal Relationship

In order to establish a causal relationship, that is, one in which we can say “this caused that to happen” or “the training led to this specific outcome,” three conditions must be met:

1. The training must precede the observed outcome in time. We cannot attribute to training behavior that existed before the training occurred.
2. The training must relate to the expected behavior in some meaningful way. Training that does not relate to the expected behavior will not produce the intended results.
• 35. 3. Observed changes must be the result of the training itself and not of some other factor (known as a confounding variable).

Control Groups

Control refers to the need to control as many factors as possible so that we can isolate the causal relationship between training and the expected outcomes. With regard to training, we exercise control by having a control group, that is, a group that does not receive training, against which to compare the trained group.

Matched Groups

When comparing groups we want to begin with groups that are as evenly matched as possible. If we train a group that already has some proclivity to behave a particular way (say, being polite to customers), then we cannot be sure that training produced any notable differences between the trained group
• 36. and the control group. There are two primary strategies for creating matched groups:

1. Assign people randomly to either the trained group or the control group so that differences end up spread out and dispersed among the groups.
2. Use pretests to determine where people stand with regard to the outcomes of interest and then assign them to groups so that the groups contain equally matched participants (that is, people who are better and/or worse in the area of training end up in both the training and the control group).

Evaluation Designs

Evaluation designs vary in how sophisticated they are. The most sophisticated designs demonstrate the greatest degree of control through the use of pretests, control groups, and random assignment of people to groups. Weaker designs lose some of these features. Understand, though, that we cannot always set up an evaluation design that is completely robust. Thus, it is helpful to know and understand the range of possibilities available and to choose the one that is best given the circumstances and possible constraints.
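The two matching strategies can be sketched in code. This is an illustrative sketch, not part of the unit; the employee names and pretest scores are invented.

```python
# Illustrative sketch of the two matching strategies (hypothetical data).
import random

employees = [f"emp{i}" for i in range(20)]
# Invented pretest scores, e.g. politeness-to-customers ratings.
pretest = {e: random.Random(i).randint(40, 90) for i, e in enumerate(employees)}

# Strategy 1: random assignment spreads individual differences across groups.
rng = random.Random(42)
shuffled = employees[:]
rng.shuffle(shuffled)
trained, control = shuffled[:10], shuffled[10:]

# Strategy 2: rank everyone by pretest score, then alternate assignment so
# higher and lower scorers end up in both the training and control groups.
ranked = sorted(employees, key=lambda e: pretest[e])
matched_trained, matched_control = ranked[0::2], ranked[1::2]
```

With either strategy the two groups start out comparable, so any posttest difference is easier to attribute to the training itself.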
• 37. Full Designs

Full designs include random assignment, control and training groups, and at times a pretest. There are several full designs from which to choose.

The Pretest-Posttest Control Group Design
Training group: Random Assignment -> Pretest -> Training -> Posttest
Control group: Random Assignment -> Pretest -> Posttest

This is the basic model for full designs: it includes both random assignment and pretests, as well as a control and training group. Outcomes are assessed with a posttest after training.

The pretest, however, may not be necessary if we randomly assign people to groups. Thus, if it proves cumbersome or problematic to have a pretest, we can use the design below, which is essentially the same except that it has no pretest.

The Posttest-Only Control Group Design
Training group: Random Assignment -> Training -> Posttest
Control group: Random Assignment -> Posttest
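The analysis these two designs imply can be sketched with invented numbers. In the pretest-posttest design, compare each group's pretest-to-posttest change and then take the difference between the two changes; in the posttest-only design, simply compare posttest means. The ratings below are hypothetical, not from the unit.

```python
# Sketch of the effect estimates implied by two full designs (invented data).

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical customer-service ratings (1-10) for randomly assigned groups.
trained_pre, trained_post = [5, 6, 5, 4], [8, 8, 7, 7]
control_pre, control_post = [5, 5, 6, 4], [6, 5, 6, 5]

# Pretest-posttest control group design: compare each group's change.
trained_change = mean(trained_post) - mean(trained_pre)   # 7.5 - 5.0 = 2.5
control_change = mean(control_post) - mean(control_pre)   # 5.5 - 5.0 = 0.5
training_effect = trained_change - control_change         # 2.0

# Posttest-only design: just the posttest difference, relying on random
# assignment to have made the groups comparable beforehand.
posttest_only_effect = mean(trained_post) - mean(control_post)  # 2.0
```

Subtracting the control group's change removes outcomes that would have occurred anyway, which is exactly the confounding-variable problem the control group exists to address.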
• 38. Similarly, random assignment may not be necessary if we use pretests and match groups well. In this case we can use the design below.

The Pretest-Only Control Group Design
Training group: Pretest -> Training -> Posttest
Control group: Pretest -> Posttest

This design is the same as the pretest-posttest design, except that it does not use random assignment. It does, though, have pretests, a control group, and a posttest.

It may be helpful to see the three full designs together to better understand how they compare to and differ from one another.

The Pretest-Posttest Control Group Design
Training group: Random Assignment -> Pretest -> Training -> Posttest
Control group: Random Assignment -> Pretest -> Posttest

The Posttest-Only Control Group Design
Training group: Random Assignment -> Training -> Posttest
• 39. Control group: Random Assignment -> Posttest

The Pretest-Only Control Group Design
Training group: Pretest -> Training -> Posttest
Control group: Pretest -> Posttest

The first has both a pretest and random assignment, whereas the second and third have one or the other. All three have training and control groups and, of course, a posttest.

Time Series Designs

Sometimes it is important to establish a baseline for training outcomes before training occurs, that is, to have evidence of the training outcome of interest over time rather than from one single pretest. When this is the case, a time series design is appropriate. Time series designs rely on multiple pretests to establish a baseline score for training participants.
• 40. Time series designs rely on a set of pretests that help to establish a “baseline,” or average, against which to compare the posttest score. Time series designs include the simple time series design and the multiple time series design.

The Simple Time-Series Design
Training group: Multiple Pretests Over Time -> Training -> Posttest

The simple time-series design lacks a control group, which can be added to create what is called a multiple time series design:

The Multiple Time-Series Design
Training group: Multiple Pretests Over Time -> Training -> Posttest
Control group: Multiple Pretests Over Time -> Posttest

Partial Designs

The final set of evaluation designs exercises the least amount of control because these designs lack several key features. Partial designs are scaled-down versions of full designs; they have only the very basic structure of full designs. Although not preferable, they are appropriate when organizational constraints limit the degree to which we can conduct fuller evaluation designs.
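Returning to the time-series designs for a moment, their arithmetic can be sketched as follows. The scores are invented for illustration: several pretests are averaged into a baseline, and in the multiple time-series variant the control group's change over the same period is subtracted out.

```python
# Sketch of time-series baseline logic (hypothetical scores).

def mean(xs):
    return sum(xs) / len(xs)

# Simple time-series design: one group, multiple pretests, one posttest.
pretests = [52, 50, 54, 48, 51]        # repeated measures before training
posttest = 60
baseline = mean(pretests)              # 51.0
simple_estimate = posttest - baseline  # 9.0

# Multiple time-series design: a control group measured on the same
# schedule lets us subtract change unrelated to the training.
control_pretests = [50, 49, 51, 50, 50]
control_posttest = 53
control_change = control_posttest - mean(control_pretests)  # 3.0
adjusted_estimate = simple_estimate - control_change        # 6.0
```

Averaging several pretests makes the baseline less sensitive to any one unusual measurement, which is the point of the design.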
• 41. There are three common partial designs:

One-Shot Case-Study Design
Training group: Training -> Posttest

One-Group Pretest-Posttest Design
Training group: Pretest -> Training -> Posttest

Static Group Comparison Design
Training group: Training -> Posttest
Comparison group: Posttest

As you can see, the simplest partial design of all is the one-shot case-study design, which simply involves a posttest of a training group. There are two possible ways to improve on this design without creating a full design: first, to add a pretest (the one-group pretest-posttest design); second, to add a control group (the static group comparison design). Adding both would lead to a full design.
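The estimates the three partial designs yield can be contrasted with a small sketch (invented scores). Each estimate is progressively more informative, but none controls for as much as a full design.

```python
# Sketch contrasting the three partial designs' estimates (hypothetical data).

def mean(xs):
    return sum(xs) / len(xs)

post_trained = [7, 8, 7, 8]   # posttest, trained group
pre_trained  = [5, 6, 5, 6]   # pretest (one-group pretest-posttest only)
post_control = [6, 6, 5, 7]   # posttest, untrained comparison group

# One-shot case study: a posttest mean alone, with nothing to compare it to.
one_shot = mean(post_trained)                         # 7.5

# One-group pretest-posttest: shows change, but cannot rule out
# confounding factors because there is no control group.
gain = mean(post_trained) - mean(pre_trained)         # 2.0

# Static group comparison: shows a group difference, but without random
# assignment or pretests the groups may have differed all along.
difference = mean(post_trained) - mean(post_control)  # 1.5
```

Adding both the pretest and the comparison group to the one-shot case study would recover the pretest-only control group design described earlier.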