Chapter 6
Training Evaluation
Copyright © 2010 by The McGraw-Hill Companies, Inc. All rights reserved. McGraw-Hill/Irwin
Introduction
Training effectiveness - the benefits that the company and the trainees receive from training.
Training outcomes or criteria - measures that the trainer and the company use to evaluate training programs.
Introduction (cont.)
Training evaluation - the process of collecting the outcomes needed to determine whether training is effective.
Evaluation design - the plan for collecting the information, including from whom, what, when, and how, used to determine the effectiveness of the training program.
Reasons for Evaluating Training
Companies make large investments in training and education and view them as a strategy for success; they expect the outcomes of training to be measurable.
Training evaluation provides the data needed to demonstrate that training does provide benefits to the company.
It involves formative and summative evaluation.
Reasons for Evaluating Training (cont.)
Formative evaluation - takes place during program design and development.
It helps ensure that the training program is well organized and runs smoothly, and that trainees learn and are satisfied with the program.
It provides information about how to make the program better; it involves collecting qualitative data about the program.
Reasons for Evaluating Training (cont.)
Formative evaluation
Pilot testing - the process of previewing the training program with potential trainees and managers or with other customers.
Reasons for Evaluating Training (cont.)
Summative evaluation - determines the extent to which trainees have changed as a result of participating in the training program.
It may include measuring the monetary benefits that the company receives from the program.
It involves collecting quantitative data.
Reasons for Evaluating Training (cont.)
A training program should be evaluated:
To identify the program's strengths and weaknesses.
To assess whether the content, organization, and administration of the program contribute to learning and the use of training content on the job.
To identify which trainees benefited most or least from the program.
Reasons for Evaluating Training (cont.)
A training program should be evaluated:
To gather data to assist in marketing training programs.
To determine the financial benefits and costs of the program.
To compare the costs and benefits of:
training versus non-training investments.
different training programs, in order to choose the best program.
Figure 6.1 - The Evaluation Process
Table 6.1 - Kirkpatrick's Four-Level Framework of Evaluation Criteria
Outcomes Used in the Evaluation of Training Programs
The hierarchical nature of Kirkpatrick's framework suggests that higher-level outcomes should not be measured unless positive changes occur in lower-level outcomes.
The framework implies that changes at a higher level are more beneficial than changes at a lower level.
Outcomes Used in the Evaluation of Training Programs (cont.)
Criticisms of Kirkpatrick's framework:
Research has not found that each level is caused by the level that precedes it in the framework, nor does evidence suggest that the levels differ in importance.
The approach does not take into account the purpose of the evaluation.
It assumes that outcomes can and should be collected in an orderly manner, that is, measures of reaction followed by measures of learning, behavior, and results.
Table 6.2 - Evaluation Outcomes
Outcomes Used in the Evaluation of Training Programs (cont.)
Reaction outcomes
They are collected at the program's conclusion.
Cognitive outcomes
They do not help to determine whether the trainee will actually use decision-making skills on the job.
Skill-based outcomes
The extent to which trainees have learned skills can be evaluated by observing their performance in work samples such as simulators.
Outcomes Used in the Evaluation of Training Programs (cont.)
Return on investment
Direct costs - salaries and benefits for all employees involved in training; program material and supplies; equipment or classroom rentals or purchases; and travel costs.
Indirect costs - costs not related directly to the design, development, or delivery of the training program.
Benefits - the value that the company gains from the training program.
Determining Whether Outcomes Are Appropriate
Criteria:
Relevance - the extent to which training outcomes are related to the learned capabilities emphasized in the training program.
  Criterion contamination - the extent to which training outcomes measure inappropriate capabilities or are affected by extraneous conditions.
  Criterion deficiency - the failure to measure training outcomes that were emphasized in the training objectives.
Reliability - the degree to which outcomes can be measured consistently over time (see the sketch below).
Discrimination - the degree to which trainees' performance on the outcome actually reflects true differences in performance.
Practicality - the ease with which the outcome measures can be collected.
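Reliability is often checked by correlating two administrations of the same measure (a test-retest approach). A minimal Python sketch, assuming hypothetical score lists and a hand-rolled Pearson correlation:

```python
# Test-retest reliability as the correlation between two administrations
# of the same outcome measure; all scores below are hypothetical.
def pearson(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

test_1 = [72, 85, 90, 64, 78, 88]  # trainee scores at time 1
test_2 = [70, 87, 91, 66, 75, 89]  # same trainees, a few weeks later
print(f"Test-retest reliability: {pearson(test_1, test_2):.2f}")
```

A value close to 1.0 indicates the outcome can be measured consistently over time.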
Figure 6.2 - Criterion Deficiency, Relevance, and Contamination
Figure 6.4 - Training Program Objectives and Their Implications for Evaluation
Evaluation Designs
Threats to validity - factors that lead an evaluator to question either:
Internal validity - the believability of the study results.
External validity - the extent to which the evaluation results are generalizable to other groups of trainees and situations.
Table 6.7 - Threats to Validity
Evaluation Designs (cont.)
Methods to Control for Threats to Validity
Pretests and posttests
A comparison of the posttraining and pretraining measures can indicate the degree to which trainees have changed as a result of training.
Random assignment - assigning employees to the training or comparison group on the basis of chance.
Helps to reduce the effects of employees dropping out of the study, and of differences between the training group and comparison group in ability, knowledge, skill, or other personal characteristics.
Evaluation Designs (cont.)
Methods to Control for Threats to Validity
Using a comparison group - employees who participate in the evaluation study but do not attend the training program.
Helps to rule out the possibility that changes found in the outcome measures are due to factors other than training (see the sketch below).
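A minimal Python sketch of a pretest/posttest design with random assignment and a comparison group. The employee pool and scores are hypothetical, and the effect is estimated as a simple difference-in-differences of mean change, a technique not named on the slides but consistent with the logic they describe:

```python
import random

# Hypothetical pool of 20 employees; random assignment puts half in the
# training group and half in the comparison group, reducing the effects
# of pre-existing differences in ability, knowledge, or skill.
ids = list(range(20))
random.shuffle(ids)
trained, comparison = ids[:10], ids[10:]

# Hypothetical pretest and posttest scores keyed by employee ID.
pretest = {e: random.uniform(50, 70) for e in ids}
posttest = {e: pretest[e] + (random.uniform(8, 15) if e in trained
                             else random.uniform(0, 4)) for e in ids}

def mean_change(group):
    return sum(posttest[e] - pretest[e] for e in group) / len(group)

# Difference-in-differences: the change beyond what the comparison group
# shows, i.e., change attributable to training rather than to factors
# shared by both groups (history, time, maturation).
effect = mean_change(trained) - mean_change(comparison)
print(f"Estimated training effect: {effect:.1f} points")
```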
Table 6.8 - Comparison of Evaluation Designs
Types of Evaluation Designs
Time series - training outcomes are collected at periodic intervals both before and after training.
It allows an analysis of the stability of training outcomes over time (see the sketch below).
Reversal - a time period in which participants no longer receive the training intervention.
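A minimal Python sketch of the time-series idea, assuming hypothetical monthly outcome measurements taken before and after a training intervention:

```python
# Hypothetical monthly outcome measurements (e.g., units produced per
# employee), with training delivered between months 6 and 7.
monthly_scores = [52, 54, 51, 53, 52, 55,   # pre-training
                  61, 63, 62, 64, 63, 65]   # post-training
TRAINING_MONTH = 6

pre = monthly_scores[:TRAINING_MONTH]
post = monthly_scores[TRAINING_MONTH:]
pre_mean = sum(pre) / len(pre)
post_mean = sum(post) / len(post)

# Repeated measures make it possible to see whether the gain is stable
# over time rather than a one-time spike right after training.
print(f"Pre-training mean:  {pre_mean:.1f}")
print(f"Post-training mean: {post_mean:.1f}")
print(f"Shift in level:     {post_mean - pre_mean:.1f}")
```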
Table 6.12 - Factors That Influence the Type of Evaluation Design
Determining Return on Investment (ROI)
Cost-benefit analysis - the process of determining the economic benefits of a training program using accounting methods that look at training costs and benefits.
ROI analysis should be limited to certain training programs, because it can be costly.
Determining Return on Investment (ROI) (cont.)
Determining costs
Methods for comparing costs of alternative training programs include the resource requirements model and accounting.
Determining benefits - methods include:
technical, academic, and practitioner literature.
pilot training programs and observance of successful job performers.
estimates by trainees and their managers.
Determining Return on Investment (ROI) (cont.)
To calculate ROI, divide benefits by costs.
The ROI gives an estimate of the dollar return expected from each dollar invested in training (see the sketch below).
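A minimal Python sketch of the benefits-divided-by-costs calculation described above. Every cost and benefit figure is a hypothetical placeholder, and the cost categories simply mirror the direct/indirect split introduced earlier:

```python
# Hypothetical direct and indirect costs of a training program.
direct_costs = {
    "trainer_salary_and_benefits": 20_000,
    "materials_and_supplies": 5_000,
    "classroom_rental": 3_000,
    "trainee_travel": 7_000,
}
indirect_costs = {
    "clerical_support": 2_000,
    "general_overhead": 3_000,
}
total_costs = sum(direct_costs.values()) + sum(indirect_costs.values())

# Hypothetical annual benefit: the dollar value the company gains from
# improved performance after training.
total_benefits = 100_000

roi = total_benefits / total_costs  # dollars returned per dollar invested
print(f"Total costs:    ${total_costs:,}")
print(f"Total benefits: ${total_benefits:,}")
print(f"ROI: {roi:.1f} (each $1 invested returns ${roi:.2f})")
```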
Table 6.13 - Determining Costs for a Cost-Benefit Analysis
Determining Return on Investment (ROI) (cont.)
Utility analysis - a cost-benefit analysis method that involves assessing the dollar value of training based on (a worked sketch follows this list):
estimates of the difference in job performance between trained and untrained employees.
the number of individuals trained.
the length of time a training program is expected to influence performance.
the variability in job performance in the untrained group of employees.
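The inputs listed above correspond to the terms of a widely used utility formula, often attributed to Schmidt and Hunter: delta-U = T × N × d_t × SD_y − N × C. The per-trainee cost C is not listed on the slide but is the standard extra term in that formulation needed to net out training costs. A minimal Python sketch with hypothetical values:

```python
# Utility analysis inputs (all values hypothetical):
T = 2.0        # years the training is expected to influence performance
N = 50         # number of employees trained
d_t = 0.6      # true difference in job performance between trained and
               # untrained employees, in standard-deviation units
SD_y = 10_000  # variability (SD) of job performance in dollars,
               # estimated from the untrained group
C = 1_500      # cost of training one employee (standard term in the
               # common formulation, not listed on the slide)

gross_utility = T * N * d_t * SD_y   # dollar value of performance gains
net_utility = gross_utility - N * C  # gains net of total training costs
print(f"Gross utility: ${gross_utility:,.0f}")
print(f"Net utility:   ${net_utility:,.0f}")
```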
Table 6.16 - Training Metrics
