6-2 Introduction
- Training effectiveness - the benefits that the company and the trainees receive from training.
- Training outcomes or criteria - measures that the trainer and the company use to evaluate training programs.
6-3 Introduction (cont.)
- Training evaluation - the process of collecting the outcomes needed to determine if training is effective.
- Evaluation design - the plan for collecting information, including from whom, what, when, and how, to determine the effectiveness of the training program.
6-4 Reasons for Evaluating Training
- Companies make large investments in training and education and view them as a strategy for success; they expect the outcomes of training to be measurable.
- Training evaluation provides the data needed to demonstrate that training does provide benefits to the company.
- It involves formative and summative evaluation.
6-5 Reasons for Evaluating Training (cont.)
- Formative evaluation - takes place during program design and development.
  - Helps ensure that the training program is well organized and runs smoothly, and that trainees learn and are satisfied with the program.
  - Provides information about how to make the program better; involves collecting qualitative data about the program.
6-6 Reasons for Evaluating Training (cont.)
- Formative evaluation
  - Pilot testing - the process of previewing the training program with potential trainees and managers or with other customers.
6-7 Reasons for Evaluating Training (cont.)
- Summative evaluation - determines the extent to which trainees have changed as a result of participating in the training program.
  - May include measuring the monetary benefits that the company receives from the program.
  - Involves collecting quantitative data.
6-8 Reasons for Evaluating Training (cont.)
A training program should be evaluated:
- To identify the program's strengths and weaknesses.
- To assess whether the content, organization, and administration of the program contribute to learning and to the use of training content on the job.
- To identify which trainees benefited most or least from the program.
6-9 Reasons for Evaluating Training (cont.)
A training program should be evaluated:
- To gather data to assist in marketing training programs.
- To determine the financial benefits and costs of the program.
- To compare the costs and benefits of:
  - training versus non-training investments.
  - different training programs, to choose the best program.
6-11 Table 6.1 - Kirkpatrick's Four-Level Framework of Evaluation Criteria
(Level 1: Reactions; Level 2: Learning; Level 3: Behavior; Level 4: Results)
6-12 Outcomes Used in the Evaluation of Training Programs
- The hierarchical nature of Kirkpatrick's framework suggests that higher-level outcomes should not be measured unless positive changes occur in lower-level outcomes.
- The framework implies that changes at a higher level are more beneficial than changes at a lower level.
6-13 Outcomes Used in the Evaluation of Training Programs (cont.)
Criticisms of Kirkpatrick's framework:
- Research has not found that each level is caused by the level that precedes it in the framework, nor does evidence suggest that the levels differ in importance.
- The approach does not take into account the purpose of the evaluation.
- It assumes outcomes can and should be collected in an orderly manner, that is, measures of reaction followed by measures of learning, behavior, and results.
6-15 Outcomes Used in the Evaluation of Training Programs (cont.)
- Reaction outcomes - typically collected at the program's conclusion.
- Cognitive outcomes - do not help to determine whether the trainee will actually use decision-making skills on the job.
- Skill-based outcomes - the extent to which trainees have learned skills can be evaluated by observing their performance in work samples such as simulators.
6-16 Outcomes Used in the Evaluation of Training Programs (cont.)
Return on investment:
- Direct costs - salaries and benefits for all employees involved in training; program material and supplies; equipment or classroom rentals or purchases; and travel costs.
- Indirect costs - costs not related directly to the design, development, or delivery of the training program.
- Benefits - the value that the company gains from the training program.
6-17 Determining Whether Outcomes Are Appropriate
Criteria:
- Relevance - the extent to which training outcomes are related to the learned capabilities emphasized in the training program.
  - Criterion contamination - the extent to which training outcomes measure inappropriate capabilities or are affected by extraneous conditions.
  - Criterion deficiency - the failure to measure training outcomes that were emphasized in the training objectives.
- Reliability - the degree to which outcomes can be measured consistently over time.
- Discrimination - the degree to which trainees' performance on the outcome actually reflects true differences in performance.
- Practicality - the ease with which the outcome measures can be collected.
6-18 Figure 6.2 - Criterion Deficiency, Relevance, and Contamination
6-19 Figure 6.4 - Training Program Objectives and Their Implications for Evaluation
6-20 Evaluation Designs
- Threats to validity - factors that lead an evaluator to question either:
  - Internal validity - the believability of the study results.
  - External validity - the extent to which the evaluation results are generalizable to other groups of trainees and situations.
6-22 Evaluation Designs (cont.)
Methods to control for threats to validity:
- Pretests and posttests - a comparison of the posttraining and pretraining measures can indicate the degree to which trainees have changed as a result of training.
- Random assignment - assigning employees to the training or comparison group on the basis of chance.
  - Helps to reduce the effects of employees dropping out of the study, and of differences between the training group and comparison group in ability, knowledge, skill, or other personal characteristics.
6-23 Evaluation Designs (cont.)
Methods to control for threats to validity:
- Using a comparison group - employees who participate in the evaluation study but do not attend the training program.
  - Helps to rule out the possibility that changes found in the outcome measures are due to factors other than training.
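The pretest/posttest-with-comparison-group logic above can be sketched in a few lines. The scores below are purely hypothetical, chosen only to illustrate the arithmetic:

```python
from statistics import mean

def mean_gain(pre, post):
    """Average pretest-to-posttest change for one group of employees."""
    return mean(b - a for a, b in zip(pre, post))

# Hypothetical test scores measured before and after the training period.
trained_gain = mean_gain([62, 70, 55, 68], [78, 81, 66, 80])
comparison_gain = mean_gain([60, 72, 58, 66], [63, 74, 60, 69])

# Subtracting the comparison group's gain screens out changes due to
# factors other than training (e.g., business conditions, maturation).
training_effect = trained_gain - comparison_gain
print(training_effect)  # 10.0
```

Here the trainees gained 12.5 points on average, but the untrained comparison group also gained 2.5, so only 10 points of the change are attributable to training.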
6-25 Types of Evaluation Designs
- Time series - training outcomes are collected at periodic intervals both before and after training.
  - Allows an analysis of the stability of training outcomes over time.
- Reversal - a time period in which participants no longer receive the training intervention.
6-26 Table 6.12 - Factors That Influence the Type of Evaluation Design
6-27 Determining Return on Investment (ROI)
- Cost-benefit analysis - the process of determining the economic benefits of a training program using accounting methods that look at training costs and benefits.
- ROI analysis should be limited to certain training programs, because it can be costly.
6-28 Determining Return on Investment (ROI) (cont.)
- Determining costs - methods for comparing the costs of alternative training programs include the resource requirements model and accounting.
- Determining benefits - methods include:
  - technical, academic, and practitioner literature.
  - pilot training programs and observation of successful job performers.
  - estimates by trainees and their managers.
6-29 Determining Return on Investment (ROI) (cont.)
- To calculate ROI, divide benefits by costs.
- The ROI gives an estimate of the dollar return expected from each dollar invested in training.
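As a sketch of the arithmetic above (the dollar figures are hypothetical):

```python
def roi(benefits: float, costs: float) -> float:
    """ROI as described above: total benefits divided by total costs,
    i.e., dollars returned per dollar invested in training."""
    return benefits / costs

# Hypothetical program: $200,000 in measured benefits, $50,000 in costs.
print(roi(200_000, 50_000))  # 4.0, i.e., $4 returned per $1 invested
```

Note that some practitioners instead report net ROI, (benefits - costs) / costs, usually expressed as a percentage; either way, the inputs are the cost and benefit figures described on the preceding slides.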
6-30 Table 6.13 - Determining Costs for a Cost-Benefit Analysis
6-31 Determining Return on Investment (ROI) (cont.)
- Utility analysis - a cost-benefit analysis method that involves assessing the dollar value of training based on:
  - estimates of the difference in job performance between trained and untrained employees.
  - the number of individuals trained.
  - the length of time a training program is expected to influence performance.
  - the variability in job performance in the untrained group of employees.
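The components listed above correspond to the terms of the widely used Schmidt-Hunter utility formula, dU = N * T * d * SDy - N * C. The function and figures below are an illustrative sketch under that assumption, not a worked example from the text:

```python
def utility(n_trained, years, effect_size_d, sd_perf_dollars, cost_per_trainee):
    """Dollar value of training (Schmidt-Hunter style utility estimate):
    N trainees * T years of effect * d (performance difference in SD units)
    * SDy (dollar value of one SD of job performance), minus total cost N * C."""
    gain = n_trained * years * effect_size_d * sd_perf_dollars
    total_cost = n_trained * cost_per_trainee
    return gain - total_cost

# Hypothetical: 50 trainees, effect lasts 2 years, trained employees perform
# 0.4 SD better, one SD of performance is worth $10,000/year, $500 per trainee.
print(utility(50, 2, 0.4, 10_000, 500))  # 375000.0
```

The estimate is most sensitive to SDy, the dollar variability of job performance in the untrained group, which is also the hardest component to measure.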