Evaluating HRD Programs: Presentation Transcript

    • Evaluating HRD Programs Chapter 7CH-7 Copyright 2008 Werner, et al 1
    • Learning Objectives – 1• Define evaluation and explain its role in HRD• Compare different frameworks for HRD evaluation• Discuss the various types of evaluation information available and compare the methods of data collection• Explain the role of research design in HRD evaluationCH-7 Copyright 2008 Werner, et al 2
    • Learning Objectives – 2• Describe the ethical issues involved in conducting HRD evaluation• Identify and explain the choices available for translating evaluation results into dollar terms• Calculate a utility estimate for a target organization. Discuss how technology impacts HRD evaluationCH-7 Copyright 2008 Werner, et al 3
    • Questions to Consider• How do you evaluate training and HRD?• What measures can be used to evaluate training?• Is there one best way to evaluate training?• What should be considered as one prepares to evaluate HRD?• What are the ethical issues involved in evaluating HRD?• How can the value of HRD be expressed in terms of costs and benefits, or dollars and cents?CH-7 Copyright 2008 Werner, et al 4
    • HRD Program Effectiveness• What is meant by effectiveness? – Is it the same thing as efficiency?• How is effectiveness measured?• What is the purpose of determining effectiveness? – That is, what decisions are made after a program is judged effective or ineffective?CH-7 Copyright 2008 Werner, et al 5
    • Effectiveness• A relative term – Effectiveness is determined with respect to the achievement of a goal or a set of goals – Must be determined with respect to the goals of the program or programs being examinedCH-7 Copyright 2008 Werner, et al 6
    • A Quandary• Program can be effective in meeting some goals – staying within budget – increasing a participant’s skills• and be ineffective in meeting others – Improving customer satisfaction• How do you ensure effectiveness?CH-7 Copyright 2008 Werner, et al 7
    • Training and HRD Process (Fig. 7-1) • Assessment phase: assess needs, prioritize needs • Design phase: define objectives, develop lesson plan, develop/acquire materials, select trainer/leader, select methods and techniques, schedule the program/intervention • Implementation phase: deliver the HRD program or intervention • Evaluation phase: select evaluation criteria, determine evaluation design, conduct evaluation of program or intervention, interpret results. CH-7 Copyright 2008 Werner, et al 8
    • Purpose of Evaluation• HRD evaluation: – “The systematic collection of descriptive and judgmental information necessary to make effective training decisions related to the selection, adoption, value, and modification of various instructional activities.”CH-7 Copyright 2008 Werner, et al 9
    • Significant Points in Definition – 1• Both descriptive and judgmental information may be collected – Descriptive information provides a picture of what is happening or has happened – Judgmental information communicates some opinion or belief about what has happenedCH-7 Copyright 2008 Werner, et al 10
    • Significant Points in Definition – 2• Evaluation involves the systematic collection of information – According to a predetermined plan to ensure that the information is appropriate and useful• Evaluation is conducted to help make informed decisions about particular programs and methodsCH-7 Copyright 2008 Werner, et al 11
    • Evaluation Can Help• Determine whether a program is accomplishing its objectives• Identify the strengths and weaknesses of HRD programs• Determine the cost-benefit ratio of an HRD program• Decide who should participate in future HRD programs• Identify which participants benefited the most or least from the program• Gather data to assist in marketing future programs• Establish a database to assist management in making decisionsCH-7 Copyright 2008 Werner, et al 12
    • Other Major Factors – 1• If HRD staff cannot substantiate its contribution to the organization, its funding and programs may be cut during the budgeting process, especially when the organization faces tough timesCH-7 Copyright 2008 Werner, et al 13
    • Other Major Factors – 2• Evaluation can build credibility with top managers and others in the organization• Senior management often wants to know the benefits of HRD -programs• Building credibility is a key aspect of evaluationCH-7 Copyright 2008 Werner, et al 14
    • How Often Are HRD Programs Evaluated? • Most company-sponsored training is evaluated • Less than half of executive MBA programs are evaluated • The most commonly used measure is participant reaction, which is not always useful. CH-7 Copyright 2008 Werner, et al 15
    • Why Not Done Frequently?• Conducting an evaluation is not easy• Many external factors can affect whether employee performance improves – makes it difficult to evaluate the impact of just the training• HRD managers afraid of criticism and program cutsCH-7 Copyright 2008 Werner, et al 16
    • Evaluation Prior to Purchase• Many HRD and training programs are purchased by organizations from third parties – They wouldn’t buy a program they didn’t think was going to work – they have evaluated the program before buying it• Equally important to evaluate after useCH-7 Copyright 2008 Werner, et al 17
    • Changing Evaluation Emphasis• Stage One focuses on anecdotal reactions• Stage Two involves experimental methodology• Stage Three creatively matches research methodology to organizational constraints• Stage Four shifts the focus of evaluation from post-program results to the entire HRD processCH-7 Copyright 2008 Werner, et al 18
    • Evaluation Frameworks – 1Table 7-1 Model Training Evaluation Criteria Kirkpatrick Four levels: (1967, 1987, •Reaction 1994) •Learning •Job Behavior •ResultsCH-7 Copyright 2008 Werner, et al 19
    • Evaluation Frameworks – 2Table 7-1 Model Training Evaluation Criteria CIPP (Galvin, Four levels: 1983) •Context •Input •Process •ProductCH-7 Copyright 2008 Werner, et al 20
    • Evaluation Frameworks – 3Table 7-1 Model Training Evaluation Criteria Brinkerhoff Six stages: (1987) •Goal Setting, •Program Design, •Program Implementation, •Immediate Outcomes •Intermediate or Usage Outcomes •Impacts and WorthCH-7 Copyright 2008 Werner, et al 21
    • Evaluation Frameworks – 4Table 7-1 Model Training Evaluation Criteria Kraiger, Ford, Classification scheme that specifies three & Salas (1993) categories of learning outcomes •cognitive •skill-based •affective Evaluation measures appropriate for each category of outcomesCH-7 Copyright 2008 Werner, et al 22
    • Evaluation Frameworks – 5Table 7-1 Model Training Evaluation Criteria Holton Identifies five categories of variables (1996) and the relationships among them: •Secondary Influences •Motivation Elements •Environmental Elements •Outcomes •Ability/Enabling ElementsCH-7 Copyright 2008 Werner, et al 23
    • Evaluation Frameworks – 6Table 7-1 Model Training Evaluation Criteria Phillips (1996) Five levels: •Reaction and Planned Action •Learning •Applied Learning on the Job •Business Results •Return on InvestmentCH-7 Copyright 2008 Werner, et al 24
    • Kirkpatrick’s Framework • Reaction: Did trainees like the program? Did trainees think it valuable? • Learning: Did they learn what the objectives said they should learn? • Job Behavior: Did they use the learning back on the job? • Results: Has HRD improved the organization’s effectiveness? CH-7 Copyright 2008 Werner, et al 25
    • Kirkpatrick and Industry• Most organizations do not collect information on all four types of outcomes• About one-third of organizations use Kirkpatrick’s model• Some feel it only measures after training• Others feel it is more of a taxonomy of outcomesCH-7 Copyright 2008 Werner, et al 26
    • Brinkerhoff’s Six Stages• Goal Setting: – What is the need?• Program Design: – What will work to meet the need?• Program Implementation: – Is it working, with the focus on the implementation of the program?• Immediate Outcomes: – Did participants learn?• Intermediate or Usage Outcomes: – Are the participants using what they learned?• Impacts and Worth: – Did it make a worthwhile difference to the organization?CH-7 Copyright 2008 Werner, et al 27
    • Classification of Learning Outcomes (Table 7-2)
    • Cognitive outcomes: • Verbal knowledge (declarative knowledge): measurement focuses on amount of knowledge, accuracy of recall, and speed/accessibility of knowledge; potential evaluation methods include recognition and recall tests, power tests, and speed tests • Knowledge organization (mental models): measurement focuses on similarity to an ideal model and interrelationships of elements; methods include free sorts and structural assessment • Cognitive strategies (self-insight, metacognitive skills): measurement focuses on self-awareness and self-regulation; methods include probed protocol analysis, self-report, and readiness for testing
    • Skill-based outcomes: • Compilation (composition, proceduralization): measurement focuses on speed, fluidity of performance, error rates, chunking, generalization, discrimination, and strengthening; methods include targeted behavioral observation, hands-on testing, and structured situational interviews • Automaticity (automatic processing, tuning): measurement focuses on attentional requirements and available cognitive resources; methods include secondary task performance, interference problems, and embedded measurement
    • Affective outcomes: • Attitudinal (targeted object): measurement focuses on attitude direction and attitude strength (accessibility, centrality, conviction); methods include self-report measures • Motivation (motivational disposition): measurement focuses on mastery versus performance orientations and appropriateness of orientation; methods include self-report measures • Motivation (self-efficacy): measurement focuses on perceived performance capability; methods include self-report measures • Motivation (goal setting): measurement focuses on level of goals, complexity of goal structures, and goal commitment; methods include free recall measures and free sorts
    SOURCE: K. Kraiger, J. K. Ford, & E. Salas (1993). “Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation.” Journal of Applied Psychology, 78, table 1, 323. Copyright © 1993 by the American Psychological Association. Adapted with permission. CH-7 Copyright 2008 Werner, et al 28-35
    • Kirkpatrick’s Shortcomings• Lack of explicit causal relationships among the different levels• Lack of specificity in dealing with different types of learning outcomes• Lack of direction concerning which measures are appropriate to assess which outcome measuresCH-7 Copyright 2008 Werner, et al 36
    • Expanded Framework – 1• Reaction – Perceived usefulness/utility • What was the perceived relevance/usefulness of this training? – Post-training attitudes • How well did trainees like training?CH-7 Copyright 2008 Werner, et al 37
    • Expanded Framework – 2• Cognitive learning – How much did trainees learn from the training? – Post-training learning • How much learning does the trainee demonstrate immediately after training? – Retention • How much learning does the trainee demonstrate back on the job?CH-7 Copyright 2008 Werner, et al 38
    • Expanded Framework – 3• Behavior – What behavior change occurred as a result of training? – Training performance • How well can trainees demonstrate the newly acquired skills at the end of training? – Transfer performance • How well can trainees demonstrate the newly acquired skills back on the job?CH-7 Copyright 2008 Werner, et al 39
    • Expanded Framework – 4• Results – What tangible outcomes or results occurred as a result of training? – What was the return on investment (ROI) for this training? • (See ROI and utility sections below; this is Phillips “Level 5”) – What was the contribution of this training program to the community/larger society?CH-7 Copyright 2008 Werner, et al 40
    • A Stakeholder Approach• Stakeholder – a person or group with an interest in seeing an endeavor succeed and without whose support the endeavor would failCH-7 Copyright 2008 Werner, et al 41
    • A Stakeholder Scorecard (Fig. 7-2): the scorecard surrounds the training effort with its key stakeholder groups (senior management, trainees, trainees’ managers, and trainers) and, for each group, lists several measures of the contributions it makes and the inducements it receives. SOURCE: Nickols, F. W. (2005). Why a stakeholder approach to evaluating training. Advances in Developing Human Resources, 7(1), 121–134. CH-7 Copyright 2008 Werner, et al 42
    • Data Collection Methods (Fig. 7-3) • Interview: conversation with one or more individuals to assess their opinions, observations, and beliefs • Questionnaire: a standardized set of questions intended to assess opinions, observations, and beliefs • Direct observation: observing a task or set of tasks as they are performed and recording what is seen • Tests and simulations: structured situations to assess an individual’s knowledge or proficiency to perform some task or behavior • Archival performance data: use of existing information, such as files or reports. CH-7 Copyright 2008 Werner, et al 43-48
    • Participant Reaction Questionnaire• Measures immediate reaction to program – See Tables 7-3 & 7-4, pp. 204, 204• Transforms “feelings” into numbers – Likert scale• Allows for numerical analysis – Mean (Average) – Standard deviation (Spread)• Helps compare instructors and programsCH-7 Copyright 2008 Werner, et al 49
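A minimal sketch of the numerical analysis described above, in Python; the ratings are hypothetical and the computation uses only the standard library:

```python
import statistics

# Hypothetical 5-point Likert ratings from one question on a
# participant reaction questionnaire (1 = strongly disagree,
# 5 = strongly agree).
ratings = [4, 5, 3, 4, 5, 4, 2, 5, 4, 4]

mean = statistics.mean(ratings)         # average reaction score
spread = statistics.stdev(ratings)      # sample standard deviation

print(f"Mean reaction:  {mean:.2f}")    # 4.00
print(f"Std. deviation: {spread:.2f}")  # 0.94
```

The same two numbers, computed per instructor or per program offering, support the comparisons the slide mentions.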
    • Choosing Data Collection Methods• Three vital issues with methods – Reliability • Consistency of measurement – Validity • Does it measure the desired target(s) – Practicality • Can it be done with existing resourcesCH-7 Copyright 2008 Werner, et al 50
    • Data Collection Methods: Advantages and Disadvantages (Table 7-4)
    • Interview: Advantages: flexible; opportunity for clarification; depth possible; personal contact. Disadvantages: high reactive effects; high cost; face-to-face threat potential; labor intensive; trained observers needed
    • Questionnaire: Advantages: low cost; honesty increased if anonymous; anonymity possible; respondent sets pace; variety of options. Disadvantages: possible inaccurate data; on-job responding conditions not controlled; respondents set varying paces; return rate beyond control
    • Direct observation: Advantages: nonthreatening; excellent way to measure behavior change. Disadvantages: possibly disruptive; reactive effects possible; may be unreliable; trained observers needed
    • Written test: Advantages: low purchase cost; readily scored; quickly processed; easily administered; wide sampling possible. Disadvantages: may be threatening; possibly low relation to job performance; reliance on norms may distort individual performance; possible cultural bias
    • Simulation/performance test: Advantages: reliable; objective; close relation to job performance. Disadvantages: time consuming; simulations often difficult to create; high development cost
    • Archival performance data: Advantages: reliable; objective; easy to review; minimal reactive effects. Disadvantages: criteria for keeping or discarding records; information system discrepancies; indirect; needs to be converted to usable form; may be expensive to collect
    SOURCE: Reprinted from Handbook of training evaluation and measurement methods, J. J. Phillips, p. 92, Copyright 1983, with permission from Elsevier. CH-7 Copyright 2008 Werner, et al 51-56
    • Types of Data• At least three types of data are used – Individual performance – System-wide performance – EconomicCH-7 Copyright 2008 Werner, et al 57
    • Individual Performance Data• Employee’s test scores• Number of units produced• Timeliness of performance• Quality of performance• Attendance• AttitudesCH-7 Copyright 2008 Werner, et al 58
    • System-Wide Performance• Concerns business unit, team, division, etc.• Includes – Productivity – Rework – Scrap – Customer and client satisfaction – TimelinessCH-7 Copyright 2008 Werner, et al 59
    • Economic Data• Includes financial and economic performance of the organization or unit – the bottom line – profits – product liability – avoidance of penalties • such as fines for noncompliance with laws and regulations – market shareCH-7 Copyright 2008 Werner, et al 60
    • Use of Self-Report Data• Can provide – personality data – attitudes – perceptions• Can provide information to measure the effectiveness of HRD or other programsCH-7 Copyright 2008 Werner, et al 61
    • Two Serious Problems • 1. Mono-method bias: if both reports in a before-and-after evaluation come from the same person at the same time (say, after training), conclusions may be questionable; respondents may be more concerned about being consistent in their answers than about providing accurate responses • 2. Socially desirable responses: respondents may report what they think the researcher (or boss) wants to hear rather than the truth; employees may be fearful or embarrassed to admit that they learned nothing in a training program. CH-7 Copyright 2008 Werner, et al 62
    • Response-Shift Bias• Respondents’ perspectives of their skills before training change during the training program and affect their after-training assessmentCH-7 Copyright 2008 Werner, et al 63
    • Reliance on Self-Report Data • Can be problematic • Other methods may yield better results: direct observation by trained observers (such as supervisors), tests, and simulations. CH-7 Copyright 2008 Werner, et al 64
    • Research• Research design – A plan for conducting an evaluation study – A complex subject – Critical to HRD evaluation – Specifies • expected results of the evaluation study • the methods of data collection • how the data will be analyzed – More in Appendix 7-1CH-7 Copyright 2008 Werner, et al 65
    • Specific Needs in HRD Research• Pretest and Posttest – allows the trainer to see what has changed after the training• Control Group – Group of employees similar to those who receive training • Don’t receive training at the same time as those who are trained. • Receives the same evaluation measures as the group that is trained • Allows for a comparison of their scores. – The ideal scenario • training group and the control group have similar scores before training – Scores for the training group increase after training, while those of the control group remain constant. – This provides fairly strong evidence that the training (and not some other factor) was responsible for the changes on the outcome measuresCH-7 Copyright 2008 Werner, et al 66
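A minimal sketch of the comparison this slide describes, with hypothetical pretest and posttest scores; the gain of the trained group is compared against the gain of the control group:

```python
# Hypothetical scores on the same knowledge test, given before and
# after the program to both the trained group and the control group.
trained_pre,  trained_post = [62, 58, 65, 60], [81, 77, 85, 79]
control_pre,  control_post = [61, 59, 64, 63], [63, 60, 65, 62]

def mean(xs):
    return sum(xs) / len(xs)

# Gain for each group: posttest average minus pretest average.
trained_gain = mean(trained_post) - mean(trained_pre)
control_gain = mean(control_post) - mean(control_pre)

# If the trained group's gain clearly exceeds the control group's,
# the training (rather than some other factor) is the likely cause.
print(f"Trained-group gain: {trained_gain:.2f}")  # 19.25
print(f"Control-group gain: {control_gain:.2f}")  # 0.75
print(f"Gain attributable to training: {trained_gain - control_gain:.2f}")
```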
    • Strong Factors for Good Research• Pretest-posttest with control group• Random assignment between training and control group• Collection of data over time – Time-series design• Number of respondents should be over thirty to be accurateCH-7 Copyright 2008 Werner, et al 67
    • Ethical Issues in Evaluation Research – 1 • Confidentiality • Informed consent • Withholding training: use a lottery to assign employees to control groups, assure that training will be provided if it is found to be effective, or train both groups but at different times. CH-7 Copyright 2008 Werner, et al 68
    • Ethical Issues in Evaluation Research – 2• Use of deception• Pressure to produce positive resultsCH-7 Copyright 2008 Werner, et al 69
    • Assessing HRD in Dollar Terms• Effect of an HRD program on the organization’s effectiveness• Measured in terms of increased • Productivity • Timeliness • Cost savingsCH-7 Copyright 2008 Werner, et al 70
    • A Major Goal• Making HRD programs investments – Leading to measurable payoffs in the future• Two practical Methods – Evaluation of training costs • Return on Investment (ROI) – Utility analysisCH-7 Copyright 2008 Werner, et al 71
    • Types of Cost Analysis• Cost-benefit analysis – comparing monetary costs of training to benefits received in non-monetary terms • improvements in attitudes, safety, and health• Cost-effectiveness analysis – financial benefits accrued from training • increases in quality and profits • reduction in waste and processing timeCH-7 Copyright 2008 Werner, et al 72
    • Return on Investment (ROI) • Most common business ratio for determining performance: ROI = Results / Training Costs • If ROI < 1, the training cost more than the benefits that accrued • If ROI > 1, benefits exceeded costs • The greater the ratio, the greater the benefit. CH-7 Copyright 2008 Werner, et al 73
    • Table 7-5 Types of Costs – 1 Direct Costs Directly associated with delivery of learning activities •Course materials reproduced or hired •Instructional aids •Equipment rental •Travel •Food •Instructor’s salary & benefits SOURCE: From Robinson, D. G., & Robinson, J. (1989). Training for impact. Training and Development Journal, 43(8), 39. Reprinted with the permission of American Society for Training & Development.CH-7 Copyright 2008 Werner, et al 74
    • Types of Costs – 2 Table 7-5 Indirect Costs Incurred in support of learning activities but not directly •Instructor prep •Clerical & admin support •Course materials already distributed and therefore not recoverable if program cancelled •Marketing the program SOURCE: From Robinson, D. G., & Robinson, J. (1989). Training for impact. Training and Development Journal, 43(8), 39. Reprinted with the permission of American Society for Training & Development.CH-7 Copyright 2008 Werner, et al 75
    • Table 7-5 Types of Costs – 3 Development •Development of videotapes, DVDs, Costs CBI •Design of program materials •Piloting the program •Any necessary redesign after piloting SOURCE: From Robinson, D. G., & Robinson, J. (1989). Training for impact. Training and Development Journal, 43(8), 39. Reprinted with the permission of American Society for Training & Development.CH-7 Copyright 2008 Werner, et al 76
    • Types of Costs – 4 Table 7-5 Overhead Not related directly to any training Costs program but essential for operating effort •Maintaining equipment •Heat, light •Cost of dedicated resources not in use for specific program SOURCE: From Robinson, D. G., & Robinson, J. (1989). Training for impact. Training and Development Journal, 43(8), 39. Reprinted with the permission of American Society for Training & Development.CH-7 Copyright 2008 Werner, et al 77
    • Types of Costs – 5 Table 7-5 Compensation Salaries and benefits paid to for participants participants for the time in a training program Individual data not available, but HR should provide average for all participants SOURCE: From Robinson, D. G., & Robinson, J. (1989). Training for impact. Training and Development Journal, 43(8), 39. Reprinted with the permission of American Society for Training & Development.CH-7 Copyright 2008 Werner, et al 78
    • Increasing ROI Credibility • Use conservative cost estimates (err on the high side) • Find reliable sources for estimates • Explain all assumptions and techniques used to calculate costs • Rely on hard data whenever possible • Use the “Balanced Scorecard” shown earlier. CH-7 Copyright 2008 Werner, et al 79
    • Training Cost Analysis (Table 7-6) • Calculate direct costs • Calculate indirect costs • Calculate development costs • Determine overhead costs • Determine compensation for participants • Sum total costs • Divide by the number of trainees to get the cost per participant (see the sketch below). CH-7 Copyright 2008 Werner, et al 80
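A minimal sketch of the Table 7-6 roll-up with entirely hypothetical cost figures:

```python
# Hypothetical cost figures for one offering of a training program.
costs = {
    "direct":       18_000,  # materials, travel, food, instructor pay
    "indirect":      4_500,  # prep time, clerical support, marketing
    "development":  12_000,  # design, piloting, media production
    "overhead":      2_500,  # facilities, equipment, utilities share
    "compensation": 21_000,  # participants' salaries and benefits
}
num_trainees = 30

total_cost = sum(costs.values())
cost_per_participant = total_cost / num_trainees

print(f"Total training cost:  ${total_cost:,.0f}")           # $58,000
print(f"Cost per participant: ${cost_per_participant:,.2f}")  # $1,933.33
```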
    • Calculating ROI – 1 (Table 7-7) • Operational result: quality of panels • How measured: percentage rejected • Before training: 2% rejected (1,440 panels per day) • After training: 1.5% rejected (1,080 panels per day) • Difference: 0.5% (360 panels per day) • Expressed in dollars: $720 per day, or $172,800 per year. CH-7 Copyright 2008 Werner, et al 81
    • Calculating ROI – 2 (Table 7-7) • Operational result: housekeeping • How measured: visual inspection using a 20-point checklist • Before training: 10 defects (average) • After training: 2 defects (average) • Difference: 8 defects • Expressed in dollars: not measurable in dollars. CH-7 Copyright 2008 Werner, et al 82
    • Calculating ROI – 3 (Table 7-7) • Operational result: preventable accidents • How measured: number of accidents (24 per year before training, 16 per year after, a difference of 8 per year) and direct cost of the accidents ($144,000 per year before, $96,000 per year after) • Expressed in dollars: $48,000 per year • Total savings: $220,800 per year. SOURCE: From Training for impact; Robinson, D. G., & Robinson, J. Copyright © 1989; Training and Development Journal, 43, 41. This material is used by permission of John Wiley & Sons, Inc. CH-7 Copyright 2008 Werner, et al 83
    • Calculating ROI – 4 • ROI = Return / Investment = Operational Results / Training Costs. SOURCE: From Training for impact; Robinson, D. G., & Robinson, J. Copyright © 1989; Training and Development Journal, 43, 41. This material is used by permission of John Wiley & Sons, Inc. CH-7 Copyright 2008 Werner, et al 84
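A minimal sketch that puts the pieces together; the $220,800 in annual operational results comes from the tables above, while the training cost figure is hypothetical:

```python
# Annual operational results from Table 7-7: quality savings of
# $172,800 plus accident savings of $48,000.
operational_results = 172_800 + 48_000   # $220,800 per year

# Hypothetical total cost of delivering the program (see Table 7-6).
training_costs = 40_000

roi = operational_results / training_costs
print(f"ROI = {roi:.2f}")  # 5.52: each dollar invested returned about $5.52
```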
    • Utility Analysis • Utility analysis provides a way to translate training results into dollar terms. CH-7 Copyright 2008 Werner, et al 85
    • Calculating Utility • ΔU = (N)(T)(d_t)(SD_y) – C, where N = number of trainees, T = length of time the benefit is expected to last, d_t = true effect size (the difference between trained and untrained employees, in standard deviation units), SD_y = dollar value of one standard deviation of job performance, and C = total cost of training. CH-7 Copyright 2008 Werner, et al 86
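A minimal worked example of the utility formula; every input value below is hypothetical:

```python
# Hypothetical inputs for Delta-U = (N)(T)(d_t)(SD_y) - C
N    = 50        # number of trainees
T    = 2.0       # years the benefit is expected to last
d_t  = 0.5       # true effect size, in standard deviation units
SD_y = 10_000    # dollar value of one SD of job performance
C    = 100_000   # total cost of training all 50 people

delta_U = N * T * d_t * SD_y - C
print(f"Estimated utility gain: ${delta_U:,.0f}")  # $400,000
```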
    • Utility Analysis Approach • Compute the minimum annual benefit needed to break even • Use break-even analysis to determine the minimum effect size (d_t) that will yield the required minimum benefit (see the sketch below) • Use results from meta-analyses to determine the expected cost and expected payoff. NOTE: Use the company’s statisticians and financial staff to help (they then take partial ownership of the results). CH-7 Copyright 2008 Werner, et al 87
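The break-even step above can be sketched by setting ΔU to zero and solving for the effect size; the values reuse the hypothetical figures from the previous example:

```python
# Break-even effect size: the smallest d_t for which benefits cover
# costs, i.e. solve N * T * d_t * SD_y - C = 0 for d_t.
N, T, SD_y, C = 50, 2.0, 10_000, 100_000

d_t_breakeven = C / (N * T * SD_y)
print(f"Break-even effect size: {d_t_breakeven:.2f}")  # 0.10
# If meta-analytic evidence suggests the training typically produces an
# effect larger than 0.10 SD, the program is expected to pay for itself.
```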
    • Goal of Using Cost-Benefit Analysis• Put HRD on equal footing as other managers• Language of business is money• Results must be quantifiable• Results need to be expressed statistically• Need to – Demonstrate expected gains of HRD programs – Compete with needs of other managers for equipment, facilities, personnel, etc.CH-7 Copyright 2008 Werner, et al 88
    • Constraints • While utility analysis can help to translate the benefits of training programs into dollar terms, there are concerns about the practicality of such efforts • Utility analysis (in addition to ROI and cost estimates) presents an opportunity to provide information to decision makers in dollar terms they understand. CH-7 Copyright 2008 Werner, et al 89
    • Increasing Managerial Acceptance – 1• Involve senior management in determining the utility model and procedures to be used• Train HR professionals and managers in the details of utility analysis• Offer an explanation of the components of the utility model – Focus on utility information as a communication tool to aid in decision makingCH-7 Copyright 2008 Werner, et al 90
    • Increasing Managerial Acceptance – 2• Involve management in arriving at estimates• Use credible and conservative estimates• Admit that the results of utility analysis are often based on fallible but reasonable estimates• Use utility analysis to compare alternatives, rather than to justify individual programsCH-7 Copyright 2008 Werner, et al 91
    • How Technology Impacts Evaluation• Reaction – – easy to gather continuous feedback online; – could use a discussion thread or “chat room” to allow trainees to discuss their experiences with online learning• Learning – very easy to test trainees electronically – can also link to a learning management system• Behavior – very hard to capture electronically – some relevant data may be available in other information systems, e.g., appraisals, promotions, turnover, and discipline data• Results – even harder to do online than traditionally, without face-to-face interaction, feedback, and buy-inCH-7 Copyright 2008 Werner, et al 92
    • How Evaluation SHOULD Be Conducted• Perform needs analysis• Develop explicit evaluation strategy• Have specific training objectives• Obtain participant reactions• Develop criterion instruments• Plan and execute evaluationCH-7 Copyright 2008 Werner, et al 93
    • HRD in the Organization• Not done in a vacuum• HRD impacts – Financial performance, – Turnover – Absenteeism – Organizational learning – Etc.• HRD interventions matter• Training alone never worksCH-7 Copyright 2008 Werner, et al 94