
Training & development evaluation


Training & development evaluation is a continual and systematic process of assessing the value or potential value of a training program, course, activity or event. Results of the evaluation are used to guide decision-making around various components of the training (e.g. delivery, results) and its overall continuation, modification, or elimination.

Published in: Education


  1. Training & Development Evaluation - Manohar Prasad
  2. 2. Introduction Training Evaluation is necessary to determine if the time, money, and effort devoted to training actually made a difference. It is like in assessing the effectiveness of the training program. 6 -
  3. Introduction (continued)
     • Training effectiveness refers to the benefits that the company and the trainees receive from training.
     • Training outcomes or criteria refer to measures that the trainer and the company use to evaluate training programs.
     • Training evaluation refers to the process of collecting the outcomes needed to determine if training is effective.
     • Evaluation design refers to from whom, what, when, and how the information needed to determine the effectiveness of the training program will be collected.
  4. Why Should A Training Program Be Evaluated?
     • To identify the program’s strengths and weaknesses.
     • To assess whether the content, organization, and administration of the program contribute to learning and to the use of training content on the job.
     • To identify which trainees benefited most or least from the program.
  5. Why Should A Training Program Be Evaluated? (continued)
     • To gather data to assist in marketing training programs.
     • To determine the financial benefits and costs of the programs.
     • To compare the costs and benefits of training versus non-training investments.
     • To compare the costs and benefits of different training programs in order to choose the best program.
  6. Reasons for Evaluating Training
     • Companies are investing millions of dollars in training programs to help gain a competitive advantage.
     • Training investment is increasing because learning creates knowledge, which differentiates the companies and employees who are successful from those who are not.
  7. Reasons for Evaluating Training (continued): Because companies have made larger investments in training and education and view training as a strategy for success, they expect the outcomes or benefits related to training to be measurable.
  8. Training evaluation involves:
     • Formative evaluation – evaluation conducted to improve the training process.
     • Summative evaluation – evaluation conducted to determine the extent to which trainees have changed as a result of participating in the training program.
  9. The Evaluation Process: Conduct a Needs Analysis → Develop Measurable Learning Outcomes → Develop Outcome Measures → Choose an Evaluation Strategy → Plan and Execute the Evaluation
  10. Training Outcomes: Kirkpatrick’s Four-Level Framework of Evaluation Criteria
     • Level 1 – Reaction of Trainee: trainee satisfaction (what trainees thought and felt)
     • Level 2 – Learning: acquisition of knowledge, skills, attitudes, behavior
     • Level 3 – Behavior: improvement of behavior on the job
     • Level 4 – Results: effect of training on the business
  11. Kirkpatrick’s levels in detail (evaluation type, description, tools, and practicability):
     • Level 1 – Reaction: how the delegates felt about the training or learning experience.
       ○ Tools and methods: ‘happy sheets’, feedback forms, verbal reaction, and post-training surveys or questionnaires, subsequently analyzed.
       ○ Practicability: quick and very easy to obtain after training ends; not expensive to gather or to analyze for groups.
     • Level 2 – Learning: the measurement of the increase in knowledge, before and after.
       ○ Tools and methods: typically assessments or tests before and after the training; interview or observation can also be used.
       ○ Practicability: relatively simple to set up; clear-cut for quantifiable skills, less easy for more complex learning.
     • Level 3 – Behaviour: the extent of applied learning back on the job (implementation).
       ○ Tools and methods: observation and interview over time are required to assess change, relevance of change, and sustainability of change.
       ○ Practicability: measurement of behaviour change typically requires the cooperation and skill of line managers.
     • Level 4 – Results: the effect on the business or environment produced by the trainee.
       ○ Tools and methods: measures are already in place via normal management systems and reporting; the challenge is to relate them to the trainee.
       ○ Practicability: not difficult for an individual, unlike for the entire organization; the process must clearly attribute results to the training.
  12. Outcomes Used in Evaluating Training Programs: Cognitive Outcomes, Skill-Based Outcomes, Affective Outcomes, Results, Return on Investment
  13. Outcomes Used in Evaluating Training Programs: (continued)
     • Cognitive Outcomes
       ○ Determine the degree to which trainees are familiar with the principles, facts, techniques, procedures, or processes emphasized in the training program.
       ○ Measure what knowledge trainees learned in the program.
     • Skill-Based Outcomes
       ○ Assess the level of technical or motor skills.
       ○ Include acquisition or learning of skills and use of skills on the job.
  14. Outcomes Used in Evaluating Training Programs: (continued)
     • Affective Outcomes
       ○ Include attitudes and motivation.
       ○ Trainees’ perceptions of the program, including the facilities, trainers, and content.
     • Results
       ○ Determine the training program’s payoff for the company.
  15. Outcomes Used in Evaluating Training Programs: (continued)
     • Return on Investment (ROI): comparing the training’s monetary benefits with the cost of the training.
       ○ Direct costs
       ○ Indirect costs
       ○ Benefits
  16. How do you know if your outcomes are good? Good training outcomes need to be:
     • Relevant
     • Reliable
     • Discriminating
     • Practical
  17. Good Outcomes: Relevance
     • Criterion relevance – the extent to which training outcomes are related to the learned capabilities emphasized in the training program.
     • Criterion contamination – the extent to which training outcomes measure inappropriate capabilities or are affected by extraneous conditions.
     • Criterion deficiency – failure to measure training outcomes that were emphasized in the training objectives.
  18. Diagram: criterion deficiency, relevance, and contamination. Two overlapping sets are shown: outcomes identified by the needs assessment and included in the training objectives, and outcomes measured in the evaluation. Their overlap is relevance; measured outcomes unrelated to the training objectives are contamination; objective-related outcomes that go unmeasured are deficiency.
  19. Good Outcomes (continued)
     • Reliability – the degree to which outcomes can be measured consistently over time.
     • Discrimination – the degree to which trainees’ performances on the outcome actually reflect true differences in performance.
     • Practicality – the ease with which the outcome measures can be collected.
  20. Evaluation Designs: Threats to Validity. A threat to validity is a factor that leads one to question either:
     • The believability of the study results (internal validity), or
     • The extent to which the evaluation results are generalizable to other groups of trainees and situations (external validity).
  21. Threats to Validity
     • Threats to internal validity: company, persons, outcome measures.
     • Threats to external validity: reaction to pretest, reaction to evaluation, interaction of selection and training, interaction of methods.
  22. Methods to Control for Threats to Validity: pre- and posttests, use of comparison groups, random assignment.
  23. Types of Evaluation Designs
     • Posttest-only
     • Pretest/posttest
     • Posttest-only with comparison group
     • Pretest/posttest with comparison group
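The strongest design above, pretest/posttest with a comparison group, can be sketched as a simple difference-in-differences calculation: the training effect is the trained group's score gain minus the comparison group's gain, which controls for changes that would have happened without training. All scores below are hypothetical.

```python
# Hypothetical pre/post scores for a pretest/posttest design
# with a comparison group (a sketch, not a full statistical analysis).
trained = {"pre": [60, 55, 70, 65], "post": [80, 75, 85, 78]}
comparison = {"pre": [62, 58, 68, 64], "post": [66, 60, 70, 67]}

def mean(xs):
    """Arithmetic mean of a list of scores."""
    return sum(xs) / len(xs)

# Gain within each group:
gain_trained = mean(trained["post"]) - mean(trained["pre"])        # 17.0
gain_comparison = mean(comparison["post"]) - mean(comparison["pre"])  # 2.75

# Difference-in-differences: gain attributable to training,
# net of the change the comparison group showed anyway.
effect = gain_trained - gain_comparison
print(effect)  # 14.25
```

A real evaluation would also test whether this difference is statistically significant, but the arithmetic of the design is just the subtraction shown here.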
  24. Factors That Influence the Type of Evaluation Design
     • Change potential: Can the program be modified?
     • Importance: Does ineffective training affect customer service, product development, or relationships between employees?
     • Scale: How many trainees are involved?
     • Purpose of training: Is training conducted for learning, results, or both?
     • Organization culture: Is demonstrating results part of company norms and expectations?
     • Expertise: Can a complex study be analyzed?
     • Cost: Is evaluation too expensive?
  25. Importance of Training Cost Information
     • To understand total expenditures for training, including direct and indirect costs.
     • To compare the costs of alternative training programs.
     • To evaluate the proportion of money spent on training development, administration, and evaluation, and to compare monies spent on training for different groups of employees.
     • To control costs.
  26. To calculate return on investment (ROI):
     1. Identify the outcome(s) (e.g., quality, accidents).
     2. Place a value on the outcome(s).
     3. Determine the change in performance after eliminating other potential influences on training results.
     4. Obtain an annual amount of benefits (operational results) from training by comparing results after training to results before training.
  27. To calculate return on investment (ROI) (continued):
     5. Determine training costs (direct costs + indirect costs + development costs + overhead costs + compensation for trainees).
     6. Calculate the total savings by subtracting the training costs from the benefits (operational results).
     7. Calculate the ROI by dividing benefits (operational results) by costs. The ROI gives an estimate of the Rupee return expected from each Rupee invested in training.
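Steps 5–7 above are plain arithmetic and can be sketched as follows; the function name and all rupee figures are hypothetical, invented only to illustrate the calculation.

```python
def training_roi(annual_benefits, direct, indirect, development,
                 overhead, trainee_compensation):
    """Apply steps 5-7: total cost, net savings, and ROI ratio."""
    # Step 5: total training cost is the sum of all cost components.
    total_cost = direct + indirect + development + overhead + trainee_compensation
    # Step 6: total savings = benefits (operational results) minus costs.
    net_savings = annual_benefits - total_cost
    # Step 7: ROI = benefits divided by costs
    # (rupees returned per rupee invested).
    roi = annual_benefits / total_cost
    return total_cost, net_savings, roi

# Example with made-up figures (rupees):
cost, savings, roi = training_roi(
    annual_benefits=500_000,   # from step 4: results after vs. before training
    direct=100_000, indirect=30_000, development=40_000,
    overhead=20_000, trainee_compensation=60_000,
)
print(cost, savings, roi)  # 250000 250000 2.0  -> an ROI of 2:1
```

With these figures the program returns two rupees for every rupee invested, which is how the ratios on the next slide (e.g. 15:1, 21:1) should be read.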
  28. Examples of Return on Investment (industry, training program, ROI):
     • Bottling company – workshops on managers’ roles: 15:1
     • Large commercial bank – sales training: 21:1
     • Electric & gas utility – behavior modification: 5:1
     • Oil company – customer service: 4.8:1
     • Health maintenance organization – team training: 13.7:1
  29. CIRO’s Four Levels of Evaluation of Training Impact. ‘CIRO’ is also based on four measurement categories but differs from the Kirkpatrick model in several respects. It envisages four categories of data capture:
     • Context evaluation
     • Input evaluation
     • Reaction evaluation
     • Outcome evaluation
  30. Context Evaluation
     • Seeks to measure the context in which training takes place.
     • Scrutinizes the way performance needs were identified and learning objectives were established.
     • Examines the linkage of objectives with the necessary competencies.
     • Examines the components of the program and their relation to the culture and structure of the organization.
  31. Input Evaluation
     • Focuses on the resources needed to meet performance needs (e.g. staff, facilities, equipment, catering, budget).
     • Inputs should be cost-effective.
  32. Reaction Evaluation
     • Involves obtaining and using information about trainees’ expressed, current, or subsequent reactions in order to improve training.
     • Participants’ views may be extremely useful if collected and used systematically and objectively.
  33. Outcome Evaluation
     • Measures the training and development outcomes against the benchmark of the program’s objectives, at four levels:
       ○ the learning outcomes of trainees (i.e. changes in their knowledge and skills),
       ○ the outcomes in the workplace (i.e. changes in actual job performance),
       ○ the outcomes for the relevant areas of the organization (i.e. departments or specialist units), and finally,
       ○ the outcomes for the organization as a whole.
  34. Thank you!