Training Evaluation


Published in: Education
  1. Training Evaluation
  2. Training Evaluation
     • Training evaluation refers to activities aimed at determining the effectiveness of training programs against the objectives for which they were organized.
     • It is a planned process that provides specific information about a selected topic, session, or program for the purpose of determining its value and/or making decisions.
  3. Uses of Evaluation
     • To determine success in meeting program objectives
     • To identify strengths and weaknesses of training activities
     • To compare costs to benefits
     • To decide who should participate in future programs
     • To test the clarity and validity of tests, cases, and exercises
     • To determine if the program was the appropriate solution for a specific need
     • To establish a database that can help management make decisions about future programs
     • To reinforce the major points made to the participants
  4. Different Evaluation Models
     • Donald Kirkpatrick’s Four-Level Evaluation Model
     • The CIRO Model (Context, Input, Reaction, Outcome)
  5. Kirkpatrick’s Four Levels
     • Level 1 – Reaction: how the delegates felt about the training or learning experience. Tools: ‘happy sheets’, feedback forms, verbal reaction, post-training surveys or questionnaires. Quick and very easy to obtain; not expensive to gather or analyse.
     • Level 2 – Learning: the measurement of the increase in knowledge, before and after. Tools: assessments or tests before and after the training; interview or observation can also be used. Relatively simple to set up and clear-cut for quantifiable skills; less easy for complex learning.
     • Level 3 – Behaviour: the extent of applied learning back on the job (implementation). Tools: observation and interview over time to assess the change, its relevance, and its sustainability. Measuring behaviour change typically requires the cooperation and skill of line managers.
     • Level 4 – Results: the effect on the business or environment produced by the trainee. Measures are usually already in place via normal management systems and reporting; the challenge is to attribute them to the trainee. Straightforward for individuals, harder for a whole organisation; the process must assign clear accountabilities.
  6. Jack Phillips’ Five-Level Model – extends Kirkpatrick’s four levels with a fifth level that converts program results into a Return on Investment (ROI)
  7. Calculating ROI
     • Once data is collected, ROI analysis begins with deliberate attempts to isolate the effects of training on the data items, for example through control groups, trend-line analysis, or participant estimates.
  8. Calculating ROI
     • The next step is to convert the data to monetary values:
        – Direct conversion of hard data: quantity, quality, cost, or time
        – Conversion of soft data to place a monetary value on improvements, using techniques such as:
           • Historical costs
           • Supervisor estimation
           • Management estimation
           • Expert opinion
           • Participant estimation
           • External studies
     • Then calculate the costs of the program
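The direct-conversion step for hard data can be sketched as simple arithmetic. All figures below (hours saved, hourly cost, headcount) are illustrative assumptions, not values from the slides:

```python
# Sketch: converting a hard-data improvement (time saved) into an
# annual monetary benefit. Every figure here is an assumed example.
hours_saved_per_week = 2.5       # estimated time saved per trained employee
loaded_hourly_cost = 40.0        # fully loaded labour cost per hour (assumed)
employees_trained = 120
working_weeks_per_year = 48

# Annual benefit = time saved x cost of that time x people x weeks
annual_benefit = (hours_saved_per_week * loaded_hourly_cost
                  * employees_trained * working_weeks_per_year)
print(f"Annual benefit: ${annual_benefit:,.0f}")  # prints Annual benefit: $576,000
```

Soft data (e.g. improved morale) would instead be priced by one of the estimation techniques listed above before entering the same calculation.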
  9. Calculating ROI
     • The ROI formula is the annual net program benefits divided by the program costs, usually expressed as a percentage:
        ROI (%) = (Net program benefits / Program costs) x 100
     • Net benefits are the monetary value of the benefits minus the costs of the program
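The formula above can be sketched as a small function; the benefit and cost figures in the example are hypothetical:

```python
def training_roi(program_benefits, program_costs):
    """Phillips-style ROI: net program benefits divided by program
    costs, expressed as a percentage."""
    net_benefits = program_benefits - program_costs
    return net_benefits / program_costs * 100

# Hypothetical program: $80,000 in costs, $200,000 in annual benefits
print(training_roi(200_000, 80_000))  # 150.0
```

An ROI of 150% means every dollar spent on the program returned $1.50 in net benefits after the program costs were recovered.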
  10. CIRO Model
     • Context – collect information about the organizational deficiency, identify needs, and set objectives at three levels:
        – Ultimate objectives (overcome the particular deficiency)
        – Intermediate objectives (changes in work behaviour required for the ultimate objectives to be met)
        – Immediate objectives (new knowledge, skills, or attitudes the employee requires to reach the intermediate objectives)
     • Input – obtain and use information about possible training resources to choose between alternative inputs to training
     • Reaction – obtain and use information about participants’ reactions
     • Outcomes – obtain and use information about the results of the training
  11. Methods of Data Collection
     • Interview – Advantages: flexible, opportunity for clarification, depth possible, personal interaction. Limitations: high reactive effects, high cost, face-to-face threat potential, labour-intensive, time-consuming.
     • Questionnaire – Advantages: low cost, honesty increased if anonymous, respondent sets the pace, variety of options. Limitations: possibly inaccurate data, on-the-job responding conditions not controlled, respondents set varying paces, return rate difficult to control.
     • Direct observation – Advantages: non-threatening, excellent way to measure behaviour change. Limitations: possibly disruptive, reactive effects possible, may be unreliable, trained observers needed.
     • Written test – Advantages: low purchase cost, readily scored, quickly processed, easily administered, wide sampling possible. Limitations: may be threatening, possibly low relation to job performance, reliance on norms may distort individual performance.
     • Performance test – Advantages: reliable, objective, close relation to job performance. Limitations: time-consuming, simulation often difficult, high development cost.
     • Performance data – Advantages: reliable, objective, job-based, easy to review, minimal reactive effects. Limitations: lack of knowledge of criteria for keeping or discarding records, information-system discrepancies.
  12. Designs of Training Evaluation
     • One-group pre-test/post-test design
     • Non-equivalent (non-randomized) control group design – the group that undergoes training is the experimental group; the one that does not is the control group
     • Randomized (equivalent) control group design
     • Post-test-only control group design – prevents the effects of pre-test sensitization
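As a sketch of how a control group design supports the analysis, the example below compares the mean pre-test/post-test gain of a trained (experimental) group against an untrained control group; the difference approximates the training effect. All scores are invented for illustration:

```python
def mean_gain(pre_scores, post_scores):
    """Average post-test minus pre-test score for one group."""
    return sum(post - pre for pre, post in zip(pre_scores, post_scores)) / len(pre_scores)

# Hypothetical test scores (out of 100) for three trainees per group
experimental_pre, experimental_post = [55, 60, 48], [78, 82, 70]
control_pre, control_post = [54, 58, 50], [57, 60, 51]

# Training effect = experimental gain minus control gain; subtracting
# the control gain removes improvement not caused by the training
effect = (mean_gain(experimental_pre, experimental_post)
          - mean_gain(control_pre, control_post))
print(round(effect, 1))  # 20.3
```

In the one-group pre-test/post-test design, only the experimental gain is available, so changes caused by factors other than the training cannot be separated out.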
  13. Suggestions for Better Evaluation
     • Plan your metrics before writing survey questions
     • Ensure the measurement is replicable and scalable
     • Ensure measurements are internally and externally comparable
     • Use industry-accepted measurement approaches
     • Define value in the eyes of stakeholders
     • Leverage automation and IT
     • Manage the change associated with measurement
     • Ensure your metrics have flexibility