Kirkpatrick's model
  • Welcome to our tutorial on Kirkpatrick's Four Levels of Evaluation.
  • Donald Kirkpatrick developed a model of training evaluation in 1959 that has served the training community like no other. This four-level model is arguably the most widely used approach in the world, even today. It's simple, flexible, and complete. It presents four types of evaluations: those for reaction, learning, behavior, and results.
  • It is important to remember that each level is different. Level 1 evaluates Reaction; Level 2 measures Learning; Level 3 refers to Behavior; and Level 4 looks at Results. Let's take a look at each of Kirkpatrick's levels in detail.
  • Go from the top down: ask, "What if we did nothing?" Then consider applications of percentages at Level 1, Level 2, Level 3, and so on.
  • A Level One evaluation measures audience reaction. Kirkpatrick says that evaluating reaction is the same thing as measuring customer satisfaction in that it gives you feedback so you know how customers are reacting right now, and it shows you care how they react. Level One also provides quantitative data on customer satisfaction.
  • Most evaluators use the questionnaire for Level One evaluations. A questionnaire can identify what to keep, delete, or improve. As a result, changes may be made in the areas of content, methods, media, trainer style, facilities, and/or course materials.
  • In order to perform a Level One evaluation, first determine what you want to find out. Then, design a form to collect or quantify learner reactions. You want to do this immediately, so that you capture everyone’s responses. Develop acceptable scoring standards. Know what you will accept, and what you won’t. Follow-up as appropriate, communicating the results of your evaluation with those who need to know.
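The scoring steps above can be sketched in code. This is a hypothetical illustration only: the survey areas, the 1-6 scale, the response data, and the 4.0 acceptance threshold are invented assumptions, not part of Kirkpatrick's model.

```python
# Hypothetical Level One (Reaction) scoring sketch.
# Areas, scores, and the 4.0 threshold are invented for illustration.

# Each response rates six areas on a 1-6 scale (Strongly Disagree=1 ... Strongly Agree=6).
responses = [
    {"content": 5, "methods": 4, "media": 6, "trainer": 5, "facilities": 3, "materials": 4},
    {"content": 6, "methods": 5, "media": 5, "trainer": 6, "facilities": 4, "materials": 5},
    {"content": 4, "methods": 3, "media": 4, "trainer": 5, "facilities": 2, "materials": 4},
]

ACCEPTABLE = 4.0  # the scoring standard, decided before the evaluation

def area_means(responses):
    """Average score per area across all returned forms."""
    areas = responses[0].keys()
    return {a: sum(r[a] for r in responses) / len(responses) for a in areas}

means = area_means(responses)
flagged = [a for a, m in means.items() if m < ACCEPTABLE]  # areas to improve
print(means)
print("Below standard:", flagged)
```

Deciding the acceptable score in advance matters: it turns "how did they react?" into a pass/fail check that directly drives changes to content, facilities, and so on.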
  • Let’s review Level Two, “Learning”. Instructors can affect three cognitive areas during training: they can teach new knowledge, skills, and attitudes. Therefore, when we measure learning, we determine if one or more of the following has occurred: Was knowledge learned or gained? Were new skills or ways to improve existing skills gained? Were attitudes changed?
  • Media used to help measure learning include the written word, voice, and demonstration. Specific methods that may be employed to check for learning include interviews, surveys, pre- and post-tests, observations, or any combination of these. To view actual examples of learning evaluation tools, be sure to view the QuestionMark web site examples.
  • If practical, use a control group to compare knowledge changes. Design the tests around your objectives. You may want to evaluate learning both before and after the training, too. Try to survey 100% of the participants to get useful data. If that’s not feasible, use a statistical sample. Finally, follow through and communicate with those who need to know.
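A minimal sketch of the pre-/post-test comparison described above, with a control group. All scores here are made-up illustration data, not real results.

```python
# Level Two (Learning) sketch: compare pre-/post-test gains for a trained
# group against a control group. All scores are invented for illustration.

trained_pre  = [52, 61, 48, 70, 55]
trained_post = [78, 80, 66, 88, 74]
control_pre  = [54, 60, 50, 68, 57]
control_post = [56, 59, 53, 70, 58]

def mean_gain(pre, post):
    """Average per-learner score change from pre-test to post-test."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

# Subtracting the control group's gain isolates the change attributable
# to the training rather than to retesting or outside factors.
training_effect = mean_gain(trained_pre, trained_post) - mean_gain(control_pre, control_post)
print(f"Trained gain: {mean_gain(trained_pre, trained_post):.1f}")
print(f"Control gain: {mean_gain(control_pre, control_post):.1f}")
print(f"Estimated training effect: {training_effect:.1f} points")
```

The control group is what lets you claim the gain came from the training itself; without it, a pre/post gain could simply reflect familiarity with the test.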
  • Let’s move to Level Three, Behavior. Kirkpatrick says this level identifies behavior change. Level Three focuses on the learner’s ability to transfer the learning to where it is actually needed--the real world. Therefore, evaluators measure achievement of performance objectives in real-world settings.
  • To measure behavior changes, Level Three evaluators collect data from the setting where the learner must exhibit the new behavior. Evaluators observe the performer and might gather data from other key individuals, such as subordinates, supervisors, and any others who are credible witnesses of the trainee’s behavior. Besides observation checklists, evaluators might use questionnaires or interviews to gather data.
  • Observation is the primary method used in a Level Three evaluation, whether you choose to evaluate before and after training or to survey key people who observe the trainee in action. Allow ample time before observing, and consider the costs versus benefits of your tactics, such as: 100 percent participation or a sampling, repeated evaluations, and use of a control group.
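The observation-checklist approach can be sketched as a simple tally. The behaviors, observer roles, and data below are hypothetical, invented purely to illustrate the mechanics.

```python
# Hypothetical Level Three (Behavior) observation checklist.
# Behaviors, observers, and marks are invented for illustration.

behaviors = ["follows new procedure", "uses job aid", "coaches peers"]

# Each observer marks True/False per behavior for one trainee,
# gathered from credible witnesses: supervisor, peer, subordinate.
observations = {
    "supervisor":  [True, True, False],
    "peer":        [True, False, False],
    "subordinate": [True, True, True],
}

def transfer_rate(observations, behaviors):
    """Fraction of observers who saw each behavior on the job."""
    n = len(observations)
    return {b: sum(obs[i] for obs in observations.values()) / n
            for i, b in enumerate(behaviors)}

rates = transfer_rate(observations, behaviors)
for b, r in rates.items():
    print(f"{b}: {r:.0%}")
```

Pooling marks from several observers is one way to keep a single rater's bias from deciding whether the learning transferred to the job.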
  • Let’s turn our attention to Level Four, “Evaluate Results.” Here we are looking at the final results that occur because the participants attended and participated in the training. Of course, when we speak of the term “results” we must often consider the goal of the training program. Therefore we must be careful to identify what results we’re seeking as we evaluate different programs. Results can be financial, and results should be quantifiable.
  • Level Four evaluations capture quantifiable results. There are two ways to get the numbers: prove it, or find strong evidence. Proof is concrete: the results relate to the training in direct, indisputable ways. In some situations, the cost of getting proof is monumental. Therefore evidence, a softer form of correlating results to training, will suffice.
  • To perform this measurement, use a control group. The control group can provide important data, such as the effect of that advertising on increased sales levels mentioned in the last slide. Allow ample time for results to be achieved. Collect data both before and after the training program to get additional perspective on the trainee’s performance. Consider the cost versus the benefits of performing this evaluation. Finally, be prepared to accept evidence when proof is not possible.
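The control-group comparison and cost-versus-benefit check described above can be sketched as follows. The sales figures and training cost are invented assumptions for illustration; in Kirkpatrick's terms this yields evidence, not proof.

```python
# Level Four (Results) sketch: compare a trained group's results against a
# control group and weigh the benefit against the training cost.
# All figures are invented for illustration.

trained_monthly_sales = [120_000, 131_000, 128_000]   # group that attended training
control_monthly_sales = [118_000, 119_500, 117_000]   # comparable group, no training

training_cost = 15_000  # assumed total cost of the program

def mean(xs):
    return sum(xs) / len(xs)

# Evidence (not proof): the between-group difference correlated with training.
monthly_benefit = mean(trained_monthly_sales) - mean(control_monthly_sales)
three_month_benefit = monthly_benefit * 3
print(f"Estimated monthly benefit: ${monthly_benefit:,.0f}")
print(f"Benefit over 3 months vs cost: ${three_month_benefit:,.0f} vs ${training_cost:,}")
```

Allowing ample time (here, three months of data) matters because Level Four results often lag the training itself.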
  • Congratulations. You’ve made it through Kirkpatrick’s Four Levels of evaluation. Now you know that: Level One measures participant reaction Level Two measures new learning Level Three measures behaviors in the real world; and Level Four measures the results. But we’re not going to measure your learning today--We’ll leave that up to you!

Kirkpatrick's model: Presentation Transcript

  • Orlando V. Griego, PhD. Assessments and the Kirkpatrick Model
  • Then he looked at wisdom and appraised it; he confirmed it and tested it. (Job 28:27)
  • Class Objective • Write questions by the end of class that represent Kirkpatrick’s four levels of training.
  • Activity Objective • Using an activity, be able to measure my effectiveness from individual to team by increasing my score
  • Behind the 8 ball
  • Donald Kirkpatrick • Kirkpatrick developed a model of training evaluation in 1959 • Arguably the most widely used approach • Simple, flexible, and complete • 4-level model
  • The Four Levels • Level I: Evaluate Reaction • Level II: Evaluate Learning • Level III: Evaluate Behavior • Level IV: Evaluate Results • A fifth level was recently “added” for return on investment (“ROI”), but this was not in Kirkpatrick’s original model
  • Relationship Between Levels • Each subsequent level is predicated upon doing evaluation at the lower level • A Level 3 evaluation will be of marginal use if a Level 2 evaluation is not conducted • Level 4 - Results: Was it worth it? • Level 3 - Behavior: KSA being used on the job? • Level 2 - Knowledge: Did they learn anything? • Level 1 - Reaction: Was the environment suitable for learning?
  • Only by assessing each level can we yield actionable results • Level 4 - Results (Was it worth it?): Check requirements, systems, and processes • Level 3 - Behavior (KSA being used on the job?): Check performance environment • Level 2 - Knowledge (Did they learn anything?): Improve knowledge/skill transfer • Level 1 - Reaction (Was the environment suitable for learning?): Improve learning environment
  • Types of Assessments Used at Each Level • Level 4 - Results (Was it worth it?): Type: summative; Form: correlation of business results with other assessment results • Level 3 - Behavior (KSA being used on the job?): Type: summative; Form: observation of performance, 360° survey • Level 2 - Knowledge (Did they learn anything?): Type: diagnostic, summative; Form: self-assessment, test • Level 1 - Reaction (Was the environment suitable for learning?): Type: formative; Form: reaction survey, real-time polling, quizzing
  • Reaction - What Is It? • How favorably participants react to the training (“customer satisfaction”) – Collects reactions to instructor, course, and learning environment – Communicates to trainees that their feedback is valued – Can provide quantitative information
  • Reaction - What It Looks Like • Questionnaire - the most common collection tool – Content: I enjoyed the content. (SD: 1 2 3 4 5 6 :SA) – Methods: The seminar approach helped me learn. – Media: The AVs were helpful to me. – Trainer style: I liked the instructor. – Facilities: The room was useful for my learning. – Course materials: The materials provided made my learning better.
  • Reaction - How to Perform • Determine what you want to find out • Design a form to collect/quantify reactions • Do immediately • Develop acceptable scoring standards • Follow up as appropriate
  • Learning - What Is It? • Knowledge • Skills • Attitudes
  • Learning - What It Looks Like • Media used to measure learning: – Text: I felt the textbook helped me learn. – Voice: The audio materials increased my knowledge. – Demonstration: I learned well from the demonstration. • Methods used to measure learning: – Interviews – Surveys – Tests (pre-/post-) – Observations – Combinations
  • Learning - How to Perform • Use a control group, if feasible • Evaluate knowledge, skills, and/or attitudes before and after • Get 100% participation or use a statistical sample • Follow up as appropriate
  • Behavior - What Is It? • Transfer of knowledge, skills, and/or attitudes to the real world – Measure achievement of performance objectives
  • Behavior - What It Looks Like • Observe performer first-hand • Survey key people who observe performer • Use checklists, questionnaires, interviews, or combinations – I believe coming in early is helpful. – I work better on the new production system.
  • Behavior - How to Perform • Evaluate before and after training • Allow ample time before observing • Survey key people • Consider cost vs. benefits – 100% participation or a sampling – Repeated evaluations at appropriate intervals – Use of a control group
  • Results - What Is It? • Assesses “bottom line,” final results • Definition of “results” dependent upon the goal of the training program
  • Results - What It Looks Like • Depends upon objectives of training program – Quantify: bottom line, productivity, improvement • Proof vs. evidence – Proof is concrete – Evidence is soft • Sample questions: – I believe the new system has improved productivity. – My training has allowed me to be more productive. – My boss is a better leader after my training.
  • Results - How to Perform • Use a control group • Allow time for results to be realized • Measure before and after the program • Consider cost versus benefits • Be satisfied with evidence when proof is not possible
  • Summary & Assignment • Level I: Evaluate Reaction • Level II: Evaluate Learning • Level III: Evaluate Behavior • Level IV: Evaluate Results • Write five questions for each level using 4 different training scenarios. – Use “I” questions. – Ensure they apply to the content – Tricks: Thesaurus, book TOC, ask each other
  • Readings to Consider • Kirkpatrick, Donald L. (1998). Evaluating Training Programs: The Four Levels. Berrett-Koehler Publishers. • Worthen, Blaine R., James R. Sanders, and Jody L. Fitzpatrick (1997). Program Evaluation: Alternative Approaches and Practical Guidelines (Second Edition). Addison Wesley Longman, Inc. • Kirkpatrick, Donald L. (1998). Another Look at Evaluating Training Programs. American Society for Training & Development. • Sieloff, Debra A. (1999). The Bridge Evaluation Model. International Society for Performance Improvement.