ADLT 606, Class 11: Kirkpatrick's Evaluation Model (Short Version)


  • The first level of training program evaluation in Kirkpatrick’s model is the reaction or critique evaluation; at this level you are asking how satisfied individuals are with the program. Why do we want to know whether people are satisfied? If people are not satisfied, they will probably not use what they’ve learned. If they are REALLY unhappy with the program, they may have been so turned off that they didn’t learn anything. You have probably heard this level dismissed as irrelevant and called a “smile sheet,” “happiness index,” or “whoopie sheet.” I think this is a mistake, no matter what Jerry Harvey says. If you are dissatisfied with the quality of information obtained from a reaction evaluation, check the kinds of questions you are asking and how good THEY are.
  • How are we using a reaction evaluation in this class? In our class, the CIQ serves as a class-by-class Level 1 evaluation, although it differs in one respect: it engages you in a process of ongoing reflection that is a little more sophisticated than most Level 1 evaluations. Level 2 asks whether or not participants learned what you intended them to learn. This is your quality assurance index for your teaching session. Typical ways of evaluating at this level include paper-and-pencil tests and observed simulations or skill demonstrations. Level 2 in this class? In our class, your written assignments are used as Level 2 evaluations, as are your teaching demos and presentations. Kirkpatrick breaks behavior into two parts in Level 3: observable behavior change and non-observable behavior change. Why is this important? This question asks whether participants are using what they learned on the job. Are you using what you’ve learned in class in your departments and when you teach? How can we tell?
  • These are the four types of knowledge tests most often used in evaluation; we’ll discuss the advantages and disadvantages of each. Essay and open-ended answer: “Describe the five teaching perspectives according to Pratt and give examples of how each is used in medical education.” Advantages: easy to construct; allows freedom in answering; adapts well to why and how questions that require higher-level cognitive skills. Disadvantages: must be read and scored manually by a knowledgeable person; writing ability may affect the score. Write-in or short answer: test items are sentences with key words missing. Example: “The theoretical orientation that attempts to account for differences between adults and children as learners is called …………………” Advantages: limited number of correct answers; can be scored by a person with a list of correct answers (little interpretation required). Disadvantages: the format does not adapt well to how or why questions, and tests cannot be machine scored. Binary (including true-false): “Humanistic learning theory is concerned with (a) the development of the whole person; (b) creating an environment that will elicit the desired response from a learner.” Advantages: easy to score; can be machine or computer scored; instructions are easy to understand. Disadvantages: questions are limited in scope; the test writer must have high content knowledge and be able to construct unambiguous statements; there is a tendency for people to view these as “trick” questions and read something into them that is not there.
  • Multiple choice tests: vary from binary questions up to as many as five choices. Advantages: easy to score by machine or computer; questions can be more complex than binary questions. When constructed with a penalty for wrong answers, participants are less tempted to guess, because sheer guessing will result in a greater number of incorrect answers. Disadvantages: do not adapt well to complex answers, such as those dealing with rationales; take more time to develop than binary questions, because the test writer needs to be able to develop logical wrong answers.
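The guessing penalty described above is usually implemented with the classic correction-for-guessing formula, score = right − wrong ÷ (k − 1) for k-choice items. A minimal sketch (the function name and the test figures are my own, not from the slides):

```python
def corrected_score(num_right, num_wrong, num_choices):
    """Correction for guessing: right - wrong / (k - 1).

    With k choices per item, a pure guesser gets one right for every
    k - 1 wrong, so the expected value of sheer guessing is zero and
    guessing no longer pays.
    """
    return num_right - num_wrong / (num_choices - 1)

# Hypothetical 40-item, 5-choice test: 30 right, 10 wrong, no omits.
print(corrected_score(30, 10, 5))  # 27.5
```

Omitted items are simply not counted, which is why the penalty discourages guessing but not honest blanks.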

    1. Kirkpatrick’s 4 Levels of Evaluation <ul><li>Level 1 - Reaction </li></ul><ul><li>Level 2 - Learning </li></ul><ul><li>Level 3 - Behavioral Results </li></ul><ul><ul><li>(A) Observable (skills) </li></ul></ul><ul><ul><li>(B) Non-Observable (attitudes) </li></ul></ul><ul><li>Level 4 - Organizational Results </li></ul>
    2. If you want to know … Then you must ask at this level…. <ul><li>Did participants like the program? </li></ul><ul><li>Did participants learn the content intended? </li></ul><ul><li>Are they applying skills and behaviors taught? </li></ul><ul><li>Are they applying non-observable outcomes to the job? </li></ul><ul><li>Has there been any impact on the organization? </li></ul><ul><li>Level 1 </li></ul><ul><li>Level 2 </li></ul><ul><li>Level 3 (A) </li></ul><ul><li>Level 3 (B) </li></ul><ul><li>Level 4 </li></ul>
    3. What do you want to find out with Level 1 questions? <ul><li>Did the program meet the expectations of trainees? </li></ul><ul><li>What aspects were most helpful? Interesting? Informative? </li></ul><ul><li>What aspects were least helpful, interesting, or informative? </li></ul><ul><li>What were participants’ reactions to the program’s design? Pacing? Materials? Precourse work? Instructor? </li></ul><ul><li>Do people intend to use what they have learned? How? </li></ul><ul><li>What barriers, if any, do people believe will inhibit their ability to use what they have learned? </li></ul>
    4. Level 2 - Evaluating Learning <ul><li>Measures knowledge, skills, and attitudes </li></ul><ul><li>Use a control group, if practical </li></ul><ul><li>Pre-test can serve as a needs assessment </li></ul><ul><li>Strive for 100% response on questionnaires </li></ul>
    5. Knowledge Tests <ul><li>Essay or open-ended answer tests </li></ul><ul><li>Write-in or short answer tests </li></ul><ul><li>Binary true-false tests </li></ul><ul><li>Multiple choice tests </li></ul>
    6. Competency Demonstrations: Testing for Skill <ul><li>Learners demonstrate competencies while being observed by a trained evaluator. </li></ul><ul><li>Simulations or demonstrations can be part of the learning activity (role plays, etc.) </li></ul><ul><li>Need consistency among observers; multiple observers need to be trained </li></ul>
    7. Level 3 Focuses on Transfer: Are They Using What They Learned? <ul><li>Affective outcomes focus on attitudes, values, and beliefs of learners </li></ul><ul><li>Cognitive outcomes are the concepts, principles, and knowledge used on the job </li></ul><ul><li>Behavioral or skill outcomes address what learners are able to do that can be observed by others </li></ul>
    8. Decisions in Evaluating Level 3 - Behavior <ul><li>When to evaluate </li></ul><ul><li>How often to evaluate </li></ul><ul><li>How to evaluate </li></ul><ul><li>Costs vs. Benefits: When is it worth evaluating at Levels 3 & 4? </li></ul>
    9. Guidelines for Level 3 Evaluations <ul><li>Use a control group if practical </li></ul><ul><li>Allow time for behavior change to take place </li></ul><ul><li>Evaluate before and after the program </li></ul><ul><li>Survey or interview one or more of: </li></ul><ul><ul><li>Trainees </li></ul></ul><ul><ul><li>Immediate supervisors </li></ul></ul><ul><ul><li>Trainees’ direct reports </li></ul></ul>
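The "control group plus before-and-after measurement" guideline on this slide reduces to simple arithmetic: subtract the untrained group's change from the trained group's change, so that drift unrelated to training is netted out. A rough sketch (function name and all figures are hypothetical):

```python
def training_effect(trained_before, trained_after, control_before, control_after):
    """Estimate the effect of training by comparing the trained group's
    change to the control group's change over the same period.

    The control group's change captures what would have happened
    anyway (seasonality, management pressure, etc.), so subtracting
    it isolates the change attributable to the training itself.
    """
    return (trained_after - trained_before) - (control_after - control_before)

# Hypothetical scores on an observed-behavior checklist (0-100):
# trained group rose 62 -> 81, control group rose 60 -> 65.
print(training_effect(62, 81, 60, 65))  # 14
```

Of the 19-point gain in the trained group, only 14 points are credited to training; the other 5 happened in the control group too.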
    10. Behavior Change without Reinforcement
    11. Example of Reinforced Behavior Measured 3 to 6 Months after Training
    12. Evaluating Behavior: Questions to Ask <ul><li>Ask trainees whether they are doing anything differently on the job as a result of training </li></ul><ul><li>If so, ask them to describe the change </li></ul><ul><li>If not, ask why </li></ul><ul><li>Explore the effects of management support, organizational barriers, etc. </li></ul>
    13. Level 4 Questions: What’s the Organizational Impact? <ul><li>How much did quality improve? </li></ul><ul><li>How much did productivity increase? </li></ul><ul><li>How much did we save or prevent? (accidents, turnover, wasted time, etc.) </li></ul><ul><li>What tangible benefits has the organization seen from the money spent on training? </li></ul>
    14. Evaluating Results - Level 4 <ul><li>Look for evidence, not proof </li></ul><ul><li>Use a control group, if possible, to isolate the effects of training </li></ul><ul><li>Measure before and after the program </li></ul><ul><li>Repeat measurement at appropriate times </li></ul><ul><li>Consider cost vs. benefits </li></ul><ul><ul><li>Not as expensive as Level 3 to collect </li></ul></ul><ul><ul><li>Organizational data often available </li></ul></ul>
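The cost-versus-benefit question at Level 4 is commonly summarized as a return-on-investment percentage, (benefits − costs) ÷ costs × 100. A minimal sketch (function name and dollar figures are invented for illustration):

```python
def training_roi_percent(program_benefits, program_costs):
    """ROI% = (benefits - costs) / costs * 100.

    Benefits are the measured organizational gains (savings, output
    increases) attributed to the program; costs include design,
    delivery, materials, and participants' time.
    """
    return (program_benefits - program_costs) / program_costs * 100

# Hypothetical: $50,000 spent on a program, $120,000 in measured savings.
print(training_roi_percent(120_000, 50_000))  # 140.0
```

An ROI above 0% means the program returned more than it cost; "look for evidence, not proof" applies here, since the benefit figure is an estimate, not a controlled measurement.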
    15. Four Major Categories of Hard Data <ul><li>Output increases </li></ul><ul><ul><li>Units produced, items sold or assembled </li></ul></ul><ul><ul><li>Students graduated, patients visited, applications processed </li></ul></ul><ul><li>Quality improvement </li></ul><ul><ul><li>Scrap, rework, rejects, error rates, complaints </li></ul></ul><ul><li>Cost savings </li></ul><ul><ul><li>Unit costs, overhead, operating, program costs </li></ul></ul><ul><li>Time savings </li></ul><ul><ul><li>Cycle time, overtime, equipment downtime </li></ul></ul>
    16. Categories of Soft Data <ul><li>Work habits </li></ul><ul><ul><li>Absenteeism, tardiness, communication breakdowns, first aid treatments </li></ul></ul><ul><li>Climate </li></ul><ul><ul><li>Turnover, discrimination charges, job satisfaction, number of grievances </li></ul></ul><ul><li>New skills </li></ul><ul><ul><li>Decisions made, problems solved, grievances resolved, intention to use new skills, frequency of use of new skills, importance of new skills </li></ul></ul>
    17. Categories of Soft Data, cont’d <ul><li>Development </li></ul><ul><ul><li>Number of promotions, pay increases, training programs attended, PA ratings </li></ul></ul><ul><li>Satisfaction </li></ul><ul><ul><li>Favorable reactions, attitude changes, increased confidence, customer satisfaction </li></ul></ul><ul><li>Initiative </li></ul><ul><ul><li>Successful completion of new projects, number of suggestions implemented, new ideas implemented </li></ul></ul>
    18. Sources of Data <ul><li>Organizational Performance Records </li></ul><ul><li>Participants </li></ul><ul><li>Supervisors of Participants </li></ul><ul><li>Direct Reports of Participants </li></ul><ul><li>Team / Peer Groups </li></ul><ul><li>Internal or External evaluators as observers </li></ul>