Understanding the Role of Evaluation



Prepared for Instructional Design Students at UTT

Published in: Education, Technology


  1. EVALUATION Determining the effect of the intervention or training.
  2. MOVING ALONG NICELY… Once you have decided on the goals and objectives of the instruction and have organized the instructional environment and activities, you will need to decide how to measure the learner’s development and evaluate the success of your overall instructional design. In other words: CAN YOU and DID YOU REALLY DO what you claimed you could?
  3. REVIEW SOME BASIC ENGLISH FIRST Assessment refers to procedures or techniques used to obtain data about a learner or product. Measurement refers to the data collected, which is typically expressed quantitatively. The physical devices used to collect the data are referred to as instruments.
  4. EVALUATION  Evaluation refers to a process that includes assessment and measurement.  Worthen, Sanders, & Fitzpatrick (2004) wrote that the purpose of evaluation is the “identification, clarification, and application of defensible criteria to determine an evaluation object’s value (worth or merit), quality, utility, effectiveness or significance in relation to those criteria” (p. 5).  Evaluation is used to determine the success level of something.
  5. TYPES OF EVALUATION  FORMATIVE:  Morrison, Ross & Kemp (2007): “Formative evaluation is quality control of the development process” (p. 249).  Formative evaluation is used throughout the instructional design process to ensure that the intervention being developed is tested and revised. During a lesson, you may conduct formative assessment to ensure that learners are on task/on track and that the “intervention” or “intended training” is EN ROUTE.  SUMMATIVE:  As the word suggests, this evaluation is conducted at the end of the instructional design process to determine how successful the entire project was in helping to meet the major goal. Summative evaluation can occur at the end of a lesson, at the end of the term, or at the end of the semester. The GRAND FINALE.
  6. THE GOAL OF LEARNER EVALUATION  Determining if a learner has reached a high level of success is accomplished through learner evaluation.  Learner evaluation helps determine the level of performance or achievement that an individual has attained as a result of instruction.  An effective and appropriate learner evaluation is BASED DIRECTLY ON the instructional goals and objectives.
  7. VALIDITY  A learner evaluation has validity if it helps determine whether the outcomes of instruction (based on the objectives) were actually met.  In other words, did the learners meet the instructional objectives?  Face Validity (concerned with how the learner evaluation appears: is it reasonable, well-designed, capable of gathering appropriate data?)  Content Validity (concerned with the extent to which the specific intended domain of content is addressed in the evaluation).
  8. RELIABILITY Reliability is the extent to which a learner evaluation will provide similar results when conducted on multiple occasions. In other words, if a test is given to the same learner at different times without the learner receiving any additional preparation, and the same results occur, then the test is reliable.
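One common way to quantify the test-retest reliability described above is to correlate the same learners' scores across two administrations of the test. A minimal sketch in Python (the scores and the "near 1.0" reading are illustrative, not from the slides):

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: the same five learners tested twice,
# with no additional preparation in between.
first_attempt  = [72, 85, 60, 90, 78]
second_attempt = [70, 88, 62, 91, 75]

r = pearson_r(first_attempt, second_attempt)
print(round(r, 3))  # a value near 1.0 suggests the test is reliable
```

A correlation close to 1.0 means learners kept roughly the same relative standing on both occasions, which is the behaviour the slide describes as "the same results occur".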
  9. STARTING WITH INSTRUCTIONAL OBJECTIVES  It is extremely important to understand that instructional objectives are a key element in the development of an effective learner evaluation.  The learner evaluation is derived from the instructional objectives.  Well-written instructional objectives describe the outcome a learner should be able to achieve after instruction has been completed.
  10. EVALUATING CHANGE  Three typical outcomes are possible: a change in KNOWLEDGE, SKILL, or ATTITUDE.  PULL BACK OUT YOUR 3 OBJECTIVE DOMAINS and their taxonomies (Cognitive, Psychomotor & Affective).  Assessment varies across the domains.
  11. COGNITIVE (TESTING A CHANGE IN KNOWLEDGE)  Objective tests (questions are referred to as ‘items’, such as true/false, multiple choice, and matching, that have ONE CORRECT ANSWER).  Constructed-response tests (include short answer and essay items, which focus on the learner constructing an answer rather than selecting one that is provided).  Objective tests deal with low-level cognitive abilities; constructed-response tests deal with higher-level cognitive abilities.
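Because every objective item has ONE CORRECT ANSWER, scoring can be fully mechanical. A small sketch, using a hypothetical answer key and response set:

```python
# Hypothetical answer key and one learner's responses for an objective test.
answer_key = {"Q1": "B", "Q2": "True", "Q3": "D", "Q4": "False"}
responses  = {"Q1": "B", "Q2": "False", "Q3": "D", "Q4": "False"}

def score_objective_test(key, answers):
    """Each item has exactly one correct answer, so scoring is a simple match."""
    correct = sum(1 for item, right in key.items() if answers.get(item) == right)
    return correct, len(key)

correct, total = score_objective_test(answer_key, responses)
print(f"{correct}/{total}")  # -> 3/4
```

Constructed-response items, by contrast, cannot be scored by simple matching; they need a human judge or a rubric of the kind discussed later in the deck.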
  12. PSYCHOMOTOR (TESTING A CHANGE IN SKILL)  Evaluating whether a learner has had a change in SKILL is done by examining actions or behaviours that can be directly observed.  Process (evaluate the proper series of steps, use of appropriate tools or instruments in an acceptable manner, completion of the skill in a certain timeframe).  Product (the quality and quantity of the product).  DIRECT TESTING, PERFORMANCE RATINGS, e-Portfolios, RUBRICS.
  13. GUIDELINES FOR DIRECT TESTING • Start by reviewing the task analysis; this will determine the steps of the skill the learner needs to perform. • Those steps form the criteria. • Determine the level of proficiency that is considered acceptable. • Determine where the test will take place and what equipment, materials and personnel are needed. • Write the instructions that inform the learner of how the test will be conducted. • Establish how the results will be recorded. • Conduct the test, judge the proficiency, and provide feedback.
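The steps above can be sketched as a simple pass/fail judgment: the task-analysis steps become the criteria, and an acceptable proficiency level decides the result. The criteria and the 75% threshold here are illustrative assumptions, not from the slides:

```python
# Steps drawn from a hypothetical task analysis; these become the criteria.
criteria = [
    "selects the correct tool",
    "follows the step sequence",
    "observes safety procedures",
    "completes the task within the time limit",
]

def judge_proficiency(steps_passed, required=0.75):
    """Pass if the proportion of acceptably performed steps meets the threshold."""
    proportion = sum(steps_passed.values()) / len(steps_passed)
    return proportion >= required, proportion

# Record the direct observation of one learner's performance.
observed = {step: True for step in criteria}
observed["completes the task within the time limit"] = False

passed, proportion = judge_proficiency(observed)
print(passed, proportion)  # 3 of 4 steps performed -> 0.75, meets the threshold
```

The remaining guidelines (location, equipment, instructions, recording, feedback) are logistical rather than computational, so they stay outside the sketch.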
  14. PERFORMANCE RATINGS  Two common techniques used are checklists and rating scales.  A checklist is used to determine the sequence of actions. NO qualitative judgment is made on how well the actions were performed, simply whether the action was performed or not.  A rating scale (e.g., a Likert-type scale) provides a rating of how well a learner performs different actions. Typically, it is a rating on a numerical scale that indicates performance from low to high, poor to excellent, etc.
  15. GUIDELINES FOR THE RATING PROCESS • Review the task analysis. This helps to determine the steps of the skill the learner will need to perform. The steps will serve as the individual criteria on which the learner will be evaluated. • Establish the 5 levels on the rating scale. • When defining the numerical rating points, the verbal descriptions should not overlap: 1 = POOR, 2 = BELOW AVERAGE, 3 = AVERAGE, 4 = ABOVE AVERAGE, 5 = EXCELLENT.
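The contrast between a checklist (performed or not) and the five-level rating scale above can be sketched as follows; the actions and scores are illustrative:

```python
# Verbal anchors for the five-point rating scale described on the slide.
SCALE = {1: "POOR", 2: "BELOW AVERAGE", 3: "AVERAGE",
         4: "ABOVE AVERAGE", 5: "EXCELLENT"}

# A checklist only records WHETHER each action was performed.
checklist = {"gathers materials": True, "measures accurately": True,
             "cleans work area": False}

# A rating scale records HOW WELL each action was performed (1-5).
ratings = {"gathers materials": 4, "measures accurately": 5,
           "cleans work area": 2}

performed = sum(checklist.values())                 # count of completed actions
average_rating = sum(ratings.values()) / len(ratings)

print(f"{performed}/{len(checklist)} actions performed")
print(f"mean rating: {average_rating:.2f} ({SCALE[round(average_rating)]})")
```

Note how the same actions yield different information: the checklist says two of three actions occurred, while the rating scale additionally says how well each one was done.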
  16. A NOTE ABOUT RUBRICS  Rubrics can be used in conjunction with various learner evaluation techniques.  A rubric is an assessment instrument that is used to evaluate a learner based on his or her performance against specific criteria.  Rubrics can be used with KNOWLEDGE, SKILL and ATTITUDE-related instructional objectives.  Rubrics are a more holistic approach.  Rubrics allow evaluation to be more objective and consistent by clarifying the criteria to be evaluated in specific and descriptive terms.
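A rubric's consistency comes from scoring every learner against the same descriptive levels per criterion. A minimal sketch, with hypothetical criteria and level descriptors:

```python
# A hypothetical analytic rubric: each criterion is scored 1-4 against
# descriptive levels (only the extreme descriptors are shown here), which
# keeps the evaluation consistent across learners.
rubric = {
    "organisation": {1: "no clear structure", 4: "clear, logical structure"},
    "accuracy":     {1: "many factual errors", 4: "fully accurate"},
    "mechanics":    {1: "frequent errors",     4: "error-free"},
}

def score_rubric(scores, max_level=4):
    """Total a learner's per-criterion scores and report them against the maximum."""
    total = sum(scores.values())
    return total, len(scores) * max_level

learner_scores = {"organisation": 3, "accuracy": 4, "mechanics": 2}
total, maximum = score_rubric(learner_scores)
print(f"{total}/{maximum}")  # -> 9/12
```

Because each numeric level is tied to a written descriptor, two evaluators using the same rubric should arrive at similar scores, which is the objectivity the slide points to.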
  17. AFFECTIVE (TESTING A CHANGE IN ATTITUDE)  It is well accepted that problems can occur when conducting a learner evaluation to determine a change in attitude. The most common problems are social-desirability responses, self-deception, and semantic problems.  Self-deception is a common adjustment phenomenon in human behaviour in which the tendency to want to like what we see when we look at ourselves impacts how we respond (Hopkins, 1998).  Observations & Anecdotal Records, Surveys & Questionnaires, Self-Reporting Inventories, Interviews.