
Assessing learning in Instructional Design


  1. Evaluating Learner Success and the Instructional Design
     Prepared by Leesha Roberts, Instructor II, University of Trinidad and Tobago – Valsayn Campus
  2. Rationale for Evaluating the Learner in Instructional Design
  3. Overview
     - What are assessment and evaluation?
     - How do they differ?
     - What role do they play in the ID process?
     - When should learner performance be assessed?
     - How can assessment be made reliable and valid?
     - Matching assessment to objectives
     - How does an instructional designer determine when a learner evaluation has been successful?
  4. What Are Assessment and Evaluation?
     - What is assessment?
       - The procedures or techniques used to obtain data about a learner or a product.
     - What is evaluation?
       - The process of determining the success level of an individual or a product on the basis of that data.
     - How are they different?
       - Assessment is the collection of information, while evaluation is the analysis of that collected information.
  5. What Are Assessment and Evaluation? (Cont'd)
     - Measurement refers to the data collected, which is typically expressed quantitatively (i.e. as numbers).
     - Instruments are the physical devices used to collect the data (e.g. rating scales, observation sheets, checklists, objective tests).
  6. What Role Do Assessment and Evaluation Play in the ID Process?
     - Assessment serves a pedagogical function:
       - Measuring
       - Diagnosing
       - Instructing
     - Information from assessment can also serve a secondary function in evaluation.
  7. Developing Performance Measurements
     - Instructional designers should be capable of developing:
       - Tests
       - Written questionnaires
       - Interviews
       - Other methods of measuring performance
  8. Approaches to Assessment
     - Cognitive assessment
     - Affective assessment
     - Psychomotor assessment
  9. Basic Principles of Measurement
     - Tests that measure what a person has learned to do are called achievement tests.
     - There are two types of achievement tests:
       - Criterion-referenced tests (CRTs), also known as minimum-competency or mastery tests. They show exactly where each student stands relative to a fixed standard.
       - Norm-referenced tests (NRTs). These tests are designed to reliably select the best performers by ranking students against one another (see the scoring sketch below).
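To make the distinction concrete, here is a minimal Python sketch, assuming an invented cut score and invented learner scores (none of these names or numbers come from the slides): a criterion-referenced result judges each learner against a fixed standard, while a norm-referenced result ranks learners against one another.

```python
# Hypothetical illustration of criterion- vs norm-referenced scoring.
# The cut score and the learner scores are made up for demonstration.

def criterion_referenced(scores, cut_score=80):
    """Judge each learner against a fixed mastery standard."""
    return {name: ("mastery" if s >= cut_score else "non-mastery")
            for name, s in scores.items()}

def norm_referenced(scores):
    """Rank each learner relative to the other test takers (percentile)."""
    values = sorted(scores.values())
    n = len(values)
    return {name: round(100 * sum(v <= s for v in values) / n)
            for name, s in scores.items()}

scores = {"Ann": 85, "Ben": 78, "Cal": 92, "Dee": 70}
print(criterion_referenced(scores))  # who met the standard
print(norm_referenced(scores))       # how each learner ranks among peers
```

Note how the criterion-referenced result does not change if every learner improves, whereas the norm-referenced percentiles always spread learners out relative to one another.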
  10. Reliability and Validity
     - What is reliability?
       - A reliable learner evaluation provides similar results when it is conducted on multiple occasions.
     - What is validity?
       - A valid learner evaluation determines whether the learners have achieved the intended outcomes of the instruction.
  11. Characteristics of Reliability in Tests
     - Reliable tests have:
       - Consistency
       - Temporal dependency
     - Consistency
       - To increase the consistency of an NRT, developers simply increase the number of items on the test.
       - To increase the consistency of a CRT, assess each competency the test covers with more than one item.
  12. Characteristics of Reliability in Tests (Cont'd)
     - The following factors affect the number of items developed when ensuring consistency on a test:
       - The consequences of misclassification
       - The specificity of the competency
       - The resources available for testing
  13. Characteristics of Reliability in Tests (Cont'd)
     - Temporal dependency: each time a test is administered, it should produce similar results (illustrated in the sketch below).
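As a rough illustration of temporal dependency, the sketch below correlates scores from two sittings of the same test. The learner scores are invented, and a simple test-retest correlation is only one common way of estimating this kind of reliability.

```python
# Hypothetical test-retest check: the same learners sit the same test twice,
# and a reliable test should produce strongly correlated scores.
from statistics import correlation  # available in Python 3.10+

first_sitting  = [72, 85, 90, 65, 78]
second_sitting = [70, 88, 87, 67, 80]

r = correlation(first_sitting, second_sitting)
print(f"test-retest correlation: {r:.2f}")
# Values near 1.0 suggest temporal dependency holds; what counts as acceptable
# depends on the consequences of misclassifying a learner.
```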
  14. Characteristics of Validity in Tests
     - There can be no validity without reliability.
     - The performance on a CRT must be exactly the same as the performance specified by the objective.
     - Achieving validity is not always straightforward.
  15. Matching Assessment to Objectives
     - Instructional objectives are a key element in the development of effective learner assessment.
     - A direct relationship must exist between the instructional objectives and the learner assessment.
     - How can you determine whether the intended outcome of an instructional objective is a change in knowledge, skill, or attitude?
  16. Matching Assessment to Objectives (Cont'd)
     - Example: read the following sentence and identify the action.
       - The learner will be able to list the three major warning signs of a heart attack.
     - The action in the instructional objective is to list; more specifically, to list the three major warning signs of a heart attack (a sketch of matching this action verb to an assessment format follows below).
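Here is a minimal, hypothetical sketch of that matching step. The verb-to-assessment table and the helper function are illustrative assumptions, not part of the original slides; the point is only that the observable action verb in the objective drives the choice of assessment.

```python
# Hypothetical sketch: pull the action verb out of an instructional objective
# and suggest an assessment format that matches it.
# The verb-to-assessment mapping is illustrative, not prescriptive.

VERB_TO_ASSESSMENT = {
    "list":        "cognitive test (fill-in or short answer)",
    "identify":    "cognitive test (multiple-choice or matching)",
    "demonstrate": "performance test (directly observed behaviour)",
    "assemble":    "product assessment (judged against a standard)",
}

def suggest_assessment(objective: str) -> str:
    for verb, assessment in VERB_TO_ASSESSMENT.items():
        if f" {verb} " in f" {objective.lower()} ":
            return assessment
    return "no match: restate the objective with an observable action verb"

objective = ("The learner will be able to list the three major "
             "warning signs of a heart attack.")
print(suggest_assessment(objective))  # -> cognitive test (fill-in or short answer)
```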
  17. Cognitive Tests
     - Measure the acquisition of knowledge
     - Paper-and-pencil tests
     - Recitation
     - The six types of test that apply to cognitive tasks:
       - Multiple-choice
  18. Cognitive Tests (Cont'd)
       - True-false
       - Fill-in
       - Matching
       - Short answer
       - Essay
  19. Performance Tests
     - Measure a student's ability to do something.
     - There are five types of performance assessment:
       - Performance (examination of actions or behaviours that can be directly observed)
  20. Performance Tests (Cont'd)
       - Process (learning ways of doing things, such as problem solving and discussing)
       - Product (the outcome of a procedure, which is evaluated against a standard)
  21. Performance Tests (Cont'd)
       - Portfolios (provide the basis for a product and process review)
       - Projects (a product assessment involving an in-depth investigation of a topic worth learning more about, according to Katz (1994))
  22. Performance, Process, Product, Portfolios, Projects
  23. Authentic Assessment
     - Focus is on "real" tasks.
     - Achieves validity and reliability by emphasizing and standardizing the appropriate criteria for scoring such (varied) products.
     - "Test validity" depends on whether the test simulates real-world tests of ability.
     - Involves "ill-structured" challenges and roles that help students rehearse for the complex ambiguities of the "game" of adult and professional life.
  24. Attitudinal Tests
  25. Appropriateness of Items
     - How do you go about writing valid criterion items?
     - The taxonomies of objectives include sample test items that can be used as models.
  26. Appropriateness of Items (Cont'd)
     - An assessment procedure may be more appropriate for some learning outcomes than for others.
     - There are several bases on which the logical consistency between assessment and other aspects of design can be determined.
  27. Appropriateness of Items (Cont'd)
     - These are:
       - Matching the objectives to the criterion
       - Matching the type of assessment to the type of learning
       - Matching the data collection method to the purpose of the assessment
  28. Determination of the Success of Learner Evaluation
     - A successful learner evaluation provides:
       - Data for instructional intervention
       - Data as to whether the learner has met the instructional objectives
       - Recommendations based on the data gathered (a summary sketch follows below)
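As a closing illustration, here is a small hypothetical sketch that bundles those three outputs into a single per-learner summary. The objectives, results, and helper name are invented for the example.

```python
# Hypothetical summary of a learner evaluation: per-objective mastery,
# plus a recommendation for instructional intervention where needed.

def evaluation_report(results):
    """Turn per-objective pass/fail data into a simple evaluation summary."""
    unmet = [obj for obj, met in results.items() if not met]
    return {
        "objectives_met": results,
        "all_objectives_met": not unmet,
        "recommendations": [f"Provide remediation for: {obj}" for obj in unmet],
    }

results = {
    "List the three major warning signs of a heart attack": True,
    "Demonstrate CPR on a training manikin": False,
}
print(evaluation_report(results))
```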
