Week 7: Rubrics and Rating Scales


  1. Assessment in Schools. Complex Achievement: Scoring Performance-Based Assessments
  2. Question (choose the best answer): Which statement provides the best description of an analytic scoring rubric?
     • Rating is based on the overall performance compared to exemplars.
     • Rating is based on a sum of scores for the individual parts of the performance.
     • Rating is based on the student's analysis of their own performance.
     • All of the above.
  3. Types of Performance
     • Alternative – something other than traditional paper-and-pencil tests, requiring students to demonstrate a skill or process
     • Authentic – practical application of a task in real-world conditions/settings (usually only approximated/simulated)
  4. Assessing Performance
     • Why use performances in assessment?
     • Why do we score/measure performance?
        • Communication
        • Comparison
  5. Assessing Performance
     • All claims about the value of performance assessments rest on the assumption that performance can be accurately observed and reliably rated.
  6. Conducting Music Assessment
  7. Scoring Challenge
     • No one correct or best answer/solution
     • Many different performances or solutions might be judged as excellent (or poor)
     • Requires expert judgment and clearly specified criteria to assess properly
  8. Scoring Limitations
     • Scoring can be inconsistent (unreliable rating)
     • To compare scores fairly,
        • the task (learning outcome) must be clearly defined and communicated to students
        • scoring criteria/rubrics must be well defined
     • Time-consuming to complete
        • Students must have a reasonable amount of time to complete each task
        • Limits the number of tasks that can be done
  9. Scoring Issues
     • What are you assessing?
        • Process – approach used, methods & procedures, instrument use, etc.
        • Product – complete performance or resulting artifact
  10. Scoring Decisions
     • What assessment instruments will be used?
        • Rubrics, rating scales, checklists …
     • How will the results be used/reported?
     • What will you do to make sure the results are accurate (reliable)?
  11. Scoring Decisions
     • Who will do the assessment?
        • Teacher, student, peers, others
     • How will they be trained?
  12. Guidelines and Suggestions
     • Focus on the learning outcomes that require complex cognitive skills and performances
     • Select tasks that represent important content and skills
     • Minimize dependence on irrelevant skills not directly related to the learning outcome
  13. Guidelines and Suggestions
     • Provide scaffolding as needed
     • Construct task directions that clearly explain what students are expected to do
     • Clearly communicate performance expectations (how performance will be judged)
  14. Scoring Issues
     • What are your expectations?
        • Criteria – ideas about what is good or desirable when we judge adequacy; also used to defend that judgment
  15. Scoring Criteria Issues
     • Floating criteria – waiting until you see the performance to determine acceptability
        • Ask yourself:
        • Do you know what you are looking for?
        • Can you define and describe the quality of a performance (both good and bad)?
        • Can you provide a defensible basis for rating good and bad performance?
  16. Scoring Criteria Issues
     • Criteria –
        • define what is acceptable and unacceptable in ways the student can understand
        • communicate the goal or standards
        • are not useful when vague or ambiguous
        • make public what is considered important
     • [9-30] characteristics
  17. Instruments
     • Rubrics
     • Rating scales
     • Checklists
  18. Scoring Rubrics
     • Rubrics are a set of guidelines that explain the criteria by which performance will be judged or rated (may include a rating scale).
     • Rubrics outline performance standards.
  19. Scoring Rubrics
     • Rubrics can be
        • Analytic – individual aspects of the task are judged and combined to determine the overall score (see the sketch below)
        • Holistic – the performance or product is judged as a whole, compared to models or exemplars
        • [see chapter 10]
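
A minimal sketch of the analytic approach, with hypothetical criteria, point ranges, and weights (none of these names come from the slides): each part of the performance is rated separately, and the parts are combined into one overall score.

    # Hypothetical analytic rubric for a music performance: criterion
    # names and weights are illustrative assumptions. Each part is
    # rated separately; the overall score is the weighted sum of parts.
    RUBRIC = {
        "intonation":     {"max": 4, "weight": 2},
        "rhythm":         {"max": 4, "weight": 2},
        "dynamics":       {"max": 4, "weight": 1},
        "stage_presence": {"max": 4, "weight": 1},
    }

    def analytic_score(ratings):
        """Combine per-criterion ratings into one overall score."""
        total = 0
        for criterion, spec in RUBRIC.items():
            r = ratings[criterion]
            if not 1 <= r <= spec["max"]:
                raise ValueError(f"{criterion}: {r} is outside 1-{spec['max']}")
            total += r * spec["weight"]
        return total

    # A holistic rubric would instead assign one rating to the whole
    # performance by comparison to exemplars, with no summing of parts.
    print(analytic_score({"intonation": 3, "rhythm": 4,
                          "dynamics": 2, "stage_presence": 3}))  # 19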
  20. Scoring Rubrics
     • Rubrics typically provide a description of how the rater should determine the quality of various performances at specific levels.
     • Examples [9-29, 9-33, pg 272]
     • [9-32 rubric development]
  21. Group Task
     • Create a rubric you might use to rate or score the Leading Music performance.
  22. Rating Scales
     • Provide a convenient recording method and a common frame of reference, and focus the rater's attention on specific important aspects of the performance
     • Used for (limited to) making quality judgments
     • Require additional information regarding performance expectations
     • Examples [9-34, pg 274]
  23. Rating Scales
     • These take many forms (numerical and descriptive) but are used to provide a uniform way to score performances along a continuum (at least ordinal, preferably interval).
     • [9-42 types of rating scales]
  24. How often do you …? (meant to record frequency)
     • Response scale 1 – ordinal but not interval:
        • Daily
        • 2-3 times per week
        • Once a week
        • 2-3 times a month
        • Once a month
        • Less than once a month
     • Response scale 2 – not ordinal, not interval:
        • Once a day
        • Once a week
        • More than once a day
        • More than once a week
        • As seldom as possible
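
One way to see the difference: map each response option to a rough times-per-month value (the numbers below are illustrative assumptions) and check whether the options, as listed, are in monotonic order.

    # Approximate each option as "times per month" (illustrative values)
    # and test whether the listed order is monotonic, i.e., ordinal.
    def is_ordinal(values):
        return all(a >= b for a, b in zip(values, values[1:]))

    scale_1 = [30, 10, 4, 2.5, 1, 0.5]  # Daily ... Less than once a month
    scale_2 = [30, 4, 60, 8, 0.1]       # Once a day, Once a week, ...

    print(is_ordinal(scale_1))  # True: ordered, but unequal gaps (not interval)
    print(is_ordinal(scale_2))  # False: "More than once a day" outranks "Once a week"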
  25. Question (choose the best answer): Which of the following is NOT a good principle for constructing a graphic rating scale?
     • Characteristics should be directly observable.
     • Use 3 to 7 points on the scale.
     • Points on the scale must form an ordinal continuum.
     • Each point on the scale must be defined clearly.
  26. Rating vs. Ranking
     • Ranking requires a person to place performances in relative order
     • Rating assigns a specific score to each performance
     • Why might you rank instead of rate?
  27. Checklists
     • More appropriate for analytic rubrics where you can easily divide the task into a series of specific actions that must be present
     • Reduce the amount of subjectivity in the judgment (dichotomous decision) – see the sketch below
     • Can be problematic when aspects of the performance are valued but not represented in the criteria (e.g., aesthetically pleasing, interesting)
     • Examples [pg 282]
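
A checklist reduces each criterion to a present/absent decision, so scoring is simply counting. A minimal sketch with hypothetical items:

    # Hypothetical checklist items for a performance task; each is a
    # dichotomous yes/no observation rather than a quality judgment.
    CHECKLIST = [
        "states the goal of the task",
        "selects the correct instrument",
        "follows the prescribed procedure",
        "records results accurately",
    ]

    def checklist_score(observed):
        """Count how many required actions were observed."""
        return sum(1 for item in CHECKLIST if item in observed)

    done = {"states the goal of the task", "records results accurately"}
    print(f"{checklist_score(done)}/{len(CHECKLIST)} items present")  # 2/4 items present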
  28. Common Rating Errors
     • Personal bias –
        • Generosity error – too easy, grade inflation
        • Severity error – too hard, no perfect papers
        • Central tendency – rating everyone as about average
        • Halo effect – a general impression of the individual (positive or negative) influences individual ratings
     • Logical error – rating traits alike or different based on the belief that the factors are related (e.g., studious and able)
        • [see 9-44, pg. 277]
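
These errors leave numeric fingerprints that can be screened for once several raters have scored the same performances: a generous or severe rater has a mean far from the pool's, and a central-tendency rater shows unusually little spread. A sketch with made-up scores and arbitrary thresholds:

    from statistics import mean, stdev

    # Each rater scored the same six performances on a 1-5 scale
    # (illustrative data; the thresholds below are arbitrary assumptions).
    ratings = {
        "rater_A": [4, 4, 5, 4, 5, 4],  # generous: mean well above the pool
        "rater_B": [3, 3, 3, 3, 3, 3],  # central tendency: almost no spread
        "rater_C": [1, 3, 5, 2, 4, 3],
    }

    pool_mean = mean(s for scores in ratings.values() for s in scores)

    for rater, scores in ratings.items():
        m, sd = mean(scores), stdev(scores)
        flags = []
        if m - pool_mean > 0.75:
            flags.append("possible generosity error")
        if pool_mean - m > 0.75:
            flags.append("possible severity error")
        if sd < 0.5:
            flags.append("possible central tendency")
        print(rater, round(m, 2), round(sd, 2), flags)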
  29. Effective Rating Review
     • Focus on educationally significant outcomes
     • Characteristics should be directly observable
     • Clearly define key points on the scale
     • Select the most appropriate type of instrument
     • Use an appropriate scale (number of points)
  30. Effective Rating Review
     • Rate all performances on one task before going on to the next.
     • When possible, rate performances without knowing the performer's name.
     • If the assessment has significant impact, several ratings should be used (see the sketch below).
     • Example practice [7-22, 7-23]
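
When several independent ratings are collected, a simple way to combine them is to average them and use the spread to flag performances the raters disagreed about. A sketch, where the disagreement threshold is an assumption:

    from statistics import mean, stdev

    def combine_ratings(scores, max_spread=1.0):
        """Average independent ratings and flag large disagreement so a
        discrepant performance can be re-rated or adjudicated."""
        avg, sd = mean(scores), stdev(scores)
        return avg, sd, sd > max_spread

    print(combine_ratings([4, 5, 4]))  # raters agree: (4.33..., 0.58..., False)
    print(combine_ratings([2, 5, 3]))  # raters disagree: (3.33..., 1.53..., True)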
