Rubric Design Workshop
Overview of Session
What is a rubric?
• Definition: “a set of criteria specifying the characteristics
of an outcome and the levels of achievement in each
characteristic.”
• Benefits
- Provides consistency in evaluation and
performance
- Gathers rich data
- Mixed-method
- Allows for direct measure of learning
Why use rubrics?
• Provides both qualitative descriptions of student
learning and quantitative results
• Clearly communicates expectations to students
• Provides consistency in evaluation
• Simultaneously provides student feedback and
programmatic feedback
• Allows for timely and detailed feedback
• Promotes colleague collaboration
• Helps us refine practice
Types of Rubrics - Analytic
Analytic rubrics articulate levels of performance for each criterion used to assess student learning.
Advantages
• Provide useful feedback on areas of strength and weakness.
• Criteria can be weighted to reflect the relative importance of each dimension (see the weighted-score sketch below).
Disadvantages
• Takes more time to create and use than a holistic rubric.
• Unless each point for each criterion is well defined, raters may not arrive at the same score.
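To make the weighting advantage concrete, here is a minimal Python sketch of how per-criterion ratings can be combined into a single weighted score. The criterion names, weights, and 1-4 scale are illustrative assumptions, not part of the workshop materials.

  # Minimal sketch: combining analytic rubric ratings into one weighted score.
  # Criterion names, weights, and the 1-4 scale are hypothetical placeholders.
  ratings = {
      "organization": (0.40, 3),  # weight 40%, rated "proficient"
      "evidence":     (0.35, 2),  # weight 35%, rated "developing"
      "mechanics":    (0.25, 4),  # weight 25%, rated "exemplary"
  }

  # Weighted score = sum of weight * rating, assuming the weights sum to 1.0.
  weighted_score = sum(weight * rating for weight, rating in ratings.values())
  print(round(weighted_score, 2))  # 2.9 on the 1-4 scale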
Analytic Rubric Example
Types of Rubrics - Holistic
A holistic rubric consists of a single scale on which all criteria included in the evaluation are considered together.
Advantages
• Emphasizes what the learner is able to demonstrate, rather than what they cannot do.
• Saves time by minimizing the number of decisions raters make.
• Can be applied consistently by trained raters, increasing reliability.
Disadvantages
• Does not provide specific feedback for improvement.
• When student work is at varying levels spanning the criteria points, it can be difficult to select the single best description.
• Criteria cannot be weighted.
Holistic Rubric Example
Steps for Implementation
1. Identify the outcome ✔
2. Determine how you will collect the evidence ✔
3. Develop the rubric based on observable criteria
4. Train evaluators on rubric use
5. Test rubric and revise if needed
6. Collect data
7. Analyze and report
Rubric Development – Pick your Scale
Levy, J.D. Campus Labs: Data Driven Innovation. Using rubrics in student affairs: A direct assessment of learning.
Rubric Development – Pick your Dimensions
Levy, J.D. Campus Labs: Data Driven Innovation. Using rubrics in student affairs: A direct assessment of learning.
Creating your Rubric
Levy, J.D. Campus Labs: Data Driven Innovation. Using rubrics in student affairs: A direct assessment of learning.
Writing Descriptors
University of Florida Institutional Assessment: Writing Effective Rubrics
1. Describe each level of mastery for each characteristic (an illustrative sketch follows this slide)
2. Describe the best work you could expect
3. Describe an unacceptable product
4. Develop descriptions of intermediate-level products for intermediate categories
5. Each description and each category should be mutually exclusive
6. Be specific and clear; reduce subjectivity
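To make the descriptor-writing steps concrete, here is a minimal sketch of a rubric laid out as scale levels crossed with dimensions, one descriptor per cell. The dimension names, level labels, and descriptor wording are placeholders, not the workshop's own examples.

  # Minimal sketch: a rubric as scale levels x dimensions, one descriptor per cell.
  # All names and descriptor text are hypothetical placeholders.
  SCALE = ["Beginning", "Developing", "Proficient", "Exemplary"]

  rubric = {
      "Thesis": {
          "Beginning":  "Thesis is missing or unclear.",
          "Developing": "Thesis is present but vague or too broad.",
          "Proficient": "Thesis is clear and focused.",
          "Exemplary":  "Thesis is clear, focused, and insightful.",
      },
      "Evidence": {
          "Beginning":  "Little or no supporting evidence.",
          "Developing": "Some evidence, loosely tied to claims.",
          "Proficient": "Relevant evidence supports each claim.",
          "Exemplary":  "Well-chosen evidence is integrated and analyzed.",
      },
  }

  # Completeness check: every dimension describes every level of the scale.
  assert all(set(levels) == set(SCALE) for levels in rubric.values())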
Next Steps…
Levy, J.D. Campus Labs: Data Driven Innovation. Using rubrics in student affairs: A direct assessment of learning.
Training for Consistency
1. Inter-rater reliability: Between-rater consistency (a simple agreement check is sketched after this slide)
Affected by:
• Initial starting point or approach to scale (assessment
tool)
• Interpretation of descriptions
• Domain / content knowledge
• Intra-rater consistency
2. Intra-rater reliability: Within-rater consistency
Affected by:
• Internal factors: mood, fatigue, attention
• External factors: order of evidence, time of day, other
situations
• Applies to both multiple-rater and single-rater situations
Levy, J.D. Campus Labs: Data Driven Innovation. Using rubrics in student affairs: A direct assessment of learning.
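As a hedged illustration of checking rater consistency during norming, the sketch below computes exact percent agreement and Cohen's kappa (which corrects for chance agreement) using scikit-learn's cohen_kappa_score. The scores and the 1-4 scale are invented for illustration.

  # Minimal sketch (invented data): how consistently two trained raters apply
  # a rubric during a norming session, before full data collection.
  from sklearn.metrics import cohen_kappa_score

  # Hypothetical 1-4 scores the two raters gave the same ten student artifacts.
  rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
  rater_b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]

  # Exact agreement: the share of artifacts scored identically by both raters.
  exact_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

  # Cohen's kappa adjusts that agreement for the agreement expected by chance.
  kappa = cohen_kappa_score(rater_a, rater_b)

  print(f"Exact agreement: {exact_agreement:.0%}")  # 80% for these scores
  print(f"Cohen's kappa:   {kappa:.2f}")            # about 0.71 for these scores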
Testing your Rubric
• Use a meta-rubric (a rubric for evaluating rubrics) to review your work
• Peer review: ask one of your peers to review the rubric and provide feedback on content
• Test with students: use student work or observations to test the rubric
• Revise as needed
• Test again
• Multiple raters: norm with other raters if appropriate
Levy, J.D. Campus Labs: Data Driven Innovation. Using rubrics in student affairs: A direct assessment of learning.
Allen, M.J. (2004). Assessing academic programs in higher education. Bolton, MA: Anker.
Brophy, Timothy S. University of Florida Institutional Assessment: Writing Effective Rubrics. http://assessment.aa.ufl.edu/Data/Sites/22/media/slo/writing_effective_rubrics_guide_v2.pdf
Mueller, Jon. Professor of Psychology, North Central College, Naperville, IL. Authentic Assessment Toolbox. http://jfmueller.faculty.noctrl.edu/toolbox/rubrics.htm
Teaching Commons, DePaul University. http://teachingcommons.depaul.edu/Feedback_Grading/rubrics/types-of-rubrics.html


Editor's Notes

  • #14–15 Focus your descriptions on the presence of the quantity and quality that you expect, rather than on their absence. However, at the lowest level it would be appropriate to state that an element is “lacking” or “absent” (Carriveau, 2010). Keep the elements of the description parallel from performance level to performance level. In other words, if your descriptors include quantity, clarity, and details, make sure that each of these outcome expectations is included in each performance level descriptor.
  • #16 When using a rubric for program assessment purposes, faculty members apply the rubric to pieces of student work (e.g., reports, oral presentations, design projects). To produce dependable scores, each faculty member needs to interpret the rubric in the same way. The process of training faculty members to apply the rubric is called "calibration." It's a way to calibrate the faculty members so that scores are accurate and reliable. Reliability here means that the scorers apply the rubric consistently, not only to each piece of student work (called intrarater reliability), but among themselves (called interrater reliability).