This document summarizes a presentation on assessing online writing. It finds that rubrics for online discussions often reward non-academic criteria, such as participation, over academic discourse. Recommendations include assessing the unique affordances of online writing using high-fidelity rubrics that are aligned to learning objectives and incorporate criteria such as critical thinking. However, challenges remain in embracing online writing and in creating new assessment approaches and rubrics that properly evaluate students in online environments.
8. Definition of assessment “Assessment is an on-going process aimed at understanding and improving student learning. It involves making our expectations explicit and public; setting appropriate criteria and high standards for learning quality; systematically gathering, analysing, and interpreting evidence to determine how well performance matches those expectations and standards; and using the resulting information to document, explain, and improve performance.”
10. Fidelity “Fidelity is the extent to which elements that contribute to a course grade are correctly identified as academic achievement”. “Many academics cannot help but be impressed by the prodigious time and persistence that some students apparently invest in producing responses to an assessment task. However, effort is clearly an input variable and therefore does not fall within the definition of academic achievement”. “Assessment in which early understandings are assessed, recorded and counted, misrepresents the level of achievement reached at the end of the course.”
11. New Affordances: large-scale projects; developed co-operatively and collaboratively; improved authenticity; real-life value; large audience; richer content; more reflective; more transparent; peer reviewed/assessed.
12. Transforming Assessment Project An Australian Learning and Teaching Council Fellowship specifically looking at the use of e-assessment within online learning environments, particularly those using one or more Web 2.0 or virtual world technologies. http://www.transformingassessment.com/ geoffrey.crisp@adelaide.edu.au
13. Challenges “Most of its advocates [of writing using Web 2.0 tools] offer no guidance on how to conduct assessment that comes to grips with its unique features, its difference from previous forms of student writing, and staff marking or its academic administration.” “The few extant examples appear to encourage and assess superficial learning, and to gloss over the assessment opportunities and implications of Web 2.0’s distinguishing features.”
14. Research Question What is the current practice in the construction of rubrics to assess online discussions?
15. Methodology: 20 papers; 128 criteria/statements; 33 different criteria; 10 categories of criteria.
16. Small Print: small sample; not statistically robust; not generalisable; no claims about validity or reliability. “However, a clear pattern emerged from the rubrics to give confidence that a more systematic approach may not have produced a significantly different outcome. The pattern of rubrics in the literature was established early in the review and was reinforced as the sample was increased.”
17. Findings: Expressing Criteria. Expression varied considerably from rubric to rubric: some were expressed as criteria, some were written as statements, and some were simply examples of contribution at a particular performance point. There was little consistency in terminology: for example, the quality of academic discourse was expressed in terms of “depth of insight”, “intellectual challenge”, “quality of knowledge”, and “much thought”. Subjective language was common: words such as “good”, “substantive” and “regular” (without further definition) were often used. The language used varied in formality: while the majority of rubrics used formal English, some used a conversational style, including phrases such as “Your messages didn’t leave me jazzed”.
21. Findings: Commentary. Of the 10 categories, five broadly relate to academic standards and five relate to such things as the number of messages posted, how often students visited the forum, or the attitude of the student (such as “being positive”). Eight rubrics made direct or indirect reference to the associated learning objectives; twelve made no reference to the learning outcomes whatsoever. Eighteen rubrics rewarded participation, the single most common category. Four rubrics included no criteria relating to academic discourse whatsoever, focusing solely on participation, etiquette, or other non-academic criteria. Only one rubric made explicit reference to “learning objectives”.
22. Case studies. Rovai, “Online and traditional assessment: what is the difference?” (2000): 14 criteria for the assessment of online discussions, aligned against three grades (A–C). Ten relate to the number of messages posted, the style of the message (“caring”, “compassionate”, “rude”), or other non-achievement variables; four relate to academic standards (including one on the standard of spelling and grammar). Hazari, “Strategy for Assessment of Online Course Discussions” (2004): 20 criteria (for awarding points ranging from 1 to 5). Ten relate to non-achievements such as “Demonstrates leadership in discussions” and “Posted regularly throughout the week”. Some criteria were crude: “Time between posting indicated student had read and considered substantial number of student postings before responding”.
23. Conclusions: we’re not sure what online writing is; we don’t know how it’s different; we don’t know if it’s different; we’re unclear about what we’re assessing; we’re assessing the wrong things; we don’t know how to mark it; we don’t use rubrics; we write rubrics differently; we write rubrics badly; we’re being unfair to students.
29. Recommendations: assess learner contributions to online discussions; use a rubric to carry out the assessment; design rubrics to have high fidelity; incorporate the unique affordances of online writing.
30. Challenges: embracing online writing; acknowledging the new affordances; reconsidering how we assess students; creating new assessment activities; designing new marking rubrics (better quality, improved fidelity, embedding the new affordances).
Bobby Elliott from SQA. Happy to receive e-mails after the event. Small piece of research done as part of a larger research project. Interesting results. 30 slides in 30 minutes...
The definition of assessment is by Angelo (1995).
This is a criticism of continuous assessment, which may be particularly harmful to boys.
I looked at 20 rubrics, consisting of 128 separate criteria (around 6–7 per rubric). Many criteria were basically the same thing expressed in different ways… so I grouped them into 33 different criteria, then coded these criteria using a coding system with 10 types of criteria.
This is the table that shows the original 128 criteria grouped into different criteria (similar criteria are grouped). It reduced the 128 criteria to 33. For example, of the 128 criteria in the 20 rubrics, 12 related to quantity of messages – probably 12 of the 20 rubrics had one criterion relating to quantity. Notice some interesting categories: frequency of posting (9 criteria), attitude (7 criteria), showing respect (5 criteria).
Here are the 10 codes that I used. So, for example, 18 rubrics (out of 20) made mention of participation (e.g. the number of posts), and this accounted for 36 of the 128 criteria (around one quarter). Also, 8 rubrics had criteria relating to etiquette (manners) – and this accounted for 10 (of 128) criteria.
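The tallying described here is simple to reproduce. Below is a minimal Python sketch of the two counts (rubrics mentioning a category, and criteria per category); the rubric data and category labels are invented for illustration and do not reproduce the study's actual coding.

```python
# Illustrative sketch: each rubric is a list of criteria already coded
# into one of the categories. The data below is invented, not the study's.
from collections import Counter

rubrics = [
    ["participation", "participation", "academic discourse", "etiquette"],
    ["participation", "attitude", "academic discourse"],
    ["etiquette", "participation", "frequency of posting"],
]

# How many rubrics mention each category at least once.
rubrics_per_category = Counter(
    category for rubric in rubrics for category in set(rubric)
)

# How many individual criteria fall into each category.
criteria_per_category = Counter(
    category for rubric in rubrics for category in rubric
)

for category in criteria_per_category:
    print(f"{category}: {rubrics_per_category[category]} rubrics, "
          f"{criteria_per_category[category]} criteria")
```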
This slide shows the top five criteria.Notice that two relate to “non-achievements” to use Sadler’s terminology.
Rovai’s paper is probably the best known and most widely used. But Sadler would give him a poor grade for fidelity… 4/14, roughly 29%. Another well-known author in this area is Hazari; he does better, achieving 50%. But these papers are sort of seminal works on writing rubrics… but see the last bullet!
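The fidelity figures quoted here are simply the proportion of a rubric's criteria that measure academic achievement. A minimal sketch of that calculation, using the criterion counts from the two case studies above (the function name is my own, not from the presentation):

```python
def fidelity(achievement_criteria: int, total_criteria: int) -> float:
    """Proportion of a rubric's criteria that measure academic
    achievement, in the spirit of Sadler's notion of fidelity."""
    return achievement_criteria / total_criteria

# Counts from the two case studies above.
print(f"Rovai:  {fidelity(4, 14):.0%}")   # 4 of 14 criteria -> 29%
print(f"Hazari: {fidelity(10, 20):.0%}")  # 10 of 20 criteria -> 50%
```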