CAA conference 2010: Assessing online writing


How to assess online writing, such as blogs and wikis. Good and bad practice in the construction of rubrics.

  • Bobby Elliott from SQA. Happy to receive e-mails after the event. Small piece of research done as part of a larger research project. Interesting results. 30 slides in 30 minutes...
  • Paper is in the book. Also available online. Paper focuses on online discussion boards but my presentation is more general and relates to online writing.
  • 3 rules of speaking… Never do a lecture for the first time. Never tell jokes. Never use technology – especially online tech. Serious message in this slide… teachers are becoming more accountable.
  • Quick survey before we begin.
  • Text, phone, tweet or web. This is an example of a free Web 2.0 service.
  • High on criticism. Low on solution. My speciality.
  • Before we begin, some background stuff…
  • The original Latin root emphasises the formative nature of assessment. The second definition is more modern and is a nice, simple explanation of the word.
  • Best definition of assessment that I have come across. By Angelo, 1995. Notice the bits in bold… “making expectations explicit”, “having criteria”, “systematically analysing”… Support for the use of a rubric.
  • You will be familiar with two of these characteristics (validity and reliability). But fidelity is new. The idea was introduced by a guy called Sadler (2009) in a paper about assessment.
  • Definition in bold. It’s very similar to validity but focuses on grading. It’s a moot point whether we need it but I think it adds something to our vocabulary. Mention effort – we are guilty of this. He calls this a “non-achievement”. Give audience time to read the slide. Criticism of continuous assessment. Maybe particularly harmful to boys.
  • Several recent papers have referred to the new “affordances” of online writing. This slide shows what’s special about writing online compared to traditional writing. The major initiative is an Australian project entitled “Transforming Assessment”. See next slide. The new affordances are the new capabilities that online writing provides. Go through each one.
  • In summary: we’re not using the new capabilities that Web 2.0 provides. We’re assessing it in a business-as-usual way. Give example of online discussions assessed through “your best two posts” – essay-type contributions. We’re assessing online writing like we assess offline writing.
  • My research was focused on online discussion boards.
  • I looked at 20 rubrics, consisting of 128 separate criteria (around 6-7 per rubric). Many criteria were basically the same thing expressed in different ways… So I grouped them into 33 different criteria. Then I coded these criteria using a coding system with 10 types of criteria.
  • My sample was small. And the selection of the rubrics was not random or systematic. So I’m not saying much about the validity or reliability of my findings. BUT the more rubrics I looked at, the more a pattern emerged. In fact, the pattern emerged very early in the review.
  • So here are my findings.
  • This is the table that shows the original 128 criteria grouped into different criteria (similar criteria are grouped). It reduced the 128 criteria to 33. For example, of the 128 criteria in the 20 rubrics, 12 related to quantity of messages – probably 12 of the 20 rubrics had one criterion relating to quantity. Notice some interesting categories: frequency of posting (9 criteria), attitude (7 criteria), showing respect (5 criteria).
  • Here are the 10 codes that I used. So, for example, 18 rubrics (out of 20) made mention of participation (e.g. the number of posts) and this accounted for 36 of the 128 criteria (around one quarter). Also, 8 rubrics had criteria relating to etiquette (manners) – and this accounted for 10 (of 128) criteria.
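  • The tallying described above (criteria per category, as a share of the 128 total) is easy to reproduce. A minimal sketch in Python – note that only the participation (36) and etiquette (10) counts come from the findings; the remaining categories are lumped together here as a placeholder:

```python
from collections import Counter

# Coded criteria: each of the 128 rubric criteria is assigned a category
# code. Only "participation" (36) and "etiquette" (10) are counts from
# the study; "other categories" is a placeholder for the remaining codes.
coded_criteria = (
    ["participation"] * 36
    + ["etiquette"] * 10
    + ["other categories"] * 82
)

tally = Counter(coded_criteria)
total = sum(tally.values())

# Report each category's count and its share of all criteria.
for category, count in tally.most_common():
    print(f"{category}: {count}/{total} ({count / total:.0%})")
```

  • Participation works out at 28% of the 128 criteria – the “around one quarter” mentioned above.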
  • This slide shows the top five criteria. Notice that two relate to “non-achievements”, to use Sadler’s terminology. The most common criterion related to participation: how often students posted or how many replies they made or how many times they logged in. Criteria relating to etiquette were the 3rd most common! This means being polite or supportive or “nice”! We would never reward “being nice” in an essay – so why online? Is “being nice” a learning objective?
  • Let audience read my findings. 18/20 rubrics rewarded participation. Only one was directly related to learning objectives.
  • Rovai’s paper is probably the best known and most widely used. But Sadler would give him a poor grade for fidelity… 4/14… around 30%. Another well-known author in this area is Hazari. He does better, achieving 50%. But these papers are sort of seminal works on writing rubrics… but see the last bullet! It’s not “time between posting” that measures quality.
  • The present position is not satisfactory.
  • I’ve tried to illustrate best practice with this list of characteristics. But I haven’t actually created a rubric! Some characteristics are obvious, e.g. no. 3. Nos 2, 4, 6 and 8 are more interesting.
  • These recommendations are from my paper.
  • The current position is not good. We have to try to improve how we assess online writing. We need to be more consistent. We need to embrace the new affordances of digital writing. We need to reconsider the assessment tasks we set students. And reconsider how to mark them – using rubrics.

    1. Assessing Online Writing
       CAA Conference, 2010
       20-21 July, 2010
    2.
    3.
    4. Question
       Do you feel confident about assessing students’ online writing?
    5. Text message: Text keycode to 07624806527
       Smartphone: Go to and enter keyword
       Twitter: Tweet keyword with comment to @poll
       Web:
       How confident are you when assessing students’ online writing?
       Very: 24697
       Somewhat: 25157
       Not at all: 25192
    6. Summary of presentation
       Research question
       Methodology
       Findings
       Recommendations
    7. Some background stuff
       Definition of assessment
       Characteristics of assessment
       New affordances
    8. Assessment
       Latin root: assidere: “to sit beside”
       “Observing learning”
    9. Definition of assessment
       “Assessment is an on-going process aimed at understanding and improving student learning. It involves making our expectations explicit and public; setting appropriate criteria and high standards for learning quality; systematically gathering, analysing, and interpreting evidence to determine how well performance matches those expectations and standards; and using the resulting information to document, explain, and improve performance.”
    10. Characteristics of assessment
        Validity
        Reliability
        Fidelity
    11. Fidelity
        “Fidelity is the extent to which elements that contribute to a course grade are correctly identified as academic achievement.”
        “Many academics cannot help but be impressed by the prodigious time and persistence that some students apparently invest in producing responses to an assessment task. However, effort is clearly an input variable and therefore does not fall within the definition of academic achievement.”
        “Assessment in which early understandings are assessed, recorded and counted, misrepresents the level of achievement reached at the end of the course.”
    12. New Affordances
        Large-scale projects
        Developed co-operatively and collaboratively
        Improved authenticity
        Real-life value
        Large audience
        Richer content
        More reflective
        More transparent
        Peer reviewed/assessed
    13. Transforming Assessment Project
        An Australian Learning and Teaching Council Fellowship specifically looking at the use of e-assessment within online learning environments, particularly those using one or more Web 2.0 or virtual world technologies.
    14. Challenges
        “Most of its advocates [of writing using Web 2.0 tools] offer no guidance on how to conduct assessment that comes to grips with its unique features, its difference from previous forms of student writing, and staff marking or its academic administration.”
        “The few extant examples appear to encourage and assess superficial learning, and to gloss over the assessment opportunities and implications of Web 2.0’s distinguishing features.”
    15. Research Question
        What is the current practice in the construction of rubrics to assess online discussions?
    16. Methodology
        20 papers
        128 criteria/statements
        33 different criteria
        10 categories of criteria
    17. Small Print
        Small sample
        Not statistically robust
        Not generalisable
        No claims about validity or reliability
        “However, a clear pattern emerged from the rubrics to give confidence that a more systematic approach may not have produced a significantly different outcome. The pattern of rubrics in the literature was established early in the review and was reinforced as the sample was increased.”
    18. Findings: Expressing Criteria
        Expression varied considerably from rubric to rubric.
        Some were expressed as criteria, some were written as statements, and some were simply examples of contribution at a particular performance point.
        Little consistency in terminology
        For example, the quality of academic discourse was expressed in terms of “depth of insight”, “intellectual challenge”, “quality of knowledge”, and “much thought”.
        Subjective language was common
        Words such as “good”, “substantive” and “regular” (without further definition) were often used.
        Language used varied in formality
        While the majority of rubrics used formal English, some used a conversational style of language including such phrases as “Your messages didn’t leave me jazzed”.
    19. Findings: Frequency of criteria
    20. Findings: Criteria categorised
    21. Findings: Most common criteria
        Participation
        Academic discourse
        Etiquette
        Learning objectives
        Critical thinking
    22. Findings: Commentary
        Of the 10 categories, five broadly relate to academic standards and five relate to such things as the number of messages posted, how often students visited the forum or the attitude of the student (such as her “being positive”).
        Eight rubrics made direct or indirect reference to the associated learning objectives. 12 rubrics made no reference to the learning outcomes whatsoever.
        Eighteen rubrics rewarded participation, which was the single most common category.
        Four rubrics did not include any criteria relating to academic discourse whatsoever, preferring to focus solely on participation or etiquette or other non-academic criteria.
        Only one rubric made explicit reference to “learning objectives”.
    23. Case studies
        Rovai: Online and traditional assessment: what is the difference? (2000)
        14 criteria for the assessment of online discussions, aligned against three grades (A-C)
        10 relate to the number of messages posted or the style of the message (“caring”, “compassionate”, “rude”) or other non-achievement variables.
        4 criteria relate to academic standards (including one that relates to the standard of spelling and grammar).
        Hazari: Strategy for Assessment of Online Course Discussions (2004)
        20 criteria (for awarding points ranging from 1 to 5).
        10 relate to non-achievements such as “Demonstrates leadership in discussions” and “Posted regularly throughout the week”.
        Some criteria were crude: “Time between posting indicated student had read and considered substantial number of student postings before responding”
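        The fidelity figures quoted for the two case studies are simply the share of each rubric’s criteria that reward academic achievement rather than non-achievements. A minimal sketch (the function name `fidelity` is mine, not Sadler’s):

```python
def fidelity(achievement_criteria: int, total_criteria: int) -> float:
    """Share of rubric criteria that reward academic achievement
    rather than non-achievements such as effort or posting frequency."""
    return achievement_criteria / total_criteria

# Rovai: 4 of 14 criteria relate to academic standards.
# Hazari: 10 of 20 do.
print(f"Rovai:  {fidelity(4, 14):.0%}")
print(f"Hazari: {fidelity(10, 20):.0%}")
```

        Rovai comes out at roughly 30%, Hazari at 50% – the grades quoted above.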
    24. Conclusions
        We’re not sure what we’re assessing
        We’re assessing the wrong things
        We don’t know how to assess online writing
        We’re not sure what online writing is
        We’re unclear about how it’s different
        We don’t use rubrics
        We write rubrics differently
        The rubrics we use are pretty poor
        We’re being unfair to students
    25. Characteristics of a good rubric
    26. Recommendations
        Assess learner contributions to online discussions.
        Use a rubric to carry out the assessment.
        Design rubrics to have high fidelity.
        Incorporate the unique affordances of online writing into rubrics.
    27. Challenges
        Embracing online writing
        Acknowledging the new affordances
        Reconsidering how we assess students
        Creating new assessment activities
        Designing new marking rubrics
        Better quality
        Improved fidelity
        Embedding the new affordances
    28.