CAA conference 2010: Assessing online writing
How to assess online writing, such as blogs and wikis. Good and bad practice in the construction of rubrics.


Speaker Notes

  • Bobby Elliott from SQA. Happy to receive e-mails after the event. Small piece of research done as part of a larger research project. Interesting results. 30 slides in 30 minutes...
  • The paper is in the book. Also available online. The paper focuses on online discussion boards, but my presentation is more general and relates to online writing.
  • Three rules of speaking… Never do a lecture for the first time. Never tell jokes. Never use technology – especially online tech. Serious message in this slide… teachers are becoming more accountable.
  • Quick survey before we begin.
  • Text, phone, tweet or web. This is an example of a free Web 2.0 service.
  • High on criticism. Low on solutions. My speciality.
  • Before we begin, some background stuff…
  • The original Latin derivation emphasises the formative nature of assessment. The second definition is more modern and is a nice, simple explanation of the word.
  • The best definition of assessment that I have come across. By Angelo, 1995. Notice the bits in bold… “making expectations explicit”, “having criteria”, “systematically analysing”… Support for the use of a rubric.
  • You will be familiar with two of these characteristics (validity and reliability). But fidelity is new. The idea was introduced by a guy called Sadler (2009) in a paper about assessment.
  • Definition in bold. It’s very similar to validity but focuses on grading. It’s a moot point whether we need it, but I think it adds something to our vocabulary. Mention effort – we are guilty of this. He calls this a “non-achievement”. Give the audience time to read the slide. Criticism of continuous assessment. Maybe particularly harmful to boys.
  • Several recent papers have referred to the new “affordances” of online writing. This slide shows what’s special about writing online compared to traditional writing. The major initiative is an Australian project entitled “Transforming Assessment”. See next slide. The new affordances are the new capabilities that online writing provides. Go through each one.
  • In summary: we’re not using the new capabilities that Web 2.0 provides. We’re assessing it in a business-as-usual way. Give example of online discussions assessed through “your best two posts” – essay-type contributions. We’re assessing online writing like we assess offline writing.
  • My research focused on online discussion boards.
  • I looked at 20 rubrics, consisting of 128 separate criteria (around 6-7 per rubric). Many criteria were basically the same thing expressed in different ways… So I grouped them into 33 different criteria. Then I coded these criteria using a coding system with 10 types of criteria.
  • My sample was small, and the selection of rubrics was not random or systematic. So I’m not claiming much about the validity or reliability of my findings. BUT the more I looked, the more a pattern emerged. In fact, the pattern emerged very early in the review.
  • So here are my findings.
  • This is the table that shows the original 128 criteria grouped (similar criteria are grouped together). It reduced the 128 criteria to 33. For example, of the 128 criteria in the 20 rubrics, 12 related to quantity of messages – probably 12 of the 20 rubrics had one criterion relating to quantity. Notice some interesting categories: frequency of posting (9 criteria), attitude (7 criteria), showing respect (5 criteria).
  • Here are the 10 codes that I used. So, for example, 18 rubrics (out of 20) made mention of participation (e.g. the number of posts), and this accounted for 36 of the 128 criteria (around one quarter). Also, 8 rubrics had criteria relating to etiquette (manners) – and this accounted for 10 (of 128) criteria.
  • This slide shows the top five criteria. Notice that two relate to “non-achievements”, to use Sadler’s terminology. The most common criterion related to participation: how often students posted, how many replies they made, or how many times they logged in. Criteria relating to etiquette were the 3rd most common! This means being polite or supportive or “nice”! We would never reward “being nice” in an essay – so why online? Is “being nice” a learning objective?
  • Let the audience read my findings. 18/20 rubrics rewarded participation. Few (1) were directly related to learning objectives.
  • Rovai’s paper is probably the best known and most widely used. But Sadler would give him a poor grade for fidelity… 4/14, about 29%. Another well-known author in this area is Hazari. He does better, achieving 50%. But these papers are sort of seminal works on writing rubrics… but see the last bullet! It’s not “time between posting” that measures quality.
  • The present position is not satisfactory.
  • I’ve tried to illustrate best practice with this list of characteristics. But I haven’t actually created a rubric! Some characteristics are obvious, e.g. no. 3. Nos. 2, 4, 6 and 8 are more interesting.
  • These recommendations are from my paper.
  • The current position is not good. We have to try to improve how we assess online writing. We need to be more consistent. We need to embrace the new affordances of digital writing. We need to reconsider the assessment tasks we set students. And how to mark these – using rubrics.

Presentation Transcript

  • Assessing Online Writing
    CAA Conference, 2010
    20-21 July, 2010
    bobby.elliott@sqa.org.uk
  • http://www.scribd.com/bobbyelliott
  • Question
    Do you feel confident about assessing students’ online writing?
  • Text message
    Text keycode to 07624806527
    Smartphone
    Go to poll4.com and enter keyword
    Twitter
    Tweet keyword with comment to @poll
    Web
    http://www.polleverywhere.com/multiple_choice_polls/MTA0ODQwOTAwOA/web
    How confident are you when assessing students’ online writing?
    Very: 24697
    Somewhat: 25157
    Not at all: 25192
  • Summary of presentation
    Research question
    Methodology
    Findings
    Recommendations
  • Some background stuff
    Definition of assessment
    Characteristics of assessment
    New affordances
  • Assessment
    Latin root: assidere: “to sit beside”
    “Observing learning”
  • Definition of assessment
    “Assessment is an on-going process aimed at understanding and improving student learning. It involves making our expectations explicit and public; setting appropriate criteria and high standards for learning quality; systematically gathering, analysing, and interpreting evidence to determine how well performance matches those expectations and standards; and using the resulting information to document, explain, and improve performance.”
  • Characteristics of assessment
    Validity
    Reliability
    Fidelity
  • Fidelity
    “Fidelity is the extent to which elements that contribute to a course grade are correctly identified as academic achievement”.
    “Many academics cannot help but be impressed by the prodigious time and persistence that some students apparently invest in producing responses to an assessment task. However, effort is clearly an input variable and therefore does not fall within the definition of academic achievement”.
    “Assessment in which early understandings are assessed, recorded and counted, misrepresents the level of achievement reached at the end of the course.”
  • New Affordances
    Large-scale projects
    Developed co-operatively and collaboratively
    Improved authenticity
    Real life value
    Large audience
    Richer content
    More reflective
    More transparent
    Peer reviewed/assessed
  • Transforming Assessment Project
    An Australian Learning and Teaching Council Fellowship specifically looking at the use of e-assessment within online learning environments, particularly those using one or more Web 2.0 or virtual world technologies.
    http://www.transformingassessment.com/
    geoffrey.crisp@adelaide.edu.au
  • Challenges
    “Most of its advocates [of writing using Web 2.0 tools] offer no guidance on how to conduct assessment that comes to grips with its unique features, its difference from previous forms of student writing, and staff marking or its academic administration.”
    “The few extant examples appear to encourage and assess superficial learning, and to gloss over the assessment opportunities and implications of Web 2.0’s distinguishing features.”
  • Research Question
    What is the current practice in the construction of rubrics to assess online discussions?
  • Methodology
    20 papers
    128 criteria/statements
    33 different criteria
    10 categories of criterion
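As a sketch of the coding step this slide describes, the snippet below groups criterion statements under category codes and tallies them. The mapping and labels are illustrative stand-ins based on categories named in the findings, not the actual 33 grouped criteria or 10 codes from the paper:

```python
from collections import Counter

# Hypothetical mapping from grouped criteria to category codes; the labels
# are illustrative, not the paper's actual coding scheme.
CRITERION_TO_CODE = {
    "quantity of messages": "participation",
    "frequency of posting": "participation",
    "depth of insight": "academic discourse",
    "showing respect": "etiquette",
    "references learning objectives": "learning objectives",
}

def tally_codes(rubric_criteria):
    """Count how many criterion statements fall under each category code."""
    counts = Counter()
    for criterion in rubric_criteria:
        counts[CRITERION_TO_CODE.get(criterion, "other")] += 1
    return counts

# One rubric's criteria tallied by category.
print(tally_codes(["quantity of messages", "depth of insight", "showing respect"]))
```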
  • Small Print
    Small sample
    Not statistically robust
    Not generalisable
    No claims about validity or reliability
    “However, a clear pattern emerged from the rubrics to give confidence that a more systematic approach may not have produced a significantly different outcome. The pattern of rubrics in the literature was established early in the review and was reinforced as the sample was increased.”
  • Findings: Expressing Criteria
    Expression varied considerably from rubric to rubric.
    Some were expressed as criteria, some were written as statements, and some were simply examples of contribution at a particular performance point.
    Little consistency in terminology
    For example, the quality of academic discourse was expressed in terms of “depth of insight”, “intellectual challenge”, “quality of knowledge”, and “much thought”.
    Subjective language was common
    Words such as “good”, “substantive” and “regular” (without further definition) were often used.
    Language used varied in formality
    While the majority of rubrics used formal English, some used a conversational style of language including such phrases as “Your messages didn’t leave me jazzed”.
  • Findings: Frequency of criteria
  • Findings: Criteria categorised
  • Findings:Most common criteria
    Participation
    Academic discourse
    Etiquette
    Learning objectives
    Critical thinking
  • Findings: Commentary
    Of the 10 categories, five broadly relate to academic standards and five relate to such things as the number of messages posted, how often students visited the forum or the attitude of the student (such as her “being positive”).
    Eight rubrics made direct or indirect reference to the associated learning objectives. 12 rubrics made no reference to the learning outcomes whatsoever.
    Eighteen rubrics rewarded participation, which was the single most common category.
    Four rubrics did not include any criteria relating to academic discourse whatsoever, preferring to focus solely on participation or etiquette or other non-academic criteria.
    Only one rubric made explicit reference to “learning objectives”.
  • Case studies
    Rovai: Online and traditional assessment: what is the difference? (2000)
    14 criteria for the assessment of online discussions, aligned against three grades (A-C)
    10 relate to the number of messages posted or the style of the message (“caring”, “compassionate”, “rude”) or other non-achievement variables.
    4 criteria relate to academic standards (including one that relates to the standard of spelling and grammar).
    Hazari: Strategy for Assessment of Online Course Discussions, (2004)
    20 criteria (for awarding points ranging from 1 to 5).
    10 relate to non-achievements such as “Demonstrates leadership in discussions” and “Posted regularly throughout the week”.
    Some criteria were crude: “Time between posting indicated student had read and considered substantial number of student postings before responding”
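The “poor grade for fidelity” mentioned in the notes is simply the proportion of a rubric’s criteria that reward academic achievement rather than non-achievements. A minimal sketch checking the two case studies’ counts from the slide above:

```python
# "Fidelity" here, informally: the share of a rubric's criteria that reward
# academic achievement rather than non-achievements such as participation.
def fidelity_share(achievement_criteria: int, total_criteria: int) -> float:
    return achievement_criteria / total_criteria

print(f"Rovai:  {fidelity_share(4, 14):.0%}")   # 4 of 14 criteria, about 29%
print(f"Hazari: {fidelity_share(10, 20):.0%}")  # 10 of 20 criteria, 50%
```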
  • Conclusions
    We’re not sure what we’re assessing
    We’re assessing the wrong things
    We don’t know how to assess online writing
    We’re not sure what online writing is
    We’re unclear about how it’s different
    We don’t use rubrics
    We write rubrics differently
    The rubrics we use are pretty poor
    We’re being unfair to students
  • Characteristics of a good rubric
  • Recommendations
    Assess learner contributions to online discussions.
    Use a rubric to carry out the assessment.
    Design rubrics to have high fidelity.
    Incorporate the unique affordances of online writing into rubrics.
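To make the recommendations concrete, here is a hypothetical sketch of what a high-fidelity rubric’s structure might look like: most marks go to achievement criteria, and the affordances of online writing appear explicitly. The criteria, types and weights are all illustrative, not taken from the paper:

```python
# Illustrative rubric shape only; names and weights are hypothetical.
RUBRIC = [
    {"criterion": "Meets the stated learning objectives", "type": "achievement",     "weight": 0.40},
    {"criterion": "Quality of academic discourse",        "type": "achievement",     "weight": 0.30},
    {"criterion": "Use of links, media and peer review",  "type": "affordance",      "weight": 0.20},
    {"criterion": "Participation (e.g. posts per week)",  "type": "non-achievement", "weight": 0.10},
]

# Fidelity check: how much of the grade rewards academic achievement?
achievement_weight = sum(c["weight"] for c in RUBRIC if c["type"] == "achievement")
print(f"Share of marks for achievement: {achievement_weight:.0%}")  # 70%
```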
  • Challenges
    Embracing online writing
    Acknowledging the new affordances
    Reconsidering how we assess students
    Creating new assessment activities
    Designing new marking rubrics
    Better quality
    Improved fidelity
    Embedding the new affordances
  • bobby.elliott@sqa.org.uk