1. Shades of Meaning: An exploration of
student and tutor perceptions of nuance
within written and audio feedback
Jane Jones
Sandy Stockwell
Dr. Ellie Woodacre
Nick Purkis
2. Audio Feedback
Audio feedback is spoken commentary on a student's work: the lecturer talks through the essay (using a microphone) while reading it on screen. The audio file is then sent to the student, who can open it and hear the lecturer's comments.
Benefits for the student include (phase 1):
● Highly personal (Voelkel and Mello 2014; Carruthers et al 2015; Dixon 2015)
● Developmental (Lunt and Curran 2010)
● Detailed (Gould and Day 2013; Martini and DiBattista 2014)
3. Aim of phase 2
The second phase of the project delved deeper through a critical analysis and linguistic comparison of tutor comments made in both written and audio feedback. This was in order to identify similarities and differences, focusing on text-level features, using an analysis model adapted from categories identified by Chalmers, MacCallum, Mowat and Fulton (2014).
4. Examples of Categories (Chalmers et al 2014)
Comment types:
● Identification of errors
● Correcting errors/explaining misunderstandings and misconceptions
● Giving praise
● Demonstrating good practice
● Suggestions for future study/suggestions for approaches to future work
● Justifying marks
● Other
Comment focus:
● Subject Knowledge: understanding of concepts, facts, correct interpretation
● Academic Practice: range of reading; synthesis of literature; date and appropriateness of reading; quantity versus quality; linking theory and practice…
● Written Standard English: grammar, punctuation, referencing, expression, spelling, structure/paragraphing
5. Number of comments

                   Minimum   Maximum   Total   Average
Written feedback      18        47      253     31.6
Audio feedback        17        38      259     32.4
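As a quick sanity check, the averages in the table follow directly from the totals, assuming (as the notes below state) that the totals are across the 8 scripts analysed in each condition:

```python
# Verify that the average comment counts in the table follow from the
# totals, given the 8 scripts analysed in each condition.
scripts = 8
totals = {"Written feedback": 253, "Audio feedback": 259}

for mode, total in totals.items():
    average = round(total / scripts, 1)
    print(f"{mode}: {average}")  # 31.6 for written, 32.4 for audio
```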
8. Student perception of comment categories
1. Students do not appear to interpret comments in the same way as their peers
2. Students and tutors appear to identify a similar number of comments which ‘give
praise and demonstrate good practice’
3. Both students and tutors agree that there were few comments which ‘justify
marks’
4. ‘Correcting errors’ and ‘Suggestions for further study’ were often interpreted
differently by tutors and students
5. Students appear to interpret more comments as focusing on ‘subject knowledge’
than tutors do
6. Students’ interpretation of categories appeared to align more closely with tutor
intentions when the comments were heard
9. Student response to how feedback is worded
● One-word responses were perceived as ambiguous
● Students saw questions as daunting, but recognised that they encourage
thinking
● Students prefer very specific guidance rather than general statements
● ‘OK’ was seen as a particularly controversial word
● Ticks were considered positive, but caused doubt as to what
they were referring to
● Due to the phrasing used, it was sometimes difficult to tell whether
feedback offered suggestions or required corrections
10. References
Carruthers, C., McCarron, B., Bolan, P., Devine, A., McMahon-Beattie, U. and Burns, A. (2015) ‘I like the sound of that’: an
evaluation of providing audio feedback via the virtual learning environment for summative assessment. Assessment and
Evaluation in Higher Education. Vol. 40, No. 3.
Chalmers, C., MacCallum, J., Mowat, E. and Fulton, N. (2014) Audio feedback: richer language but no measurable impact on
student performance. Practitioner Research in Higher Education. Vol. 8, No. 1, pp. 64-73.
Dixon, S. (2015) The pastoral potential of audio feedback: a review of the literature. Pastoral Care in Education. Vol. 33, No. 2.
Gould, J. and Day, P. (2013) Hearing you loud and clear: student perspectives of audio feedback in higher education.
Assessment and Evaluation in Higher Education. Vol. 38, No. 5.
Lunt, T. and Curran, J. (2010) ‘Are you listening please?’ The advantages of electronic audio feedback compared to written
feedback. Assessment and Evaluation in Higher Education. Vol. 35, No. 7.
Martini, T. and DiBattista, D. (2014) The transfer of learning associated with audio feedback on written work. The Canadian
Journal for the Scholarship of Teaching and Learning. Vol. 5.
Voelkel, S. and Mello, L.V. (2014) Audio feedback - better feedback? Bioscience Education. Vol. 22, No. 1.
Editor's Notes
Introduce ourselves:
UofW staff
UoP staff
Name and role
Nick
Phase one explored student experiences of audio and written feedback
From phase 1 of this ongoing research we identified that audio feedback for the student was:
More personal in that tone and inflection can be used to enhance the student experience. Like having a conversation
Could focus on both positives in more depth and offer suggestions for improvement/next steps
Can discuss points in more depth (6 times more words conveyed per minute than written feedback) which helps for future work
The 3-year FAST project: Brown and Glover (2006) found that tutors believed they were providing plenty of good-quality feedback; however, when interviewed, students argued strongly that much feedback was neither plentiful nor particularly helpful.
So, in order to establish whether these views had any credence, Brown and Glover (2006) carried out an analysis of tutor feedback on a number of randomly selected student assignments at both universities.
FAST found that there were many inconsistencies between tutors’ feedback: some tutors would correct heavily and others much less so.
Willingham (1990) discusses how assignments are often covered with detailed, often pedantic, interventions. In one university over 20% of feedback was concerned with such minutiae of grammar and spelling.
With regard to grades, it is often assumed that the higher the student’s grade, the fewer the tutor comments, and that lower marks warrant more feedback comments; however, this was not evident within their research.
There were differing perceptions about what students wanted and what the tutors provided in terms of feedback.
Nick
Drawing on Brown and Glover (2006) and Chalmers et al (2014), we based our critical analysis and linguistic comparison of tutor comments (text, sentence and word level) and audio commentary on these criteria.
Jane Number of comments
0.) Clarify what we are counting as a comment.
1.) At this stage we decided not to focus our analysis on ‘word count’ because we were aware that we used significantly more words when providing audio feedback, as opposed to written feedback. Merry and Orsmond (2008) found this to be the case and Chalmers et al (2014) identified that on average the word count was just over 5 times higher for audio feedback than for written feedback. Chalmers et al (2014) surmised that this was due to the number of ‘filler’ words used in audio feedback.
2.) Interestingly the minimum (i.e. script with the fewest comments) for both written and audio feedback had broadly the same number of comments (17 and 18). However, the maximum for the written feedback was significantly higher. This may be due to the time constraints which are enforced when using audio recording software such as Jing.
3.) It is useful to note that the total number of comments (for all 8 scripts) for both written and audio feedback were broadly the same. This has led to a very similar average number of comments for both audio and written feedback.
Jane Comparison of comment types
1.) For both audio and written feedback, ‘correcting errors and explaining misunderstandings’ was the most frequent category (104 comments for written and 133 for audio). It was the most frequent category for 3 of the 8 scripts when giving written feedback and for 5 of the 8 when giving audio feedback. Although it was the most frequent category for both, it is useful to note that a greater number of comments of this kind were made when giving audio feedback.
2.) In contrast, ‘identification of errors’ occurred more frequently in written feedback than in audio feedback (42 for written and 19 for audio). This may be due to the brief annotations in the margin used when providing written feedback (e.g. ‘grammar’, ‘spelling’, ‘referencing requires attention’).
3.) Just under a third of the comments for both audio and written feedback focused on ‘giving praise or demonstrating good practice’ (82 for written and 80 for audio). This differs from the findings of Chalmers et al (2014), who found this occurred almost twice as often when giving audio feedback as when giving written feedback.
4.) Lizzio and Wilson (2008) investigated student perceptions of feedback; students identified the most effective feedback as being ‘developmental’, i.e. that which could help towards development in future work. In our study, feedback categorised under ‘suggestions for future work or study’ did not feature highly in either audio or written feedback. This was also the case in Merry and Orsmond (2008) and Chalmers et al (2014). As lecturers we feel many of our constructive comments are facilitating the opportunity to be ‘developmental’, but it would be interesting to consider whether students perceive them in the same way.
5.) There were no comments which ‘justified marks’ within the written feedback, compared with 5 for audio feedback. This was the lowest category for both audio and written feedback. Chalmers et al (2014) also found this category to be higher for audio, with hardly any written comments of this kind.
6.) Developing the ‘other’ category: remarks that did not fit into the other categories in our analysis grids.
Often opening and concluding remarks, as well as ‘filler’ (‘um’, ‘ok’).
Examples:
Polite comments: ‘Thank you for submitting this essay’
General praise: ‘Good’, ‘I see you’ve incorporated previous feedback here’, ‘This is coming along really well’
Interjections: ‘Yuk!’ (a reaction to a description of a massacre, not to the student’s work!)
Queries to the student: ‘Is this something you intend to discuss in chapter 2?’
General discussion of the subject (theory, historiography etc.)
Discussion of the parameters of the assignment (the challenge of word limits)
Explaining the process of feedback: ‘I want to talk about some specific points but also general things that your writing could benefit from…there’s loads of strengths as well so I’ll try and pinpoint those as well on the way’
Also running out of time (Jing recordings are limited to 5 minutes)
Wrapping up: ‘I hope this has been helpful and if you need any clarification on the points just give me an email. Thanks’
These remarks were far more frequent in audio feedback.
Are these supplementary comments part of what makes audio feedback ‘richer’ and more personal?
Jane
Comparison of comment focus
1.) This data illustrates that written feedback has a fairly consistent spread across the three categories of SK, AP and WSE. Analysis of individual scripts also finds this to be the case: for written feedback, the category with the greatest number of comments was SK for 3 scripts, AP for 2, SK and AP equally for 1, and WSE for 2. This does not appear to be the case with audio feedback.
2.) For audio feedback, the most frequently commented-on focus was academic practice (111 out of 259 comments). It was also the most frequent focus for 7 of the 8 scripts marked using audio feedback (the 8th commented on WSE more frequently). AP was commented on 81 times within the written feedback.
3.) The least frequently commented-on focus for audio feedback was subject knowledge (39 out of 259 comments). It was also the least frequent focus for 7 of the 8 scripts marked using audio feedback (the 8th commented on WSE less frequently). SK was commented on 72 times within the written feedback.
4.) Comments relating to written standard English were most similar in number between written and audio feedback (87 for written and 75 for audio).
5.) Ellie will explore the other category in greater depth.
Jane
Before the slide content, explain what we did next: interviewing students, who read the written feedback, listened to the audio feedback, and read a transcript of the audio feedback.
The points above could be due to students’ and tutors’ differing interpretations of the categories.
They could also be because the assignment was not the students’ own and was from a different discipline to the one they are studying.
Bullet 7: unpick the distinction between audio feedback that was read (transcript) and audio feedback that was heard.
Sandy
‘OK’: students found it misleading. When seen in written feedback, it was difficult for them to assess whether it was used as praise, as filler, or as a vague comment suggesting the work was average or mediocre rather than good or bad.
One pair of students said they generally understood a tick to indicate a strong point of argument, but acknowledged it might refer to something else, e.g. grammar, spelling or structure. Other pairs felt that, although it was helpful to know their work had been received positively, it remained unclear why a tick would be used instead of an explanation of what had been done well. It felt just as generic and ambiguous as the ‘okay’.
Bullet 6: does this relate to different interpretations of corrections/feedforward categories?