1. The TESTA process: key ingredients in its spread to over 70 programmes in more than 20 universities
Dr Tansy Jessop, Senior Fellow L&T, TESTA Project Leader
Yaz El Hakim, Director of L&T, TESTA Co-Leader
UNIVERSITY OF WINCHESTER
HEDG Spring Meeting 15 March 2013
2. What is TESTA?
What is special about the TESTA findings?
Why has it had a wide impact?
Where is TESTA operating?
How might it be useful to you?
Any questions?
Exploring what, why, where and how
3. HEA-funded research project (2009-12)
Seven programmes in four partner universities
Mapping programme assessment
Engaging with Quality Assurance processes
Diagnosis – intervention – cure
What is TESTA?
Transforming the Experience of Students through Assessment
4. TESTA
“…is a way of thinking
about assessment and
feedback”
Graham Gibbs, sailor and assessment aficionado
5. Captures and distributes sufficient student time and
effort - time on task
Challenging learning with clear goals and standards,
encouraging deep learning
Sufficient, high quality feedback, received on time,
with a focus on learning
Students pay attention to the feedback and it guides
future studies – feeding-forward
Students are able to judge their own performance
accurately, self-regulating
Based on conditions of learning
6. TESTA Research Methods
(Drawing on Gibbs and Dunbar-Goddet, 2008, 2009)
Programme audit
Assessment Experience Questionnaire
Focus groups
Programme team meeting
7. Number of assessment tasks
Summative/formative
Variety
Proportion of exams
Oral feedback
Written feedback
Speed of return of feedback
Specificity of criteria, aim and learning outcomes.
Audit in a nutshell
8. Quantity of Effort
Coverage of content and knowledge
Clear goals and standards
Quantity and Quality of Feedback
Use of feedback
Appropriate assessment
Learning from exams
Deep and surface learning
Assessment Experience Questionnaire
9. Student voice and narrative
Corroboration and contradiction
Compelling evidence with the stats
Focus Groups
10. tells a good story
raises a thought-provoking issue
has elements of conflict
promotes empathy with the central characters
lacks an obvious, clear-cut answer
takes a position, demands a decision &
is relatively concise (Gross-Davis 1993)
Case Study
11. Committed and innovative lecturers
Lots of coursework, varied forms
No exams
Masses of written feedback (15,000 words)
Learning outcomes and criteria clearly specified
….looks like a ‘model’ assessment environment
But students:
Don’t put in a lot of effort, and distribute that effort across few topics
Don’t think there is a lot of feedback or that it is very useful, and don’t make use of it
Don’t think it is at all clear what the goals and standards are
…are unhappy
Case Study X: what’s going on…
12. 1. Many students disregard formative tasks
2. Many students work in peaks and troughs, mainly for
summative tasks, with relatively middling effort levels
3. Giving feedback is hard work but often doesn’t
enhance the learning process
4. Many students are confused about goals and
standards
TESTA Headlines
(from 23 programmes in 8 universities)
13. If there weren’t loads of other assessments going on I’d do it.
I would probably work for tasks, but for a lot of people, if it’s
not going to count towards your degree, why bother?
If there are no actual consequences of not doing it, most
students are going to sit in the bar.
It’s good to know you’re being graded because you take it
more seriously.
The lecturers do formative assessment but we don’t get any
feedback on it.
Formative tasks don’t ‘count’
14. We could do with more assessments over the course of the year
to make sure that people are actually doing stuff.
We get too much of this end or half way through the term essay
type things. Continual assessments would be so much better.
So you could have a great time doing nothing until like a month
before Christmas and you’d suddenly panic. I prefer steady
deadlines, there’s a gradual move forward, rather than bam!
Student effort levels
15. It was about nine weeks… I’d forgotten what I’d written.
The feedback is generally focused on the module.
It’s difficult because your assignments are so detached from the
next one you do for that subject. They don’t relate to each other.
You’ll get really detailed, really commenting feedback from one
tutor and the next tutor will just say ‘Well done’.
Getting feedback from other students in my class helps. I can
relate to what they’re saying and take it on board. I’d just shut
down if I was getting constant feedback from my lecturer.
Feedback issues
16. There are criteria, but I find them really strange. There’s
“writing coherently, making sure the argument that you present
is backed up with evidence”
They have different criteria, build up their own criteria. Some of
them will mark more interested in how you word things.
I get the impression that they don't even look at the marking
criteria. They read the essay and then they get a general
impression, then they pluck a mark from the air.
It’s such a guessing game.... You don’t know what they expect
from you.
I don’t have any idea of why it got that mark.
(Un)clear goals and standards
17. 1. Changes in how degree programmes design assessment
– as teams, and according to evidence and principles
2. …especially in pressing home the value of formative
processes
3. Linking up and sequencing tasks across modules
4. Meta-language with students about feedback
5. Improvements in NSS scores at Winchester (bottom to
top quartile in A&F)
6. Improvements when the TESTA process is re-run post-intervention
TESTA’s impact
18. Programmatic evidence is useful and brings the team
together
Programme leader can address variations in standards
The module vs greater good of the programme
Lego piece modules vs whole thing
Helps to confront protectionism and silos
Develops collegiality and conversations about
pedagogy
TESTA is about the team
20. The TESTA report back was by far the most significant meeting I have
attended in ten years of sitting through many meetings at this
university. For the first time, I felt as though I was a player on the
pitch, rather than someone watching from the side-lines. We were
discussing real issues (Senior Lecturer).
The faculty were blown away by the TESTA findings (Researcher).
I had a long discussion about whether every subject should do it before re-approval - my gut reaction is yes (Programme Leader).
It has got people thinking in new ways… (Partner Project Leader).
What people say…
Gibbs, G. & Simpson, C. (2004) Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education. 1(1): 3-31.
Gibbs, G. & Dunbar-Goddet, H. (2009) Characterising programme-level assessment environments that support learning. Assessment & Evaluation in Higher Education. 34(4): 481-489.
Jessop, T., McNab, N. & Gubby, L. (2012) Mind the gap: An analysis of how quality assurance processes influence programme assessment patterns. Active Learning in Higher Education. 13(3): 143-154.
Jessop, T., El Hakim, Y. & Gibbs, G. (2011) Research Inspiring Change. Educational Developments. 12(4).
Nicol, D. (2010) From monologue to dialogue: improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education. 35(5): 501-517.
Nicol, D. (2012) Assessment Principles Webinar on JISC.
Sadler, D.R. (1989) Formative assessment and the design of instructional systems. Instructional Science. 18: 119-144.
Sambell, K. (2011) Rethinking Feedback in Higher Education. Higher Education Academy Escalate Subject Centre Publication.
References
Editor's Notes
40 audits; 2000 AEQ returns; 50 focus groups
Yaz: Quote 1: Formative competes for student attention and effort with summative; and often no linkage between the two.
Quote 2 and 3: Most students give priority to graded work or having fun. Students like the stick of consequences…
Quote 4: flaws in design mean that lots of formative tasks don’t elicit feedback.
Yaz: Student workloads often concentrated around two summative points per module. Sequencing, timing, bunching issues, and ticking off modules so students don’t pay attention to feedback at the end point.
Quote 1: late feedback is when it’s too late to be of use. It needs to get back to them when it still matters. Quote 2 and 3: modular silos impede the transfer and use of feedback, and students are looking for more relationship between tasks within and across modules; Quote 4: Marker variation is rife, and creates wariness/distrust about using feedback; Quote 5: students exposed to lots of carefully scaffolded peer feedback find it invaluable.
Limitations of explicit criteria: marker variation is huge, particularly in humanities, arts and professional courses (non-science ones). Students haven't internalised standards, which are often tacit.