TESTA
Interactive Masterclass
Tansy Jessop
Visiting Professor, BILT
25 January 2019
This session
• Rationale for TESTA
• Brief description of TESTA
• Rationale for taking a programme approach
• Methods
• Themes
Why take a programme approach?
1. A modular problem
2. A curriculum problem
3. An alienation problem
4. An engagement solution
A modular problem
A curriculum problem
An alienation problem
Image, "Alienation Nightmare" © 1996 by Sabu
Motorways to alienation
• M1: Modules
• M2: Markets
• M3: Metrics
• M4: Mass higher education
An engagement solution?
• More engaging formative
• Less measuring
• Students learning more
• Curriculum less stuffed
Programme approach seems to improve things
[Chart: average NSS scores (Q5–Q9 and overall satisfaction) for 32 TESTA programmes in 13 universities, compared with sector NSS 2015 scores]
The red line of improvement? TESTA@Solent 2017–18
[Chart: programme scores, NSS 2017 vs NSS 2018]
Research and change process
[Diagram: the TESTA research and change cycle – Programme Audit, Assessment Experience Questionnaire (AEQ), Student Focus Groups, and Programme Team Meeting]
TESTA definitions
Summative: graded assessment which counts towards the degree
Formative: does not count towards the degree – an ungraded, required task with feedback
TESTA Audit: the nuts and bolts…
How do I do a TESTA audit? What will the data tell me about the assessment on the programme?
Activity One: mock audit
[TESTA cycle diagram: Programme Audit, AEQ, Student Focus Groups, Programme Team Meeting]
The Audit: Caveats
1. The audit is not everything
2. It reflects the official discourse
3. It captures the planned curriculum
4. Some data are stronger, some weaker, and there are gaps
Mock Audit
Summary of audit data:
• Some context
• Number of summative
• Number of formative
• Varieties of assessment
• Proportion of exams
• Written feedback
• Speed of return of feedback
(See the tallying sketch below.)
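A minimal sketch of how these counts could be tallied if audit records were kept digitally. The Task structure, module data and field names are hypothetical illustrations, not part of the TESTA audit instruments, which are normally completed by reading programme documents with the team.

```python
# A sketch of tallying TESTA-style audit counts from per-module records.
# The Task structure and module data are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Task:
    method: str       # e.g. "essay", "exam", "presentation"
    summative: bool   # True if graded and counting towards the degree
    return_days: int  # days taken to return marks and feedback

modules = {
    "Module 1": [Task("essay", True, 21), Task("exam", True, 28),
                 Task("quiz", False, 7)],
    "Module 2": [Task("presentation", True, 14), Task("blog", False, 5)],
}

tasks = [t for module in modules.values() for t in module]
summative = [t for t in tasks if t.summative]
formative = [t for t in tasks if not t.summative]
exams = [t for t in summative if t.method == "exam"]

print("Summative tasks:", len(summative))                    # 3
print("Formative tasks:", len(formative))                    # 2
print("Varieties of assessment:", len({t.method for t in tasks}))  # 5
print(f"Proportion of exams: {len(exams) / len(summative):.0%}")   # 33%
print("Slowest feedback return:", max(t.return_days for t in tasks), "days")
```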
Assessment features across a 3-year UG degree (n = 73 programmes) – ranges:
• Summative: 12–227
• Formative: 0–116
• Varieties of assessment: 5–21
• Proportion of examinations: 0%–87%
• Time to return marks & feedback: 10–42 days
• Volume of oral feedback: 37–1,800 minutes
• Volume of written feedback: 936–22,000 words
Typical A&F patterns
73 programmes in 14 universities (Jessop and Tomas 2017)
• Volume of summative assessment: Low below 33; Medium 40–48; High more than 48
• Volume of formative only: Low below 1; Medium 5–19; High more than 19
• % of tasks by examination: Low below 11%; Medium 22–31%; High more than 31%
• Variety of assessment methods: Low below 8; Medium 11–15; High more than 15
• Written feedback in words: Low less than 3,800; Medium 6,000–7,600; High more than 7,600
Make sense of audit data
1) What is striking about the data?
2) What surprises or puzzles you?
3) What would you want to know more about?
Each table, look at:
– 1 x university type, OR
– 1 x discipline
– Briefly discuss in relation to the questions
Assessment loads: how much is too much? (Tomas and Jessop 2018)
[Charts: summative assessment and examination loads]
Characteristic – Research-intensive – Teaching-intensive – Mann-Whitney U result
• Summative: 41–79 (median 50) vs 34–41 (median 35) – RI*
• Formative: 1–26 (median 3) vs 3–17 (median 7) – n.s.
• Proportion of examinations: 27–42% (median 30%) vs 5–19% (median 10%) – RI*
• Variety of assessment methods: 8–10 (median 8) vs 12–14 (median 15) – TI*
(RI*/TI* = significantly higher in research-intensive/teaching-intensive programmes; n.s. = not significant)
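The right-hand column reports Mann-Whitney U tests, a non-parametric comparison of two independent samples suited to skewed counts like these. A hedged sketch of how such a comparison can be run with scipy; the programme counts below are invented for illustration, not the study's data.

```python
# Sketch: comparing summative loads of research-intensive (RI) and
# teaching-intensive (TI) programmes with a Mann-Whitney U test, the
# non-parametric test used in Tomas and Jessop (2018).
# The programme counts below are invented for illustration only.

from scipy.stats import mannwhitneyu

ri_summative = [41, 45, 48, 50, 55, 62, 79]  # hypothetical RI programmes
ti_summative = [33, 34, 35, 35, 36, 38, 41]  # hypothetical TI programmes

u_stat, p_value = mannwhitneyu(ri_summative, ti_summative,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
# A small p-value suggests the two distributions differ; here the RI
# programmes carry the higher summative load (the "RI*" in the table).
```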
Theme 1: High summative with low formative
• Low formative-to-summative ratio of 1:8 (UK, NZ, Ireland)
• Summative as ‘pedagogy of control’
• Formative weakly practised and understood
Assessment Arms Race
Consequences of high summative:
“A lot of people don’t do wider reading. You just focus on your essay question.”
“In Weeks 9 to 12 there is hardly anyone in our lectures. I'd rather use those two hours of lectures to get the assignment done.”
“It’s been non-stop assignments, and I’m now free of assignments until the exams – I’ve had to rush every piece of work I’ve done.”
The benefits of formative:
“It was really useful. We were assessed on it but we weren’t officially given a grade, but they did give us feedback on how we did.”
“It didn’t actually count so that helped quite a lot because it was just a practice and didn’t really matter what we did and we could learn from mistakes, so that was quite useful.”
“If there weren’t loads of other assessments, I’d do it.”
“It’s good to know you’re being graded because you take it more seriously.”
BUT…
“If there are no actual consequences of not doing it, most students are going to sit in the bar.”
“The lecturers do formative assessment but we don’t get any feedback on it.”
Formative is the hardest nut to crack…
Go to www.menti.com and use the code 97 97 66. Type in three reasons why students may be reluctant to invest time and energy in completing formative assessment tasks.
Yet formative is vital
1) Low-risk way of learning from feedback (Sadler 1989)
2) Fine-tune understanding of goals (Boud 2000; Nicol 2006)
3) Feedback to lecturers to adapt teaching (Hattie 2009)
4) Cycles of reflection and collaboration (Biggs 2003; Nicol & Macfarlane-Dick 2006)
5) Encourages and distributes student effort (Gibbs 2004)
How you encourage formative
Go to www.menti.com and use the code 23 86 17. Choose your top three strategies for engaging students in formative assessment… or talk to each other about successful strategies.
Case Study 1
• Systematic reduction of summative across the whole business school
• Systematic ramping up of formative
• All working to a similar script
• Whole-department shift: experimentation is less risky together
Case Study 2
• Problem: silent seminar, students not reading
• Public platform blogging
• Current academic texts
• In-class
• Threads and live discussion
• Linked to summative
Case Study 3
• Problem: lack of discrimination about sources
• Students bring 1 x book, 1 x chapter, 1 x journal article, 2 x pop culture articles to the seminar
• Justify choices to group
• Reach consensus about five best sources
• Add to reading list
Case Study 4
https://www.youtube.com/watch?v=ZVFwQzlVFy0
Case Study 5
Your task
• In groups, identify five principles for making formative work. Write them down on flipchart paper.
• How could you use or adapt this on your course?
Principles to encourage formative
1. Rebalance summative and formative
2. Whole programme approach
3. Link formative and summative
4. Authentic, public domain tasks
5. Creative, collaborative, challenging tasks
6. Relational and conversational feedback
Break time! Before more nuts and bolts…
How does the AEQ work? What will it tell me about the programme?
What will I learn about students’ views of assessment from the focus group?
Activity Two: AEQ
[TESTA cycle diagram: Programme Audit, AEQ, Student Focus Groups, Programme Team Meeting]
Why a questionnaire about assessment?
• Weaker NSS scores on assessment and feedback
• Weak NSS diagnostics?
• Quick and big data
• Quantitative and qualitative
AEQ 3.3 (2003)
• Designed to measure the ‘conditions under which assessment supports learning’
• Based on theory and evidence, plus selected CEQ scales
• Robust enough factor structure and scale coherence – does it measure what it’s meant to measure?
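The speaker notes mention Cronbach's alpha as the test of scale coherence (internal consistency). A minimal sketch of the standard computation, assuming a small invented matrix of 1–5 Likert responses; this illustrates the idea and is not the AEQ's published analysis.

```python
# Sketch: Cronbach's alpha, the usual check that items on a scale hang
# together (internal consistency). The responses below are invented 1-5
# Likert scores: rows = students, columns = items on one AEQ-style scale.

import numpy as np

responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])

k = responses.shape[1]                         # number of items
item_vars = responses.var(axis=0, ddof=1)      # variance of each item
total_var = responses.sum(axis=1).var(ddof=1)  # variance of scale totals

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")       # ~0.94 here; 0.7+ is the usual rule of thumb
```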
AEQ 4.0 (2018)
• Fill in the AEQ 4.0 from the vantage point of being a student in one of your classes
• Paper or online?
• https://educ.sphinxonline.net/v4/s/ha2fbs
Comparing Audit and AEQ data from one programme
In pairs or groups, explore programme audit and AEQ data from one programme. Does anything stack up? Are there loose ends, questions, contradictions?
Theme 2: Disconnected feedback
Structural:
“The feedback is generally focused on the module.”
“Because it’s at the end of the module, it doesn’t feed into our future work.”
“It’s difficult because your assignments are so detached from the next one you do for that subject. They don’t relate to each other.”
“I read it and think ‘Well, that’s fine but I’ve already handed it in now and got the mark. It’s too late’.”
Relational:
“It was like ‘Who’s Holly?’ It’s that relationship where you’re just a student.”
“Because they have to mark so many that our essay becomes lost in the sea that they have to mark.”
“Here they say ‘Oh yes, I don’t know who you are. Got too many to remember, don’t really care, I’ll mark you on your assignment’.”
A feedback dialogue
Irretrievable breakdown…
“Your essay lacked structure and your referencing is problematic.”
“Your classes are boring and I don’t really like you.”
A way of thinking about assessment and feedback?
Ways to be dialogic
• Conversation: who starts the dialogue?
• Cycles of reflection across modules
• Quick generic feedback
• Feedback synthesis tasks
• Peer feedback (especially on formative)
• Technology: audio, screencast and blogging
• From feedback as ‘telling’…
• … to feedback as asking questions
And human….
• I use first & second person in feedback
• A real person marked this!
• You are known
• I use plain, imaginative English
• No techno-bot-speak allowed!
• So last century!
The key to dialogue
Students to lecturers: Critical Incident Questionnaire
Stephen Brookfield’s Critical Incident Questionnaire http://bit.ly/1loUzq0
Activity Three: AEQ
[TESTA cycle diagram: Programme Audit, AEQ, Student Focus Groups, Programme Team Meeting]
Have a go at triangulating data
• Read through audit, AEQ and focus group data from one programme
• Quick abstract/bullet points of what seems to be going on
• Discuss with your group/flipchart:
a) What are the stand-out themes?
b) What jigsaw pieces fit together?
c) What unresolved issues remain?
Main pointers for focus groups
• Questions are broad themes
• Move from easy to complicated
• Sit in a circle
• It’s the discussion that matters
• Go with the flow
• But steer when off topic; direct; pass the ball
• Troubleshooting
• Ethics
Role play (5 minutes)
Getting students to attend…
• Get the support of lecturers, programme team
• Explore using student researchers
• Use vouchers
• Food
• Between 3 and 8 students for one hour
• Ethics and confidentiality
What the data looks like:
…and the intelligent transcript (texttoMP3; https://transcribe.wreally.com/)
Coding 101
Codes:
• Marker variation
• Written criteria
• Peer feedback
• Subjectivity
• Formative feedback
• Exemplars
• Multi-stage assessment
• Marking exercises
Category: Internalising standards
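For anyone coding transcripts digitally rather than on flipcharts, a toy sketch of the codes-to-category step. The codes mirror the slide; the dictionary structure and lookup are hypothetical illustrations, not a TESTA tool.

```python
# Toy sketch of qualitative coding: low-level codes grouped under a broader
# category. The codes come from the slide; the structure is illustrative only.

categories = {
    "Internalising standards": [
        "marker variation", "written criteria", "peer feedback",
        "subjectivity", "formative feedback", "exemplars",
        "multi-stage assessment", "marking exercises",
    ],
}

# Invert for lookup: which category does a coded excerpt belong to?
code_to_category = {code: cat
                    for cat, codes in categories.items()
                    for code in codes}

print(code_to_category["exemplars"])  # -> Internalising standards
```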
The tone of the case study
• Build a narrative thread
• Descriptive, non-evaluative tone
• Empathetic
• Surprises, puzzles, contradictions
• Balancing weak and strong features
• Admitting gaps, interpretation, errors
• Not prescriptive, but give a steer & create options
Theme 3: Confusion about goals and standards
• Consistently low scores on the AEQ for clear goals and standards
• Alienation from the tools
• Perceptions of marker variation, unfair standards and inconsistencies in practice
“We’ve got two tutors – one marks completely differently to the other and it’s pot luck which one you get.”
“They read the essay and then they get a general impression, then they pluck a mark from the air.”
“It’s like Russian roulette – you may shoot yourself and then get an A1.”
“They have different criteria, they build up their own criteria.”
“There are criteria, but I find them really strange. There’s ‘writing coherently, making sure the argument that you present is backed up with evidence’.”
[Diagram, after O’Donovan, Price and Rust (2008): from implicit criteria, to explicit written criteria, to co-creation and participation, to active engagement by students]
Taking action: internalising goals and standards
Lecturers:
• Regular calibration exercises
• Team discussion and dialogue
Lecturers and students:
• Rewrite/co-create criteria
• Discuss exemplars
Students:
• Enter the secret garden – peer review
• Engage in drafting processes
Shifting paradigms from this…
…to the adult equivalent of this
References
Barlow, A. and Jessop, T. (2016) ‘“You can’t write a load of rubbish”: Why blogging works as formative assessment’, Educational Developments, 17(3), pp. 12–15. SEDA.
Boud, D. and Molloy, E. (2013) ‘Rethinking models of feedback for learning: The challenge of design’, Assessment & Evaluation in Higher Education, 38(6), pp. 698–712.
Gibbs, G. and Simpson, C. (2004) ‘Conditions under which assessment supports students’ learning’, Learning and Teaching in Higher Education, 1(1), pp. 3–31.
Harland, T., McLean, A., Wass, R., Miller, E. and Sim, K. N. (2014) ‘An assessment arms race and its fallout: High-stakes grading and the case for slow scholarship’, Assessment & Evaluation in Higher Education.
Jessop, T. and Tomas, C. (2017) ‘The implications of programme assessment on student learning’, Assessment & Evaluation in Higher Education.
Jessop, T. and Maleckar, B. (2016) ‘The influence of disciplinary assessment patterns on student learning: a comparative study’, Studies in Higher Education. Published online 27 August 2014.
Jessop, T., El Hakim, Y. and Gibbs, G. (2014) ‘The whole is greater than the sum of its parts: a large-scale study of students’ learning in response to different assessment patterns’, Assessment & Evaluation in Higher Education, 39(1), pp. 73–88.
Nicol, D. (2010) ‘From monologue to dialogue: improving written feedback processes in mass higher education’, Assessment & Evaluation in Higher Education, 35(5), pp. 501–517.
O’Donovan, B., Price, M. and Rust, C. (2008) ‘Developing student understanding of assessment standards: a nested hierarchy of approaches’, Teaching in Higher Education, 13(2), pp. 205–217.
Sadler, D. R. (1989) ‘Formative assessment and the design of instructional systems’, Instructional Science, 18(2), pp. 119–144.
Tomas, C. and Jessop, T. (2018) ‘Struggling and juggling: A comparison of student assessment loads across research and teaching-intensive universities’, Assessment & Evaluation in Higher Education. Published online 18 April.
Wu, Q. and Jessop, T. (2018) ‘Formative assessment: missing in action in both research-intensive and teaching-focused universities’, Assessment & Evaluation in Higher Education. Published online 15 January.


Editor's Notes

  • #5 Disconnection: seeing the whole degree in silos – the lecturer’s ‘my module’ perspective (elephant: trunk, ears, tusks, etc.) compared to the student’s perspective of the whole huge beast. I realise that what we were saying is two per module.
  • #6 The language of ‘covering material’. Should we be surprised?
  • #9 “The TESTA report back of programme findings was by far the most significant meeting I have attended in ten years of sitting through many meetings at this university. For the first time, I felt as though I was a player on the pitch, rather than someone watching from the side-lines. We were discussing real issues.” (Senior Lecturer, Education)
  • #27 Summative as a ‘pedagogy of control’. Teach less, learn more. Assess less, learn more.
  • #43 In the UK, assessment and feedback are primary areas of disquiet in the NSS, which provides very little diagnostic information to help course teams adopt more effective assessment strategies. Every year there are routine red/green/orange charts – a visual representation accompanied by ritual humiliation of programmes, but with little sense of why or how to change! The AEQ has been used in many countries.
  • #44 Looked at in overview: student effort; intellectual challenge; a focus on understanding rather than memorising or ‘sufficing’; clarity about goals and standards; effective feedback – students read it, understand it, and use it to improve what they do next. Cronbach’s alpha – sounds like a disease, but it is a test of the internal consistency of items: do all items measure the same construct?
  • #56 Is anyone listening?
  • #58 Use flip chart.
  • #63 Codes look at small units of meaning. A student says “it takes 4 weeks to get it back, so you’ve already handed in the next one”, or “the tasks are different one after the other”, or “I never bother at the end of the course because it’s over”. All of these are partly to do with timing, and they contribute to the theme of students not using their feedback. Another reason students don’t use feedback is that they don’t trust it, so they say things like “if X marks it you’ll get a good grade, if Y a bad one”, or “if you get her on a good day”, or “it’s so subjective”.
  • #64 What does it feel like to be a student? What does it feel like to be a lecturer at the end of this? An empathetic, balanced reporting.
  • #69 Students can increase their understanding of the language of assessment through active engagement in ‘observation, imitation, dialogue and practice’ (Rust, Price, and O’Donovan 2003, 152). Dialogue, clever strategies, social practice, relationship building, relinquishing power.