1) The document discusses potential metrics for measuring teaching excellence and learning gain in higher education.
2) It reports the results of a session where participants ranked various proposed metrics. The highest ranked metrics for teaching excellence included teaching awards, qualifications/fellowships, and evidence of sustained impact on student learning.
3) For learning gain, the highest ranked metrics included student grades, surveys measuring changes in attitudes/behaviors, and data on student engagement and experiences.
4) The document concludes that while NSS student feedback appears suitable for the Teaching Excellence Framework, other highly ranked metrics could provide better measures, and contextualisation is important.
4. Conclusions
• Current TEF metrics – only the NSS appears fit for purpose (DLHE and HESA non-continuation rank much lower)
• Preferred metrics for Teaching Excellence include teaching qualifications or recognition (HEA fellowships), awards for excellence, and evidence of impact on the student learning experience
• In terms of Learning Gain, grades are useful to measure the ‘distance travelled’, whilst longitudinal surveys of attitudes and behaviours (e.g. self-efficacy), including the broader student experience (co-curricular), better reflect the ‘student journey’
• Recognition that metrics must be contextualised (e.g. student ‘stories’ and ‘anecdotes’; institutional TEF submissions)
• Interestingly, employability/employment was not ranked highly:
• does this mean that the purpose of HE is ‘learning for life’, not preparation for a specific job?
• does this reflect the complexity of comparing vocational with more academic disciplines?
• does this reflect the difficulties of benchmarking amid complex geographical, demographic, socio-economic and cultural factors (beyond the University)?
The Holy Grail for HE - teaching excellence and learning gain
Editor's Notes
Learning Gain
RAND p64-68
A mix of qualitative and quantitative methodologies (these are some current indicators, but you will have others):
Grades – problems with comparability across the sector; relating entry tariff to outputs is not always direct (not a good predictor)
PDP (personal development planning) – students reflect on their development, but it is highly individualised and not comparable even within institutions
Progress tracking – diagnostic skills audits measure only the skills aspect of LG and are often done at discipline rather than institutional level (except medical schools) – how can this be standardised? Mixed methods, e.g. ALIS (Advanced Level Information System), combine grades and test results as a predictor of students’ performance – providing a score of likely achievement rather than Learning Gain
The NSS is administered nationally (UK-wide), but it measures ‘satisfaction’ with the teaching and learning environment and does not necessarily equate to quality or excellence – not a measure of LG (taken only at the end of study)
UKES was not originally a measure of LG; questionable association between students’ scores and levels of engagement (attitudes, intrinsic motivation?); low response rates and sector engagement (~38 institutions) – 3 questions added to the NMMLGP pilot
Careers registration – asks students to gauge their readiness for a career; potential cultural, demographic and geographical impacts – would require extensive validation
Take 10 minutes to talk about this at your table – compare the ways in which learning gain and teaching excellence are measured within your own institutions (adding Others as appropriate) and then identify areas of similarity or difference.
Invite feedback on similarities or differences (e.g. Other) in each category – feedback from each group