Dimensions of Quality by Graham Gibbs: a summary of key points

  1. HEA Report – Dimensions of Quality by Graham Gibbs: a summary of key points (Neal Sumner)
  2. Aims of the report
     • To influence senior managers and academic staff on raising quality in undergraduate education
     • To provide an evidential base for what constitutes effective practice
     • 'To contribute to current debates about educational quality'
     • Author: Graham Gibbs, formerly Director of the Oxford Learning Institute. Produced by and on behalf of the HEA
  3. The '3P model' (Biggs 1993)
     • Presage – the context before students start learning (e.g. funding, selection, reputation)
     • Process – what goes on when students learn (e.g. class size, face-to-face contact hours, staff:student ratios, quality of student engagement and feedback, etc.)
     • Product – the outcomes of the learning, e.g. degree classification, employability
     (Similar to the input–environment–output model favoured in the USA)
  4. The wider context
     • Recent reports (House of Commons Select Committee 2009, QAA 2009 and HEPI 2006, 2007 ff.) have expressed concern that UK students work less hard and for fewer hours than their European counterparts
     • Does it matter if there are fewer student contact hours?
     • Are total student study hours an indicator of quality?
     • Why are the University of Oxford and the Open University nearly always at the top of NSS rankings despite their fundamentally different models?
  5. Quality – what is it?
     • A relative, not an absolute, term
     • Relative to institutional purposes, or to customer satisfaction (NSS)?
     • What gains do students make through their study (transformation)? (Harvey and Green 1993) i.e. is their education effective? How is the student enhanced?
     • Product can be measured as student performance before and after their experience of HE
  6. Some limitations of the model
     Class size may affect student performance (process) but is a result (perhaps) of funding levels (presage). But who decides how to allocate the funds – HoDs? Programme directors? Any others?
  7. Presage dimensions of quality: Funding
     Cohort and class size are predictors of student performance.
     Teacher quality is also a result of funding decisions (buy better teachers!).
     The best students go to the best-resourced institutions (Oxbridge).
     However, a series of large-scale US studies found no easy correlation between institutional levels of funding and measures of educational gain: universities with similar levels of funding can have very different outcomes, and vice versa, institutions with very similar performance in terms of graduation rates and student satisfaction can have very different levels of funding – why?
     The more effective institutions used funding 'to produce a campus ethos devoted to student success' (Gansemer-Topf et al. 2004).
  8. Student:staff ratios (SSRs)
     Low SSRs mean potentially more contact between teachers and students – a key predictor of educational gains (Pascarella and Terenzini 2005) – though low SSRs do not always result in increased contact.
     The volume, quality and timeliness of student feedback are also good predictors of educational gains; this can be related to SSR, but a low SSR does not by itself guarantee good or timely feedback.
     Small class size is also a good predictor of student performance, but a low SSR doesn't necessarily guarantee this.
     Often a low SSR is only achieved in the third year (when the NSS is administered).
  9. Quality of teaching staff
     • What is the balance between teaching delivered by tenured staff and by research students?
     • In most Russell Group and pre-1992 universities most small-group teaching is carried out by teachers other than academics (HEPI)
     • How far are these 'adjunct' staff (VLs, part-timers, research students) involved in module or curriculum development, meeting students out of class, or attending departmental meetings? US evidence indicates that use of such staff negatively impacts student performance
  10. Quality of students
     • Selection/tariffs – how important are these as predictors of student performance?
     • Evidence from both the UK and the USA suggests that A-level point scores tell us almost nothing about the quality of the educational process at university, or the degree of student engagement with their studies.
     • However, it is also clear that there are benefits to students 'being surrounded by other able students' – this raises students' expectations of themselves. In group work it is the previous educational attainment of the best student in the group that best predicts the group grade, not the average level of prior attainment or the level of the weakest student.
     • How far does the educational process engage the student in collaborative learning? This is important as it is a key predictor of educational gains.
     "Students bring more to higher education than their A-level scores. It is likely that their cultural capital, their aspirations, self-confidence and motivations all influence their performance and interact with teaching and course design variables."
  11. Process dimensions 1: class size
     Large class sizes have a negative impact on student performance and engagement – they lead to 'surface' learning – and also have a clear negative impact on NSS scores: the same teachers get higher scores when teaching smaller classes (so it's not the teachers!).
     Large classes also have a negative impact on library and other resources, on the promptness and quality of feedback and on the nature of assessments. Close contact with staff may also be more limited.
     Large classes are also associated with weak social cohesion, alienation, poor in-class behaviour, hiding library books, etc.
     But where out-of-class study is a major component of a course, enrolment may be a more crucial variable than class size. In the OU a course may have an enrolment of 10,000 but an average class size of 24 – one factor in their high NSS scores.
     Larger classes also limit the amount of time students can access specialist resources, e.g. labs, studios.
     However, there is a puzzle: in the UK overall student performance has increased in recent years at the same time as class size has increased – why is this?
  12. Process dimensions 2: class contact hours, independent study and total hours
     • The number of class contact hours per se has little to do with educational quality – it's what happens in those hours.
     • A 1997 review (Gardiner) found an average of 0.7 hours of out-of-class study for every hour of in-class contact. At Oxford the average is 11 hours of independent study for every contact hour – Oxford students work harder than students at other UK universities despite fewer contact hours. It's the nature of the class contact which counts – it tends to be up close and personal!
     • The OU has fewer contact hours but remains amongst the highest scorers in the NSS – has this anything to do with the seven principles?
  13. Process dimensions 2 – continued
     This doesn't mean class contact hours can be cut and quality ensured – it is essential to change the pedagogic model. If students read to prepare for a seminar and the seminar is then removed, they will read less and learn less.
     What matters most is the hours students put in, whether in or out of class, although hard evidence for how much independent study students actually do is hard to come by.
     Are the students who study longer hours the ones that perform best? There is no straightforward answer here (Stinebrickner and Stinebrickner 2008), because able students may do well on fewer hours than less able students who study a lot but without a clear focus – these latter are likely to become disenchanted and take an increasingly surface approach.
     If the question is rephrased – if average study hours on a degree programme were higher, would average performance be better? – the answer is more clearly yes. This is confirmed in both US and EU studies. Under the Bologna process it is estimated that total student effort (in and out of class) is between 4,500 and 5,200 hours for a three-year undergraduate programme at Bachelor level.
     Multiple studies have shown that UK students work fewer hours than their EU counterparts – degree programmes in the UK have one third of the hours of EU universities, and EU students find UK degrees 'less demanding'.
     The number of student effort hours (in and out of class) within and between programmes and institutions can vary very widely. Nor is it the case that weaker students in weaker institutions study for more hours – often it is fewer!
  14. Why are student study hours declining?
     • Where programmes rely heavily on coursework assessment, students focus their effort only on the assessed areas (there is more study where more marks are awarded from a final exam)
     • Where learning outcomes and assessment criteria are full and clear, students are likely to take this as indicating what they can safely ignore and focus solely on assessed components – research on study diaries shows students work progressively fewer hours as they get through their three-year degree; they become more strategic and focus solely on what will be assessed
     • An increase in part-time work reduces course effort and grades – in the USA students take longer to complete their courses and may study at several institutions
     • Students who live at home and travel to local universities (often in urban conurbations) have the lowest average study hours
     • Universities with low study hours are also those with the fewest resources, e.g. library space per student, and there is a direct correlation between resource allocation per student and the average response to the NSS question on the quality of learning resources
     • Many of these findings also apply to Masters programmes
  15. Teaching quality
     • Teachers with a teaching qualification have been rated more highly than those without.
     • There is little or no relationship between measures of the quantity or quality of teachers' research and measures of the quality of their teaching (Hattie and Marsh 1996): 'the common belief that teaching and research are inextricably intertwined is an enduring myth. At best teaching and research are loosely coupled.'
  16. Teaching quality – judged by students
     Student rating of teachers is often disparaged by academics (rate your professor!).
     But it can be a very reliable indicator, because students agree with each other on who the best teachers are, agree with teachers' peers, and make similar judgements on different occasions. They also can and do distinguish between teachers they like and teachers who they think are good. So student feedback is not just a popularity parade.
     However, students may have different conceptions of what constitutes 'good' teaching, and their conception may change over time. Thus an unsophisticated student might consider good a teacher who delivers all the content in lectures and then tests for memory of that content (surface), whereas a more sophisticated student might prefer someone who promotes independent learning and the development of a personal stance towards knowledge.
     How can this inform our student voice awards criteria?
  17. Research environment
     An active research environment can be one of the presage factors, but the best research departments aren't necessarily the best teaching environments – a college whose faculty is research-orientated increases student dissatisfaction (Astin 1993).
     BUT where undergraduate students are engaged with a real research project this can really benefit student learning, e.g. at MIT and Oxford, where there is a deliberate policy to engage undergraduate learning with research-active staff, and this can lead to a deep approach to learning.
     However, it is evident that RAE (and now REF) scores have no correlation with improved educational quality.
  18. Level of intellectual challenge
     There are three elements to this:
     1. Level of the curriculum – usually determined by the department unless there are external professional requirements
     2. Depth of approach to studying – although many students take a surface approach to learning, it is widely accepted that a deep approach is essential to long-term and meaningful outcomes from higher education
     3. Student engagement
  19. Surface/deep learning
     Students are not 'surface' or 'deep' learners as such – this is a context-dependent response to the perceived demands of the learning context.
     What factors can promote deep learning? Good feedback, a clear sense of the learning outcomes and goals of the course, and a clear understanding of the expected standard.
     However, none of these criteria appear on the NSS.
  20. Student engagement
     This is currently a key focus of interest in US studies – the National Survey of Student Engagement (NSSE) – with large-scale studies over three decades, so the results are important; a recent example is a study of 774 universities carried out in 2008.
     Findings indicate that the level of academic challenge, active and collaborative learning, and the extent and quality of student/faculty interaction are prominent among the important factors. These and other factors are encapsulated in the seven principles.
  21. Chickering and Gamson's Seven Principles of Good Practice in Undergraduate Education
     • Good practice encourages student–faculty contact
     • Good practice encourages cooperation among students
     • Good practice encourages active learning
     • Good practice encourages prompt feedback
     • Good practice encourages time on task
     • Good practice communicates high expectations
     • Good practice respects diverse talents and ways of learning
  22. Formative assessment and feedback
     Enhanced feedback increases student retention (Yorke 2001).
     Greater use of formative assessment increases the deep approach to learning (Gibbs and Dunbar-Goddet 2007).
     There is enormous variety in the amount of formative activity and assessment both within and between universities, from twice in three years in one institution to 130 times in another.
     The volume of written feedback varies from 3,000 words over three years to 15,000 words, and oral feedback from 12 minutes per year per student to over 10 hours per year. This variation is greater than that in SSRs, class contact hours, independent study hours or funding per student. In the era of massification it is formative activity and feedback which have taken the biggest hit.
  23. Other dimensions of quality
     Reputation – THES rankings are only marginally useful in terms of indicating educational gains. Peer ratings and the TQA (Teaching Quality Assessment) are marginally better, but still rely on reputational factors and pay little attention to process.
     Student support – difficult to measure as provision varies widely between, and sometimes within, institutions (e.g. level of personal tutor support); effectiveness can also depend on the extent of demand, and this in turn on presage factors (e.g. tariff, % of students whose first language is not English).
     The QAA relies on the external examiner system and student evaluation of teaching – this has some useful impact but is not of itself sufficient to enhance quality.
     Collecting student feedback on teaching does not by itself lead to improvement in teaching unless accompanied by other processes, such as the teacher consulting with an educational expert, especially when preceded by the expert observing the teaching and interaction with students (Weimer and Lenze 1997; Piccinin 1999).
     Institutional learning and teaching strategies have not, thus far, been able to demonstrate any perceptible impact on educational gains.
  24. Product dimensions of quality
     The percentage of students getting firsts and upper second class degrees has increased markedly over time, at the same time as funding per student, the amount of feedback and close contact with teachers have declined and SSRs and class sizes have risen. How to explain this counter-intuitive result? A major factor may be a decline in the robustness of the external examiner system.
     We now inhabit a bizarre world where Maths graduates are three times more likely to get a first than History graduates. It is argued that comparing degree standards is no longer meaningful (Brown 2010) and that degree classifications are not a sound basis for indicating the quality of the educational outcomes of a UK university.
  25. Student retention and persistence
     Oxford has a 90% retention rate and the OU about 50%, yet both score highly in the NSS – age seems to be a key indicator here: the broader the age and ability range of students, the lower the retention rate.
     In the US students increasingly plan to drop out of, and move between, institutions as they complete their degrees. Other factors affecting UK retention rates include living on campus and working part-time, but no clear pattern has emerged so far in the research.
     In the US, and to some extent at the OU, research into student readiness for learning before they start their programmes identifies those who are likely to need the greatest support, so that scarce resources can be better targeted.
     UK studies confirm that collaborative and interactive learning, together with close contact with teachers, increases retention rates, especially among less able students.
     However, it is very difficult to predict levels of student motivation, either extrinsic or intrinsic.
  26. Employability and graduate destinations
     The employability of graduates is not a real measure of educational gains or quality, as there are too many variables, such as location, degree type, qualifications and age on entry, institutional reputation, and the age and social class of the student body, etc.
  27. Conclusions...
     • Those institutions that do best in the NSS are those which have evolved institutional pedagogies, however radically different they may be – take for example the OU's commitment to openness and Oxford's to academic excellence. The focus in the UK has been on individual teachers – the National Teaching Fellowship Scheme – rather than on teaching teams or departments; other countries have taken a different approach.
     • Do different subjects require different pedagogies, and therefore different measures of quality? Both of these would seem to be true, even though there may be broad institutional guidelines and general principles, as revealed in this report.
  28. Additional quality factors
     • Is teaching valued and rewarded?
     • Do teachers talk to each other about teaching and its improvement?
     • Is innovation supported and funded?
     • Is educational effectiveness evaluated?
     • How good is departmental leadership?
     • Is there a departmental community of practice?
