The Reliability Programme: Leading the way to better tests and assessments
 

This is the presentation from "The Reliability Programme: Leading the way to better tests and assessments" event.


  • Because strand 1 is technically very complicated, we wanted to appoint a heavyweight Technical Advisory Group. And we’re proud to have achieved that! We’ve got a team of five, including three professors, representing expertise from awarding body research teams, academia, and educational testing agencies – and only one of them is English. And we’ve got both critics and defenders of the system on board too. Paul Black, in particular, has been one of the most vociferous critics of the system, specifically challenging assessment agencies for a lack of openness and transparency concerning error. Because we have inevitably been working very closely with awarding bodies, this Technical Group had a most important role to play in vouching for the independence of the programme and the trustworthiness of the results.
  • How reliable are results from national assessments, exams and qualifications in England?
  • Trying to find answers to questions such as: How do we conceptualise reliability in different contexts? How do we interpret our findings – what do the results from strand 1 mean and how can we make sense of them (e.g. classification accuracy 84%; Cronbach’s alpha 0.78)? How do we communicate our findings?
  • Finding answers to questions like: What do the public know about reliability? How do they feel about reliability?
  • So, how unreliable is educational assessment? This is quite a controversial area, as it happens, and there wasn’t a great deal of evidence to be found. At least, not a lot of evidence that’s user-friendly enough to make good sense of. But one of England’s foremost professors of educational assessment has concluded that: [READ] He and two other professors provided evidence to the Select Committee in 2007 [READ] That’s quite high. Are they right? We’ll come back to that later.
  • Several empirical studies to investigate the reliabilities of results from NCTs, GCSEs, A levels, VQs
  • For several years NFER have been asking 11 year olds to pretest items that are to be used in the following year’s KS2 test, before they take the current year’s test. The data generated allow them to compare pretest and live test results over five years, 2004-2008. Here is a summary of the results. Accuracy: the degree of agreement between classifications based on observed scores and true scores on a test. Consistency: the degree of agreement between classifications based on two sets of observed scores from replications of the same measurement procedure. Misclassification: the degree to which observed scores and true scores on a test classify examinees into different categories – so 88% accuracy = 12% misclassification. We will come back to that later. (A minimal simulation of these two classification indices follows below.)
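    The two indices above can be illustrated with a minimal classical-test-theory simulation. Everything below (score distribution, reliability, cut score, sample size) is an illustrative assumption for exposition, not NFER’s data or method:

```python
import numpy as np

# Minimal CTT simulation of classification accuracy (observed vs true
# classification) and classification consistency (two observed replications).
# All numbers here are illustrative assumptions.
rng = np.random.default_rng(0)
n = 100_000
sd_total, reliability, cut = 9.1, 0.88, 25.0   # assumed, not NFER's values

true = rng.normal(28.5, sd_total * np.sqrt(reliability), n)  # true scores
sem = sd_total * np.sqrt(1 - reliability)                    # error SD
obs1 = true + rng.normal(0.0, sem, n)   # first "sitting"
obs2 = true + rng.normal(0.0, sem, n)   # independent replication

accuracy = np.mean((obs1 >= cut) == (true >= cut))
consistency = np.mean((obs1 >= cut) == (obs2 >= cut))
print(f"accuracy {accuracy:.1%}, consistency {consistency:.1%}")
```

    Under assumptions like these, accuracy comes out higher than consistency – the same pattern as the reported figures (accuracy 83-88% against consistency 72-79%) – because one observed score is compared against the fixed true score in the first case, but against a second noisy score in the second.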
  • English: the 2008 English reading pre-test
  • 190 GCSE components – mainly objective tests and short answer questions
  • 97 GCE components – mainly objective tests and short answer questions
  • Assessors and internal verifiers for three workplace-based NVQs. Kappa: a measure of agreement between two ratings of the same event that takes account of the probability of agreement by chance. 0.61-0.80 indicates substantial agreement; 0.81-1.00 almost perfect agreement.
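    For concreteness, here is a minimal sketch of how Cohen’s kappa is computed from two sets of decisions. The assessor/verifier sequences are invented toy data, not NVQ records:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two sets of ratings."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # expected agreement if the two raters' decisions were independent
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a | freq_b) / n ** 2
    return (observed - expected) / (1 - expected)

# Toy competent ("C") / not-yet-competent ("N") decisions, not real NVQ data.
assessor = ["C", "C", "N", "C", "C", "N", "C", "C"]
verifier = ["C", "C", "N", "C", "N", "N", "C", "C"]
print(round(cohens_kappa(assessor, verifier), 3))  # 0.714
```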
  • Following the NFER work mentioned earlier on reliability in NC tests, some further analyses have been carried out to see what figures emerge for internal consistency reliability (Cronbach’s alpha) and classification accuracy (the degree of agreement between classifications based on observed scores and true scores on a test) for the 2009 and 2010 live tests. Alpha values are relatively high and similar across the two years for all subjects. The classification accuracy figures, estimated using two different methods, are mostly around 87% for science, 85% for English and 90% for maths – so misclassification rates of about 13%, 15% and 10% respectively.
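    For reference, Cronbach’s alpha as used throughout these analyses can be computed from an item-score matrix as follows; the matrix here is a toy example, not test data:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: (examinees x items) matrix of item-level scores."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    sum_item_var = x.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = x.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - sum_item_var / total_var)

# Toy 5-examinee x 4-item score matrix, illustrative only.
scores = [[2, 1, 2, 1],
          [1, 1, 1, 0],
          [0, 1, 0, 0],
          [2, 2, 2, 1],
          [1, 0, 1, 1]]
print(round(cronbach_alpha(scores), 3))
```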
  • External research projects: (1) Estimating and interpreting reliability, based on CTT – describes the measurement process and different forms of reliability; (4) Reporting of results and measurement uncertainties – an international report on how results and associated errors are reported; (5) Representing and reporting of assessment results and measurement uncertainties in some USA high-stakes tests; (6) Reliability of teacher assessment. Internal research projects: Reliability of composite scores, based on CTT, G-theory and IRT, at qualification level.
  • Example of error reporting from North Carolina: confidence limits.
  • Issues related to reliability discussed.
  • Ofqual and NFER – discussion group at 2009 AEA Europe conference in Malta to discuss issues with reporting assessment results and reliability information. Summary of views expressed by participants.
  • Participants showed varying degrees of understanding of, and tolerance towards, different kinds of error.
  • Ipsos MORI 2009 survey. Teachers: 80% thought students got the right grade.
  • Ofqual quantitative online survey
  • Remember the public confidence objective: “The public confidence objective is to promote public confidence in regulated qualifications and regulated assessment arrangements.” On to the media reaction to some of our work. You might want to reflect on how that feeds into public confidence.

The Reliability Programme: Leading the way to better tests and assessments – Presentation Transcript

  • Welcome. Reliability Programme: Leading the way to better testing and assessments, 22 March 2011. Event Chair: Dame Sandra Burslem, DBE, Ofqual's Deputy Chair
  • Welcome and Setting the Scene Glenys Stacey, Ofqual Chief Executive
  • Ofqual’s Reliability Programme Dennis Opposs
    Background
    • Reliability: quantifying the luck of the draw
    • Reliability work in England has generally been
      • Isolated
      • Partial
      • Under-theorised
      • Under-reported
      • Misunderstood
    • Ofqual’s Reliability Programme aimed to improve the situation.
    Aims
    • To gather evidence for Ofqual to develop regulatory policy on reliability of results from national tests, examinations and qualifications
    Programme structure
    • Strand 1: Generating evidence of reliability
    • Strand 2: Interpreting and communicating evidence of reliability
    • Strand 3: Developing reliability policy
    • Strand 3a: Exploring public understanding of reliability
    • Strand 3b: Developing Ofqual policy on reliability
  • Our Technical Advisory Group Paul Black Anton Beguin Alastair Pollitt Gordon Stanley Jo-Anne Baird
  • Strand 1 – Generating evidence
    • Synthesising pre-existing evidence
    • Literature reviews
    • Generating new evidence
    • Monitoring existing practices
    • Experimental studies
  • Strand 2 – Interpreting and communicating evidence
    • How do we conceptualise reliability?
    • How do we interpret our findings?
    • How do we communicate our findings?
  • Strand 3 – Developing policy
    • Exploring public understanding of, and attitudes towards, assessment error
    • Stimulating national debate on the significance of the reliability evidence generated by the programme
    • Developing Ofqual’s policy on reliability
  • Student misclassification
    • Controversial area - earlier conclusions include:
    • “… it is likely that the proportion of students awarded a level higher or lower than they should be because of the unreliability of the tests is at least 30% at key stage 2”
    • Wiliam, D. (2001). Level best? London: ATL.
    • “Professors Black, Gardner and Wiliam argued […] that up to 30% of candidates in any public examination in the UK will receive the wrong level or grade”
    • House of Commons Children, Schools and Families Committee. (2008a). Testing and Assessment. Third Report of Session 2007–08. Volume I. HC 169-I. London: TSO.
    • Is this accurate?
  • Strand 1 – Generating evidence (1)
    • National Curriculum tests:
      • The reliabilities of KS2 science pre-tests and the stability of consistency over time
      • The reliabilities of the 2008 KS2 English reading pre-test
    • General qualifications:
      • The reliabilities of GCSE components/units
      • The reliability of GCE units
    • Vocational qualifications
  • Strand 1 – Generating evidence (2)
    • KS2 science pre-tests
    • The reliabilities of KS2 Science tests over five years
    • Values of internal consistency reliability (alpha) generally over 0.85
    • Classification accuracy (pre-tests) 83%-88%
    • Classification consistency (between pre-tests and live tests) 72%-79%
    • Reliability indices relatively stable over time
    • Relatively high reliability compared with similar tests
  • Strand 1 – Generating evidence (3)
    • A KS2 English reading pre-test
    • Data collected in 2007 during pre-testing 2008 KS2 English reading test
    • The test contained 34 items with a total of 50 marks (mean 28.5, standard deviation 9.1; 1,387 pupils)
    • Internal consistency reliability 0.88
    • Standard error of measurement 3.1 (see the worked check after this list)
    • Classification accuracy (IRT) 83%
    • Classification consistency (IRT) 76%
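    As a quick check, these figures cohere under the standard classical-test-theory relation (a textbook formula, not something specific to this study): SEM = SD × √(1 − reliability) = 9.1 × √(1 − 0.88) ≈ 3.15, agreeing with the reported 3.1 up to rounding and the choice of estimation method.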
  • Strand 1 – Generating evidence (4)
    • Cronbach’s alpha for GCSE components/units
  • Strand 1 – Generating evidence (5)
    • Cronbach’s alpha for GCE units
  • Strand 1 – Generating evidence (6)
    • Assessor agreement rates for a workplace-based vocational qualification
    Qualification   Number of decisions   Agreement rate (%)   Cohen’s Kappa
    Q1              2144                  96.1                 0.763
    Q2              479                   100                  1.000
    Q3              3070                  99.1                 0.971
  • Strand 1 - Generating evidence (7)
    • The 2009 and 2010 live tests (populations)
    Subject            Cronbach’s alpha   Classification accuracy (%)
                                          Method 1   Method 2
    English 2010       0.919              85         85
    English 2009       0.910              87         85
    Mathematics 2010   0.964              91         90
    Mathematics 2009   0.968              90         90
    Science 2010       0.926              87         86
    Science 2009       0.928              88         87
  • Strand 2 – Interpreting and communicating evidence (1)
    • External research projects
      • Estimating and interpreting reliability, based on CTT
      • Estimating and interpreting reliability based on CTT and G-theory
      • Quantifying and interpreting GCSE and GCE component reliability based on G-theory
      • Reporting of results and measurement uncertainties
      • Representing and reporting of assessment results and measurement uncertainties in some USA tests
      • Reliability of teacher assessment
    • Internal research projects
      • Reliability of composite scores: based on CTT, G-theory and IRT, qualification level
  • Strand 2 – Interpreting and communicating evidence (2)
    • Reporting results and associated errors (students and parents)
  • Strand 2 – Interpreting and communicating evidence (3)
    • Technical seminars
    • Factors that affect the reliability of results from assessments
    • Definition and meaning of different forms of reliability
    • Statistical methods that are used to produce reliability estimates
    • Representing and reporting assessment results and reliability estimates / measurement errors
    • Improving reliability and implications
    • Disseminating reliability statistics
    • Tension in managing public confidence whilst exploring and improving reliability
    • Operational issues for awarding bodies in producing reliability information
    • Challenges posed by the reliability programme in vocational qualifications
  • Strand 2 – Interpreting and communicating evidence (4)
    • International perspective on reliability
    • Reliability studies should be built into the assessment quality assurance process
    • Information on reliability (primary and derived indices) should be in the public domain
    • The introduction of information about reliability (misclassification / measurement error) should be managed carefully
    • Educating the public to understand the concept of reliability (measurement error) is seen as playing an important part in alleviating misinterpretation by the media
    • The reporting of results and measurement error can be complex as results are normally used by multiple users
    • Primary reliability indices and classification indices should be reported at population level
    • Standard error of measurement should be reported at individual test-taker level
  • Strand 3a – Public perceptions of reliability (1)
    • External research projects
      • Ipsos MORI survey
      • Ipsos MORI workshops
      • AQA focus groups
    • Internal research project
      • Online questionnaire survey
    • Investigating
      • Understanding of the assessment process
      • Understanding of factors affecting performance on exams
      • Understanding of factors introducing uncertainty in exam results
      • Distinction between inevitable errors and preventable errors
      • Tolerance for errors in results
      • Disseminating reliability information
  • Strand 3a – Public perceptions of reliability (2)
    • Views on accuracy of GCSE grades
  • Strand 3a – Public perceptions of reliability (3) Views on national exams system
  • Strand 3b – Developing Ofqual reliability policy (1)
    • Ofqual reliability policy based on
      • Evaluating findings from this programme
      • Evaluating findings from other reliability related studies
      • Reviewing current practices adopted elsewhere
  • Ofqual Board recommendations
    • Continue work on reliability as a contribution to improving the quality assurance of qualifications, examinations and tests
    • Encourage awarding organisations to generate and publish reliability data
    • Continue to improve public and professional understanding of reliability and increase public confidence
  • Next steps
    • Publishing reliability compendium later this year
    • Reliability work becomes “business as usual”
    • Creation of a further policy
  • Today
    • Presentations from the Technical Advisory Group and experts in teaching, assessment research and communications
    • Question and answer session
    • Tell us your opinions or email them to
    • [email_address]
  • Findings from the Reliability Research Professor Jo-Anne Baird, Technical Advisory Group Member
  • Refreshment Break
  • A view from the assessment community. Paul E. Newton, Director, Cambridge Assessment Network Division. Presentation to the Ofqual event “The reliability programme: leading the way to better testing and assessments”, 22 March 2011.
    • We need to talk about error
  • Talking about error
  • The Telegraph (front page)
    • The professional justification
      • what the profession needs to accomplish through talking about error
  • The bad old days
    • Boards seem to have strong objections to revealing their mysteries to ‘outsiders’ […] There have undoubtedly been cases of inquiries […] where publication would have been in the interests of education, and would have helped to prevent the spread of ‘horror-stories’ about such things as lack of equivalence which is an inevitable concomitant of the present cloak of secrecy.
    • Wiseman, S. (1961). The efficiency of examinations. In S. Wiseman (Ed.). Examinations in education. Manchester: MUP.
  • Promulgating the myth
    • However, any level of error has to be unacceptable – even just one candidate getting the wrong grade is entirely unacceptable for both the individual student and the system.
    • QCA. (2003). A level of preparation. TES Insert. The TES , 4 April.
    • The technical justification
      • why users and stakeholders need to know about error
  • Using knowledge of error
    • Students and teachers
      • maybe you’re better, or worse, than your grades suggest
    • Employers and selectors
      • maybe such fine distinctions shouldn’t be drawn
      • maybe other information should be taken into account
    • Parents
      • maybe that difference in value added is insignificant
      • maybe inferences like that should not be drawn
    • Awarding bodies
      • maybe that examination (structure) is insufficiently robust
    • Policy makers
      • maybe that proposed use of results is illegitimate
      • maybe that policy change will compromise accuracy
  • Talking about error
    • the commitment to greater openness and transparency about error is nothing new
    • but there is still a long way to go
  • The 20-point scale (1969-72)
    • The presentation of results on
    • (i) the broadsheet will be by a single number denoting a scale point for each subject taken by each candidate, accompanied by a statement on the range of uncertainty; and
    • (ii) the candidate's certificate as a range of scale points (eg 13-17, corresponding to 15 on the broadsheet and indicating a range of uncertainty of plus or minus 2 scale points).
    • Schools Council (1971). General Certificate of Education. Introduction of a new A-level grading scheme . London: Schools Council.
  • The 20-point scale (1969-72)
    • The following rubric is proposed, to be prominently displayed on both broadsheets and certificates:
      • “Attention is drawn to the uncertainty inherent in any examination. In terms of the scale on which the above results are recorded, users should consider that a candidate's true level of attainment in each subject, while possibly represented by a scale point one or two higher or lower, is more likely to be represented by the scale point awarded than by any other scale point [...].”
      • Report by the Joint Working Party on A-level comparability to the Second Examinations Committee of the Schools Council on grading at A-level in GCE examinations. (1971)
  • 20-point scale (1983-86)
    • It was proposed that the new scheme should have the following characteristics:
    • [...] (d) results should be accompanied by a statement of the possible margin of error.
    • JMB (1983). Problems of the GCE Advanced level grading scheme . Manchester: Joint Matriculation Board.
  • Talking about error
    • there is disagreement within the profession over the concept of error
    • but, at least, we are beginning to make these differences of opinion more explicit
  • Measuring attainment
  • Judging performance
    • I argue that there is a strong case for saying that it is more sensible to accept that exams are just about fair competition – which means your performance must be reliably turned into a score but you accept as the luck of the draw things like the question paper being tough for you or having hay fever on the day, etc. Moreover, I think if you do that you can design things like regulatory work on reliability so that they reflect the priorities of the public. This was behind my first question to you about your presentation yesterday – do you really think Joe Public is interested in Cell 6? That’s an empirical question of course; I think the answer is no, but I’d love to find out for sure.
    • Mike Cresswell, 20 October 2009, personal communication
  • Uses of reliability information
    • Evaluation and improvement
      • highly technical (detailed & specific & idiosyncratic)
      • obscure (typically not published)
      • primary users = awarding bodies
    • Accountability
      • technical (but how detailed & generic & uniform?)
      • translatable (published but not necessarily disseminated)
      • primary users = regulator & analysts
    • Education
      • non-technical (uncomplicated & generic & uniform)
      • translated (widely disseminated)
      • primary users = members of the public
  • For education
    • How can we achieve greater openness and transparency?
  • The Sun
  • For education
    • use analogy , wherever possible
    • use commonsense , not technical, terms
    • convey misrepresentation , not variation
    • rely on heuristics , not statistics
    • […] results on a six or seven point grading scale are accurate to about one grade either side of that awarded.
    • Schools Council. (1980). Focus on examinations . Pamphlet 5. London: Schools Council.
    • The importance of assessment results in today’s education system...
    • and communicating uncertainty in what they can tell us
    • Warwick Mansell
    • The emphasis being placed on test results
  • [Diagram: “One pupil’s exam results: national implications”. A child takes exams; the exams are marked and graded; results flow from teacher and department into school results; the head teacher makes a judgement at school level; Ofsted and the local authority/federation/academy chain judge at local level; civil servants and ministers, via education initiatives, judge at national level – feeding the debate over whether state education is successful, and over national productivity.]
    • Types of “error”
    • Error:
    • “the difference between an approximate result and the true determination”.
    • Communication of measurement error:
    • It can be, and is, done
    • “The information in these tables only provides part of the picture of each school’s and its pupils’ achievements. Schools change from year to year and their future results may differ from those achieved by current pupils. The tables should be considered alongside other important sources of information such as Ofsted reports and school prospectuses.”
    • DfE, school performance tables website, 2011
    • What can go wrong if measurement uncertainty is not understood and communicated
  •  
    • Is the public ready to accept the concept of measurement error?
    • Sats results “wrong for thousands of pupils”
    • Daily Telegraph, 13/11/09
    • “New Sats fiasco as one in three pupils ‘will get wrong exam results’”
    • Daily Mail, 31/1/09
    • Talking about reliability at the macro, and at the micro, level
  • Was I reliably informed...? ... a former principal ponders John Guy Formerly Principal, Farnborough Sixth Form College
  • 3250 students; mostly A levels
    • 3312 applications for 1750 places in September 2010
    • 61 AS courses
    • Biggest? AS Mathematics, AS Psychology, AS English, AS Media
    • Smallest? AS Italian (6)
  •  
  • Reliability refers to the consistency of outcomes that would be observed from an assessment process were it to be repeated. High reliability means that broadly the same outcomes would arise. A range of factors in the assessment process can introduce unreliability into assessment results: (un)reliability concerns the impact of the particular details that do happen to vary from one assessment to the next, for whatever reason. So reliability was important to the College... and we paid over £800,000 a year to get it.
  • Today’s session: ponder aloud on reliability, the causes of unreliability and its impact upon College students
    • A level History
    • A level Business Studies
    • A level Art
    • O level Athletics
  • Hasna Benhassi Tatyana Tomashova
  • A level History
    • 150-200 students taking A2 annually
    • Previous achievements and value-added indicators suggest an improving cohort
    • Stable cohort of experienced and inspiring teachers, led by the Chair of the History Teaching Association
    • Many experienced A level examiners
    • Could be employed by Higher Education – and would be awarding degrees...
  • History A level results (Awarding Body)
    Completers: 145, 140, 166, 179, 195
    [Chart of grade outcomes by year not reproduced.]
  • [Diagram: mapping raw scores to the UMS scale. Raw marks (out of 60) of 42, 38, 34, 30 and 27 for grades A-E map to UMS points 80, 70, 60, 50 and 40, with A* at 90 UMS on a 0-100 scale. A marking tolerance of +/- 5% on raw marks is amplified to roughly +/- 8% on the UMS scale.]
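    The amplification follows from the slide’s own numbers: between the E and A boundaries, 15 raw marks (27 to 42) span 40 UMS points, so one raw mark is worth about 2.7 UMS; a tolerance of +/- 5% of 60 raw marks (+/- 3 marks) therefore becomes roughly +/- 8 UMS points, i.e. about +/- 8% of the 100-point UMS scale.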
  • History A level results (Awarding Body). Completers: 145, 140, 166, 179, 195. Reliability refers to the consistency of outcomes that would be observed from an assessment process were it to be repeated.
  • [Diagram: History raw-mark grade boundaries. On the 0-60 raw-mark scale, A sits at 42 (70%) and E at 27 (45%). The A-E range should be about 40% of the mark scale; here it is only 25%. A narrow A-E range produces unreliability.]
  • [Diagram: Business Studies 2011 A2 raw marks, from a web search. On the 0-60 raw-mark scale the boundaries are A 36 (60%), B 33, C 30, D 27 and E 25 (42%) – an 18% A-E range! Raw marks over 42 are worth nothing; raw marks between 27 and 42 are worth about 3% each; raw marks between 23 and 27 are worth about 5% each; raw marks from 0 to 23 are worth about 1.5% each. Candidate 1 scores 4 raw marks on Q4 for a total of 27 (50%); Candidate 2 scores 0 on Q4 for a total of 23 (30%). Is this a reliable or valid assessment instrument?]
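    The grade-range effect can be made concrete with a piecewise-linear sketch of the raw-to-UMS conversion the slide describes. The anchor points are read off the slide; this is an illustrative reconstruction, not an awarding body’s published algorithm:

```python
import numpy as np

# Anchor points read off the slide: raw 0, 23, 27, 42 map to UMS 0, 30, 50, 90,
# with raw marks above 42 adding nothing (per the slide's wording).
RAW_POINTS = [0, 23, 27, 42, 60]
UMS_POINTS = [0, 30, 50, 90, 90]

def raw_to_ums(raw):
    """Piecewise-linear raw-to-UMS conversion."""
    return float(np.interp(raw, RAW_POINTS, UMS_POINTS))

print(raw_to_ums(27))  # 50.0 -> Candidate 1
print(raw_to_ums(23))  # 30.0 -> Candidate 2
# On the steep 23-27 segment one raw mark moves the result by ~5 UMS,
# versus ~1.3 UMS on the 0-23 segment: a narrow grade range makes small
# marking differences loom large.
```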
  • [Cartoon: “The Regulated Assessment (wobbly) Ruler?” – questions 1, 2, 3 and 5, 6, 7, 8 marked on a wobbly ruler (question 4: 0!). When you measure things... it’s a good idea to use a reliable ruler! Sometimes I think the College ruler is more reliable!]
  • AS level Art 2007 – 495 candidates. Reliability refers to the consistency of outcomes that would be observed from an assessment process were it to be repeated.
                                 A     B     C     D     E
    FSFC 2007                    14.1  37.5  72.7  93.1  97.1
    Joint Council figure 2007    21    42    66    83    94
    FSFC 2006                    23.2  55.4  87.3  96.3  98.3
    Joint Council figure 2006    22    44    67    84    94
    FSFC 2005                    20.7  48.3  82.2  97.8  99.3
    Joint Council figure 2005    21    42    65    82    92
    FSFC 2004                    20.4  45.2  78.3  94.4  99
    Joint Council figure 2004    22.2  42.5  63.8  81.4  92.4
    FSFC 2003                    22.8  46.7  68.7  85.1  95.9
    Joint Council figure 2003    22.2  42.2  63.5  80.6  91.5
  • 2007 – a special year
      • New specification – 4 units
      • Awarding Body invited teachers to meeting to discuss grading
      • New boundaries for criterion judgements were proposed, with the grade A boundary set lower than in previous years.
      • Attendance at the Awarding Body meetings was not compulsory.
    New boundaries (used by College):    A 62, B 54, C 46, D 38, E 30
    Adjusted boundaries (summer 2007):   A 69, B 60, C 51, D 42, E 33
    Criterion judgements, no disagreements at moderation; work praised (again) for consistent internal assessment. The adjusted boundaries were close to the historic grade boundaries which the awarding body had sought to change.
  • ANALYSIS
    Value added scores: 2005 +0.4; 2006 +0.4; 2007 -0.3; 2008 +0.4
    Chi-squared test          A       B       C      D      E      U
    2003-2006 distribution    21.8%   27.1%   30.1%  14.3%  4.75%  2%
    2007 expected             107.9   134.1   149    70.8   23.3   9.9
    2007 actual               70      116     174    101    20     14
    Chi-squared component     13.32   2.45    4.2    12.9   0.46   1.7
    Sum = 35.02; tables give 18.47 at the 0.1% significance level.
    Assuming a cohort of similar ability, as agreed with the moderator, the chance of this change occurring randomly is infinitesimal.
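    The chi-squared arithmetic on the slide can be re-derived directly; all figures below are taken from the slide:

```python
# Re-computation of the slide's chi-squared figures (all inputs from the slide).
expected = [107.9, 134.1, 149.0, 70.8, 23.3, 9.9]  # 2003-2006 rates applied to 2007
actual = [70, 116, 174, 101, 20, 14]               # 2007 grade counts A, B, C, D, E, U

components = [(a - e) ** 2 / e for a, e in zip(actual, expected)]
print([round(c, 2) for c in components])  # [13.31, 2.44, 4.19, 12.88, 0.47, 1.7]
print(round(sum(components), 2))          # 35.0, well above the 18.47 critical
                                          # value quoted on the slide
```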
  • Was this a reliable assessment?
      • College immediately contacted Board and was told to appeal
      • The College appealed, sending a copy of the letter to Ofqual and the Chief Executive
      • Appeal heard by three members who were interested only in process
      • Appeal was rejected
      • No doubt the process was followed assiduously
      • However, the process was flawed
  • Conclusions
    • Large cohorts from open access colleges are representative of the whole population
    • Large cohorts of students therefore provide an opportunity for an additional check on processes
    • Statistical analysis of the entire cohort will hide flaws in the assessment process
    • An error is associated with every measurement, but some measurements are error (mistake) ridden – and unfair
    • Is error (mistake) designed into the assessment instrument? Awarding bodies are not keen to admit it!
    • Reliability refers to the consistency of outcomes that would be observed from an assessment process were it to be repeated.
  • Questions and Answers to the Panel of Speakers Chair: Glenys Stacey, Ofqual Chief Executive
  • Ofqual’s Reliability Programme Closing remarks Dennis Opposs
  • Ofqual Board recommendations
    • Recommendation 1:
    • Continue work on reliability as a contribution to improving the quality assurance of qualifications, examinations and tests.
    • Work in the areas of teacher assessment, workplace-based assessment and construct validity of assessment would be of particular interest and importance.
    • The scope of the work possible will clearly be limited by the resources available.
  • Ofqual Board recommendations
    • Recommendation 2:
    • Encourage Awarding Organisations to generate and publish reliability data.
    • We need to use impact assessments to help decide what is appropriate.
    • The first progress is likely to involve GCSEs and A levels, where the work has progressed furthest.
    • In due course we might make some of this a regulatory requirement for Awarding Organisations.
  • Ofqual Board recommendations
    • Recommendation 3:
    • Continue to improve public and professional understanding of reliability and increase public confidence in the examination system by working with the Awarding Organisations and others.
  • Next steps
    • Publishing reliability compendium later this year
    • Reliability work becomes “business as usual”
    • Creation of a further policy
  • Today Tell us your opinions or email them to [email_address]
  • Thank you for attending Networking Lunch