School-Based Assessment



School-Based Assessment: Implementation Issues and Practices

Grace Grima
Principal Research and Development Officer
MATSEC Examinations Board
University of Malta

Paper presented at the 21st Annual AEAA Conference: Assessment and Certification in a Changing Educational, Economic & Social Context, Cape Town, South Africa, 25th-30th August 2003
Abstract

The traditional system of assessment no longer satisfies the educational and social needs of the third millennium. In the past few decades, many countries have made profound reforms in their assessment systems. Several educational systems have in turn introduced school-based assessment as part of, or instead of, external assessment in their certification. While examination bodies acknowledge the immense potential of school-based assessment in terms of validity and flexibility, at the same time they have to guard against or deal with difficulties related to reliability, quality control and quality assurance.

In the debate on school-based assessment, the issue of ‘why’ has been widely written about and there is general agreement on the principles of validity of this form of assessment. In this paper, I focus on the ‘how’, first by examining critical issues related to reliability, quality control and quality assurance in school-based assessment that ensure that standards are maintained, and then by giving an overview of five case studies from regions around the world that have successfully implemented school-based assessment. Their experiences confirm that, when carefully implemented, school-based assessment need not compromise the standards of certification.
Introduction

Traditionally, examination systems have given much importance to the issue of reliability and the comparability of results. Broadfoot (1995) explains that:

   In a high stakes environment in which, to a very significant extent, test results determined life chances, it was inevitable that there should be an overwhelming emphasis on reliability so that the assessment might seem to operate fairly and consistently.

Wood (1991) justifies this emphasis in the following way:

   The examination bodies are running an examination and therefore have to care about reliability. Tedious it may be to have to observe this dictum, but it is simply no use having the most wonderful examination in the world if it cannot be marked and graded reliably.

At the same time, however, as Broadfoot (1995) explains, less importance has been given to the issue of validity:

   The question of validity – whether the test does measure what it is intended to measure – has arguably been subordinated to the overwhelming need for comparability of results.

However, the situation has started to change over the last two decades, mainly because, as Gipps (1999) explains:

   the focus has shifted towards a broader assessment of learning, enhancement of learning for the individual, engagement with the student during assessment, and involvement of teachers in the assessment process.

The rise of school-based assessment (also known as internal assessment, coursework and continuous assessment) is the result of this change. Izard (2001) as well as Raivoce and Pongi (2001) explain that school-based assessment (SBA) is often perceived as the process put in place to collect evidence of what students have achieved, especially in important learning outcomes that do not easily lend themselves to pen-and-paper tests.
Daugherty (1994) clarifies that this type of assessment has been recommended:

   …because of the gains in the validity which can be expected when students’ performance on assessed tasks can be judged in a greater range of contexts and more frequently than is possible within the constraints of time-limited, written examinations.

However, as Raivoce and Pongi (2001) suggest, the validity of SBA depends to a large extent on the various assessment tasks students are required to perform.
It is important to point out that although conceptually distinct, both external and school-based assessments have their strengths. External assessment is reliable and is perceived as rigorous because candidates take the same assessment administered under the same conditions. SBA, if carefully planned and implemented, may be stronger in terms of validity and flexibility. The major difference between the two is described by Haynes (2001) in the following way: while, in external assessment, the awarding body is in direct control of the mark or grade awarded to each candidate through the individuals it appoints to make the assessment decisions, it potentially has less control over school-based assessment. Several scholars in the field of educational assessment (e.g. Harlen, 1994; Brookhart, 2001) advocate the need for a combination of both forms of assessment, and in fact several sensible educational models make effective use of both. Haynes (2001) confirms that it has become a widely held opinion that a mix of external and internal assessments provides a comprehensive approach to the assessment of educational achievement.

In the debate on school-based assessment, the issue of ‘why’ has been widely written about and there is general agreement on the principles of validity of this form of assessment. In this paper, I focus on the ‘how’, first by examining critical issues related to reliability, quality control and quality assurance in school-based assessment that ensure that standards are maintained, and then by giving an overview of five case studies from regions around the world that have successfully implemented school-based assessment. Their experiences confirm that, when carefully implemented, school-based assessment need not compromise the standards of certification.

Critical Issues in School-Based Assessment

In this section, I give an overview of several critical issues that form part of effective SBA systems.
In his writing on judging the quality of assessment, Burton (1992) provides the following five rules of thumb for evaluating different approaches:

• The assessment should be appropriate to what is being assessed.
• The assessment should enable the learner to demonstrate positive achievement and reflect the learner’s strengths.
• The criteria for successful performance should be clear to all concerned.
• The assessment should be appropriate to all persons being assessed.
• The style of assessment should blend with the learning pattern so that it contributes to it.

These rules may be applied in the planning stage of school-based assessment. Portal (2003) gives some sage advice on the importance of this phase. He cautions that rushed implementation may lead to great problems. He goes on to explain:

   Unless an assessment practice commands public confidence, it is probably not worthwhile to use on a national scale. The system needs to be able to deliver a clear picture of the attainment levels of individuals and of the performance of individual institutions, so that comparisons can
   be made on a respectable basis, using data produced on the same or comparable scales.

Similarly, Wood (1991) provides another cautionary note:

   …the easiest way to bring coursework assessment into disrepute would be to take a stand-off, laissez-faire attitude to it, so that people will think that anything goes.

According to Raffan (2001), the keys to improving reliability and meeting the other challenges in a summative school-based assessment are probably:

1. Providing teachers with sufficient, appropriate advice in booklets, videos, training meetings and assessor networks.
2. Giving sufficient time and attention to moderation procedures.

The responsibility for providing teachers with sufficient information lies with the awarding body. In the UK, for example, the Qualifications and Curriculum Authority (QCA) publishes a Code of Ethics for the main examinations held at ages 16 and 18. This states that the Awarding Bodies must:

• set down explicit parameters and instructions for the setting of coursework tasks and publish detailed marking criteria
• normally provide a portfolio of exemplar tasks and mark schemes that meet the defined parameters and criteria.

Setting the parameters and writing the instructions for the required coursework tasks are relatively straightforward. The critical issue is the detailed criteria, which need to be developed in advance and be specific enough for comparable assessments to be given to comparable work in different schools (Izard, 2001). Teachers need to be trained to become effective in using them, both in terms of understanding and interpreting the wording and the levels, and in terms of managing the necessary record-keeping systems.

It is recommended to involve subject specialist teachers in the writing of the criteria. One approach suggested by Izard (2001) includes the development of rating scales and checklists with descriptors that teachers can interpret in a consistent way.
The descriptors can be developed by asking teachers to describe how they would recognize quality work when it is presented by students. The teachers provide a range of indicators of competence that could be used by those making assessments. These descriptors are then arranged as a descriptive rating scale.

The issue of how detailed the criteria need to be is a complex one. On the one hand, there is the need for such criteria to be as detailed as possible, as concluded by Eglen and Kempa (1986) in their study on coursework with 100 chemistry teachers, cited in Raffan (2001):

1. The more detailed and operationalised the criteria, the more objective the assessment… the use of only generalized performance criteria tended to lead to highly discordant assessments, chiefly because of the adoption, by teachers, of widely differing performance features.
2. …in the absence of criteria that require both positive and negative performance features to be considered, teachers tend to base their assessments chiefly on negative performance characteristics.

On the other hand, very detailed criteria may be viewed as cumbersome by the users. According to Raffan (2001), the performance assessment procedures in the vocational sector in the UK, which involve many tightly specified criteria, seem to be particularly burdensome. Furthermore, Wolf (1995) reports that teachers feel that they are:

   increasingly unable to do any real teaching beyond the narrowest interpretation of the assessment specifications. …because of the high assessment load and the delegation of assessment to the immediate trainer or teacher, formative assessment tends to disappear.

The issue of efficient record-keeping systems is also critical, in that it may help to make or break the system. Recording and reporting on students’ progress and achievement takes time and needs particular skills. Moreover, teachers and students need to be organized in order to keep meaningful and complete records and for the process of learning to be apparent in these records. Furthermore, the evidence on which decisions are taken needs to be selected very carefully. Portal (2003) explains that:

   the experience of profiling in the UK was frequently that teachers ended up keeping large collections of pupils’ work without having the opportunity or, in some cases, the expertise to make any clear use of it… It may help matters to focus on what used to be known as significant steps in the pupil’s progress and concentrate on keeping only the documentation that exemplifies these.

These issues need to be dealt with in training, where criteria are discussed and where teachers are given hands-on opportunities during workshops to interpret and apply these criteria in particular contexts (Ventura, 1995; Ventura & Murphy, 1998).
It is beneficial for teachers to have a portfolio of exemplar tasks and mark schemes to work with during the training, which can in turn be used as reference points during the scholastic year. Portal (2001) suggests that although staff time for development may seem high, individual professional development is also achieved, raising levels of skill and developing important professional competencies among the staff concerned.

Furthermore, there is a need for networking among the teachers in different schools. This becomes more pertinent the smaller the number of subject teachers in individual schools. Wolf (1995) emphasizes the importance of inter-assessor networks for supporting the reliability of assessments by teachers. This system may be put into place by the twinning of schools or the organization of regional professional development meetings where teachers of the same subject have the opportunity to work together, sort out difficulties, and help each other become efficient in this form of assessment. A network on the Internet may be a possible additional support system. It may also be the main support system in places where distance makes it difficult for teachers to meet regularly. This system is in use in Scotland, where teachers who are voluntarily participating in a project on formative assessment communicate mainly via the Net, although they then have annual group meetings as well. Wolf (1995) explains that this
valuable process of collaboration between assessors in schools, which provides them with mutual support, may break down or become impossible if the system introduces competition between institutions.

Moderation is another essential feature of school-based assessment. As Maxwell (2003) explicates, it is necessary to develop consistency in teacher judgment of student achievement and to ensure public confidence in those judgments. Raffan (2001) explains that this process can be controversial, as it raises issues of confidence in teachers and hence of control and power relationships between the teachers and the moderators employed by the external examinations body. Radnor and Shaw (1995) discuss good moderation practice which takes account of the multiple perspectives that exist within the moderation activity. They call them the insider (teacher) and outsider (moderator) perspectives. It is important for the outsider to respect and accept the insider knowledge, so that the teacher as assessor is valued. The insider would in turn need to be aware that relating individual work to a notion of generalisable standards is an acceptable part of the process.

In Malta, there is currently the need for a redefinition of the role of the moderators and their interactions with teachers. Ventura and Murphy (1998) recommended that the moderation process should not remain a one-off, end-of-year judgment but should develop into a dialogue between moderators and teachers. There is a need to move away from the external model of moderation currently in use and towards a reconciliation model. While the external model envisages the moderator as the decisive, powerful, accredited appointee whose task creates an atmosphere of distrust, fear and uncertainty among teachers even when it is carried out with discretion, the reconciliation model tries to resolve the tension between the insider perspective of the teacher and the outsider perspective of the moderator.
According to Radnor and Shaw (1995), the reconciliation model recognizes that moderation depends on the achievement, by discussion and negotiation within a group, of a socially constructed consensus about how work is to be evaluated and criteria applied. Thus, the teachers can articulate and exchange their subjective interpretations of the criteria, and the moderator is able to relate and compare the value that the group assigns to the attainments under discussion with the attainments of other candidates in other schools. According to Harlen (1994), when the moderation process does not remain a one-off judgment but develops into a dialogue between moderators and teachers, it becomes a process of teacher development with a backwash effect on teaching.

This dialogue need not be restricted to moderation only but may also be a feature of the monitoring process. In Malta, the MATSEC subcommittee that carried out the evaluation of the school-based assessment component recommended that monitoring should take place to ensure that this form of assessment is carried out in a satisfactory manner, is of the standard expected, and is assessed consistently within schools and between schools. In particular, as Grima and Ventura (2001) explain, the task of the monitoring panel would be to visit schools during the year in order to:

• evaluate the physical and human resources available to carry out the coursework as specified in the syllabus
• evaluate the type and standard of work that is being carried out
• observe and evaluate the assessment methods and procedures.
In describing the tasks to be included in the monitoring process, Singh (2001) mentions two support systems that this process may provide to schools:

• administrative and technical support when and where it is needed
• distribution and sharing of exemplar materials and other related literature.

Portal (2001) also recommends that data on monitoring visits and their outcomes be collected and maintained centrally, so that the management of the scheme can be effective and able to respond to anomalies as and when they arise.

In summary, the key issues that need to be taken into consideration prior to the implementation of school-based assessment are the following (the list is adapted from Izard, 2001; Raivoce and Pongi, 2001; and Singh, 2001):

• The weighting to be given to school-based assessment
• The tasks to be included in the different subjects
• The handling of comparability issues across schools
• Verification and authentication procedures to be put in place
• The monitoring process
• The moderation of the results
• The reporting of the school-based assessment results
• Sufficient professional training for teachers and moderators.

There are degrees of sophistication in the various school-based assessment models that can be adopted by different countries. In deciding on the appropriateness of potential models, it is important to realize that ultimately the schools need to be able to adequately support and implement the programme in terms of available resources and expertise, and that the country or region needs to have in place an appropriate system for monitoring the programme and for moderating the results to ensure adequate standards. It is important to point out that such a system is only possible with the appropriate infrastructure in place. The extent of public confidence that ensues is also dependent on the measures taken to prevent individual teachers and/or students from succeeding in beating the system.
Williams (2001) suggests that while potential dishonesty among students has always been a problem at all educational levels, the use of school-based assessment for externally examined syllabuses and the widespread development of information technology have introduced two new elements into the situation. The problem of authenticity in school-based assessment may cripple the entire system, yet, as Raffan (2001) concludes, there are plenty of opinions but little research on this topic.

Williams (2001) suggests that procedures such as CORD need to take place:

Culture - developing a culture of honesty, emphasizing with students self-respect and taking pride in their work, and making clear that checks will be carried out and that cheating will be treated very seriously.

Observation - knowing pupils well and carrying out observations of the pupils’ overall level of work.

Review - constantly reviewing students’ work, using multiple drafts or work which is marked and returned.

Discussion - teachers discussing with pupils how the work reached a particular stage and what the next step would be, in a form of viva.
She concludes that, in addition, examination bodies have a crucial role in explaining the openness and effectiveness of moderation, in providing exemplars and other helpful advice in readable form, and in hosting meetings for in-service training.

In Malta, the syllabuses of all subjects with an SBA component state that candidates may be called for an interview in relation to their coursework. Moreover, teachers are expected to act professionally in authenticating the candidates’ work. When in doubt, they are advised not to sign the candidates’ authenticity form so that the Markers’ Panel of that subject can interview the candidates about their work. Other suggestions made by Grima and Ventura (2001) include the use of databases for recording the candidates’ coursework that is presented annually when the work includes projects, models and other original pieces of work. Retaining the coursework for a period of time and changing the coursework requirements are other options under consideration to reduce the extent of copying and recycling that takes place.

Five Case Studies from Around the World

In this section, I give a synopsis of five success stories from around the world. This is done with the intention of sharing and learning from positive experiences that relate to the implementation of SBA. This section is based on papers and presentations made at the First and Second International Conferences of the Association of Commonwealth Examinations and Accreditation Bodies (ACEAB), which were held in Mauritius and in Malta in 2000 and 2002 respectively. Publications of the proceedings of both conferences are available from the Association.

New Zealand

In New Zealand, schools are primarily responsible for the quality of their assessment decisions. However, as Lennox (2001) explains, the New Zealand Qualifications Authority (NZQA) is responsible for checking a sample of assessment decisions and assisting schools to improve internal systems.
National standards are established and demonstrated through published examples of suitable assessment activities and exemplars of student work. A national professional development programme is in place to enhance teachers’ assessment capabilities.

In this setting, schools are self-managing within quite broad national curriculum and administration guidelines. All schools are expected to have documented internal policies and procedures on assessment, and therefore each school has an assessment policy. Lennox (2001) explains that the need for local flexibility and innovation has driven the reform which commenced in 2002. Therefore, while the country remains adamant about the need for national systems and national standards, it is equally in need of local differences, because of the wide variation in local school populations and the need to allow for variety and integration in school programmes.

Lennox (2001) explains that every year NZQA collects samples of assessed student work in each subject from every school, along with assessment activities and schedules. These are checked by a national network of moderators, most of whom are practicing teachers, so they are in touch with national standards. In general, samples are around the credit level (just below, just above and clearly above) and also at merit or excellence level. Following this, subject reports from moderators are sent to school
principals. These reports indicate how well the school is meeting national standards in each subject. There are also recommendations for adjustments as necessary, to both teacher assessment standards and student results. Results can be changed or students can be reassessed within the year.

The Authority’s reports also provide information for principals on how effectively assessment is managed in each subject area. NZQA also advises schools on steps that need to be taken for improvement purposes. In turn, the schools report back on the measures taken to improve internal systems. In subsequent years, NZQA is able to take heavier or lighter samples from schools, or subjects within a school, on the basis of their proven record. Sanctions are in place for schools which show no improvement. Such sanctions include the removal of accreditation in some or all of their subjects. Because of New Zealand’s official information legislation, final moderation reports, including actions planned by schools to rectify any problems, are likely to be made public.

The South Pacific

The South Pacific Board for Educational Assessment (SPBEA), which is located in Fiji, has adopted three approaches in developing tasks for inclusion in the school-based assessment programme of the Pacific Senior Secondary Certificate. Raivoce and Pongi (2001) explain these tasks as follows:

• Centrally developed tasks (CATs), where the task frame, the activities and the marking schedules are developed by SPBEA, with teachers’ involvement restricted to the administration of the tasks and marking the product of students’ work. These tasks are used to assess outcomes that cannot be assessed by pen and paper but are general and not school specific.

• Teacher designed tasks (TDTs), where teachers in the school develop the task frame, the tasks and the marking schedules for such tasks.
This process is suitable for assessing outcomes that are sensitive to differing school conditions, such as available resources and different geographic environments. In this case, SPBEA’s involvement is restricted to the approval of the tasks and ensuring comparability in the marks awarded by teachers.

• Tasks with a common assessment frame (CAF) have their broad frame, guidelines and marking scheme determined by SPBEA, but individual teachers decide the specific areas for the tasks. Such tasks are used to assess outcomes that are general but are sensitive to broad differences (e.g. geographic, resources). Examples are research skills, which cannot easily be assessed by pen and paper.

This system has been developed to meet the needs of the countries in the region. SPBEA has, however, put in place quality assurance procedures aimed at enhancing not only the validity and reliability of SBA results but also their authenticity. This process is explained in Raivoce and Pongi (2001) as follows:

Each year, each participating school is required to submit an assessment programme for each subject that has an SBA component. The programme has to clearly indicate what the school intends to do in its SBA as well as the various assessment tasks that
make up the programme. Before any programme is approved for implementation, SPBEA has to check for compliance with prescribed requirements, appropriate standards and the timeframe.

In this region, monitoring the implementation of the SBA has become an important part of the process. Officers make frequent visits, twice yearly, to member countries, initially to assist schools with planning and developing their SBA programmes and later to verify the status of the SBA programme. The main purpose of the verification visit is to ascertain that each school’s SBA programme for each subject is being implemented as approved. This involves checking to see that the appropriate criteria are being used in the assessment, that the work is of an appropriate standard and that the work is on schedule.

SPBEA has also put in place procedures aimed at minimizing variation in the standard of SBA between schools and between countries. This is achieved by the introduction of a two-level procedure for moderating the SBA results from schools. In the first instance, samples of students’ work from every participating school in a country are nationally moderated to take account of any differences in the assessment between schools. Once this is achieved, samples of students’ work from each country are again moderated for any differences that may occur between countries.

In some subjects, a panel of moderators carries out the country moderation, while in other subjects a single person is responsible for the process of moderation. Inter-country moderation is always carried out by a single external moderator. Once the SBA results are moderated, they are ready for interpretation before being reported to stakeholders. The SBA and external assessments are combined and students’ overall performance is reported on a nine-point scale.

Scotland

Wright (2001) explains that SBA plays a significant role in the Scottish assessment system and has been in use for many years.
While in the past internal assessment was employed in subject areas with a practical focus to maximize validity and flexibility, the structure of the new courses integrates internal and external assessments in determining Course Awards.

The system of SBA in Scotland supports a combination of internal and external moderation. Wright (2001) explains that the aim of internal moderation, which is carried out by school staff, is to ensure that school staff are making consistent assessment decisions in accordance with the assessment criteria defined by the Scottish Qualifications Authority (SQA). The SQA provides guidelines on best practice for internal moderation as it applies to different types of qualifications. External moderation, which is carried out by moderators appointed by SQA, is the means by which SQA ensures that internal assessment is in line with the national standard set out in the qualifications. SQA issues a range of documents to both schools and moderators that set out how moderation operates.

In Scotland, moderation is generally carried out by direct inspection of a sample of students’ completed work. The sample size for moderation within one school usually comprises 12 candidates. In schools with fewer than 12 candidates, the entire group is
scrutinized. In schools with more than 12 candidates, SQA chooses the 12 and notifies the school. From the schools’ point of view, the procedures are administratively simple.

A selection of schools is moderated each year. Wright (2001) explains that a school is chosen for moderation if:

• it is offering the qualification for the first time
• it did not offer the qualification in the previous two years
• its assessments were not accepted on the last occasion the school was moderated
• it has not been selected for a specified period of time
• it showed poor agreement in the previous year between internal and external assessments in national courses
• it was subject to concerns expressed by moderators
• it requested moderation and this was agreed by SQA
• it is selected randomly.

Two main approaches are used: central moderation and visiting moderation. It is the nature of the evidence generated that determines the type of moderation to be used. Central moderation is used when the work is easy to transport and where process skills, if important, are clearly evident from the product. The advantages of this approach are cost-effectiveness and ease of monitoring the work of moderators to maintain a national standard. Visiting moderation, on the other hand, is used for those subjects with bulky or ephemeral products or performances. The main advantage of this approach is the potential for interchange between moderators, teachers and principals. Its drawbacks are that it is a costly method and that it is more difficult to monitor the work of moderators. In addition, postal moderation is employed to increase flexibility and maximize cost-effectiveness. This approach has relatively low costs; however, it is difficult to monitor the decisions of moderators, which is done by scrutinizing the written reports they submit.
When moderators, who are subject experts, find assessments that are invalid, unreliable or not aligned to the national standard, they give advice on how to bring the assessments into line and then monitor that the advice has been followed.

The Caribbean

Haynes (2001) explains that the Caribbean Examinations Council (CXC) has combined external assessment with the internal assessment of candidates in order to enhance the validity and reliability of the entire assessment system. Today, internal assessment is a significant element in the assessment of many subjects. He continues to explain that although the format and skills are different for each subject, there are a number of common requirements. These are:

• Candidates undertake specific assignments over a given period of time, fulfilling specific skills as outlined in the syllabus
• Class teachers assess the work and submit the grades to CXC
• CXC moderates the teachers’ assessment
• Candidates’ final grade includes the marks amended as a result of this process.

On its part, CXC ensures adherence to a common standard and consistency by:

• The use of detailed SBA guidelines
• The use of moderating procedures
• Continuous teacher orientation
• Moderation feedback reports sent to parents.

CXC has taken the decision that SBA tasks contribute not more than 40% and not less than 20% of the total mark. Haynes (2001) suggests that this decision was taken to control the volume of SBA, to establish uniformity and parity across subjects, and to curb some of the expected criticism by those who would question the reliability of an examination which depended substantially on the judgment of teachers. Furthermore, CXC has introduced an alternative paper which can be taken by non-school candidates in lieu of the school-based assessment. This is a written paper based on the same area of the syllabus as the SBA component and weighted in the same way as the SBA.

In the Caribbean, each teacher is required to submit a sample of SBA to CXC. Moderation by remarking is the main technique employed by CXC. The replacement of the teacher's marks with those of the moderator takes place only if the teacher's class size is less than five and if the marks fall outside the tolerance band. In this region, the moderation process has another critical purpose: to provide feedback reports to the school and subject teachers on the strengths and weaknesses of their candidates and to improve their professional expertise in assessment.

The moderation procedure outlined so far mainly focuses on quality control. However, Haynes (2001) explains that a shift is taking place to include quality assurance procedures.
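The two numeric rules just described (the 20% to 40% weighting bounds, and the replacement of teachers' marks for small classes whose marks fall outside the tolerance band) can be sketched as follows. This is an illustrative sketch only: the function names, mark scale and the tolerance value of 5 marks are assumptions, since the width of CXC's band is not stated here.

```python
# Illustrative sketch of the CXC-style moderation rules described above.
# The tolerance value, function names and mark scale are assumptions.

def sba_weight_is_valid(sba_marks: float, total_marks: float) -> bool:
    """Check that the SBA component contributes between 20% and 40%
    of the total mark, as per the CXC weighting decision."""
    share = sba_marks / total_marks
    return 0.20 <= share <= 0.40


def moderated_mark(teacher_mark: float, moderator_mark: float,
                   class_size: int, tolerance: float = 5.0) -> float:
    """Replace the teacher's mark with the moderator's only when the
    class has fewer than five candidates and the teacher's mark falls
    outside the tolerance band around the moderator's remark."""
    outside_band = abs(teacher_mark - moderator_mark) > tolerance
    if class_size < 5 and outside_band:
        return moderator_mark
    return teacher_mark
```

Under these assumptions, a mark within the band, or any mark from a class of five or more, is left as the teacher awarded it.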
As a result of feedback from teachers, candidates and parents, some of the following conditions have been implemented:
• In all syllabuses, the SBA assignments carry rubrics outlining clearly the main features required in the SBA responses
• Assignments in all cognate subjects are similar in size and number, and in the demands they make on students' time, effort and resources
• For many subjects, resources such as exemplar assignments with careful details of grading and relevant comments on the grading process, assessment criteria for projects, and notes on selecting, conducting and assessing assignments have been constructed and are available to all schools.

In the Caribbean, candidates are given a two-year period to complete SBA assignments, which are then assessed by teachers. Thereafter, CXC performs a quality control check of those assignments. At present, CXC carries out this check by moving a significant number of trained moderators from different territories to a single location to remark the SBA samples. This is an annual event which is very costly.

In this region, the need is felt for additional quality assurance support from stakeholders such as Ministries of Education, principals and heads of department. Such monitoring could be effected through school visits, meetings within schools, workshops for teachers and email facilities, for example. Haynes (2001) suggests that the stakeholders' task would be to assist at all points during the SBA, providing professional support and advice to teachers on the methods which should be used to achieve consistency in managing critical aspects of the SBA over an extended period of time. He concludes that the achievement of this consistency by schools would
remove the need for yearly moderation of SBA from these schools and would therefore result in a reduction of annual costs for CXC.

Queensland, Australia

In the state of Queensland, external examinations were abolished 30 years ago. Instead, assessment became entirely school-based. Since 2002, Queensland has had a single authority, the Queensland Studies Authority (QSA), which was established to develop subject syllabus frameworks, to supervise a system of moderation to assure the application of common standards, and to issue state-authorized certificates.

Maxwell (2003) explains that in this state, in Years 11 and 12, the assessment programme is entirely designed within schools. Each school is encouraged to develop its own coherent assessment policy for the conduct of assessment within the school, the responsibility for which resides with the school principal. A common framework of assessment is developed for each subject, and within-school moderation is carried out to ensure comparability of assessment judgments across the teachers in a subject. However, the Authority controls the certification processes for the award of the Senior Certificate. The key features of this process are:
• The development of framework or guideline syllabuses for the different subjects
• The incorporation in these syllabuses of assessment criteria and standards for exit grades
• A moderation system to assure comparability of grades across the state
• The issuing of individual certificates to students who complete Year 12
• Reporting the results in terms of five levels of achievement.

The actual moderation process involves the following procedure:
• Each school submits a teaching and assessment plan for each subject offered.
• This plan is accredited by a subject review panel.
• Later on, schools submit designated samples of student folios to the review panels. At the end of Year 11 this is done for monitoring purposes and to give feedback to teachers.
At the end of Year 12, moderation is done for verification purposes, to verify the teachers' judgments of the final grades.
• Finally, there is a follow-up random sampling procedure in place to monitor the success of the moderation processes.

Maxwell (2003) emphasizes the point that, in this case, moderation relates to comparability of teacher judgments concerning assessment evidence. Evidence can be quite different in character and detail across schools, and across subjects within schools, but still indicate an equivalent standard. This process of moderation allows assessments to differ from place to place and student to student, while being mapped onto a common set of standards.

This is a unique system, characterized by a partnership between schools and the central agency. (In fact, during one of his AERA presentations on assessment in April 2003, Paul Black himself praised this system and said that this is where he would be, given the chance.) Here, the agency cannot intervene directly in the assessment programme of a school, nor can it undo assessments undertaken by the school over the two-year course of study. However, it can hold the school accountable
for an appropriate interpretation of the syllabus, appropriate implementation of its teaching and assessment plan, and the application of common standards for assessing the evidence of student achievement on exit from Year 12. It also provides help to schools that seek it or have a record of problems. About 10 percent of teachers are involved in moderation panels, whose membership is rotated periodically. Consequently, in this context, the moderation system operates as a powerful mechanism for professional development.

Maxwell (2003) concludes that, in essence, the Queensland experience shows that it is possible to balance internal and external interests in assessment so that assessment is located internally but verified centrally. This demands careful design and implementation of the relationship between schools and the central agency, so that their separate and legitimate interests are held in creative tension. On the one hand, the demand on the central agency is to maintain the quality, credibility and usefulness of the qualification it issues. On the other, the demand on schools and teachers is to develop high-quality assessment procedures and practices. To some extent this represents a dilemma for starting such a system. Maxwell (2003) recommends that:

it is necessary to trust schools and teachers initially, but schools and teachers can really only develop expertise in assessment when they have to practise it themselves. You have to grow the system, taking a calculated risk and orchestrating the transition with appropriate support mechanisms.

Conclusion

To conclude, I refer to a publication of the National Research Council (2001) of the US. Setting out to explore the implications of our new understandings of knowing and learning, and asking how we know what students know, Pellegrino and his colleagues' conclusions highlight several features of school-based assessment.
They suggest that assessment needs to move from a focus on discrete bits of knowledge to more comprehensive wholes, cover a broader range of understandings and skills, and involve a variety of applications and contexts. They advocate attention and a shift of resources towards teacher-managed assessments involving tasks that students undertake in the normal course of their classroom activities, and portfolios for assembling student performance information. Their hope is for more comprehensive, coherent and continuous assessment systems.

It is our task as assessment professionals to find ways and means to make assessment more relevant and valid for our learners, yet at the same time maintain optimal standards. I have tried to show how this can happen by outlining ways of valorizing school-based assessment in different education systems, because essentially I concur with Raffan (2001) when he says that “effective assessment is difficult, time consuming and very often expensive, yet it is essential for education”.
References

Broadfoot, P. (1995) Performance Assessment in Perspective: International Trends and Current English Experience. In H. Torrance (Ed.) Evaluating Authentic Assessment. London: The Falmer Press.

Brookhart, S. (2001) Successful Students' Formative and Summative Uses of Assessment Information. Assessment in Education, 8, 2.

Burton, L. (1992) Who Assesses Whom and to What Purpose? In M. Stephens and J. Izard (Eds.) Reshaping Assessment Practices: Assessment in the Mathematical Sciences Under Challenge. Victoria: Australian Council for Educational Research.

Dougherty, R. (1994) Quality Assurance, Teacher Assessment and Public Examinations. In W. Harlen (Ed.) Enhancing Quality in Assessment. Newcastle upon Tyne: Athenaeum Press.

Gipps, C. (1999) Socio-cultural Aspects of Assessment. Review of Research in Education, 24, 355-392.

Grima, G. and Ventura, F. (2001) School-based Assessment in Malta: Lessons from the Past, Directions for the Future. Proceedings of the First International Conference of the Association of Commonwealth Examinations and Accreditation Bodies. Reduit: Mauritius Examinations Syndicate.

Harlen, W. (1994) Enhancing Quality in Assessment. London: British Educational Research Association.

Haynes, A.B. (2001) Current Practices and Future Possibilities for the Caribbean Examinations Council. Proceedings of the First International Conference of the Association of Commonwealth Examinations and Accreditation Bodies. Reduit: Mauritius Examinations Syndicate.

Izard, J. (2001) Implementing School-Based Assessment: Some Successful Recent Approaches Used in Australia and the Philippines. Proceedings of the First International Conference of the Association of Commonwealth Examinations and Accreditation Bodies. Reduit: Mauritius Examinations Syndicate.

Lennox, B. (2001) Achieving National Consistency in School-Based Assessment Against Standards.
Proceedings of the First International Conference of the Association of Commonwealth Examinations and Accreditation Bodies. Reduit: Mauritius Examinations Syndicate.

Maxwell, G.S. (2003) Progressive Assessment: Synthesising Formative and Summative Purposes of Assessment. Proceedings of the Second International Conference of the Association of Commonwealth Examinations and Accreditation Bodies. Malta: MATSEC Examinations Board.

National Research Council (2001) Knowing What Students Know: The Science and Design of Educational Assessment. Washington, DC: National Academy Press.
Portal, M. (2001) School-based Assessment: Problems and Solutions. Proceedings of the First International Conference of the Association of Commonwealth Examinations and Accreditation Bodies. Reduit: Mauritius Examinations Syndicate.

Portal, M. (2003) Classroom Assessment and its Consequences. Proceedings of the Second International Conference of the Association of Commonwealth Examinations and Accreditation Bodies. Malta: MATSEC Examinations Board.

Radnor, H. and Shaw, K. (1995) Developing a Collaborative Approach to Moderation. In H. Torrance (Ed.) Evaluating Authentic Assessment. London: The Falmer Press.

Raivoce, A. and Pongi, V. (2001) School-Based Assessment: A First Hand Experience in the Small Island States of the South Pacific. Proceedings of the First International Conference of the Association of Commonwealth Examinations and Accreditation Bodies. Reduit: Mauritius Examinations Syndicate.

Raffan, J. (2001) School-Based Assessment: Principles and Practice. Proceedings of the First International Conference of the Association of Commonwealth Examinations and Accreditation Bodies. Reduit: Mauritius Examinations Syndicate.

Singh, P. (2001) Implementing School-Based Assessment: A Functional Approach. Proceedings of the First International Conference of the Association of Commonwealth Examinations and Accreditation Bodies. Reduit: Mauritius Examinations Syndicate.

Ventura, F. (1995) Coursework Assessment and Moderation at Secondary Education Certificate (SEC) Level. Report No. 1, MATSEC Support Unit, University of Malta. Unpublished document.

Ventura, F. and Murphy, R. (1998) The Impact of Measures to Promote Equity in the Secondary Education Certificate Examinations in Malta: An Evaluation. Mediterranean Journal of Educational Studies, 3, 1.

Williams, S. (2001) How Do I Know If They're Cheating? Teacher Strategies in an Information Age. The Curriculum Journal, 12, 2.

Wolf, A.
(1995) Authentic Assessment in a Competitive Sector: Institutional Prerequisites and Cautionary Tales. In H. Torrance (Ed.) Evaluating Authentic Assessment. London: The Falmer Press.

Wood, R. (1991) Assessment and Testing. Cambridge: Cambridge University Press.

Wright, R. (2001) School-Based Assessment – Scotland. Proceedings of the First International Conference of the Association of Commonwealth Examinations and Accreditation Bodies. Reduit: Mauritius Examinations Syndicate.