Buckingham Uni PGCE Feb 2017 Assessment

A look at aspects of assessment in MFL

  1. ASSESSMENT Buckingham University PGCE/IPGCE Feb 2017 Steve Smith
  2. WHY DO WE ASSESS? (Pachler et al., 2014)
     • To generate information for students about their learning.
     • To ensure that learning objectives have been reached.
     • To motivate students.
     • To gather data for reporting.
     • To select students for groupings or for opportunities in later life.
     • To identify strengths and weaknesses in students.
     • To provide certification.
     • To fulfil statutory requirements.
     • To measure standards which may be used to hold teachers accountable.
  3. Formative assessment (assessment FOR learning)
     • “Although the terms formative assessment and assessment for learning are defined slightly differently by different people, there is increasing agreement that assessment improves learning when it is used to support five key strategies in learning:
       • Clarifying, sharing, and understanding learning intentions and criteria for success.
       • Engineering classroom discussions, activities, and tasks that elicit evidence of student achievement.
       • Providing feedback that moves learning forward.
       • Activating students as learning resources for one another.
       • Activating students as owners of their own learning.” (Dylan Wiliam, 2011)
  4. Formative assessment (2)
     • The aim of formative assessment is to monitor student learning and provide ongoing feedback that can be used by teachers to improve their teaching and by students to improve their learning. More specifically, formative assessments:
       • help students identify their strengths and weaknesses and target areas that need work;
       • help teachers recognise where students are struggling and address problems immediately.
  5. Formative assessment examples (1)
     • Sharing learning objectives and success criteria with students, the aim being to enable students to develop the capacity to own and monitor their own progress as independent language users. This needs to be supported by developing students’ ‘meta-language’, i.e. how to talk about their subject and their learning.
     • Using effective questioning to enable all students to take part in the lesson, whatever their personalities and degrees of confidence; this could include use of the ‘no hands up’ strategy.
     • Using the ‘question basketball’ technique: you ask a question of a random student, then choose another to evaluate the answer, then another to explain why the answer is correct or incorrect.
     • Using ‘waiting time’: a strategy to encourage students to reflect on the quality of their answers. Examples of prompts include “What can we add to X’s answer?” or “Do you agree with X’s answer?” You would need to plan for increasingly linguistically challenging questions which cannot be answered with just ‘reproduced’ language and do require language manipulation.
  6. Formative assessment examples (2)
     • Using response systems which involve all students at once to assess their progress in the lesson, e.g. asking if a word is correct and having students respond with thumbs up or down. This can create a ‘teachable moment’, when the teacher asks a student: “You thought this was correct/incorrect - can you tell me why?” The technique can also be used with multiple choice answers and cards, mini-whiteboards or an electronic voting system (Jones and Wiliam, 2008).
     • Getting students to act on feedback. It may seem hard to encourage students to do this; one simple technique is to tell them that there are errors and give them time in class to put the errors right. The errors could be categorised as spelling, grammar (e.g. verb endings), missing words, etc.
     • Sharing lesson objectives with students. This is often done at the beginning of a lesson, but you could instead say: “Later in the lesson I’m going to ask you what you think the aim of the lesson is.”
     • Sharing success criteria: “What do I have to do to get the best result?”
  7. Formative assessment: discussion questions
     • How useful is sharing objectives? In which language?
     • How useful is the meta-language of MFL?
     • Questioning – hands up or no hands up?
     • Waiting time? Pace?
     • Random questioning?
     • Response systems? Digital?
     • Marking – a big issue! We need to talk about this!
     • Sharing success criteria? Using mark schemes?
  8. Summative assessment (assessment OF learning)
     • Research shows that the extent to which a student is familiar with a task significantly affects their performance. Unfamiliarity with a test type causes anxiety and a higher cognitive load, especially when the task is complex.
     • By doing a task repeatedly before an assessment involving that task, the student develops strategies which ease cognitive load. To take a simple example, it would be unwise to test a student’s knowledge of grammar through translation into L2 unless they had had a good deal of practice at that skill.
     • A mark scheme should place appropriate emphasis on the skills you wish to test: if you have been working on a range of areas, e.g. accuracy, fluency, vocabulary range and grammatical complexity, you would not wish to assess students primarily on accuracy. (See the sketch below.)
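To make the weighting point concrete, here is a minimal Python sketch of a mark scheme that spreads emphasis across several skills. The criterion names and the equal weights are illustrative assumptions, not a prescribed scheme.

```python
# Hypothetical weighted mark scheme: weights are illustrative and sum to 1,
# so the overall mark stays on the same scale as the per-criterion scores.
WEIGHTS = {
    "accuracy": 0.25,
    "fluency": 0.25,
    "vocabulary_range": 0.25,
    "grammar_complexity": 0.25,
}

def overall_mark(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (each out of 10) into a weighted mark out of 10."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

# A fluent speaker with weak accuracy is not assessed primarily on accuracy:
print(overall_mark({"accuracy": 4, "fluency": 8,
                    "vocabulary_range": 7, "grammar_complexity": 6}))  # 6.25
```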
  9. Summative assessment (2)
     • An example of a mismatch between teaching and assessment: a test which requires students to infer the meaning of unfamiliar words from context assesses them not on the language learnt during the unit, but on compensation strategies, e.g. guessing meaning from context. Although compensation strategies are important, a test should assess students only on what they have been taught, not on their adaptive skills. Such an assessment might be perceived by students as unfair and could demotivate them. A test should therefore have ‘construct validity’, i.e. it must assess what it sets out to assess.
  10. Validity and reliability
     • Validity – a test is valid when it successfully tests what it sets out to test.
     • Reliability – a test is reliable when its marks can be trusted: the same performance would receive the same result across different markers and occasions. (See the sketch below.)
     • An example of a test which is reliable but not valid?
     • An example of a test which is valid but not reliable? Discuss!
     • The notion of objective testing.
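As a rough illustration of reliability as consistency, the sketch below compares two markers’ scores for the same scripts. The numbers are invented for illustration; in practice you would use far more data and an established statistic (e.g. Cronbach’s alpha) rather than a bare correlation.

```python
# Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

# Invented scores: two markers marking the same seven scripts out of 20.
marker_a = [12, 15, 9, 18, 14, 11, 16]
marker_b = [11, 16, 10, 17, 13, 12, 15]

# A Pearson correlation near 1 suggests the markers rank and space candidates
# consistently; a low value points to an unreliable test or mark scheme.
print(f"Inter-rater correlation: {correlation(marker_a, marker_b):.2f}")  # ~0.94
```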
  11. “Teaching to the test”
     • When you know that the outcome of a test matters both to the student and to you, you will usually want to match your teaching to the test. There may, however, be undesirable side-effects of this approach.
     • The ‘backwash effect’: if, for example, the assessment contains an element of translation into L2, it is tempting to spend a good deal of classroom time on that skill, since we know students perform better in tasks they have practised. But if practising translation into L2 severely limits the amount of L2 exposure students receive, the backwash effect has compromised your methodology. Aim for a good balance between effective methodology and effective test preparation: ideally the test would consist of activities you would normally wish to undertake in the classroom anyway, and the most valid tests do.
  12. Multiple choice testing
     • Statisticians say that three options are as effective as four, although four choices are often given on exam papers. For an even more subtle use of multiple choice which limits the chance of guessing a correct answer, you can design questions with, say, two correct answers out of five. (See the sketch below.)
     • It is important with multiple choice that all options be ‘in play’, i.e. plausible to the student. When the aim is to reveal the range of skill in a class (e.g. in a higher-stakes summative assessment), a good multiple choice question should allow about 70–80% of students to answer correctly. With three options, a good balance of outcomes would be around 70% choosing the right option and the two distractors attracting about 15% each; a question which attracts equal responses for each option is a poor one. Some examination awarding bodies pilot questions and reject those which produce unwanted outcomes, i.e. which do not allow a valid comparison between students.
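The arithmetic behind the guessing claim is easy to check. A minimal Python sketch, assuming a blind guesser who picks a combination of options at random:

```python
from math import comb

def guess_probability(n_options: int, n_correct: int) -> float:
    """Chance that a blind guess picks exactly the right set of options."""
    return 1 / comb(n_options, n_correct)

for label, n, k in [("3 options, 1 correct", 3, 1),
                    ("4 options, 1 correct", 4, 1),
                    ("5 options, 2 correct", 5, 2)]:
    print(f"{label}: {guess_probability(n, k):.2f}")
# 3 options, 1 correct: 0.33
# 4 options, 1 correct: 0.25
# 5 options, 2 correct: 0.10
```

Moving from one correct answer out of four to two out of five cuts the chance of a lucky guess from 25% to 10%.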
  13. Discrete versus multi-skill testing
     • If you wish to test one particular skill, listening for example, you may be tempted to exclude any requirement to read, speak or write in L2, because once those other skills are included it may be impossible to know for sure whether you are testing only listening. This presents a dilemma, since in the classroom you would generally not wish to isolate skills in this fashion: you may want, in order to stay in L2 and maximise exposure, to combine a listening task with spoken or written responses in L2.
     • In this context, keep in mind the risk of the backwash effect referred to earlier: just because the assessment concerns a ‘discrete skill’ does not mean you have to use discrete-skill classroom activities. Note also that if the instructions for a task are in L2, students may misinterpret what they have to do; where instructions are typically given in L2, students need training in recognising the instructions and format. This is important for GCSE!
  14. Summative assessment: discussion questions
     • Should we grade?
     • Types of grading?
     • How do we use data from summative assessments?
     • How often should we assess?
     • Are pupils over-tested?
     • How do we create long-term memory?
     • (I)GCSE issues – when do we introduce the photo card, role-play, translation, essay, etc.?
