Assessment

Assessment

A basic goal of this chapter is to help you understand how such knowledge can be used to reinforce, rather than work against, your role as teacher. Toward that end, we will begin by defining what we mean by the term assessment and by two key elements of this process, measurement and evaluation.

What is Assessment?

Broadly conceived, classroom assessment involves two major types of activities: collecting information about how much knowledge and skill students have learned (measurement) and making judgments about the adequacy or acceptability of each student's level of learning (evaluation). Both the measurement and evaluation aspects of classroom assessment can be accomplished in a number of ways. To determine how much learning has occurred, teachers can, for example, have students take exams, respond to oral questions, do homework exercises, write papers, solve problems, and make oral presentations. Teachers can then evaluate the scores from those activities by comparing them either to one another or to an absolute standard (such as an A equals 90 percent correct). Throughout much of this chapter we will explain and illustrate the various ways in which you can measure and evaluate student learning.

Measurement

Measurement is the assignment of numbers to certain attributes of objects, events, or people according to a rule-governed system. For our purposes, we will limit the discussion to attributes of people. For example, we can measure someone's level of typing proficiency by counting the number of words the person accurately types per minute, or someone's level of mathematical reasoning by counting the number of problems correctly solved. In a classroom or other group situation, the rules that are used to assign the numbers will ordinarily create a ranking that reflects how much of the attribute different people possess (Linn & Gronlund, 1995).

Evaluation

Evaluation involves using a rule-governed system to make judgments about the value or worth of a set of measures (Linn & Gronlund, 1995). What does it mean, for example, to say that a student answered eighty out of one hundred earth science questions correctly? Depending on the rules that are used, it could mean that the student has learned that body of knowledge exceedingly well and is ready to progress to the next unit of instruction or, conversely, that the student has significant knowledge gaps and requires additional instruction.
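To make the distinction concrete, the short sketch below first measures (it counts correct answers and converts the count to a percentage) and then evaluates (it maps that percentage onto a letter grade using an absolute standard such as "an A equals 90 percent correct"). This is an illustration only; the function names and the grade cut-offs are assumptions, not a prescribed grading scheme.

    # Illustrative sketch: the cut-offs below are assumed, not a recommended standard.
    def measure(answers, answer_key):
        """Measurement: assign a number (percent correct) to a student's work."""
        correct = sum(1 for given, expected in zip(answers, answer_key) if given == expected)
        return 100.0 * correct / len(answer_key)

    def evaluate(percent_correct):
        """Evaluation: judge the measure against an absolute standard."""
        for minimum, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
            if percent_correct >= minimum:
                return grade
        return "F"

    answer_key = ["b", "c", "a", "d", "a"]
    student    = ["b", "c", "a", "a", "a"]
    score = measure(student, answer_key)   # 80.0
    print(score, evaluate(score))          # 80.0 B

The same measure could instead be evaluated by comparing students to one another (a ranking), which is the norm-referenced alternative discussed later in this section.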
Why Should We Assess Students' Learning?

This question has several answers. We will use this section to address four of the most common reasons for assessment: to provide summaries of learning, to provide information on learning progress, to diagnose specific strengths and weaknesses in an individual's learning, and to motivate further learning.

http://college.cengage.com/education/pbl/tc/assess.html#top

TESTING: BASIC CONCEPTS: BASIC TERMINOLOGY
by Anthony Bynom, Ph.D., December 2001

Most teachers are involved in testing in some form, either invigilating, marking or actually writing tests. This article is aimed at teachers who may be either assessing test material or writing their own tests. The basic question to begin with is: why do we test? I hope there are not many of you who will follow the example of a former colleague. This person would use tests as a punishment. If his class became too boisterous he would announce a test. Then he would retire to his desk and set the most difficult grammar test he could find in order, as he put it, 'to teach the blighters a lesson.' In this instance testing was used as a means of classroom management.

The more conventional answer to the question of why we test is to get information. The type of information required will dictate the type of test needed. Types of information needed could come under the following headings.
SELECTION
This is where testees are being selected for some future course or type of employment. You are trying to find out if the people concerned have the right attributes to benefit from the course.

PLACEMENT
This is when you want to place testees at the correct level for their abilities.

RANK ORDERING OF TESTEES
This familiar process places testees in order: who is first, second, and so on.

APTITUDE
This attempts to predict likely future performance.

DIAGNOSIS OF PROBLEM AREAS
An attempt to find out why things are happening.

VALIDATION OF CURRICULA, PEDAGOGICAL PRACTICES AND MATERIALS
Is the curriculum working? Do your classroom practices succeed? Are the materials you are using appropriate?

FEEDBACK
This is used to amend procedures, if necessary.

EXPERIMENTATION
When you want to try something new or different.

TYPES OF TESTING

Testing may initially be divided into two types.

1. Norm Referenced Tests
Norm-referenced tests answer such questions as: how does student 'A' compare with student 'B'? Attainment or achievement tests should be specifically designed to answer this question.

2. Criterion Referenced Tests
Criterion-referenced tests answer such questions as: how much has student 'Y' learnt? Or, how much does student 'X' know? Proficiency tests should be designed to answer such questions.

In all the other test areas you should always bear in mind the purpose of your test. For example: aptitude tests should be designed to provide information to assist prediction of future learning success; diagnostic tests should obviously provide information on areas of difficulty; performance tests should be designed to provide information for the evaluation of a specific skill or task.

RELIABILITY AND VALIDITY

We now move on to the two key issues for any test: reliability and validity. These two concepts and their relationship to testing form the most fundamental issue in current thinking on testing. Although they are absolutely basic, they very often appear to be incompatible, in that some tests known to be reliable are seen as falling short in validity, while criticism of a test's validity is often based on the question of how reliable it is. In fact it might truly be said that the whole art and science of testing lies in attempting to find ways of harmonising these two important qualities. To help with this, let's look at their meaning.

Reliability. This means the consistency of the test's judgement and results. It is about producing precise and repeatable measurements on a clear scale of measurement units. Such tests give consistent results across a wide range of situations. This is achieved by carefully piloted trials of the test. Sometimes, several versions of the test may be used on a controlled population of testees. The outcomes of these trials are carefully analysed in order to establish consistency. When a consistent set of figures is achieved, the test may be deemed reliable.

Validity. This means the truth of the test in relation to what it is supposed to evaluate. It concerns the relevance and usefulness of what you are measuring. The difficulty in setting such tests lies in the problem of how sure you can be about what is actually being measured. Is it consistent with the worthwhile quality you think you're measuring? To help with this you should consider the following:

Content validity. Have satisfactory samples of language and language skills been selected for testing?
Construct validity. Is the test based on the best available theory of language and language use?
Predictive validity. Does the test successfully predict future outcomes?
Concurrent validity. Does the test correlate with other existing measures, usually a similar test?

There are other ways one can look at the subject of validity, but the above are the main ones and give you the basic idea.
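As a hedged illustration of what "analysing the outcomes of trials to establish consistency" can look like in its simplest form, the sketch below correlates scores from two sittings of the same test taken by the same group (a basic test-retest check). The scores are invented for the example, and a single correlation coefficient is only one crude indicator of reliability, not a full pilot analysis.

    # Illustration only: invented scores; one correlation is a crude reliability check.
    from statistics import correlation  # available in Python 3.10+

    first_sitting  = [72, 65, 88, 54, 91, 77]   # same six testees,
    second_sitting = [70, 68, 85, 50, 93, 75]   # same test, two weeks later

    r = correlation(first_sitting, second_sitting)
    print(f"Test-retest correlation: {r:.2f}")  # values near 1.0 suggest consistent results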
DISCRETE POINT AND INTEGRATIVE TESTING

You may see or hear these terms when being asked to assess or even write a test, so let's see what they mean.

Discrete point tests are based on an analytical view of language. This is where language is divided up so that components of it may be tested. Discrete point tests aim to achieve a high reliability factor by testing a large number of discrete items. From these separated parts, an opinion is formed which is then applied to language as an entity. You may recognise some of the following discrete point tests:
1. Phoneme recognition.
2. Yes/No, True/False answers.
3. Spelling.
4. Word completion.
5. Grammar items.
6. Most multiple choice tests.
Such tests have a downside in that they take language out of context and usually bear no relationship to the concept or use of whole language.

Integrative tests

In order to overcome the above defect, you should consider integrative tests. Such tests usually require the testees to demonstrate simultaneous control over several aspects of language, just as they would in real language use situations. Examples of integrative tests that you may be familiar with include:
1. Cloze tests (illustrated in the sketch below)
2. Dictation
3. Translation
4. Essays and other coherent writing tasks
5. Oral interviews and conversation
6. Reading, or other extended samples of real text
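A cloze passage is conventionally made by deleting every nth word of a text and asking testees to restore the missing words. The short sketch below shows that basic procedure; it is only an illustration, and the choice of every seventh word is a common convention rather than a rule.

    # Illustrative cloze generator: blanks every nth word of a passage.
    def make_cloze(text, n=7):
        words = text.split()
        gapped, answer_key = [], []
        for position, word in enumerate(words, start=1):
            if position % n == 0:
                answer_key.append(word)
                gapped.append("_" * max(len(word), 4))
            else:
                gapped.append(word)
        return " ".join(gapped), answer_key

    passage = ("Testing may initially be divided into two types, and the type of "
               "information required will dictate the type of test needed.")
    cloze_text, answer_key = make_cloze(passage)
    print(cloze_text)
    print("Answer key:", answer_key)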
OTHER ISSUES

Should you aim for direct or indirect testing? To help with this decision you may find the following helpful.

Indirect testing makes no attempt to measure the way language is used in real life, but proceeds by means of analogy. Some examples that you may have used are: most, if not all, of the discrete point tests mentioned above; cloze tests; and dictation (unless on a specific office skills course). Indirect tests have the big advantage of being very 'test-like'. They are popular with some teachers and most administrators because they can be easily administered and scored; they also produce measurable results and have a high degree of reliability.

Direct tests, on the other hand, try to introduce authentic tasks which model the student's real-life future use of language. Such tests include: role-playing; information gap tasks; reading authentic texts and listening to authentic texts; writing letters, reports, form filling and note taking; and summarising. Direct tests are task oriented rather than test oriented. They require the ability to use language in real situations, and they should therefore have a good formative effect on your future teaching methods and help you with curricula writing. However, they do call for skill and judgment on the part of the teacher.

COMMUNICATIVE LANGUAGE TESTING

Since the late 1970s and early 1980s the communicative approach to language teaching has gained dominance. What is actually meant by 'communicative ability' is still a matter of academic interest and research. Broadly speaking, communicative ability should encompass the following skills:

Grammatical competence. How grammar rules are actually applied in written and oral real-life language situations.
Sociolinguistic competence. Knowing the rules of language use, such as 'turn taking' during conversational discourse, or using appropriate language for a given situation.
Strategic competence. Being able to use appropriate verbal and non-verbal communication strategies.

Communicative tests are concerned not only with these different aspects of knowledge but also with the testees' ability to demonstrate them in actual situations. So, how should you go about setting a communicative test? Firstly, you should attempt to replicate real-life situations. Within these situations communicative ability can be tested as representatively as possible. There is a strong emphasis on the purpose of the test. The importance of context is recognised. There should be both authenticity of task and genuineness of texts. Tasks ought to be as direct as possible. When engaged in oral assessment you should attempt to reflect the interactive nature of normal speech and also assess the pragmatic skills being used.

Communicative tests are both direct and integrative. They attempt to focus on the expression and understanding of the functional use of language rather than on the more limited mastery of language form found in discrete point tests. The theoretical status of communicative testing is still subject to criticism in some quarters, yet as language teachers see the positive benefits accruing from such testing, such tests are becoming more and more acceptable. They will not only help you to develop communicative classroom competence but also to bridge the gap between teaching, testing and real life. They are useful tools in the areas of curriculum development and in the assessment of future needs, as they aim to reflect real-life situations. For participating teachers and students this can only be beneficial.

http://www3.telus.net/linguisticsissues/testing.htm

Methods of Assessment
by William Badders

With the release of the National Science Education Standards, the issues of why, how, and what we, as teachers, assess in our classrooms will become a major challenge in the multifaceted science reform effort currently underway. As educators are changing their ideas about what constitutes exemplary inquiry-based learning, and recognizing that science is an active process that encourages higher-order thinking and problem solving, there is an increased need to align curriculum, instruction, and assessment. Classroom assessment techniques are focusing on aligning assessments more closely with the instructional strategies actually used with children.

The Nature of Assessment

Assessment can be defined as a sample taken from a larger domain of content and process skills that allows one to infer student understanding of a part of the larger domain being explored. The sample may include behaviors, products, knowledge, and performances. Assessment is a continuous, ongoing process that involves examining and observing children's behaviors, listening to their ideas, and developing questions to promote conceptual understanding. The term authentic assessment often arises in any discussion of assessment and can be thought of as an examination of student performance and understanding on significant tasks that have relevancy to the student's life inside and outside of the classroom.

The increasing focus on the development of conceptual understanding and the ability to apply science process skills is closely aligned with the emerging research on the theory of constructivism.
This theory has significant implications for both instruction and assessment, which are considered by some to be two sides of the same coin. Constructivism is a key underpinning of the National Science Education Standards.

Constructivism is the idea that learning is an active process of building meaning for oneself. Thus, students fit new ideas into their already existing conceptual frameworks. Constructivists believe that the learner's preconceptions and ideas about science are critical in shaping new understanding of scientific concepts. Assessment based on constructivist theory must link the three related issues of student prior knowledge (and misconceptions), student learning styles (and multiple abilities), and teaching for depth of understanding rather than for breadth of coverage. Meaningful assessment involves examining the learner's entire conceptual network, not just focusing on discrete facts and principles.

The Purpose of Assessment
Critical to educators is the use of assessment to both inform and guide instruction. Using a wide variety of assessment tools allows a teacher to determine which instructional strategies are effective and which need to be modified. In this way, assessment can be used to improve classroom practice, plan curriculum, and research one's own teaching practice. Of course, assessment will always be used to provide information to children, parents, and administrators. In the past, this information was primarily expressed by a "grade". Increasingly, this information is being seen as a vehicle to empower students to be self-reflective learners who monitor and evaluate their own progress as they develop the capacity to be self-directed learners. In addition to informing instruction and developing learners with the ability to guide their own instruction, assessment data can be used by a school district to measure student achievement, examine the opportunity for children to learn, and provide the basis for the evaluation of the district's science program.

Assessment is changing for many reasons. The valued outcomes of science learning and teaching are placing greater emphasis on the child's ability to inquire, to reason scientifically, to apply science concepts to real-world situations, and to communicate effectively what the child knows about science. Assessment of scientific facts, concepts, and theories must be focused not only on measuring knowledge of subject matter, but on how relevant that knowledge is in building the capacity to apply scientific principles on a daily basis. The teacher's role in the changing landscape of assessment requires a change from merely a collector of data to a facilitator of student understanding of scientific principles.

The Tools of Assessment

In the development and use of classroom assessment tools, certain issues must be addressed in relation to the following important criteria.

A. Purpose and Impact: How will the assessment be used, and how will it impact instruction and the selection of curriculum?
B. Validity and Fairness: Does it measure what it intends to measure? Does it allow students to demonstrate both what they know and are able to do?
C. Reliability: Is the data that is collected reliable across applications within the classroom, school, and district?
D. Significance: Does it address content and skills that are valued by and reflect current thinking in the field?
E. Efficiency: Is the method of assessment consistent with the time available in the classroom setting?

There is a wide range of assessments available for use in restructuring science assessment in the classroom. These types of assessments include strategies that are both traditional and alternative.
The various types of alternative assessments can be used with a range of science content and process skills, including the following general targets.

Declarative Knowledge: the "what" knowledge
Conditional Knowledge: the "why" knowledge
Procedural Knowledge: the "how" knowledge
Application Knowledge: the use of knowledge in both similar settings and in different contexts
Problem Solving: a process of using knowledge or skills to resolve an issue or problem
Critical Thinking: evaluation of concepts associated with inquiry
Documentation: a process of communicating understanding
Understanding: synthesis by the learner of concepts, processes, and skills

Assessment can be divided into three stages: baseline assessment, formative assessment, and summative assessment. Baseline assessment establishes the "starting point" of the student's understanding. Formative assessment provides information to help guide the instruction throughout the unit, and summative assessment informs both the student and the teacher about the level of conceptual understanding and performance capabilities that the student has achieved. The wide range of targets and skills that can be addressed in classroom assessment requires the use of a variety of assessment formats. Some formats, and the stages of assessment in which they most likely would occur, are shown in the table.

ASSESSMENT FORMATS

Format | Nature/Purpose | Stage
Baseline Assessments | Oral and written responses based on individual experience; assess prior knowledge | Baseline
Paper and Pencil Tests | Multiple choice, short answer, essay, constructed response, written reports; assess students' acquisition of knowledge and concepts | Formative
Embedded Assessments | Assess an aspect of student learning in the context of the learning experience | Formative
Oral Reports | Require communication by the student that demonstrates scientific understanding | Formative
Interviews | Assess individual and group performance before, during, and after a science experience | Formative
Performance Tasks | Require students to create or take an action related to a problem, issue, or scientific concept | Formative and Summative
Checklists | Monitor and record anecdotal information | Formative and Summative
Investigative Projects | Require students to explore a problem or concern stated either by the teacher or the students | Summative
Extended or Unit Projects | Require the application of knowledge and skills in an open-ended setting | Summative
Portfolios | Assist students in the process of developing and reflecting on a purposeful collection of student-generated data | Formative and Summative

It is clear that different kinds of information must be gathered about students by using different types of assessments. The types of assessments that are used will measure a variety of aspects of student learning, conceptual development, and skill acquisition and application. The use of a diverse set of data-collection formats will yield a deeper and more meaningful understanding of what children know and are able to do, which is, after all, the primary purpose of assessment.

William Badders is an elementary science teacher in the Cleveland Public Schools in Cleveland, Ohio, and a DiscoveryWorks author.

Copyright © 2000 Houghton Mifflin Company. All Rights Reserved.
http://www.eduplace.com/science/profdev/articles/badders.html