
Overview of Assessment Ch. 3



  1. Using What You Know: Overview of Assessment
     Have you ever had the feeling, after taking a test, that the test was not a valid indicator of your knowledge, that you really knew more but the test did not allow you to show it, or that the test seemed unfair because it asked questions about topics not covered in class? This chapter looks at some basic principles of assessment. It also looks at a number of different kinds of tests and the ways in which tests and other assessment devices can be chosen and used to provide the most accurate and useful information.
  2. Anticipation Guide
     Read each of the following statements. Put a check under "Agree" or "Disagree" to show how you feel about each one. If you can, discuss your responses with classmates.
     • In assessment, how a student gets an answer is more important than whether the answer is right or wrong. (Agree ___ / Disagree ___)
     • Most tests do not yield useful information because they distort the reading/writing process. (Agree ___ / Disagree ___)
     • In general, informal tests are better than formal ones. (Agree ___ / Disagree ___)
     • One of the best ways to assess a student is to teach him or her in a weak area and see how much and how well she or he learns. (Agree ___ / Disagree ___)
     • Time spent assessing low-achieving readers would be better spent instructing them. (Agree ___ / Disagree ___)
  3. Principles of Effective Assessment
     • Although assessment and instruction are placed in separate sections in this text, in practice the two are blended. Assessment should be an integral part of all instruction.
     • Standards for assessment endorsed by the International Reading Association and the National Council of Teachers of English (Joint Task Force on Assessment, 1994) stress that the primary purpose of assessment is to improve teaching and learning.
  4. Dynamic Assessment
     • Assessment must reflect changing academic demands as students move up through the grades and encounter higher-level comprehension and study tasks. Dynamic assessment fits in with the concept of response to intervention discussed in Chapter 1.
     • Under No Child Left Behind (NCLB), all students except for the 1 percent who have been excluded because of serious cognitive deficits must be assessed in terms of grade-level standards.
     • Zone of proximal development: the difference between what students can do on their own and what they can do under the guidance of an adult or more knowledgeable peer.
  5. Administering a Dynamic Assessment
     • Assisted testing is an easy-to-apply form of dynamic assessment in which students are given cues or prompts to see how much help they need in order to respond correctly (Johnson, 1993).
     • In assisted testing you ask: How much help, and what kind of help, do I have to provide in order for a student to perform successfully? You start off by giving a little assistance and then increase it until the student can respond correctly.
  6. Levels of Knowledge
     • Finding a student's knowledge level provides a realistic starting point. Often we assume that problem learners have no knowledge in a particular area, and we waste time reteaching what they already know.
  7. Trial Teaching
     • Assessment should emphasize students' strengths so that these provide a foundation for instruction.
     • See Chapter 6 for more information on trial teaching.
  8. Dynamic Assessment Analysis
     • The Diagnostic Assessments of Reading (Riverside) is accompanied by online Trial Teaching Strategies, a series of brief lessons that can be matched to students' diagnostic profiles and that link instruction to assessment.
     • Diagnosis does not stop with dynamic or assisted testing or even trial teaching. It is an ongoing process. As Doris Johnson (1993) notes, "It goes on forever." In assessment, you create a hypothesis and evaluate it through testing, including dynamic testing, observation, trial teaching, and careful monitoring of the student's performance. In a sense, every lesson that you teach should be a trial or diagnostic one.
  9. Essential Steps in the Assessment Process
     • Step 1: Establish an estimate of the levels on which the students are operating.
     • Step 2: Gather and evaluate information about students' reading/writing strengths and weaknesses.
     • Step 3: Assess and evaluate the students' teaching-learning situation. Through dynamic testing and trial teaching, determine under what circumstances students learn best. Also assess the home situation.
     • Step 4: Evaluate materials used in the students' program.
     • Step 5: Integrate information and design a long-term program.
     • Step 6: Continually assess and evaluate the program and make modifications as necessary.
     In general, you will be asking the following questions:
     • On what levels are the students functioning?
     • What is the students' potential for growth?
     • What are the students' strengths and weaknesses in reading and writing?
     • What are the students' most immediate or most essential needs in reading and writing?
     • Under what circumstances and in what setting would these students learn best?
     • What would be the most effective materials for these students?
     • Are there any physical, psychological, social, or other factors that need to be considered?
     • How might the home, larger community, and school work together to help the students?
  10. Authentic Assessment
      • Every child can learn, given the right kind of instruction, materials, tasks, and situation. The purpose of assessment is to determine the optimal learning circumstances for a particular student.
  11. Norm-Referenced Tests
      • Group tests of reading and writing generally fall into one of two categories: norm-referenced or criterion-referenced. In norm-referenced tests, which are often referred to as standardized tests, students are compared with a norm group, which is a sample of others who are in the same grade or are the same age. The score indicates whether a student's performance is average, above average, or below average compared to the norm group. Scores are commonly reported in one or more of the following ways.
      • Norm-referenced test: the performance of students is compared to that of a norming or sample group.
      • Percentile rank: the most widely used score for norm-referenced tests of reading and writing. Percentile rank is a measure of comparative standing. If students progress at the same rate, their percentile ranks stay the same. To move up to a higher percentile, they must make a better-than-average gain.
  12. Norm-Referenced Tests (continued)
      • Raw score: the total number correct. It has no meaning until transformed into a percentile rank, grade equivalent, or other score.
      • Percentile rank: indicates where a student's score falls on a ranking of percentages from 1 to 99.
      • Grade-equivalent scores characterize performance as being equivalent to that of other students in a particular grade. The International Reading Association opposes the use of grade-equivalent scores because they are open to misinterpretation.
      • Normal-curve equivalents (NCEs) place students on a scale of 1 through 99.
      • The term stanine is a combination of the words standard and nine and describes a nine-point scale.
      • Scaled scores are a continuous ranking of scores from the lowest levels of a series of norm-referenced tests through the highest, from kindergarten or first grade through high school.
      • DRP units are used to report performance on the Degrees of Reading Power tests. DRP units range from 15 for the easiest materials to 85 for the most difficult reading material. The advantage of DRP units is that the same type of measurement is used to indicate both students' reading levels and the difficulty level of reading material.
      • Lexile scores are also used for reporting test scores and the difficulty level of texts. The Lexile framework is a scale from 200 to 1700, with 200 being very easy reading material (about mid-first-grade level) and 1700 being very difficult reading material of the type found in scientific journals.
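Several of these norm-referenced score types (percentile ranks, NCEs, stanines) are fixed transformations of how far a student's score falls from the norm-group mean in standard-deviation units (a z-score). As a rough illustration only, not something drawn from the text, the standard conversions can be sketched in Python, assuming normally distributed scores; the function names here are invented for the sketch:

```python
from math import erf, sqrt

def z_to_percentile(z):
    """Cumulative normal probability expressed as a percentile rank (1-99)."""
    pct = 50 * (1 + erf(z / sqrt(2)))
    return min(99, max(1, round(pct)))

def z_to_nce(z):
    """Normal-curve equivalent: mean 50, SD about 21.06, clipped to 1-99."""
    return min(99, max(1, round(50 + 21.06 * z)))

def z_to_stanine(z):
    """Stanine: nine-point scale with mean 5 and SD 2, clipped to 1-9."""
    return min(9, max(1, round(5 + 2 * z)))

# A student scoring one standard deviation above the norm-group mean:
print(z_to_percentile(1.0), z_to_nce(1.0), z_to_stanine(1.0))  # 84 71 7
```

This makes the slide's point concrete: a percentile rank of 84, an NCE of 71, and a stanine of 7 are three reports of the same comparative standing, not three different results.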
  13. Criterion-Referenced Tests
      • Criterion-referenced test: student performance is measured against a standard. A typical standard of performance on a criterion-referenced comprehension test is 75 percent.
      • One form of criterion-referenced assessment is the benchmark, a description of a key task that students are expected to perform. For instance, in one intervention program for struggling readers, the benchmark is that they be able to read a children's book entitled A Kiss for Little Bear (Minarik, 1959) (Hiebert, 1994). Benchmarks need not be tied to a specific book but might be stated in more general terms:
      • Uses context and phonics cues to decode difficult words.
      • Can read fourth-grade material and retell the main events or details in the selection.
  14. Survey versus Diagnostic Tools
      • Reading tests can also be categorized as survey or diagnostic tools. Survey tests typically provide an overview of general comprehension and word knowledge. Diagnostic tests assess a number of areas, or assess key areas in greater depth. The Stanford Diagnostic Reading Test, one of the best known of the group diagnostic tests, assesses comprehension, reading or listening vocabulary, word-analysis skills, and, at higher levels, the ability to scan. A list of survey and diagnostic tests is presented in Tables 3.1 and 3.2.
      • Standardized test: assessment tasks and administration are carefully specified so that anyone taking the test does so under similar conditions. The term standardized test is also used to mean a norm-referenced test.
  15. Formal versus Informal Tests
      • Tests can also be categorized as formal or informal. Formal tests may be standardized: they are designed to be given according to a standard set of circumstances. These tests have sets of directions, which are to be followed exactly. They may also have time limits. All norm-referenced tests are standardized. The advantage of formal standardized tests is that they typically have been constructed with care and tried out on hundreds of thousands of students.
  16. Informal Tests
      • Informal tests generally do not have a set of standard directions, so there is a degree of flexibility in how they are given. In fact, the main advantage of informal tests is their flexibility. They may be designed to assess almost any skill or area and may be tailored for any population. Informal tests are typically constructed by teachers. Their disadvantage is that they may not be constructed with sufficient care, and their reliability and validity may be unknown. One of the most widely used assessment devices in the field of literacy is the informal reading inventory, which is explored in the next chapter.
      • Summative assessment summarizes students' progress at the end of a unit or semester and is administered after learning has taken place.
      • Formative assessment is used to inform instruction. It takes place during learning.
  17. Formative versus Summative Assessment
      • Assessment is summative or formative. As noted earlier, summative assessment summarizes students' progress at the end of a unit, a semester, or some other period and is administered after learning has taken place. Norm-referenced and high-stakes tests are generally summative. Formative assessment is ongoing and is used to inform instruction.
      • Assessing for learning (using formative assessments): the purpose of assessing for learning is to obtain information about students and then provide the instruction they need.
      • Assessing to learn begins with a clear explanation of the standards students are expected to meet. The classroom teacher or tutor might need to break down the state standards into a curriculum map or series of steps that, if followed, will lead to achieving the standard. The standard is expressed in terms that the students can understand.
  18. Assessing for Learning: Using Formative Assessments (continued)
      • Formative assessment can be powerful. Black and Wiliam (1998) found that its use increased average student performance by as much as 24 percentile points. Formative assessment was especially helpful for struggling learners: "While formative assessment can help all pupils, it yields particularly good results with low achievers by concentrating on specific problems with their work and giving them a clear understanding of what is wrong and how to put it right."
  19. Evaluating Assessment Devices
      • Reliability: the consistency of an assessment device; the degree to which the device would yield similar results if given again to the same person or group.
      • Validity: the degree to which an assessment device measures what it is intended to measure; also, the degree to which the results can be used to make an educational decision.
  20. Evaluating Assessment Devices (continued)
      • Reliability: For a test, reliability means that if students retook the test, they would get approximately the same score. For an observation guide, it means that if two or three observers rated the same student at the same time, their ratings would be similar.
      • Validity: Validity means that a device measures what it says it measures, such as vocabulary, comprehension, rate of reading, attitude toward reading, and so forth. It also means that the device will provide information that will be useful in making an instructional decision.
  21. Validity
      • Correlation coefficient: a statistical measure that expresses in mathematical terms the degree to which two variables are related.
      • Construct validity is the degree to which a test measures a theoretical trait or construct, such as critical reading, learning ability, or phonological awareness.
      • Content or curricular validity is the degree to which the content of a test reflects reading skills or tasks as they are taught in the schools. Many of the national standardized tests are based on the content standards adopted by major professional organizations such as the International Reading Association, on state standards, and on the content of basal readers and other materials used to teach literacy.
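The correlation coefficient mentioned above is also how test-retest reliability is usually estimated: correlate scores from two administrations of the same test. A minimal sketch in Python, using the standard Pearson formula; the score lists are hypothetical, invented for the example:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical raw scores from two administrations of the same reading test:
first  = [42, 55, 61, 48, 70, 66]
second = [45, 52, 64, 50, 68, 69]
print(round(pearson_r(first, second), 2))  # 0.97
```

A coefficient near 1.0, as here, means students kept roughly the same relative standing on both administrations, which is what a "reliable" test promises.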
  22. Standard Error of Measurement
      • In judging the quality of a test, it is also important to know the standard error of measurement (SEM). The SEM is a statistical estimate of the amount that a test score might vary if the test were given again and again. Although tests yield a particular score, that score is best interpreted as falling within a band of possible scores.
      • Standard error of measurement (SEM): an estimate of the difference between the obtained score and what the score would be if the test were perfect.
  23. Usefulness and Fairness
      • Ultimately, assessment measures should provide information that can be used to foster students' learning. Results from assessments should not be used to convey a sense of failure and discourage effort. Instead, results should be used to promote students' growth and sense of self-efficacy.
      • Tests, of course, should also be fair. As the Joint Task Force on Assessment (1994) notes, "Because traditional test makers have all too frequently designed assessment tools reflecting narrow cultural values, students and schools with different backgrounds and concerns often have not been fairly assessed." Test bias can take many forms. A test can be biased on the basis of geography, gender, socioeconomic status, ethnicity, or race.
  24. Assessing English Language Learners
      • Under the No Child Left Behind Act of 2001, English language learners (ELLs) who have been in U.S. schools for at least 10 months are required to be assessed in English reading. ELLs are also tested annually to measure their proficiency in English.
      • Language(s) spoken at home
      • Educational background
      • Reading and writing activities engaged in
      • Favorite books in first language
      • Proficiency in speaking first language
      • Proficiency in speaking English
      • Proficiency in reading and writing in first language
      • Proficiency in reading and writing in English
  25. Functional-Level Assessment
      • When group tests are used, struggling readers are often assessed unfairly. It is a widespread practice to administer the same norm-referenced or criterion-referenced test to an entire class, even though there may be a wide range of reading ability in that class. For instance, a seventh grader reading on a second-grade level would find a typical seventh-grade test to be extremely frustrating.
      • How can you tell if a test is too easy or too hard? A test is probably too hard if a student fails to obtain a score that is better than he would have gotten if he had merely guessed.
      • The solution is to assess students on the level at which they are functioning (Gunning, 1982). This might mean giving a student an out-of-level test. For a seventh grader reading on a second-grade level, this means giving the student a test that actually has second-grade material on it.
      • Another possible solution is to select a test that assesses a wide range of reading levels. Administering the right-level test means that you need to know students' approximate reading levels.
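The "better than merely guessed" check above is simple arithmetic: on a multiple-choice test, the expected chance score is the number of items divided by the number of answer choices per item. A minimal sketch, with hypothetical test figures and invented function names:

```python
def chance_score(num_items, choices_per_item):
    """Expected number correct from random guessing on a multiple-choice test."""
    return num_items / choices_per_item

def probably_too_hard(raw_score, num_items, choices_per_item):
    """Flag a raw score that is no better than the expected guessing score."""
    return raw_score <= chance_score(num_items, choices_per_item)

# Hypothetical 40-item test with four answer choices per item:
print(chance_score(40, 4))           # 10.0
print(probably_too_hard(9, 40, 4))   # True  -> consider an out-of-level test
print(probably_too_hard(18, 40, 4))  # False
```

A student who answers 9 of 40 four-option items correctly has done no better than guessing, so the score tells you the test was too hard, not what the student can read.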
  26. High-Stakes Tests
      • High-stakes tests, such as college entrance exams, have long been part of education. However, the number and uses of high-stakes tests have increased dramatically. High-stakes tests are now used in many school districts to determine whether young children pass or fail.
  27. Reporting to Parents
      • Parents are understandably concerned about their children's performance. When discussing assessment results, focus on the child's strengths. Also avoid comparisons with other children, if possible. Discuss the student's performance in light of what he might reasonably be expected to do. If the student has below-average scores, stress signs of progress.
  28. Summary
      • Assessment is an interactive process and should consider the reader, the text, the techniques being used, the reading or writing task involved, and the context in which the reading or writing is performed. Assessment should also be dynamic. Through instruction provided after initial testing, or through trial or assisted teaching, it should attempt to discover what the student's true learning potential is and how the student learns best.