
A MIXED-METHOD ANALYSIS OF THE IMPACT OF HIGH STAKES TESTING ON ENGLISH LANGUAGE LEARNERS IN MAJOR URBAN HIGH SCHOOLS IN TEXAS by Arthur L. Petterway, PhD



PhD Committee Members - Dr. M. Paul Mehta, Dissertation Chair; Committee Members: Dr. William Allan Kritsonis, Dr. Douglas S. Hermond, Dr. David E. Herrington, Dr. Camille Gibson



  1. A MIXED-METHOD ANALYSIS OF THE IMPACT OF HIGH STAKES TESTING ON ENGLISH LANGUAGE LEARNERS IN MAJOR URBAN HIGH SCHOOLS IN TEXAS. ARTHUR L. PETTERWAY. Submitted to the Graduate School, Prairie View A&M University, in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY. May 2007
  2. A MIXED-METHOD ANALYSIS OF THE IMPACT OF HIGH STAKES TESTING ON ENGLISH LANGUAGE LEARNERS IN MAJOR URBAN HIGH SCHOOLS IN TEXAS. A Dissertation by ARTHUR L. PETTERWAY. Approved as to style and content by: Dr. M. Paul Mehta, Chair; Dr. William Allan Kritsonis; Dr. Douglas S. Hermond; Dr. David E. Herrington; Dr. Camille Gibson; Dr. M. Paul Mehta, Dean, The Whitlowe R. Green College of Education; Dr. William Parker, Dean, Graduate School. May 2007
  3. ABSTRACT. A Mixed-Method Analysis of the Impact of High Stakes Testing on English Language Learners in Major Urban High Schools in Texas. April 2007. Arthur L. Petterway: B.A., Dillard University; M.Ed., Prairie View A&M University. Dissertation Chair: Dr. M. Paul Mehta. Ample research has been conducted on the intrinsic validity of standardized assessments and on the factors affecting the assimilation and integration of English language learners (ELLs). The reliability of these assessments as a universal tool to measure student learning, and as a basis for determining school performance, needed closer examination. The purpose of this study was to determine the impact of high-stakes testing on ELLs, examined in both the quantitative and qualitative dimensions of the study. Data obtained from the Texas Education Agency (TEA) were used to determine whether there was a relationship between the percentage of ELLs enrolled in a school and the percentage of all students passing the 10th grade Texas Assessment of Knowledge and Skills (TAKS) tests in the core areas of English Language Arts and Mathematics given in 2003, 2004, 2005, and 2006. The qualitative aspect of this study explored what certified English as a Second Language (ESL) teachers, non-certified ESL teachers who teach ELLs,
  4. administrators, and district ESL personnel viewed as the impact that high-stakes standardized assessments had on ELLs, ESL curriculum, and instruction in ESL classrooms. This study determined the impact of high-stakes testing on ELLs using the explanatory design of mixed-method analysis. Data from 173 major urban high schools were obtained from the Texas Education Agency (TEA). It was determined through Pearson correlation computations, using the Statistical Package for the Social Sciences (SPSS), that there was a significant relationship between the percent of ELLs enrolled in a school and the percent of all students passing the 10th grade TAKS tests in English Language Arts and Mathematics. In the qualitative portion of the study, the views and opinions of district ESL personnel were gathered. Principals, assistant principals, and ESL and non-ESL teachers took part in an online, open-ended questionnaire; one-on-one interviews; and focus groups. The focus groups addressed the purposes of statewide testing; its intended consequences; the problems and changes created by TAKS; and recommendations to improve ESL curriculum and instruction. The results of the study affirmed the expected outcome that a significant relationship existed between the percentage of ELLs enrolled in a school and the percentage of all students passing the 10th grade TAKS tests in both core areas of English Language Arts and Mathematics. The regression analysis predicted that as the percentage of
  5. ELLs in a school increased, the performance on the statewide high-stakes testing, in terms of all students passing the 10th grade TAKS tests, decreased. Respondents of the study considered TAKS a tool to gauge knowledge in the different core areas. English language learners were expected to have at least average scores on TAKS. There was a difference between the expected and actual results: respondents observed dismal or failing performance by ELLs on TAKS, evident in the high failure rate of ELLs in their respective schools. A higher dropout rate and a lower graduation rate among ELLs were problems encountered due to TAKS. Respondents favored a different test for ELLs, possibly given at a later date, after ELLs had studied in the country for at least several years. Respondents believed that interventions were needed to help ELLs perform better. Both the school and the home, together with the community, have to be involved in preparing ELLs for their present and future roles in American society. Results of this study may provide valuable data to district and school administrators to develop strategies that will improve the performance of ELLs on statewide high-stakes testing and to develop assessments that truly measure learning without the nullifying effect of linguistic and cultural bias. The study may also help to enhance the reliability of standardized assessments as a tool to determine accountability for student performance.
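The quantitative analysis summarized above, a Pearson correlation between a school's percentage of ELLs and its overall TAKS passing rate, followed by a simple linear regression, can be sketched in Python. This is an illustrative sketch only: the school-level figures below are hypothetical, not the study's actual TEA data, and the study itself used SPSS rather than Python.

```python
# Illustrative sketch (standard library only) of the analysis described above:
# Pearson correlation between percent ELL enrollment and percent of all
# students passing the 10th grade TAKS, then a simple linear regression.
# All data values below are hypothetical.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def linear_fit(xs, ys):
    """Least-squares intercept and slope for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical school-level data: percent ELL enrollment and
# percent of all students passing the 10th grade TAKS test.
pct_ell = [2.0, 5.5, 9.0, 14.5, 21.0, 30.0]
pct_pass = [88.0, 84.0, 80.5, 74.0, 69.5, 61.0]

r = pearson_r(pct_ell, pct_pass)
a, b = linear_fit(pct_ell, pct_pass)
print(f"r = {r:.3f}; passing ~= {a:.1f} {b:+.2f} * (percent ELL)")
```

A negative r and a negative slope b would correspond to the study's finding that the overall passing rate declines as the share of ELLs rises.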
DEDICATION

This dissertation is dedicated in humble gratitude to my Lord and Savior, Jesus Christ, in whom I move, trust, and have my being. Through HIS divine wisdom and purpose, HE gave me my parents:

Bob Stevenson Petterway
November 23, 1923 - September 28, 1992

and

Myrtice Lee Petterway
February 10, 1927 - February 15, 1959

They are now in Glory with HIM sharing this divine blessing that HE has bestowed upon me.
ACKNOWLEDGEMENTS

I would like to acknowledge those without whom this work would not have been possible. First and foremost, I wish to acknowledge the blessings bestowed upon me by my Lord and Savior Jesus Christ. Among these blessings are the kind souls I mention here.

First, I would like to acknowledge my dissertation committee: Dr. M. Paul Mehta, Dr. William A. Kritsonis, Dr. Douglas Hermond, Dr. David Herrington, and Dr. Camille Gibson. They have been thorough, fair, understanding, demanding, and, most of all, dedicated to academic excellence in all phases of this work. I would especially like to thank Dr. M. Paul Mehta for taking time, along with his duties as the Dean of Education, to carry the baton of being my committee chairman, and Dr. Robert Marshall, my former committee chairman (who would not let me fall off of my bicycle and taught me how to eat an elephant), for the long hours they have spent and the endless patience they have shown as they have guided me through this endeavor. I also wish to thank Dr. Kritsonis for his passion for making sure that I get published before I receive my PhD and for taking on the duties of head cheerleader for this project. Thanks go to Dr. Hermond for serving as lead statistician and for not allowing me to attempt to eat the whole pie, limiting me to a small slice. Many thanks also to Dr. Herrington, who guided me in tying up several loose ends. I am also grateful to Dr. Gibson for taking time out of her very busy schedule to offer her support and encouragement.
Additional thanks go to Dr. William Parker, Dean of Graduate Studies, for believing in me and giving me the opportunity to prove myself, beginning with my pursuit of my Master's degree.

I also gratefully acknowledge all of my professors and thank them for the wisdom and knowledge they have so generously shared. I would further like to thank all of the faculty and staff of Prairie View A&M University who have contributed to my achievement in countless ways. I also wish to acknowledge the unwavering encouragement of my student cohort as we shared the joy and pain of this incredibly challenging pursuit.

I would like to thank my principal, Mrs. Linda Llorente, for her understanding and support as I pursued this dream. She has been abundantly patient and understanding of the demands this work has placed on my time and energy. I also wish to acknowledge the contributions of my peers and colleagues at Austin High School. Although it is virtually impossible to name all who have contributed to the completion of this work, I feel that there are several who must be thanked by name. I would like to thank Andy Lamboso and Rhodora Maligad, who helped with proofreading and typing, along with Kathy Koch, Betty Shaw, Debbie Kubiak, and Raul Asoy, who helped with proofreading and editing.

Finally, I wish to acknowledge the precious prayers offered by the righteous to strengthen and uphold me through this challenging time,
with a special thanks to my most avid prayer warrior, my aunt, Mrs. Sina Gunnels, and her chicken with a snuff cup under its wing.
TABLE OF CONTENTS

                                                                  Page
ABSTRACT ......................................................... iii
DEDICATION ........................................................ vi
ACKNOWLEDGEMENTS ................................................. vii
TABLE OF CONTENTS ................................................. ix
LIST OF TABLES ................................................... xiv
CHAPTER I. INTRODUCTION ............................................ 1
    Statement of the Problem ....................................... 3
    Purpose of the Study ........................................... 6
    Research Questions ............................................. 7
        Quantitative ............................................... 7
        Null Hypothesis One ........................................ 7
        Null Hypothesis Two ........................................ 7
        Qualitative ................................................ 7
    Description of the Research Design ............................. 8
    Assumptions .................................................... 9
    Limitations of the Study ....................................... 9
    Delimitations of the Study .................................... 10
    Definition of Terms ........................................... 10
    Significance of the Study ..................................... 12
    Organization of the Study ..................................... 13
CHAPTER II. REVIEW OF LITERATURE .................................. 15
    No Child Left Behind (NCLB) ................................... 15
        Historical Perspective .................................... 15
        Description of the Key Factors ............................ 18
        Expectations for Parents .................................. 20
        Response to NCLB .......................................... 21
    Adequate Yearly Progress (AYP) ................................ 22
        Purpose and Support to NCLB ............................... 22
        Changes and Updates ....................................... 24
    AYP and Limited English Proficient (LEP) Students ............. 30
        Definition of English Language Learners (ELLs) ............ 30
        Issues and Other Considerations of LEP .................... 34
    High Stakes/Statewide Testing ................................. 43
        Principles of Testing Programs ............................ 44
        Accountability in Testing ................................. 50
        Effects of High Stakes Testing on Student Motivation ...... 52
        Other Considerations of Assessment on Testing ............. 56
    Related Studies ............................................... 59
    Summary ....................................................... 61
CHAPTER III. METHODOLOGY .......................................... 63
    Introduction .................................................. 63
    Research Questions ............................................ 64
        Quantitative .............................................. 64
        Null Hypothesis One ....................................... 64
        Null Hypothesis Two ....................................... 65
        Qualitative ............................................... 65
    Research Methods .............................................. 66
    Research Design ............................................... 67
        Quantitative .............................................. 68
        Qualitative ............................................... 68
    Pilot Study ................................................... 68
    Population and Sample ......................................... 70
        Quantitative .............................................. 70
        Qualitative ............................................... 71
    Instrumentation ............................................... 72
        Instruments ............................................... 72
        Validity .................................................. 74
        Reliability ............................................... 74
    Research Procedures ........................................... 75
        Quantitative .............................................. 75
        Qualitative ............................................... 75
    Data Collection and Recording ................................. 76
        Quantitative .............................................. 76
        Qualitative ............................................... 76
    Data Analysis ................................................. 78
        Quantitative .............................................. 78
        Qualitative ............................................... 79
    Summary ....................................................... 80
CHAPTER IV. ANALYSIS OF DATA ...................................... 83
    Findings ...................................................... 85
        Quantitative Research Question ............................ 85
        Null Hypothesis One ....................................... 86
        Null Hypothesis Two ....................................... 86
        Qualitative Research Question ............................ 100
    Discussion ................................................... 141
    Summary ...................................................... 145
CHAPTER V. SUMMARY, CONCLUSIONS, IMPLICATIONS, AND
    RECOMMENDATIONS .............................................. 147
    Summary ...................................................... 147
    Conclusions .................................................. 150
    Implications ................................................. 151
    Recommendations for Further Study ............................ 154
REFERENCES ....................................................... 157
APPENDICES ....................................................... 177
    Appendix A IRB ............................................... 178
    Appendix B Consent Form ...................................... 183
    Appendix C Interview Questions ............................... 186
    Appendix D On-Line Questionnaire ............................. 191
    Appendix E Letter to Participants ............................ 196
    Appendix F Request for Extant Data from T.E.A. ............... 198
VITA ............................................................. 201
LIST OF TABLES

Table                                                             Page
4.1.1   Means and Standard Deviations of ELLs Enrolled in School
        and All Students Passing the 2003 10th Grade TAKS for
        English Language Arts and Mathematics ..................... 87
4.1.2   Means and Standard Deviations of ELLs Enrolled in School
        and All Students Passing the 2004 10th Grade TAKS for
        English Language Arts and Mathematics ..................... 87
4.1.3   Means and Standard Deviations of ELLs Enrolled in School
        and All Students Passing the 2005 10th Grade TAKS for
        English Language Arts and Mathematics ..................... 88
4.1.4   Means and Standard Deviations of ELLs Enrolled in School
        and All Students Passing the 2006 10th Grade TAKS for
        English Language Arts and Mathematics ..................... 88
4.1.1.1 Comparison of Results in 10th Grade English Language
        Arts TAKS ................................................. 89
4.1.1.2 Comparison of Results in 10th Grade Mathematics TAKS ...... 90
4.2.1   Pearson Correlation: 2003 10th Grade TAKS for English
        Language Arts and Mathematics ............................. 91
4.2.2   Pearson Correlation: 2004 10th Grade TAKS for English
        Language Arts and Mathematics ............................. 91
4.2.3   Pearson Correlation: 2005 10th Grade TAKS for English
        Language Arts and Mathematics ............................. 92
4.2.4   Pearson Correlation: 2006 10th Grade TAKS for English
        Language Arts and Mathematics ............................. 92
4.2.5   Coefficients for Percentage of All Students Passing the
        2003 10th Grade English Language Arts TAKS ................ 93
4.2.6   Coefficients for Percentage of All Students Passing the
        2003 10th Grade Mathematics TAKS .......................... 94
4.2.7   Coefficients for Percentage of All Students Passing the
        2004 10th Grade English Language Arts TAKS ................ 95
4.2.8   Coefficients for Percentage of All Students Passing the
        2004 10th Grade Mathematics TAKS .......................... 96
4.2.9   Coefficients for Percentage of All Students Passing the
        2005 10th Grade English Language Arts TAKS ................ 97
4.2.10  Coefficients for Percentage of All Students Passing the
        2005 10th Grade Mathematics TAKS .......................... 98
4.2.11  Coefficients for Percentage of All Students Passing the
        2006 10th Grade English Language Arts TAKS ................ 99
4.2.12  Coefficients for Percentage of All Students Passing the
        2006 10th Grade Mathematics TAKS ......................... 100
4.3     Distribution of Respondents by Gender .................... 102
4.4     Distribution of Respondents by Age ....................... 102
4.5     Distribution of Respondents by Professional Position ..... 103
4.6     Distribution of Respondents by Highest Degree Earned ..... 103
4.7     Distribution of Respondents by Years of Experience in
        Education ................................................ 104
4.8     Distribution of Respondents by Certifications Held ....... 105
4.9     Why is TAKS Given as a Statewide Test to ELLs? ........... 107
4.10    What are the Anticipated Results of Statewide Testing
        for ELLs? ................................................ 111
4.11    What are the Actual Results of Statewide Testing
        for ELLs? ................................................ 115
4.12    What are the Intended Consequences of TAKS for ELLs? ..... 118
4.13    What Has Happened to ELLs Because of TAKS? ............... 122
4.14    What Problems Have Occurred for ELLs Due to TAKS? ........ 126
4.15    What Changes Have Occurred for ELLs Due to TAKS? ......... 129
4.16    What Recommendations are Suggested for Improvement of
        ELLs' Performance on TAKS? ............................... 132
4.17    What Recommendations, with Greatest Value, are Offered
        for ELLs' Success on TAKS? ............................... 136
CHAPTER I

INTRODUCTION

For years, English language learners (ELLs) have been subjected to educational systems that did not expect them to rise to the same standards as their native English-speaking peers (Winograd, 2002). Although it can take several years to acquire the second language skills needed to be successful in school (Collier, 1989), too often English language learners born in the U.S. are still in English as a second language (ESL) classes and far behind their grade level peers in the content areas by the time they reach high school (Freeman & Freeman, 2002).

One factor that should be considered in this failure to reach grade level requirements is that language may constitute an element of self-identity. It is possible that minority groups are insistent on retaining their ethnic language as their "first." English proficiency then would be a mere elective instead of an indispensable learning tool. If this is the case, schools are being held accountable for the consequences of a socio-cultural phenomenon that is beyond their limited powers to address.

Public schools are under close scrutiny. Since they are supported by public funds, there is an increasing demand for accountability. The No Child Left Behind Act (NCLB) now requires all students to be accounted for in any state's assessment system, although that has not always been the case (Abedi, 2004). School districts are now required to
clearly demonstrate that they deserve, and effectively utilize, public funding. In itself, this is not a disturbing trend. Institutions that are wholly or partly supported by public funds should be accountable. This is essentially a consequence of democracy. A government that is created by, and for, the people is unlike an aristocracy, which is not required to serve a constituency beyond the guarantee of protection from marauders or invading armies. The U.S. system of government empowers the state to undertake measures that guarantee the common good. This goes beyond the guarantee of physical safety, since the term "common good" has a wider application and implies a calculated sensitivity to every citizen's pursuit of happiness. While education is not categorized as a fundamental right, it is perceived as primary among a bundle of values essential for every person's quest for self-fulfillment and happiness. This explains why there is little argument about whether the government should be involved in education at all, or whether this is an endeavor better left to the private sector (Abedi, 2004).

The government's involvement in education opens a wide avenue for the analysis and evaluation of results. In today's world, it is not enough that public schools have adequate facilities, although this constitutes one level of analysis. It is important that schools are safe and teachers are qualified, although in the hierarchy of priorities considered for evaluating schools, these outcomes are not standard. Schools are judged principally based on the amount of learning that takes place in
their classrooms. As learning is an internal act, the evidence of learning is analyzed from scores students obtain on standardized assessments.

Institutions are now facing an ever-increasing demand for accountability. There is pressure from every conceivable corner to make public schools accountable to their stakeholders. This means that it is not enough for students to learn in school; it is equally important that learning should occur in ways that are measurable. If students are unable to demonstrate what they have learned, it is presumed that no learning took place at all. The time when public schools were allowed to operate without proven success is over. It is appropriate to inquire about the valid manifestations of success and learning, and how they may actually be measured. Cultural constructs render school rankings flawed to a certain extent since they become less accurate as a measure of the faculty and administration's performance. Instead, they become unintended indicators of the ethnicity of the students to which schools cater (Abedi, 2004).

Statement of the Problem

High stakes assessment systems are meant to bring attention to the needs of ELLs, who are most at risk of not reaching the educational goals set for them (Anderson, 2004). But what results do statewide accountability tests really produce for ELLs (Anderson, 2004)? Assessment systems usually produce both positive and negative consequences (Anderson, 2004). The positive and negative consequences
of assessments are what is called 'washback' (Alderson & Wall, 1993), or how the results of an assessment affect the stakeholders taking the test (Anderson, 2004).

While quantifiable washback effects such as increased dropout rates or increased referral to Special Education have been researched, assessment washback is more complicated than numbers alone can tell (Anderson, 2004). Students who qualify for Special Education may be allowed to take alternative assessments in lieu of state assessments such as the Texas Assessment of Knowledge and Skills (TAKS). It is interesting to note that while African-American and Hispanic students are over-represented in Special Education, about eight to nine percent of ELLs are identified as receiving Special Education services in the United States (D'Emilio, 2003; Zehler, Fleischman, Hopstock, Pendzick, & Stepherson, 2003). While these assessments are not on grade level, schools are expected to demonstrate that, based on students' scores on alternative assessments, improvement in academic performance is taking place.

Data are needed that tell us more about the full range of intended and unintended consequences occurring in schools today (Anderson, 2004). Since school rankings affect student and faculty morale, they serve more as a force for the preservation of the status quo than a force for improvement in student performance. A school that works hard to ensure that learning occurs, and that its students progress academically,
but which has a large proportion of ELLs, will risk being ranked as underperforming because the measure used to evaluate its performance is blind to this important demographic reality.

One way to get at these data is by talking with the stakeholders at the schools. Educators are the ones who deal directly with the impact of high stakes assessments, but they are overlooked in research. While teachers' opinions are often cited as anecdotal evidence that a problem exists, their expert observations often go unrecorded in any systematic way (Anderson, 2004).

Standardized assessments are a measure for holding schools accountable for student learning. At the present time, schools in Texas are ranked Exemplary, Recognized, Acceptable, or Underperforming, depending on the performance of their students on the Texas Assessment of Knowledge and Skills (TAKS). This produces a vicious cycle since exemplary schools attract the best students, who may leave underperforming schools to seek what is perceived to be a higher quality of instruction in higher ranked schools. These labels tend to have a self-fulfilling effect, or at least they make it difficult for underperforming schools to achieve higher performance scores on standardized tests, since they face the additional burden of surmounting language barriers and a history of low performance.

Related to this concern is the prevailing system of voluntary segregation in most zones and districts. Some schools have a
predominant population of White, Hispanic, or African-American students. Each of these student groups is given the same tests, and yet they have varying degrees of proficiency in the language in which the assessments are given. It begs to be asked whether these assessments, in fact, measure learning and whether they are linguistically and culturally neutral. The implication is that these students will be able to answer the test questions even if they do not have equal exposure to cultural references that may frame some of the test questions.

This study is intended to explore what educators perceive as the consequences of statewide assessment for ELLs and what they observe as actually occurring (Anderson, 2004).

Purpose of the Study

The purpose of this study was to determine the impact of high-stakes testing on ELLs. This was shown in both the quantitative and qualitative dimensions of the study. Data obtained from TEA were used to determine whether there is a relationship between the percentage of English language learners enrolled in a school and the percentage of all students passing the 10th Grade TAKS tests in the core areas of English Language Arts and Mathematics given in 2003, 2004, 2005, and 2006. To support the quantitative aspect, this study explored what certified ESL teachers, non-certified ESL teachers who teach ELLs, administrators, and district ESL personnel viewed as the impact that high stakes
standardized assessments have on ELLs, ESL curriculum and instruction, and what they observed as actually occurring.

Research Questions

Quantitative

Is there a relationship between the percentage of English language learners enrolled in a school and the percentage of all students passing the 10th grade TAKS tests in the core areas of English Language Arts and Mathematics given in 2003, 2004, 2005, and 2006?

Hypotheses

H01: There is no statistically significant relationship between the percentage of English language learners enrolled in a school and the percentage of all students passing the 10th grade TAKS tests in English Language Arts given in 2003, 2004, 2005, and 2006.

H02: There is no statistically significant relationship between the percentage of English language learners enrolled in a school and the percentage of all students passing the 10th grade TAKS tests in Mathematics given in 2003, 2004, 2005, and 2006.

Qualitative

The major question addressed by this study was: What are the anticipated and observed consequences of statewide testing, specifically TAKS, on ESL curriculum and instruction as viewed by certified ESL teachers, non-certified ESL teachers who teach ELLs, school administrators, and district ESL personnel?
This major question was explored using the following probes:

1. Why is TAKS given as a statewide test?
2. What are the intended consequences of this statewide testing? (Or what has happened because of TAKS?)
3. What problems have occurred related to or because of TAKS?
4. What changes were caused by this statewide testing?
5. What are your recommendations to improve this statewide testing?
6. What needs to be done for the ESL students to improve their performance in general and specifically for this statewide test?

Description of the Research Design

The study analyzed the issues and challenges faced by ELLs and the public schools that serve them. Quantitative data for this research were gathered from the Texas Education Agency (TEA) regarding the percentage of ELLs and the performance of 10th grade students from the major urban high schools in Texas on the Texas Assessment of Knowledge and Skills (TAKS) tests in English Language Arts and Mathematics for 2003, 2004, 2005, and 2006. Qualitative data were derived from one-on-one and focus group interviews and an on-line questionnaire focusing on respondents' views and opinions about the various ways that standardized assessments impact ELLs.
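The quantitative design described above pairs each school's percentage of ELL enrollment with its percentage of students passing TAKS, then tests the null hypotheses with Pearson correlations and fits a regression line. The study itself ran these computations in SPSS; the following is a minimal sketch of the same two computations in Python, using fabricated example figures (not the study's TEA data) for illustration only.

```python
# Illustrative sketch of the study's quantitative analysis: a Pearson
# correlation and a simple linear regression between the percentage of
# ELLs enrolled in a school and the percentage of all students passing
# a TAKS test. All numbers here are fabricated for demonstration; the
# study used extant TEA data for 173 schools, analyzed in SPSS.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_schools = 30  # hypothetical sample; the study used 173 schools

# Fabricated data with a built-in negative relationship plus noise,
# mirroring the pattern the study reports (higher ELL enrollment
# associated with lower passing rates).
pct_ell = rng.uniform(0, 40, size=n_schools)
pct_passing = 85 - 0.6 * pct_ell + rng.normal(0, 4, size=n_schools)

# Pearson correlation: tests the null hypothesis of no relationship.
r, p_value = stats.pearsonr(pct_ell, pct_passing)

# Simple linear regression: the slope predicts how the passing rate
# changes as ELL enrollment increases.
result = stats.linregress(pct_ell, pct_passing)

print(f"Pearson r = {r:.3f} (p = {p_value:.4f})")
print(f"Regression: passing = {result.intercept:.1f} "
      f"{result.slope:+.2f} * pct_ell")
```

A negative `r` with a small p-value would lead to rejecting the null hypotheses H01 and H02, and a negative slope corresponds to the study's regression finding that passing rates decline as the percentage of ELLs increases.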
Assumptions

Fraenkel and Wallen (2003) stated that an assumption is anything taken for granted rather than tested or checked. This study is no different, and the following assumptions were made: (a) that the first language of the ELLs is Spanish and they have varying degrees of fluency in the English language; (b) that the ESL curriculum is appropriate for mastery of the TAKS test by the ELLs; (c) that the on-line, open-ended qualitative questionnaire would be completed by the respondents on time; and (d) that the respondents in the focus groups would truthfully express their views and opinions regarding issues or concerns brought to the group.

Limitations of the Study

Limitations of the study included several factors, mainly the qualitative questionnaire and the manner in which respondents gave their responses. The questionnaire may have had vague questions open to more than one interpretation. The pilot study helped in streamlining the questionnaire to remove or modify such vague issues or concerns. Another limitation may have been the manner in which the respondents answered the questions. For one reason or another, they may not have truthfully answered some of the questions. The respondents may or may not have completed the questionnaire due to no ready access to a computer, or they may simply not have wanted to complete the questionnaire. These non-respondents became part of the mortality factor involved in the
study. Responses to the open-ended questions could become difficult to classify under a certain category. Classification was facilitated through the Non-numerical Unstructured Data Indexing Searching and Theorizing Vivo (NVivo) software system (Version 7.0) and by the focus group interviews, where the respondents helped determine the category of such responses.

A factor that may have been encountered in the quantitative dimension of the study was the lack of intended data for the study. Diligent efforts were made to gather data from available sources.

Delimitations of the Study

The questions for the on-line qualitative questionnaire may have been a delimitation of the study. The pilot study contributed to the improvement of the qualitative tool. Another delimitation may have been the choice of participants, especially in the focus groups. The "snowball technique" addressed this issue. Better interaction happened with added 'quality' members in the focus groups.

Qualitative data are available, and the inclusion of the quantitative aspect of the study provided a challenge and an opportunity to determine if certain factors of the study have any impact on the ELLs.

Definition of Terms

Content Standards are broad descriptions of the knowledge, skills, and understandings that schools should teach and students should acquire in a particular subject area (McLaughlin & Shepard, 1995).
English Language Learners (ELLs) is the preferred term to describe a student whose native language is other than English (Chamot & O'Malley, 1994). These students require instructional modifications, and eventually take the TAKS after two years of enrollment in the school districts.

High Stakes Assessment is an assessment in which student promotion (i.e., high school graduation) can be denied if the scores do not reflect competence (NCBE, 1997).

Limited English Proficient (LEP) refers to a student with a language background other than English, and whose proficiency in English is such that the probability of academic success in an English-only classroom is below that of an academically successful peer with an English-language background (CCSSO, 1992).

No Child Left Behind Act (NCLB) of 2001 (PL 107-110) is the reauthorization of the Elementary and Secondary Education Act (ESEA).

Opportunity-to-Learn (OTL) Standard defines the level and availability of programs, staff, and other resources sufficient to enable all students to meet challenging content and performance standards (McLaughlin & Shepard, 1995).

Performance Standards are concrete examples and explicit definitions of what students have to know and be able to do to demonstrate that such students are proficient in the skills and knowledge framed by the content standards (McLaughlin & Shepard, 1995).
Standardized Assessments include the Texas Assessment of Knowledge and Skills (TAKS) and the State and Locally-Developed Alternative Assessment (SLDAA) for students who are exempted from the TAKS. A standardized assessment is a measurement of what students know and can do (McLaughlin & Shepard, 1995).

Standards-Based Reform requires setting standards of performance in academic subject areas as a means of improving the substance of school curricula and increasing the motivation and effort of students, teachers, and school systems, thereby improving student achievement (McLaughlin & Shepard, 1995).

Washback shows how the results of an assessment affect the stakeholders taking the test (Alderson & Wall, 1993).

Significance of the Study

The expected outcome of this study may provide additional valuable data for writers and researchers regarding biases in standardized assessments, and may encourage school districts to develop assessments that truly measure learning without the nullifying effect of linguistic and cultural bias. Additionally, this study enhances the reliability of standardized assessments as a tool for determining accountability where the performance of English language learners is concerned.
Organization of the Study

Chapter I identifies the problem this study addresses: the impact of high stakes assessments on the curriculum and instruction of English language learners. It includes the hypotheses and research questions of the present study, as well as the definitions of terms valuable to the study.

Chapter II includes the review of literature about the essential conditions and factors regarding the NCLB Act, the AYP implications for concerned schools, and high-stakes, statewide assessments and the implications and challenges they present to the preparation and education of ELLs. The information reveals the difficulties that English language learners face when taking these high stakes assessments, the possible positive and negative consequences, and the possible "washback" related to the assessments.

The mixed methods study is identified and expounded in Chapter III. Quantitative data for this research were gathered from the Texas Education Agency regarding the percentage of ELLs and the performance of major urban high schools in Texas on the statewide test (Texas Assessment of Knowledge and Skills) for 2003, 2004, 2005, and 2006. Qualitative data were derived from an on-line, open-ended questionnaire and interviews that focused on the respondents' views and opinions about the varied ways standardized assessments impact English language learners.
Results of the study are presented in detail in Chapter IV. Quantitative results include the available data collected from the Texas Education Agency. Results of computations employing the Statistical Package for the Social Sciences (SPSS), Version 14.0, are shown in tabular presentations, and explanations regarding the relationships among the variables are included. Qualitative results include the participants' views and opinions on the impact of high stakes testing on English language learners and the information collected from the on-line, open-ended questionnaire and the individual and focus group interviews.

Major findings of the study are discussed in Chapter V. The impact of high stakes standardized assessments on English language learners is also summarized. Other relevant factors that influenced this study are presented, as well as recommendations for future research.
CHAPTER II

REVIEW OF LITERATURE

Key issues and concerns about the No Child Left Behind Act (NCLB) of 2001 and Adequate Yearly Progress (AYP) are major parts of this review of related literature. Included are the principles and accountability involved in high-stakes testing and the descriptions of and accommodations given to the ultimate beneficiaries of the efforts exerted by federal and state policymakers and school and district administrators: the learners, specifically the English language learners who strive to be better citizens of this country. Short descriptions of related studies on statewide testing and English language learners (ELLs) are given to show their tie-in with this study.

No Child Left Behind (NCLB)

Historical Perspective

The NCLB Act of 2001 (PL 107-110) is the reauthorization of the Elementary and Secondary Education Act (ESEA). The ESEA was first passed in 1965 with the goal of improving the U.S. educational system by providing better education for students in poverty through an increase in services to them. The ESEA provided federal funds for schools but did not require accountability in the use of those funds. In 2003, the Center on Education Policy clarified why accountability was not part of ESEA in 1965: "At that time, the federal role in education was marginal, most state education agencies had very limited authority and capabilities, and local
people were extremely wary that more federal aid would bring federal control" (p. iv).

The National Assessment of Educational Progress (NAEP) was initiated as a federal testing program at about the same time that ESEA came into existence. NAEP was tasked to report how the nation's students were performing on selected items at three grade levels: 4th, 8th, and 12th. Brennan (2004) reported that there were fears that the NAEP might become a "high-stakes federal testing program" like those found in some European countries. He explained that, "to help preclude that possibility, it was written into law that NAEP could not report scores for individual students" (p. 2). The NAEP evolved through the 1980s and early 1990s from a reporting of item scores to test scores and then, on a trial basis, to a reporting of scores that addressed achievement levels (below basic, basic, proficient, and advanced). It is currently used to confirm state NCLB testing results, which, according to Brennan, "is the de facto elevation of NAEP to a federally-mandated high-stakes testing program" (p. 9).

Through the NCLB Act, policymakers in Washington seek to raise academic achievement in the nation by requiring schools to assess all students on specified content areas and report their progress toward proficiency. The focus of NCLB is on core academic subjects as defined in the law: "The term 'core academic subjects' means English, reading or language arts, mathematics, science, foreign language, civics, and government,
economics, arts, history, and geography" (U.S. Department of Education, 2002).

The premise of NCLB is that our nation's schools are failing. Thus, the purpose of NCLB is to raise the achievement of all students and to eliminate the achievement gap among students differentiated by race, ethnicity, poverty, disability, and English proficiency. Since this Act redefines the federal role in education policy, which has traditionally been a state responsibility, it merits the attention of educators, parents, and citizens. Because the NCLB Act has an impact on the teaching and learning of the core content areas, including languages, language educators need to be informed about it.

If a roomful of educators were asked which word or phrase best sums up No Child Left Behind (NCLB), many would say accountability. Others might propose student achievement, proficiency, or raised expectations. But perhaps the most accurate word to encapsulate the United States' most ambitious federal education law, which proposes to close achievement gaps and aims for 100% student proficiency by 2014, is testing. Certainly, the focus on holding schools accountable for student achievement on standardized tests sets NCLB apart from previous versions of the law (Guilfoyle, 2006).
Description of the Key Factors

There are four key elements in the NCLB Act (Rosenbusch, 2005):

(1) Accountability. States are required to establish a definition of student proficiency in the core academic subjects of Reading/Language Arts, Mathematics, and Science through prescribed indicators and to set a timetable to bring all students in all subgroups up to the defined levels of proficiency by 2013-2014. Schools must report to parents their child's progress in each targeted academic subject annually, and the state is required to report to parents and the community the results of students' performance on the annual tests for every public school. Schools that fail to meet state-defined AYP toward their defined goals for two years are identified as needing improvement. Schools that have not met AYP after four years are subject to restructuring or reconstitution.

(2) Testing. States must develop and administer annual tests that define the proficiency that all students are expected to reach in Reading/Language Arts, Mathematics, and Science. States must include a sample of students in fourth and eighth grades in a biennial NAEP in Mathematics and Reading to verify state assessments.

NCLB requires that by School Year (SY) 2005-2006, each state must measure every child's progress in Reading and Mathematics in each of grades 3 through 8 and at least once during grades 10 through 12. In the meantime, each state must meet the requirements of the previous law reauthorizing ESEA (the Improving America's Schools Act of 1994) for
assessments in Reading and Mathematics at three grade spans (3-5, 6-9, and 10-12). By SY 2007-2008, states must have in place Science assessments to be administered at least once during grades 3-5, grades 6-9, and grades 10-12. States must ensure that districts administer a test of English proficiency that measures oral language, Reading, and Writing skills in English to all limited English proficient students, as of SY 2002-2003. Students may still undergo state assessments in other subject areas (i.e., History, Geography, and Writing skills) if and when the state requires it. NCLB requires assessments only in the areas of Reading/Language Arts, Mathematics, and Science.

(3) Teacher Quality. Public elementary and secondary school teachers who teach core content areas are required to be "highly qualified," which is defined as having full state certification (which may be attained through alternate routes specified by the state), holding a bachelor's degree, and having demonstrated subject matter competency as determined by the state under NCLB guidelines. States are required to develop a plan by the end of 2005-2006 to ensure that every teacher is highly qualified to teach in his or her core content area.

(4) Scientifically-Based Research. The NCLB Act requires that all educational decisions be informed by scientifically-based research as defined in the legislation. NCLB Act funds for Reading First Grants, for example, are to be used for methods of reading instruction backed by scientifically-based research.
Expectations for Parents Due to NCLB (from Collegeboard.com)

(1) New standards for students require that, beginning in 2005, students in grades 3 through 8 be tested in Mathematics and English to ensure they are meeting state standards. Students in grades 10 through 12 will be tested at least once. By 2007, states will begin testing students in Science as well. Results of the yearly tests will be made known to parents. NCLB requires that school districts provide parents with an annual "report card" that shows how well students in each school performed. The information is broken down by race, ethnicity, gender, disability status, and other categories so that parents will know how well each school is doing in educating minority students or those with disabilities.

(2) By the end of SY 2005-2006, teachers must be "highly qualified" in the subjects they teach. States will determine what skills teachers must have to be "highly qualified," but the requirements could include a degree in the subject they teach or extra training. States must provide annual report cards about teacher certifications, including the percentage of classrooms in the state not taught by highly qualified teachers. Principals must also maintain information about whether or not their school's teachers meet the requirements.

(3) Each year, schools must increase the number of students who achieve state standards. At the end of 12 years, all students should be able to pass the tests. Schools that fail to achieve this progress will be targeted
for improvements that could include increased funding or staff and curriculum changes.

(4) NCLB requires school districts to notify parents if their child's school has been identified as needing improvement as a result of failing to increase the number of students meeting state standards.

(5) About half of all public schools receive funding to help students from low-income families. If such a school is targeted for improvement and fails after two years, parents can choose to transfer their child to another school or enroll the child in free tutoring. Parents have this choice for as long as the school fails to perform adequately.

Response to NCLB (Rosenbusch, 2005)

NCLB has engendered controversy that is centered in part on the increased role of the federal government in educational policy. A majority of Americans believe that decisions about what is taught in public schools should be made at the local level by the school board (61%), rather than at the state level (22%) or the federal level (15%) (Rose & Gallup, 2003). Results of a 2004 survey indicate that they disagree with "the major strategies NCLB uses to determine whether a school is or is not in need of improvement" (Rose & Gallup, 2004, p. 2). For example, 83% of those surveyed believe that testing only in English and Mathematics will not yield a fair picture of the school, 73% say it is not possible to judge a student's proficiency in English and Mathematics on a single test, and 81% are concerned that basing decisions about schools on students' performance in
English and Mathematics will mean less emphasis on art, music, history, and other subjects.

In the U.S. Department of Education, there is support for high standards and high expectations for every child, but the NCLB focus on standardized testing is resulting in a narrowing of the curriculum and a "sorting of students" (Marshak, 2003, p. 229) and "could halt the development of truly significant improvements in teaching and learning" (Lewis, 2002, p. 179). The National Education Association supports the goal of the NCLB Act but views the law as an obstacle to improving public education because of its focus on "punishment rather than assistance" and "mandates rather than support for effective programs" (National Education Association, n.d.).

Adequate Yearly Progress (AYP)

Purpose and Support to NCLB

The No Child Left Behind Act of 2001 (NCLB; Public Law No. 107-110, 115 Stat. 1425, 2002), the most recent reauthorization of the Elementary and Secondary Education Act of 1965, holds states using federal funds accountable for student academic achievement. States are required to develop a set of high-quality, yearly student assessments that include, at a minimum, assessments in Reading/Language Arts, Mathematics, and Science. Each year, they must report student progress in terms of the percentage of students scoring at the "proficient" level or higher. This reporting is referred to as adequate yearly progress (AYP). A state's definition of AYP should also
include high school graduation rates and an additional indicator for middle schools. All students must reach the "proficient" level or higher no more than 12 years after the start of the 2001-2002 school year, provided that the first increase occurs within the first 2 years (Abedi, 2004).

AYP will be reported for schools, school districts, and the state for all students. In addition, AYP must be reported for the following subgroup categories of students: (a) economically disadvantaged students, (b) students from major racial and ethnic groups, (c) students with disabilities, and (d) students with limited English proficiency (LEP). According to the educational statistics for the 2000-2001 school year, the total number of students labeled as LEP in the nation's public schools is more than 4.5 million, or 9.6% of total enrollment (National Center for Education Statistics [NCES], 2002).

States are continuing to find new ways to calculate AYP under NCLB in order to increase the number of schools and districts that meet the student achievement targets set by law. Over the past few years, the U.S. Department of Education (ED) has allowed states to make many changes in the way they determine AYP, including the following: (1) confidence intervals, which make allowances for natural fluctuations in test scores and essentially bolster a school's or subgroup's percentage of students scoring at proficient levels; (2) performance indices, which allow schools to get "partial credit" for the performance of students below the proficient level; (3) retesting, which allows students to retake a different
version of the same test and permits schools to use a student's best score to count toward AYP; and (4) increased minimum subgroup sizes, which mean that in many schools, subgroups do not get counted for AYP purposes. These changes have the effect of making it easier for schools to make AYP; early indications are that the number of schools not making AYP has leveled off, despite predictions that this number would increase as proficiency targets rose (Olson, 2005).

Changes and Updates

In NCLB's original conception, determining AYP for a subgroup of students, a school, or a district was already fairly complicated. States had to establish, for every year between 2003 and 2014, a set of ever-increasing state targets in terms of the percentage of students scoring at the proficient level or above on annual tests, with a final goal of 100% proficiency in 2014. If at least 95% of the students in each subgroup are tested, and if all students and subgroups meet the state proficiency targets, the school or district makes AYP. The school also has to meet targets for an additional academic indicator, such as the graduation or attendance rate. The law has a "safe harbor" provision: if a school or subgroup fails to meet the state targets, it can still make AYP if it reduces the proportion of students who are not proficient by 10% from the previous year and meets its additional academic indicator.

Some other state changes that have been approved are briefly summarized below (Center on Education Policy, 2005):
Minimum subgroup size. To make AYP, schools and districts must meet achievement targets for each significant subgroup of students enrolled, such as African-American students, low-income students, or students with disabilities. Higher minimum subgroup sizes mean that in many schools, subgroups do not get counted for AYP purposes.

Thirteen states increased their minimum subgroup sizes in 2004; ten more did so in 2005. The trend is away from a single minimum size and toward larger subgroup sizes, different subgroup sizes for different subgroups and/or purposes, and the use of formulas for determining subgroup sizes. Georgia is one state that uses a formula approach. Its subgroup size varies according to the size of the school; the minimum size is either 40 students or 10% of a school's student population, whichever is greater, with a cap of 75 students.

Participation averaging. NCLB requires 95% of the students in every school and every subgroup within a school to take each subject test required by the Act. If this test participation requirement is not met, the school cannot make AYP even if its test scores meet state targets. In March 2004, the Department relaxed this requirement, allowing states to average their participation rates over two or three years, so that a 94% participation rate one year could be balanced by a 96% participation rate the following or previous year. In 2005, six states changed their accountability plans to incorporate this new policy, in addition to the 32 that did so the previous year.
English language learners. Initially, the U.S. Department of Education (ED) required all English language learners to be tested with the same grade-level tests as other students. In response to state and local criticism, the Department revised its policy in February 2004 to allow states to exempt immigrant students who have been enrolled in a U.S. school for less than one year from taking the regular state English Language Arts tests. These students still have to take an English language proficiency test and a Mathematics test, but the results need not count toward AYP. When calculating AYP for the subgroup of English language learners, states can also count the progress of former English language learners for two years after they reach English proficiency. Six more states adopted these changes in 2005, in addition to the 36 states that did so in 2004.

Extra time to graduate for students with disabilities and English language learners. In 2005, eight states received approval from ED to count students with disabilities and/or English language learners as graduating on time even if they need extra years of high school. Seven states received permission to do this in 2004. For students with disabilities, their individualized education plans would need to call for extra years of high school beyond age 18. English language learners can be counted as graduating on time if it takes five years, or as determined on a case-by-case basis (Center on Education Policy, 2005).

Identifying districts for improvement. In 2005, ED approved amendments requested by 13 states to identify a district as being in need of
improvement only when it does not make AYP in the same subject and across all three grade spans (elementary, middle, and high school) for two consecutive years. In 2004, 18 states made this change. California attempted to have ED accept a relatively lenient method that exempted districts where low-income students reached a certain level on state tests. ED rejected that method, and California settled on the grade-span approach instead (Davis & Sack, 2005).

Annual measurable objectives. Eleven states changed their annual score targets in 2005; four states did so in 2004. For example, Florida was allowed to change its schedule of annual measurable objectives so that targets would increase in smaller increments annually, rather than in large increments every three years (Olson, 2005); Virginia did so as well. Several other states, including Alabama, Alaska, New Mexico, and North Carolina, changed their annual targets because they were introducing new assessments.

NCLB is a demanding law. The achievement goals are ambitious, and the burden on states and districts of declaring schools in need of improvement and then imposing sanctions on them is high. To try to meet these demands, states have a strong incentive to keep the numbers of schools and districts not making AYP as low as possible. Unable to change the fundamental requirements written into the law, states are using administrative methods to lessen the numbers of schools and districts not making AYP: confidence intervals, indexing, and other techniques.
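As a rough illustration, the AYP mechanics described in this section (the 95% participation requirement, the state proficiency target, the safe harbor fallback, and the confidence-interval adjustment) can be expressed as a single decision rule. The sketch below is a simplified, hypothetical illustration, not any state's actual formula; the function name, its parameters, and the normal-approximation 95% interval (z = 1.96) are assumptions introduced here.

```python
import math

def makes_ayp(n_proficient, n_tested, n_enrolled, state_target,
              prior_pct_not_proficient=None, use_confidence_interval=False):
    """Simplified sketch of the AYP decision for one school or subgroup.

    Rules modeled from the text: at least 95% of students must be tested;
    the proficiency rate must meet the state target (optionally helped by
    a confidence interval); failing that, "safe harbor" applies if the
    share of non-proficient students fell 10% from the prior year.
    """
    if n_tested / n_enrolled < 0.95:          # participation requirement
        return False
    rate = n_proficient / n_tested
    if use_confidence_interval:
        # Normal-approximation interval: small groups get wide intervals,
        # which "bolsters" the observed rate toward the target.
        se = math.sqrt(rate * (1 - rate) / n_tested)
        rate += 1.96 * se
    if rate >= state_target:
        return True
    if prior_pct_not_proficient is not None:  # safe harbor provision
        now_not_proficient = 100 * (1 - n_proficient / n_tested)
        return now_not_proficient <= 0.90 * prior_pct_not_proficient
    return False

# A subgroup at 55% proficient misses a 60% target outright...
print(makes_ayp(55, 100, 100, 0.60))                               # False
# ...but makes AYP under safe harbor (non-proficient fell 50% -> 45%)...
print(makes_ayp(55, 100, 100, 0.60, prior_pct_not_proficient=50))  # True
# ...or with a confidence interval applied to this small group.
print(makes_ayp(55, 100, 100, 0.60, use_confidence_interval=True)) # True
```

The example makes concrete why the confidence-interval and safe harbor adjustments let the same observed scores produce different AYP outcomes, which is the source of the transparency concerns discussed below.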
Education Secretary Margaret Spellings has been more flexible than her predecessor in policies regarding students with disabilities and in granting special exemptions to some districts in the areas of school choice and supplemental educational services (tutoring). Secretary Spellings decided to allow the Chicago school district to provide tutoring despite the fact that the district has been identified for improvement (Gewertz, 2005). This exemption was then extended to New York City, Los Angeles, Boston, Memphis, Anchorage, and Dayton. This was a regulatory change.

Secretary Spellings went further with four districts in Virginia by suspending a key element of the law itself, invoking a clause in NCLB that allows the Secretary of Education to do so. Her action exempted these districts from the law's requirement that they provide school choice before tutoring (Olson, 2005). Secretary Spellings' letter to Virginia officials indicates that this is a pilot program intended to raise the numbers of students receiving supplemental educational services (Spellings, 2005). In addition, districts in the five states most affected by Hurricane Katrina were allowed to postpone, for one year, the consequences that follow when a school is in need of improvement, such as tutoring, restructuring, and corrective action (Olson & Davis, 2005).

ED's willingness to make adjustments based on state and local experience is commendable. But on the downside, parents in many states would now find it difficult to understand what it means when a school does or does not make AYP, and what criteria were used to determine this
success or failure. For example, parents in Pennsylvania may see a report card indicating that their child's elementary school has made AYP, but might wonder whether the school is improving or whether it simply made AYP as the result of what might be seen as a new "loophole" in the law. The parents probably would not understand that the school may have made AYP through the use of a 95% confidence interval, safe harbor with a 75% confidence interval, or the Pennsylvania Performance Index as a second safe harbor.

In other states, parents of English language learners, students with disabilities, or other subgroups may not realize that raising the minimum subgroup sizes means that their children no longer count for AYP purposes at the school level. They might not realize that the use of confidence intervals allows a subgroup's test scores considerable leeway not available to larger groups of students, and that this is occurring despite the assertion that improving achievement for subgroups is a major focus of the law.

Another drawback of the increasing complexity is the difficulty of discerning clear trends in the number of schools and districts not making AYP, because the rules governing AYP keep changing every year. Amid these changes, it is impossible to determine whether an increase in the number of schools making AYP within a state is due to better teaching and learning or to NCLB rule changes. The constant rule changes, particularly the use of large confidence intervals and ever-increasing minimum subgroup sizes, may raise questions about whether the law is being watered
down so much that it shortchanges the very groups of disadvantaged children that it aims to help. Public support may wither if the implementation of the law is perceived as deceptive or confusing.

As states continue to learn from one another about the new types of flexibility that ED is allowing, and as state achievement targets continue to rise until 2014, changes in AYP policies are likely to occur at a more rapid pace, at the expense of the public's ability to understand these changes. More transparency is needed at both the state and federal levels. States must fully and clearly explain their rationales for requesting changes to accountability plans. Once changes are approved by ED, they should be explained in such a way that the public understands how AYP is determined.

At the federal level, ED should more systematically and promptly publicize its decisions about what types of changes to state accountability plans are and are not acceptable, and why. The current process of granting changes does not help state officials learn from other states' experiences, nor does it help them understand how ED is interpreting the intent of the law.

Adequate Yearly Progress (AYP) and Limited English Proficient (LEP) Students

Definition of English Language Learners (ELLs) and LEP

Limited English Proficient (LEP) students are students who lack sufficient English skills to participate in a regular education, all-English-speaking classroom. English Language Learner (ELL), according to Rivera
and Stansfield (1998), is a positive way to refer in English to any LEP student. NAEP does not provide a definition of the LEP population; instead, it presents criteria for the inclusion of LEP students. NAEP inclusion criteria indicate that a student who is identified on the Administration Schedule as LEP and who is a native speaker of a language other than English should be included in the NAEP assessment unless (a) the student has received Reading or Mathematics instruction primarily in English for less than 3 school years, including the current year, and (b) the student cannot demonstrate his or her knowledge of Reading or Mathematics in English even with an accommodation permitted by NAEP (NCES, 2001).

Due to the importance of LEP subgroups in NCLB accountability and reporting, NCLB provides an operational definition of LEP (NCLB, 2002). According to this definition, the term "limited English proficient," when used with respect to an individual, means an individual (a) who is aged 3 through 21; (b) who is enrolled or preparing to enroll in an elementary school or secondary school; (c) who was not born in the United States or whose native language is a language other than English; who is a Native American or Alaska Native, or native resident of the outlying areas, and who comes from an environment where a language other than English has had a significant impact on the individual's level of English language proficiency; or who is migratory, whose native language is a language other than
English, and who comes from an environment where a language other than English is dominant; and (d) whose difficulties in speaking, reading, writing, or understanding the English language may be sufficient to deny the individual the ability to meet the State's proficient level of achievement on State assessments described in section 111(b)(3), the ability to successfully achieve in classrooms where the language of instruction is English, or the opportunity to participate fully in society.

The term "English language learner" (ELL) is a recent designation for students whose first language is not English. This group includes students who are just beginning to learn English as well as those who have already developed considerable proficiency. The term reflects a positive focus on what these students are accomplishing, mastering another language, and is preferred by some researchers to the term "limited English proficient" (LEP), the designation used in federal and state education legislation and most national and state data collection efforts (August & Hakuta, 1997; LaCelle-Peterson & Rivera, 1994).

The ELL population is highly diverse, and any attempt to describe the group as a whole, as with any diverse group of people, is bound to result in inaccurate generalizations. While this group of students shares one important feature, the need to increase their proficiency in English, they differ in many other important respects. ELLs are a diverse cross-section of the public school student population. The primary language, cultural background, socio-economic status, family history, length of time in the
United States, mobility, prior school experiences, or educational goals of any student in this group can distinguish him or her from any other ELL.

ELLs represent a rapidly growing, culturally and linguistically diverse student population in the United States. In 2000-2001, LEP students comprised nearly 4.6 million public school students. The majority were Spanish speakers (79.0%), followed by speakers of Vietnamese (2.0%), Hmong (1.6%), Cantonese (1.0%), and Korean (1.0%). Since the 1990-1991 school year, the limited English proficient population has grown approximately 105%, while the overall school population has increased by only 12%.

English learners matriculate in schools throughout the nation, but most frequently in large urban school districts in the Sun Belt states, in industrial states in the Northeast, and around the Great Lakes. This trend is changing as immigrants move to more affordable suburban and rural areas and to areas where language-minority families are relative newcomers, such as the Midwest. More than half (56.1%) reside in four states alone: California (32.9%), Texas (12.4%), Florida (5.6%), and New York (5.2%) (Kindler, 2002). English learners represent one in four K-12 students in California schools (California Department of Education, 2000).

This population includes recent immigrants as well as children born in the United States. In the 2000-2001 school year, more than 44% of all LEP students were enrolled in Pre-K through Grade 3; about 35% were enrolled in Grades 4-8; and only 19% were enrolled at the high school level (Kindler, 2002). Many LEP students attend schools where most of their
peers live in poverty. There are numerous differences among English learners; for example, Spanish-speaking families tend to have lower parental educational attainment and family incomes than Asian- or Pacific-language families (August & Hakuta, 1997).

Many criteria are used across the nation for the identification of ELLs. Among the most commonly used criteria are Home Language Survey results and scores from English proficiency tests. There are reasons to believe that Home Language Survey results may not be valid because of parents' concern over equity in education for their children, parents' citizenship issues, and communication problems (Abedi, 2004b). Similarly, there are concerns about the validity of current English proficiency tests, such as the Language Assessment Scales and other commonly used assessments (Zehler, Hopstock, Fleischman & Greniuk, 1994). Criterion-related validity coefficients, or the correlations between English proficiency tests and other existing valid measures of English proficiency, are not strong, explaining less than 5% of the common variance (Abedi, 2003). Finally, in terms of content and construct validity, there is little evidence that the contents of the existing English proficiency tests align sufficiently with commonly accepted English language proficiency standards, such as the standards of Teachers of English to Speakers of Other Languages (Bailey & Butler, 2003).

Issues and Other Considerations of LEP

Disaggregated progress reports by subgroups mandated by the NCLB legislation will monitor the nation's goal of having "no child left behind."
However, there are major issues in this disaggregated reporting among different subgroup categories (students who are economically disadvantaged, students from major racial and ethnic groups, students with disabilities, and LEP students). The NCLB requirement for subgroup reporting may give the impression that students in the subgroup categories start the achievement race at about the same level and can progress with other students at about the same rate. This might be an overly optimistic view of the situation of less advantaged learners. By focusing this discussion on the consequences for schools enrolling LEP students, we see how putting the policy into practice may produce invalid assessment and unreliable reporting while exacerbating the burdens of current educators. Following is a discussion of some challenges in AYP measurement and reporting for LEP students.

Results of research on the assessment of LEP students suggest a strong confounding of language and performance. LEP students exhibit substantially lower performance than non-LEP students in subject areas high in language demand. Studies suggest that the large performance gap between LEP and non-LEP students may not be due mainly to lack of content knowledge. LEP students may possess the content knowledge but may not be at the level of English language proficiency necessary to understand the linguistic structure of assessment tools. The strong confounding of language factors and content-based knowledge makes assessment and accountability
complex for LEP students and, very likely, students in other targeted groups.

Because of the strong effect of language factors on the instruction and assessment of LEP students, they lag far behind native English speakers. This leads to huge initial differences. LEP students start with substantially lower baseline scores. More important, unless LEP students' English language proficiency is improved to the level of native English speakers (which is not an easy task), they will not be able to move at the same rate on the Adequate Yearly Progress line as do native English speakers.

NCLB cannot have much of an effect on the initial performance differences between LEP and non-LEP students. A more sensible question here is whether or not NCLB can provide enough resources to schools with a large number of LEP students to help them increase these students' language proficiency to a sufficient extent that they can progress with their native English-speaking peers in both instruction and assessment.

Inconsistency in LEP classification across and within states makes AYP reporting for LEP students even more complex. If students are not correctly identified as LEP, how can their AYP be reliably reported at a subgroup level? Although NCLB attempts to resolve this issue by providing a definition for this group, its criteria for classifying LEP students may face the same problems as the existing classification system (Abedi, 2003; Zehler, Hopstock, Fleischman & Greniuk, 1994).
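The weak criterion-related validity coefficients noted earlier, which explain less than 5% of the common variance, follow directly from the relationship between a Pearson correlation and shared variance (shared variance = r squared). The sketch below uses an illustrative coefficient of r = 0.22, a hypothetical value chosen to fall just under the 5% figure reported by Abedi (2003), not a value taken from the cited studies:

```python
# Shared variance between two measures is the square of their Pearson r.
# r = 0.22 is an illustrative value, not one reported in the cited studies.
r = 0.22
shared_variance = r ** 2  # proportion of variance the two tests have in common
print(f"r = {r}: shared variance = {shared_variance:.1%}")  # -> 4.8%
```

A correlation would need to exceed roughly 0.71 before two proficiency measures shared even half their variance, which is why coefficients in the 0.2 range are described as weak criterion-related validity.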
Inconsistency in the classification of LEP students may lead to more heterogeneity in the LEP subgroup. With a more heterogeneous population, larger numbers of students are needed to provide the statistically reliable results required by NCLB. The population of LEP students in many districts and states is sparse. In many states, there may not be enough students in a district or school to satisfy even the minimum number of 25 students suggested in the literature (Linn, Baker & Herman, 2002). Other researchers have argued that even 25 students may not be enough to provide statistically reliable results and have proposed a minimum group size of 100 students (Hill & DePascale, 2003). Considering the small number of LEP students in many districts and states, the small group size for LEP reporting would be another obstacle to reliable AYP reporting.

The LEP subgroup suffers from yet another major problem related to AYP reporting: the lack of stability of this group. In many states and districts across the nation, LEP students' level of English proficiency is reevaluated regularly, and if they reach a proficient level of English proficiency, they move out of the LEP subgroup. While this helps the more English-proficient students receive more appropriate instruction and assessment, it results in the LEP subgroup continuing to be low-performing. The students in this group will always be labeled as underachievers, and schools with large numbers of LEP students will be stuck in the "need for improvement" category.
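The group-size concern above can be made concrete with the standard error of a subgroup passing rate. The sketch below, a minimal illustration using a hypothetical 50% passing rate (the worst case for sampling variability, not a figure from this study), shows why a subgroup of 25 yields far noisier AYP estimates than one of 100:

```python
import math

def prop_se(p: float, n: int) -> float:
    """Standard error of a sample proportion (passing rate p, subgroup size n)."""
    return math.sqrt(p * (1 - p) / n)

# Hypothetical 50% passing rate -- the worst case for sampling variability.
p = 0.50
for n in (25, 100):
    se = prop_se(p, n)
    moe = 1.96 * se  # approximate 95% margin of error
    print(f"n = {n:3d}: SE = {se:.3f}, 95% margin of error = +/-{moe:.1%}")
```

With n = 25 the margin of error is roughly plus or minus 20 percentage points, versus about 10 points at n = 100, which is the statistical rationale behind Hill and DePascale's (2003) call for the larger minimum group size.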
Some states with substantial numbers of LEP students have expressed concern over this issue. They have proposed ideas and negotiated with the federal government to ease the level of possible negative impact that this situation may have on school, district, and state accountability. For example, Indiana and Delaware will continue to include exited LEP students in the LEP subgroup for 2 years after they have been determined to be proficient in English. Georgia plans to include LEP students as long as they still receive services through the English for Speakers of Other Languages program, even if they have met exit criteria (Erpenbach, Forte-Fast & Potts, 2003). In California, students redesignated as LEP will remain in the LEP category until they reach the proficient or above level on the California Standards Test in English-language arts for 3 consecutive years (California Department of Education, 2003). However, the question of whether this policy will provide a long-term solution to the problem of LEP subgroup instability or serve only as temporary relief remains unanswered.

The measurement of the academic achievement of LEP students is much more complex than the NCLB legislation conceives. A fair assessment of students in the four targeted subgroup categories requires much more serious consideration than is outlined in the law. Despite attempting to solve the age-old problem of heterogeneity among LEP students, the NCLB seems to perpetuate it, thereby leaving more room for children to be left behind.
On the other hand, NCLB's attention to students in the four subgroup categories in general and to the LEP population in particular is a step in the right direction. It is promising that Title III of NCLB requires assessment of LEP students' English proficiency on an annual basis and provides support to states to develop reliable and valid measures of students' proficiency. Any decisions concerning assessment for all subgroups, particularly LEP students, must be informed by results of research and experience in the education community.

Currently, several tests for measuring students' level of English language proficiency exist. Some of these tests have been used for many years by different states and districts. In spite of the existence of such tests, states are developing new English language proficiency tests with funding through NCLB's Enhanced Assessment Instruments. A reasonable explanation for this might be that states did not find that the existing tests provided reliable and valid measures of students' level of English language proficiency as required by NCLB. If this is the reason for the development of the new tests, then the test developers should be aware of problems in the existing tests to avoid the same problems in the new tests.

For example, a careful review of some of the most commonly used language proficiency tests concluded that the tests differ considerably in types of tasks and specific item content and are based on different theoretical emphases prevalent at the time of their development (Zehler, Hopstock, Fleischman & Greniuk, 1994). This suggests that in the case of
some of the existing tests, the English language proficiency domain was not operationally defined before the test development process. This and similar studies and reviews should inform the development process of new tests. For example, it is imperative that this domain be operationally defined before any effort to develop an English proficiency test. This definition should be based on current developments in the areas of psycholinguistics, developmental psychology, education, linguistics, and psychometrics. Content standards for English for speakers of other languages should also be considered (Bailey & Butler, 2003).

In analyzing data from the administration of existing language proficiency tests, researchers have expressed concerns about the reliability and validity of these tests, the adequacy of the scoring directions, and the limited populations on which test norms are based. For example, analyses of several large data sets from different locations across the nation have shown validity problems in predicting LEP classification and lack of power in identifying different levels of English language proficiency among the LEP student population (Abedi, 2003; Abedi, Leon & Mirocha, 2003). Those involved in the development of new English language proficiency tests should learn from such research and should conduct more analyses on the wealth of data that exists in this area. To be considered valid and reliable measures of English language proficiency, as outlined in the NCLB, new tests must first go through a rigorous validation process. Otherwise, there
may not be a reasonable justification to spend the limited NCLB resources on English language proficiency test development (Abedi, 2003).

As a final thought, assessment and accountability of LEP students cannot be pursued in isolation from other important factors. An effective education system for LEP students that may lead to a successful AYP outcome should include at least three interactive components: (a) classification, (b) instruction, and (c) assessment. A problem in any one of these components may affect the other two. For example, a student misclassified as LEP may be assigned a different curriculum and thus receive inappropriate instruction. Alternately, inappropriate instruction may result in low performance that may in turn result in misclassification. While each component has a unique role, they share common ground: the effect of language factors or barriers. Unnecessary linguistic complexity of assessment may threaten the validity and equitability of assessment among LEP students. Complex linguistic structure of instruction may negatively affect LEP students' ability to understand classroom instruction, and invalid assessment of students' level of English proficiency may result in misclassification. In a positive light, valid assessment may provide diagnostic information that can inform instruction and classification (Abedi, 2003).

An effective way to help LEP students reach proficiency in the AYP model is to consider the broader picture using the interactive model. The following are a few critical needs:
1. Improve current LEP classification and assessment. There is a need to establish a common definition of English language proficiency and substantially improve the validity of LEP instruments. Among other things, validity of LEP assessment can be enhanced by avoiding cultural biases and reducing unnecessary linguistic complexity of assessments.

2. Improve monitoring of progress. Schools need effective and valid data collection methods that can be used to monitor LEP progress at every stage of a student's education. Weaknesses must be quickly addressed with appropriate instructional strategies.

3. Improve teacher quality. LEP students need teachers who are well qualified in both language development and content, each of which plays a crucial role in LEP student achievement. The federal government can play a key role in this process by funding and encouraging programs that improve teacher capacity in this dual role. Teachers of LEP students should receive training in content delivery, language sheltering, and the teaching of academic language.

4. Consider redesignated LEP students as part of the LEP subgroup that established the baseline score. State plans allowing redesignated students to remain in the LEP subgroup for only a limited time are temporary fixes. While new LEP students are added to the subgroup, redesignated students should be retained for AYP reporting. This "semicohort" approach to tracking LEP students allows the progress of
redesignated students to be counted toward subgroup AYP progress (Abedi, 2003).

Based on the results of the research, policymakers, lawmakers, and decision makers are urged to take appropriate action to correct the inequities resulting from the NCLB in regard to the subgroups targeted by the legislation, particularly the LEP student subgroup. What is encouraging is that states, in collaboration with the federal government, are taking steps to remedy some of these issues. The hope is that these continued efforts will bring more fairness into the assessment of and accountability for LEP students (Abedi, 2003).

High Stakes / Statewide Testing

The 2001 reauthorization of the Elementary and Secondary Education Act (ESEA), also known as the No Child Left Behind Act (NCLB), carries testing and accountability requirements that will substantially increase student testing and hold all schools accountable for student performance. This legislation marks a major departure from the federal government's traditional role regarding elementary and secondary education. It requires that states administer Reading and Mathematics tests annually in grades 3-8 and during one year in high school starting in 2005-2006. These requirements will affect almost 25 million students each school year (National Center for Education Statistics, 2002).

NCLB requires states to meet adequate yearly progress (AYP) goals to ensure school accountability for student achievement on state tests. Schools
that fail to achieve AYP goals face demanding corrective actions, such as replacement of school staff, implementation of new curriculum, extension of the school day or academic year, parental choice options, and, finally, complete reorganization.

Today's widespread implementation of standards-based reform and the federal government's commitment to test-based accountability ensure that testing will remain a central issue in education for the foreseeable future. Test results can provide useful information about student progress toward meeting curricular standards. But when policymakers insist on linking test scores to high-stakes consequences for students and schools, they often overlook lessons from the long history of research (Abrams & Madaus, 2003).

The current emphasis on testing as a tool of education reform continues a long tradition of using tests to change pedagogical priorities and practices. In the United States, this use of testing dates back to 1845 in Boston, when Horace Mann, then Secretary of the Massachusetts State Board of Education, replaced the traditional oral examination with a standardized written essay test. Internationally, high-stakes testing extends as far back as the 15th century in Treviso, Italy, where teacher salaries were linked to student examination performance (Madaus & O'Dwyer, 1999).

Principles of Testing Programs

A 1988 examination of the effects of high-stakes testing programs on teaching and learning in Europe and in the United States (Madaus, 1988)
identified seven principles that captured the intended and unintended consequences of such programs. Current research confirms that these principles still hold true for contemporary statewide testing efforts.

Principle 1: The power of tests to affect individuals, institutions, curriculum, or instruction is a perceptual phenomenon. Tests produce large effects if students, teachers, or administrators believe that the results are important. Policymakers and the public generally do believe that test scores provide a reliable, external, objective measure of school quality. They view tests as symbols of order, control, and attainment (Airasian, 1988).

Today's high-stakes testing movement relies on the symbolic importance of test scores. Forty-eight states currently require schools to provide the public with "report cards" (Edwards, 2003). Goldhaber and Hannaway (2001) found that the stigma associated with a school receiving a low grade on the state report card was a more powerful influence on Florida teachers than were the school-level sanctions imposed for poor test results.

Principle 2: The more any quantitative social indicator is used for social decision making, the more likely it will be to distort and corrupt the social process it is intended to monitor. In other words, placing great importance on state tests can have a major influence on what takes place in classrooms, often resulting in an emphasis on test preparation that can compromise the credibility or accuracy of test scores as a measure of student achievement.
