Walden University
COLLEGE OF EDUCATION
This is to certify that the doctoral study by
Kathy Jones
has been found to be complete and satisfactory in all respects,
and that any and all revisions required by
the review committee have been made.
Review Committee
Dr. Thomas Schnick, Committee Chairperson, Education Faculty
Dr. Barbara Bailey, Committee Member, Education Faculty
Dr. Brett Welch, University Reviewer, Education Faculty
Chief Academic Officer
David Clinefelter, Ph.D.
Walden University
2010
Abstract
The Relationship Between Oral Reading Fluency and Reading Proficiency
by
Kathy S. Jones
MA, Abilene Christian University, 2001
BS, Abilene Christian University, 1997
Dissertation Submitted in Partial Fulfillment
of the Requirements for the Degree of
Doctor of Education
Administration
Walden University
August 2010
Abstract
Students who struggle with reading in Grade 3 often fall behind their peers in reading
proficiency. Failure to meet minimum reading proficiencies on state-mandated tests can
negatively affect children’s success in subsequent grades. Educators have used oral
reading fluency tests as reliable indicators of students’ later reading proficiency. Studies
in 7 states found that oral reading fluency predicted performance on state reading
assessments. One aspect of the automaticity theory was used to explain why struggling readers have insufficient attention available to read proficiently. This
nonexperimental, quantitative study investigated whether a statistically significant
relationship existed between Grade 3 students’ oral reading fluency rates and their
reading proficiency when assessed by the state-mandated assessment. Pearson correlation
was used to compare the middle-of-year oral reading fluency rates, measured by the
Dynamic Indicators of Basic Early Literacy Skills oral reading fluency assessment, and reading proficiency, measured by the scale score of the Grade 3 Reading Texas Assessment of Knowledge and Skills, of 155 Grade 3 students for the 2008-2009 school year in a school
district. The results indicated a relationship between oral reading fluency and reading
proficiency. Study results may help elementary school administrators, teachers, and
reading specialists to identify at-risk readers and implement interventions to enable
students to gain greater reading proficiency and improve their performance on state-
mandated assessments.
The Relationship Between Oral Reading Fluency and Reading Proficiency
by
Kathy S. Jones
MA, Abilene Christian University, 2001
BS, Abilene Christian University, 1997
Dissertation Submitted in Partial Fulfillment
of the Requirements for the Degree of
Doctor of Education
Administration
Walden University
August 2010
Dedication
This dissertation is dedicated to God. Throughout my life, He has prepared me for
what was coming next. At times, He prepared me for something before I even dared to
dream about it. My doctoral journey has been no exception. God first planted the dream
in me of pursuing a doctorate 15 years ago. I did not even have a bachelor’s degree at the
time, and the dream seemed impossible. Every time it was in danger of fading, God
would use someone to nudge me forward and place me back on track.
He has been with me through all the twists and turns of these past 5 years that I
have been enrolled in the doctoral program. He has molded me and made me into the
person I have become. I dedicate this honor to Him and trust that He will guide me as He
continues to use me to serve others with the talents with which He has blessed me.
Acknowledgments
God has used many people to encourage and inspire me on this doctoral journey. I
am thankful for the support of my family. My husband, Roy, has stood by my side
through all the ups and downs. He celebrated with me as I conquered hurdles and
encouraged me when I was discouraged. My adult children, Rachel, Benjamin, Timothy,
and Daniel, have showered me with comments, and I have especially been bolstered and
encouraged with their cries of “Way to go, Mom!” My parents, Bill and Anita Love,
planted the vision of setting goals and working hard to achieve them. My father has gone
on, but I treasure the memory of the crack in his voice as he proudly told others I was
working on my doctorate.
People in the field of academia have guided me on my journey as well. I
appreciate the support of my chair, Dr. Thomas L. Schnick. He has promptly reviewed
my work each time it was submitted and provided me with constructive criticism that
improved the quality of my dissertation. Dr. Barbara Bailey served as my methodologist.
I thank her for helping me see how the problem, purpose, and research questions
provided the foundation for the dissertation.
I am especially appreciative of my editor, Dr. Noelle Sterne, who coached me
from where I was to where I needed to be. She had a clear vision of what my dissertation
had to be and how to get me there. In addition to editorial changes, she encouraged me to
clearly describe my thoughts, to tell her more about what I was writing, and to find
additional sources to support my statements. This coaching was invaluable in my desire
to deepen and expand the substance of this dissertation.
I am grateful too for the bonds I have forged with fellow students. Because
Walden is an online university, I have not had the pleasure of meeting many of my peers.
Nonetheless, I was amazed at how deeply the bond of friendship developed. I especially
appreciate the friendship of Susan Myers. We met in our first online classes, roomed
together at two residencies, exchanged many emails, and talked on the phone. I am
confident that our professional relationship will continue even beyond graduation.
I appreciate and thank Keith Richardson, the superintendent of the district
involved in this study, for his cooperation. He supported me in signing the data use
agreement and letter of cooperation so that I could use the district archival data for the
study. I appreciate also the continued support and encouragement of Doug Doege, the
principal of the school for the study. He began working with me when the logistics of the
study were still a dream to me. As the details developed, he helped make them a reality. I
am also grateful for the cooperation of the Grade 3 teachers and students.
Finally, I give special thanks to Dynamic Measurement Group for giving me
permission to use the DIBELS ORF benchmark in the study and to the Texas Education
Agency for granting copyright permission for the use of the Grade 3 Reading TAKS in
the study.
Table of Contents
List of Tables .......................................................................................................................v
Section 1: Introduction to the Study ....................................................................................1
Problem Statement.........................................................................................................3
Nature of the Study........................................................................................................4
Research Question and Hypothesis................................................................................6
Purpose of the Study......................................................................................................6
Theoretical Framework..................................................................................................7
Operational Definitions..................................................................................................8
Assumptions, Limitations, Scope, and Delimitations..................................................10
Significance of the Study.............................................................................................11
Transition Statement....................................................................................................13
Section 2: Literature Review .............................................................................................16
Introduction..................................................................................................................16
Automaticity Theory....................................................................................................16
Oral Reading Fluency ..................................................................................................21
Definition of Oral Reading Fluency ..................................................................... 22
The Role of Oral Reading Fluency in the Context of Learning to Read .............. 26
Fluency Instruction ............................................................................................... 29
Reading Proficiency.....................................................................................................30
Reading Proficiency and Automaticity Theory .................................................... 30
Definition of Reading Proficiency........................................................................ 34
Vocabulary and Matthew Effects.......................................................................... 36
Comprehension ..................................................................................................... 38
Relationship Between Oral Reading Fluency and Reading Proficiency in
State-Mandated Assessments...........................................................................43
Studies of Grade 3 Only........................................................................................ 44
Studies of Reading First Schools.......................................................................... 44
Studies of Grade 3 and Other Grades ................................................................... 45
Limitations of State Studies.................................................................................. 47
Summary......................................................................................................................49
Section 3: Research Design ...............................................................................................52
Introduction..................................................................................................................52
Research Design...........................................................................................................52
Setting and Sample ......................................................................................................55
Setting ................................................................................................................... 55
Characteristics of the Sample................................................................................ 55
Sampling Method.................................................................................................. 57
Sample Size........................................................................................................... 57
Eligibility Criteria for Study Participants ............................................................. 58
Instrumentation and Materials .....................................................................................58
DIBELS ORF........................................................................................................ 59
TAKS ................................................................................................................... 67
Data Collection and Analysis.......................................................................................71
Data Collection ............................................................................................................71
Data Analysis...............................................................................................................73
Researcher’s Role ........................................................................................................74
Protection of Participants’ Rights................................................................................74
Summary......................................................................................................................75
Section 4: Results of the Study.........................................................................................77
Introduction..................................................................................................................77
Research Question and Hypothesis..............................................................................77
Research Tools.............................................................................................................78
Data Analysis...............................................................................................................80
Summary......................................................................................................................87
Section 5: Discussion, Conclusions, and Recommendations.............................................89
Overview......................................................................................................................89
Interpretation of Findings ............................................................................................89
Recommendations for Action ....................................................................................100
Recommendations for Further Study.........................................................................105
Quantitative Studies............................................................................................ 105
Qualitative Studies.............................................................................................. 108
Mixed-Method Studies........................................................................................ 109
Conclusion .................................................................................................................112
References........................................................................................................................116
Appendix A: Permission to Collect Data.........................................................................133
Appendix B: Data Use Agreement ..................................................................................135
Appendix C: Permission to Use DIBELS........................................................................139
Appendix D: Permission to Use TAKS ...........................................................................141
Curriculum Vitae .............................................................................................................151
List of Tables
Table 1. Characteristics of Grade 3 Students in Texas Overall and the Research
Site .......................................................................................................................56
Table 2. Correlation of DIBELS ORF Rates and TAKS Scores for Grade 3 Students.....80
Table 3. Ranges and Categorization Cut Points of DIBELS ORF and Grade 3 TAKS ....82
Table 4. Comparison of Students’ Performance by Categories for DIBELS ORF and
Grade 3 Reading TAKS.......................................................................................84
Table 5. Studies Correlating State-Mandated Assessments to the DIBELS ORF.............90
Section 1: Introduction to the Study
It is important for students to learn to read proficiently by the end of Grade 3.
From kindergarten through Grade 2, students focus on learning to read (Ehri, 2005). The
process of learning to read is a complex but systematic one. Many children learn to
recognize letters and words even before beginning formal school (Ehri, 2005). By the
end of Grade 3, students are expected not only to decode the words in a text, but also to
understand what they read (National Institute of Child Health and Human Development
[NICHHD], 2000). Students who struggle with reading at the end of Grade 3 often
continue to fall farther behind their peers in terms of reading proficiency (Morgan,
Farkas, & Hibel, 2008). The failure to meet minimum reading proficiencies at the end of
Grade 3 can have negative consequences on children’s success in subsequent grades
(Jimerson, Anderson, & Whipple, 2002; Katsiyannis, Zhang, Ryan, & Jones, 2007).
After Grade 3, students transition from learning to read to reading to learn in other
subject areas (Musti-Rao, Hawkins, & Barkley, 2009). As a part of implementing the No
Child Left Behind legislation (NCLB, 2002), the federal government commissioned the
National Accessible Reading Assessment Project (NARAP, 2006) to define reading
proficiency. Its definition described reading as a process in which readers decode
words in order to make meaning. Readers understand text by using a variety of reading
strategies to determine the purpose of a passage and understand the context and nature of
the text. Once the NARAP established its working definition of reading proficiency,
states used this definition to write and assess curriculum objectives for reading. In section
2, I review literature that discussed reading in the context of students’ learning to read
and reading proficiency.
Through NCLB (2002), federal legislators mandated that school districts monitor students’ progress by identifying academic needs early in school and providing scientifically-based interventions. As NCLB requires, states have designed
curriculum standards and annual assessments to measure the number of students who
read proficiently. For example, Texas has used performance on high-stakes tests in
Grade 3 to determine promotion or retention of students (Texas Education Agency
[TEA], 2008). Given the legal mandates and the commitment of NCLB to leave no child
behind academically, it is important that local school districts identify nonproficient and
struggling readers before they fail in reading and later grades (Jenkins, Hudson, &
Johnson, 2007).
To ascertain students’ reading proficiency in the early grades, researchers have
established that oral reading fluency is a reliable predictor of reading proficiency (Fuchs,
Fuchs, Hosp, & Jenkins, 2001; Simmons et al., 2008). Section 2 contains additional
reports of researchers who found that oral reading fluency was a predictor of
performance on seven state-mandated reading assessments (Baker et al., 2008; Barger,
2003; Roehrig, Petscher, Nettles, Hudson, & Torgesen, 2008; Shapiro, Solari, &
Petscher, 2008; Vander Meer, Lentz, & Stollar, 2005; Wilson, 2005; Wood, 2006).
Section 2 also contains reviews of literature in which reading experts disagree on the
definition of fluency (Kame’enui & Simmons, 2001; Samuels, 2007). States define
reading proficiency by using curriculum standards rather than the verbal definitions of
NARAP (2006), and the curriculum standards may differ from state to state (Colorado
Department of Education, 2009; Florida Department of Education, 2005; TEA, 2004).
Researchers have established positive correlations between oral reading fluency
rates and reading proficiency (Hosp & Fuchs, 2005; Simmons et al., 2008). Additional
research is needed to confirm the specific relationship between oral reading fluency
assessments and other state-mandated tests for reading proficiency (Roehrig et al.,
2008). The purpose of this research project was to determine if a relationship existed
between oral reading fluency of Grade 3 students in a West Texas school district and
their performance on the state-mandated assessment in Texas in the 2008-2009 school
year.
Problem Statement
In 2009 in Texas, the site of the current research, 33,462 students, approximately 15% of all Grade 3 students, did not read proficiently enough to pass the Grade 3
Reading Texas Assessment of Knowledge and Skills (TEA, 2009c). According to the
Nation’s Report Card in Reading (U. S. Department of Education, 2007), 67% of the
fourth-grade students who took the National Assessment of Educational Progress (NAEP)
in 2007 scored at the basic level or above. However, 33% of the tested students did not
read well enough to score at the basic level. The majority of students learn to read
proficiently (Chard et al., 2008; Ehri, 2005). Nevertheless, many students do not reach an
acceptable level of reading by Grade 3 and are considered struggling readers (Applegate,
Applegate, McGeehan, Pinto, & Kong, 2009; Pressley, Gaskins, & Fingeret, 2006;
Torgesen & Hudson, 2006).
Educators have used oral reading fluency as a reliable indicator of students’
progress toward overall reading proficiency (Jenkins et al., 2007). The following
researchers have found significant correlations between oral reading fluency rates and
reading proficiency in seven states’ mandated reading assessments: Arizona (Wilson,
2005), Colorado (Shaw & Shaw, 2002; Wood, 2006), Florida (Buck & Torgesen, 2003;
Roehrig et al., 2008), North Carolina (Barger, 2003), Ohio (Vander Meer et al., 2005),
Oregon (Baker et al., 2008), and Pennsylvania (Shapiro et al., 2008). However, it is
unknown whether a similar connection exists in the Texas school system (Roehrig et al.).
An investigation of this possible relationship is important for providing evidence to aid
elementary school administrators, teachers, and reading specialists to identify at-risk
readers. Early identification of struggling readers, combined with research-driven interventions, can help close the gap between these readers and their proficient peers before the
end of Grade 3 (Simmons et al., 2008).
Nature of the Study
To determine if a statistically significant (p < .05) relationship existed between
students’ oral reading fluency rates and their reading proficiency, this nonexperimental
quantitative study compared the middle-of-year oral reading fluency rates and reading
proficiency of 155 Grade 3 students for the 2008-2009 school year in a West Texas
school district. The students resided in a small West Texas town with a population of
6,821, and in which all Grade 3 students attended the same elementary school. The
demographics of the school district are similar to those of all Grade 3 students in Texas
(TEA, 2009c). To obtain as wide a range of scores as possible, all of the Grade 3 students
in this district constituted the nonprobability convenience sample.
The Dynamic Indicators of Basic Early Literacy Skills Oral Reading Fluency
(DIBELS ORF) served as the independent variable, measuring students’ oral reading
fluency rates. The developers of DIBELS ORF found the assessments reliable and valid
(Good & Kaminski, 2002a). Additionally, other researchers have found DIBELS ORF
rates reliable (Baker et al., 2008; Buck & Torgesen, 2003; Roehrig et al., 2008; Shaw &
Shaw, 2002; Vander Meer et al., 2005; Wilson, 2005; Wood, 2006). However, some
researchers disagreed with the definition of fluency used by the developers of DIBELS
(Hudson, Lane, & Pullen, 2005; Young & Rasinski, 2009) and the reliability and validity
of the assessment (Samuels, 2007). Section 2 contains a review of the literature on
DIBELS ORF, including the controversy.
The Grade 3 Reading Texas Assessment of Knowledge and Skills (TAKS) scale
scores (TEA, 2009b) served as the dependent variable, measuring students’ reading
proficiency. The TEA in conjunction with Pearson Education established the reliability
and validity of TAKS (TEA & Pearson, 2008). Staff members of these organizations
have worked regularly with teachers in Texas and national testing experts to ensure that
the TAKS is a quality assessment. Further information describing the reliability and
validity of TAKS is included in section 3.
To address the study problem, I used the middle-of-year 2008-2009 DIBELS ORF
benchmark rates of the Grade 3 sample from January 2009 and the scale score of the
2009 Grade 3 Reading TAKS. I applied the Statistical Package for the Social Sciences
(SPSS, 2009) software program, version 17.0, and conducted a Pearson correlation
analysis to determine if there was a statistically significant relationship (p < .05) between
oral reading fluency rates as measured by DIBELS ORF and reading proficiency as
measured by the scale score of the Grade 3 Reading TAKS.
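Although the analysis was conducted in SPSS, the same Pearson correlation can be illustrated with a minimal sketch in Python. The sketch below is illustrative only; the file name and column names are hypothetical placeholders rather than the actual district data set.

import pandas as pd
from scipy import stats

# Hypothetical data file with one row per student: the middle-of-year DIBELS ORF
# rate (words correct per minute) and the Grade 3 Reading TAKS scale score.
scores = pd.read_csv("grade3_scores.csv")  # columns: dibels_orf, taks_scale

# Pearson correlation between oral reading fluency and reading proficiency.
r, p_value = stats.pearsonr(scores["dibels_orf"], scores["taks_scale"])

print(f"n = {len(scores)}")
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
if p_value < .05:
    print("Reject the null hypothesis: the relationship is statistically significant.")
else:
    print("Fail to reject the null hypothesis at the .05 level.")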
Research Question and Hypothesis
The following research question guided this study: Is there a statistically
significant relationship between Grade 3 students’ oral reading fluency rates and their
reading proficiency? From this question, the following null and alternative hypotheses
were formulated:
H0: There is no statistically significant relationship between students’ oral reading
fluency rates, as measured by the students’ middle-of-year DIBELS ORF rates, and their
reading proficiency, as measured by their scale scores on the Grade 3 Reading TAKS for
the 2008-2009 school year.
H1: There is a statistically significant relationship between students’ oral reading
fluency rates, as measured by the students’ middle-of-year DIBELS ORF rates, and their
reading proficiency, as measured by their scale scores on the Grade 3 Reading TAKS for
the 2008-2009 school year.
Section 3 contains additional information regarding the research question,
hypotheses, methodology, and measurements.
Purpose of the Study
The purpose of this nonexperimental, quantitative study was to determine whether
a statistically significant relationship (p < .05) existed between Grade 3 students’ oral
reading fluency rates and their reading proficiency. With a nonprobability convenience
sample of 155 Grade 3 students in a West Texas school district, I sought to determine if a
statistically significant relationship existed between oral reading fluency rates and
students’ reading proficiency on the summative, high-stakes reading assessment results
for the 2008-2009 school year. The independent variable was oral reading fluency, and
the dependent variable was reading proficiency. The outcome of the study identified additional ways in which elementary teachers can screen for struggling readers so that interventions can take place to help these students increase their reading ability.
Theoretical Framework
The automaticity theory developed by LaBerge and Samuels (1974) and expanded
upon by Samuels (2006) formed the theoretical basis of this study. The automaticity
theory explains the relationship between fluency and comprehension. When initially
proposing the automaticity theory, LaBerge and Samuels found that accurately reading
words alone was not sufficient for reading proficiency. They posited that for students to
devote attention to other aspects of reading, they must first accurately and fluently read
words.
In 2006, Samuels identified four components of the reading process: decoding, comprehension, metacognition, and attention, the last of which he viewed as limited. When
students spend too much time trying to sound out words, they cannot comprehend what
they have read by the end of a line or a page. Unless students are able to fluently decode
the words in a text, they do not have sufficient attention to devote to comprehending what is read or to using metacognitive strategies to improve their comprehension. This theory
explains why slow readers who may accurately identify all or most of the words in a
passage may still not read proficiently enough to pass state-mandated reading
assessments.
Several researchers have tested the automaticity theory and found a relationship
between oral reading fluency and reading proficiency in general (Baker et al., 2008;
Daane, Campbell, Grigg, Goodman, & Oranje, 2005; Kuhn, 2005; Morgan & Sideridis,
2006; Riedel, 2007). However, scholars disagree on the definition of oral reading fluency.
Some define oral reading fluency as rate and accuracy alone (Kame’enui & Simmons,
2001; Riedel), and others include prosody and comprehension (Allington, 2009; Samuels,
2006). I will further discuss the automaticity theory, including controversies in the
literature, in section 2.
Operational Definitions
The following terms as defined were used throughout this study.
Automaticity theory: The four-component process by which proficient readers
decode words fluently, enabling them to focus attention on higher comprehension skills
of the passages read (Samuels, 2006).
Dynamic Indicators of Basic Early Literacy Skills (DIBELS): Short, 1-minute
assessments that educators use to measure the development of early literacy skills. These
assessments include oral reading fluency, retell fluency, and nonsense word fluency
(Good & Kaminski, 2002b).
Matthew effects: The gap between proficient readers and struggling readers. Matthew effects are based on the Biblical concept found in Matthew 25:29, a verse that refers to the rich becoming richer and the poor becoming poorer. In the context of reading, this
term indicates that students who read consistently increase their knowledge base and
vocabulary, and readers who read less will learn less and continue to be struggling
readers. The gap between proficient readers and struggling readers widens as they
progress through school (Morgan et al., 2008; Stanovich, 1998).
Oral reading fluency (ORF): The ability to read words accurately and quickly
(Fuchs et al., 2001; Roehrig et al., 2008) with proper expression (Eldredge, 2005; Hudson
et al., 2005) and comprehension (Marcell, 2007; Samuels, 2007). In the current study,
Grade 3 students’ DIBELS ORF rates were used as the independent variable (Good &
Kaminski, 2002b).
Proficiency level: A scale score of 2100 or more on the Grade 3 Reading TAKS
(TEA, 2009b). Out of the 36 questions, proficient readers answer at least 24 correctly to
demonstrate proficiency.
Reading proficiency: The ability of a reader to decode the words in a passage and
comprehend what it says. Proficient readers employ a variety of strategies to comprehend
a passage, such as determining the purpose for reading, using context clues and the nature
of the passage, and using background knowledge (Cline, Johnstone, & King, 2006).
Proficient readers demonstrate a basic understanding of reading, apply knowledge of
literary elements, use a variety of strategies to analyze a passage of text, and apply
critical-thinking skills to analyze a passage by scoring 2100 or more on the Grade 3
Reading TAKS (TEA, 2004). In the current study, I used students’ scale scores from the
Grade 3 Reading TAKS as the dependent variable to measure reading proficiency.
Scale score: In 2009, the TEA measured performance on the TAKS using a scale
score. A raw score of zero had a scale score of 1399, and a perfect raw score of 36 had a
scale score of 2630. Students who achieved a scale score of 2100 or above were
considered proficient; students who received a scale score of 2400 or above received
commended performance (TEA, 2009b).
Struggling readers: Struggling readers have at least average intelligence, but they
read more slowly than other readers their age and may be at risk for long-term reading
difficulties (Pressley et al., 2006).
Texas Assessment of Knowledge and Skills (TAKS): The TAKS is a state-
mandated, summative, high-stakes assessment in Texas that assesses specific
proficiencies in Grades 3 to 11 in accordance with the state curriculum (TEA, 2009a).
The TAKS is scored with a scale score, defined above. For this study, I used the scale
score of the Grade 3 Reading TAKS to measure the dependent variable of reading
proficiency (TEA, 2006, 2009e).
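The scale score and proficiency level definitions above amount to a simple mapping from a student’s scale score to a performance category. The following minimal sketch, assuming only the cut points stated in those definitions, illustrates that mapping; the function name and the lowest category label are mine, not TEA terminology.

def taks_reading_category(scale_score: int) -> str:
    """Classify a 2009 Grade 3 Reading TAKS scale score (range 1399-2630)."""
    if scale_score >= 2400:
        return "commended performance"
    if scale_score >= 2100:
        return "proficient (met standard)"
    return "not proficient"

print(taks_reading_category(2050))  # not proficient
print(taks_reading_category(2100))  # proficient (met standard)
print(taks_reading_category(2450))  # commended performance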
Assumptions, Limitations, Scope, and Delimitations
In this study, I made four assumptions. First, the DIBELS ORF was a reliable
measure of oral reading fluency. Second, the Grade 3 Reading TAKS was a reliable
measurement of reading proficiency. Third, the Grade 3 students’ DIBELS ORF rates and
Grade 3 Reading TAKS scale scores reported for the target West Texas school district
were accurate and reliable. Finally, the statistical method chosen for this study was
appropriate.
I acknowledged four limitations for this study. The first was the use of a
nonprobability convenience sample, and the second was the limitation of the study
population to students in Grade 3. Because of these factors, the results may not generalize
to students in other elementary grades. Third, the study took place in a single
geographical location, a small town in West Texas. Similarly, the results may not
generalize to other locations in Texas or other states. Finally, quantitative correlational analysis does not establish causation. In correlational studies, a statistical procedure may establish a relationship between two variables, but neither the method nor the results prove that one variable causes another (Gravetter & Wallnau, 2005). Thus, the study results cannot show that low oral reading fluency rates cause poor reading proficiency. Rather, the results indicate only whether a statistically significant relationship exists between the two variables.
The scope focused on the oral reading fluency rates and reading proficiency of Grade 3 students in a West Texas school district. I limited the study to 155 students in Grade 3 in that district’s elementary school. I investigated only the two variables of oral reading fluency and reading proficiency for a single school year, 2008-2009. Each variable was operationalized by a single assessment: the DIBELS ORF for oral reading fluency and the state-mandated Grade 3 Reading TAKS for reading proficiency.
Significance of the Study
This study was significant in several ways. With regard to the literature, the study
filled a gap that existed regarding students’ oral reading fluency and reading proficiency.
Researchers in seven states (Baker et al., 2008; Barger, 2003; Roehrig et al., 2008;
Shapiro et al., 2008; Vander Meer et al., 2005; Wilson, 2005; Wood, 2006) used DIBELS
ORF rates to determine if there was a significant relationship between oral reading
fluency rates and reading proficiency as measured by their states’ mandated reading
assessment. Although each of these studies found that oral reading fluency rates
correlated with reading proficiency, additional studies were needed for other states since
the curriculum standards that define reading proficiency differ from state to state
(Roehrig et al.). This study filled a gap in the literature on the assessment of struggling
readers by determining if such a relationship exists with Grade 3 students in Texas.
In addition, with regard to professional application of the study, the results can
help teachers in Texas. NCLB (2002) mandates that school districts must make adequate
yearly progress in students’ reading scores. In situations where the school or district has
not met adequate yearly progress for 2 consecutive years, state education agencies are
required to impose corrective actions, such as removing the principal, replacing the staff,
receiving advice from outside experts, or requiring the implementation of an entirely new
curriculum.
The study results can help Texas educators identify struggling readers before the
administration of the Grade 3 Reading TAKS. Educators can then provide interventions
designed to improve basic reading skills, such as decoding, fluency, vocabulary, and
comprehension skills. Such interventions may improve struggling readers’ chances of scoring in the proficient range on the state-mandated assessment (Simmons et al., 2008).
In turn, the school districts may increase the likelihood of meeting adequate yearly
progress.
Study results may also contribute to positive social change. Elementary educators
may more easily identify struggling readers so that interventions can take place before
students take the Grade 3 Reading TAKS. Such interventions would target basic literacy
skills and include greater oral reading fluency and decoding strategies to improve
struggling readers’ reading proficiency (Jenkins et al., 2007). Diagnosis of struggling
readers and implementation of skills-based interventions help reduce the risk of students failing the state tests and increase their chances of academic success (Ehri, 2005; Shapiro et al., 2008).
Struggling readers who are identified and remediated before the end of Grade 3
are more likely to improve their reading skills, thus helping to close the academic gap
with more proficient peers (Simmons et al., 2008). When struggling readers experience
success in reading, they are more likely to continue academic achievement through the
grades and graduate from high school (Houge, Peyton, Geier, & Petrie, 2007; Rumberger
& Palardy, 2005). Graduation from high school with proficient reading skills increases an
individual’s opportunities for employment and contribution to society (Katsiyannis et al.,
2007).
Transition Statement
It is important for students to learn to read proficiently by the end of Grade 3
(Ehri, 2005; NCLB, 2002). Nationwide, most students learn to read before the fourth
grade (U. S. Department of Education, 2007); however, between 15% (TEA, 2009c) and
33% (U.S. Department of Education, 2007) of students do not read proficiently.
Researchers in Arizona (Wilson, 2005), Colorado (Wood, 2006), Florida (Roehrig et al.,
2008), North Carolina (Barger, 2003), Ohio (Vander Meer et al., 2005), Oregon (Baker et
al., 2008), and Pennsylvania (Shapiro et al., 2008) have found that oral reading fluency
rates correlated with reading proficiency as assessed on their states’ mandated reading
assessment. The two variables have been studied only in these seven states.
This nonexperimental quantitative study, based on the automaticity theory
(LaBerge & Samuels, 1974) and using Pearson correlation, filled a gap in the literature to
determine with a sample of 155 Grade 3 students in Texas if a statistically significant
relationship (p < .05) existed between their oral reading fluency rates (middle-of-year
DIBELS ORF) and their reading proficiency (end-of-year TAKS). Study findings may
result in positive social change by providing Texas educators with a valuable tool for
identifying struggling readers in Grade 3 (Simmons et al., 2008). Once struggling readers
are identified, educators can provide scientifically-based reading interventions to
strengthen phonemic awareness, phonics, fluency, vocabulary, and comprehension skills
(Chard et al., 2008). The identification and assistance of struggling readers can help them
improve academically and increase their success later in life (Katsiyannis et al., 2007;
Shaw & Berg, 2009).
In section 2, I review pertinent literature on the automaticity theory, oral reading
fluency, reading proficiency, and related concepts. In section 3, I describe the study
methodology in greater detail. Aspects include the sample, instruments, protection of
participants’ rights, data collection, and data analysis. In section 4, I report the study
findings. In section 5, I analyze the findings and discuss data from the study in light of
the previous studies that compared DIBELS ORF rates with student performance on
state-mandated assessments. Additionally, I make recommendations in section 5
regarding future actions and ideas for future studies.
Section 2: Literature Review
Introduction
In this section I review literature pertinent to this study in terms of the theory and
studies relevant to the variables. The section is organized as follows: (a) automaticity
theory; (b) oral reading fluency, including definitions, the role of oral reading fluency in
learning to read, and fluency instruction; (c) reading proficiency, including reading
proficiency and automaticity theory, definitions of reading proficiency, vocabulary and
Matthew effects, and comprehension; and (d) the relationship between oral reading
fluency and reading proficiency in state-mandated assessments, including studies of
Grade 3 only, of Reading First schools, and of Grade 3 and other grades, as well as
limitations of these studies. The section is concluded with a summary.
To investigate the literature for this study, I searched electronic databases, such as
Academic Search Premier, EBSCO, Dissertations and Theses, and ERIC, using the
keywords oral reading fluency/ORF, phonemic awareness, phonics, reading
comprehension, reading proficiency, and vocabulary. I also used bibliographies of
articles to search for additional literature and reviewed peer-reviewed articles. In
addition, I read several recently published books on the subject of oral reading fluency.
Automaticity Theory
LaBerge and Samuels (1974) used the automaticity theory to explain the
relationship between fluency and reading proficiency. They acknowledged that reading is
a complex skill that takes years to develop. Fluent readers are able to process the
necessary information in less than a second. However, some readers never reach a level at
which they fluently read a text, even though they are able to adequately communicate
orally.
Reading is a complex skill with many subskills: readers must recognize letters, spelling patterns, and words as well as attach meanings to words in context.
According to this theory, attention is a limited faculty, and when students are beginning
to learn to read, their attention is focused on factors other than meanings, such as shapes
of letters (LaBerge & Samuels, 1974). In prekindergarten and kindergarten, students must
initially focus on the shapes within a letter, the length of lines, and the direction letters
are facing. As beginning readers, students have great difficulty distinguishing between letters. In addition, at first, students focus their attention on recognizing and naming letters and do not have enough attention left to focus on the sounds.
In addition to letter naming, LaBerge and Samuels (1974) found that a certain
amount of attention devoted to other subskills is also necessary for development of fluent
reading. Once beginning readers automatically recognize letters, they must learn to
associate sounds with the letters. Then they can focus their attention on reading parts of
words and entire words.
To test their theory of automaticity, LaBerge and Samuels (1974) designed an
experiment with eight college students. They measured the time the students took to
identify patterns of familiar and unfamiliar letters. The students were able to accurately
identify patterns of familiar letters more quickly than those of unfamiliar letters. Next, the
students received training on the unfamiliar letters. At the end of a 20-day period, they
were able to recognize the unfamiliar letters more accurately although still not as fast as
the familiar letters. LaBerge and Samuels concluded that some degree of attention was
still needed for the students to make the association for unfamiliar letters.
From these results, LaBerge and Samuels (1974) expressed concerns regarding
their observations of teaching reading. They had found that teachers often directly teach
letter names until students are able to correctly identify the letters. Then teachers move to
direct instruction in other aspects of reading, such as letter sounds and blending.
However, students may still not be able to fluently identify letters. Because their attention
is focused on this task, they may not have adequate attention to devote to learning sounds
and how to blend them together to make words. Thus, LaBerge and Samuels
recommended that teachers continue teaching and testing letter naming until students
reach a level of automaticity. The researchers maintained that only when students can
automatically recognize letters can they focus a significant amount of attention on the
sounds the letters make.
In 1974, when LaBerge and Samuels first proposed the automaticity theory, they
focused on automaticity at the word level. Their study showed how readers who had not
reached a level of automatically decoding words would have trouble comprehending. At
that time, LaBerge and Samuels did not conceptualize that automaticity in other areas
was also important.
However, as researchers continued to develop the automaticity theory, they
recognized that metacognition also played an important role in comprehension and that
aspects of metacognition could be automatized (Samuels, Ediger, Willcutt, & Palumbo,
2005). Metacognition is the readers’ awareness of what is being read and whether or not
comprehension is taking place. Readers who use metacognition monitor whether they are thinking about what the passage says. Factors such as readers’
motivation to read, their attitudes and beliefs about their reading ability, their interest in
the topic, and the amount of attention they devote to reading can affect how much they
comprehend. Whether or not readers are distracted by other factors, such as the
environment, noises, or other thoughts, can affect comprehension.
Metacognition also involves the insight readers bring to a text derived from
background knowledge. When readers read something with which they are highly
familiar, gaining new insights and adding to their cognitive repertoire of knowledge
seems easy (Samuels et al., 2005). However, when readers know very little about the
topic, they may have to work harder to comprehend the passage. Good readers learn to
recognize when cognitive aspects interfere with their ability to comprehend and to make
adjustments. Samuels et al. recommended that teachers instruct readers to be aware of
such factors by modeling thinking strategies and measuring students’ use through rubrics
until they automatically use metacognition strategies when reading.
According to automaticity theory, students who have low fluency rates struggle
with reading proficiency (Goffreda, Diperna, & Pedersen, 2009). In order to read
proficiently, students must be able to recognize words in context and use a variety of
strategies to comprehend a passage. Samuels (2006) summarized the automaticity theory
by identifying four components as elements of the relationship between oral reading
fluency and reading proficiency: decoding, comprehension, metacognition, and attention.
First, a reader must decode the words in a text (LaBerge & Samuels, 1974).
Beginning or struggling readers may devote so much attention to decoding words that
comprehension is difficult, if not impossible (Samuels, 2006). However, proficient
readers are able to decode words and quickly attach meaning to them (Shaywitz, 2003).
When skilled readers read a word, the brain attaches meaning to it within 150 milliseconds, less than the time it takes for the heart to beat once. However,
beginning and struggling readers process reading differently. Whereas skilled readers see
whole words, beginning and struggling readers, such as dyslexics, may only see a few
letters or sounds at a time (Samuels, 2006).
In 1974, LaBerge and Samuels could only presume that oral reading fluency
would affect comprehension. By 2006, Samuels had identified comprehension as another
component of the reading process. Fluent readers spend less attention on decoding words
and therefore can devote more attention to comprehending. Proficient readers combine
the information in a text with their background knowledge and critically analyze the text.
Samuels believed that readers must automatically decode before they can devote
sufficient attention to comprehension.
Samuels’ (2006) next component of the reading process is metacognition, or one’s
own awareness of the thought processes. Proficient readers use metacognition when they
do not understand a text and make adaptations so they can comprehend it. Walczyk and
Griffith-Ross (2007) found that readers can improve comprehension by using metacognition to recognize when they do not comprehend and by adapting with reading strategies such as slowing down, reading out loud, rereading, or sounding out difficult words. Some readers use
metacognition to improve their use of reading strategies until the strategies become
automatic. For example, readers can learn the skill of making inferences and use this skill
repeatedly until they automatically infer meaning while reading a passage. Proficient
readers use metacognition as they read and analyze text (Samuels, 2006).
Samuels’ (2006) final component of the reading process is attention. This is an
outcome of the previous three components, decoding, comprehension, and metacognition.
Beginning readers use excessive attention to decode words and have insufficient attention
remaining for comprehending or thinking about reading strategies they could use. As
readers become more proficient, they decode words automatically and devote more
attention to comprehension and metacognition. According to the automaticity theory,
readers are not proficient until they can apply their attention to decode, comprehend, and
use metacognition at the same time (Samuels, 2006).
Oral Reading Fluency
Originally LaBerge and Samuels (1974) used the automaticity theory to explain
why low reading fluency rates affect reading proficiency. The degree of expression can
indicate students’ understanding of the passage (Samuels, 2006). Most researchers
studying oral reading fluency and reading proficiency agree that oral reading fluency and
reading proficiency are related (Daane et al., 2005; Deeney, 2010; Miller &
Schwanenflugel, 2006). Researchers disagree, however, on how fluency is defined and
the validity of assessments used to assess it. To measure oral reading fluency, researchers
have developed assessments such as AIMSweb (Edformation, 2004), the DIBELS (Good
& Kaminiski, 2002a), the Reading Fluency Monitor (Read Naturally, 2002), and the
Texas Primary Reading Inventory (University of Houston, 1999). In particular, Reading
First schools have widely used the DIBELS with more than 1,800,000 children
(Allington, 2009; Baker et al., 2008; Glenn, 2007; Manzo, 2005).
Some researchers have claimed that political reasons may have motivated the
widespread use of the DIBELS (Allington, 2009; Goodman, 2006; Manzo, 2005;
Pearson, 2006; Pressley, Hilden, & Shankland, 2005; Riedel, 2007). One of the
developers of DIBELS, Good from the University of Oregon, served on national
committees that developed Reading First. Critics (e.g., Glenn, 2007) have alleged that Good personally benefitted because schools whose applications for Reading First grants were denied felt pressured by federal officials and consultants to include DIBELS in their grant applications (Manzo, 2005). However, despite such claims, researchers recognize DIBELS as a respected way of measuring oral reading fluency (Baker et al., 2008; Riedel, 2007).
Definition of Oral Reading Fluency
Many researchers have focused on the definition of fluency used by the
developers of the DIBELS, although definitions vary (Hudson et al., 2005; Samuels,
2006). A fluent reader can read text with speed, accuracy, and proper expression
(NICHHD, 2000). Worthy and Broaddus (2002) defined fluency as “not only rate,
accuracy, and automaticity, but also of phrasing, smoothness, and expressiveness” (p.
334). Effective readers do more than just read the words; they understand what they read
(Marcell, 2007). Hudson et al. included accuracy, rate, and prosody in their definition of
fluency. Samuels (2007) and others (Allington, 2009; Miller & Schwanenflugel, 2008;
Rasinski, 2006) contended that educators should consider prosody or the expression with
which students read a passage. Fuchs et al. (2001) found that prosody was difficult to
measure, so they chose to focus on rate and accuracy. The developers of the DIBELS
defined fluency as rate and accuracy (Good & Kaminski, 2002a).
Various researchers have asserted that the definition of fluency should include
comprehension (Kuhn, 2005; Marcell, 2007; Pikulski & Chard, 2005; Samuels, 2007).
Pikulski and Chard (2005) defined fluency as follows:
Reading fluency refers to efficient, effective word-recognition skills that permit a
reader to construct the meaning of text. Fluency is manifested in accurate, rapid,
expressive oral reading and is applied during, and makes possible, silent reading
comprehension. (p. 510)
Samuels (2006) not only agreed that the definition of fluency should include
comprehension but also stated that measures of fluency should assess reading and
comprehension at the same time. Samuels noted that beginning readers focus first on
decoding, and once they are able to decode automatically, they focus on comprehension.
Samuels emphasized that the automaticity theory, which he and LaBerge developed
(LaBerge & Samuels, 1974), requires students to decode words automatically so that they
can comprehend. He pointed out that educators who assess fluency first and then assess
comprehension later or with a different assessment miss the point of automaticity.
Deeney (2010) included endurance in her definition of fluency. She agreed that the definition of fluency consists of four components: accuracy, rate or speed, prosody, and comprehension. However, in her work with students she saw the need to consider endurance as well. In her opinion, 1-minute fluency probes provide useful information for identifying which students are struggling, but they do not provide adequate information for determining why students struggle and what can be done to address their academic needs. She agreed with Pikulski and
Chard (2005), who called for a deeper view of fluency. Deeney believed that such a
deeper view includes rate, accuracy, prosody, comprehension, and endurance.
The developers of DIBELS (Kame’enui & Simmons, 2001) and others (Francis et
al., 2005; Harn, Stoolmiller, & Chard, 2008; Hasbrouck & Tindal, 2006; Jenkins et al.,
2007; Katzir et al., 2006) have used the same definition of oral reading fluency as Roehrig et al. (2008): “accuracy and rate in connected text, or correct words per minute” (p.
345). Reading is a complex skill (Fuchs et al., 2001) that includes various components.
Kame’enui and Simmons discussed the simplicity and complexity of oral reading fluency. They cited Watson (1968), the codiscoverer of the structure of DNA, who stated, “The idea is so simple it had to be true” (as cited in Kame’enui & Simmons, p. 203). Like DNA, fluency
is a complex skill with many components, including phonemic awareness, alphabetic
principle, word reading, expression, and comprehension.
Fluency is also easily recognized. For example, a person listening to struggling
readers decoding words sound by sound can easily recognize they are not reading
fluently. Kame’enui and Simmons (2001) emphasized that even though educators have
debated the exact definition of oral reading fluency for decades, definition is not the
major point. Rather, as LaBerge and Samuels (1974) recognized when they developed the
automaticity theory, reading proficiency is a complex skill with many different aspects.
Although LaBerge and Samuels focused only on rate and accuracy and the subskills of
letter and word recognition in 1974, they recognized that other aspects are involved.
When educators use only rate and accuracy to define fluency, they are measuring only the
subskill of fluency. Different assessments can be used to measure other subskills of
reading proficiency.
Skills such as phonemic awareness, phonics, and word reading are necessary
components of fluency that enable students to read passages expressively and with
understanding. However, by defining fluency as accuracy and rate, researchers gain a
measurable, workable definition to analyze the progress of beginning readers toward the
goal of adequately comprehending what they read. The National Reading Panel
(NICHHD, 2000) identified fluency as one of the main components of early reading
instruction. Hasbrouck and Tindal (2006) analyzed thousands of students’ oral reading
fluency rates in order to develop norms for Grades 1 through 8. Educators can listen to a
student read for as little as 1 minute and compare the fluency rate with those of thousands of other students at the same grade level at the beginning, middle, or end of the school year. At
the conclusion of their research, Hasbrouck and Tindal recognized the complexity of
reading and recommended that educators consider fluency in the context of the other
components necessary for mastery.
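Because this operational measure reduces to words correct per minute, the scoring of a 1-minute probe can be illustrated with a short sketch. This is a sketch only: the function name is mine, and the benchmark value below is a placeholder for illustration, not an actual DIBELS or Hasbrouck-Tindal norm.

def words_correct_per_minute(words_attempted: int, errors: int, seconds: float = 60.0) -> float:
    """Score a timed oral reading probe as a words-correct-per-minute rate."""
    words_correct = max(words_attempted - errors, 0)
    return words_correct * 60.0 / seconds

# Hypothetical student: attempted 98 words with 4 errors during a 1-minute probe.
wcpm = words_correct_per_minute(98, 4)
PLACEHOLDER_BENCHMARK = 90  # illustrative cut point only; consult published norms
status = "at or above" if wcpm >= PLACEHOLDER_BENCHMARK else "below"
print(f"WCPM = {wcpm:.0f} ({status} the placeholder benchmark)")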
In the present study, I defined oral reading fluency as the accuracy and rate at
which students read a grade level text (Francis et al., 2008; Harn et al., 2008; Hasbrouck
& Tindal, 2006; Jenkins et al., 2007; Katzir et al., 2006). This definition provided a
specific way to measure oral reading fluency so that I could determine if a relationship
existed between oral reading fluency and reading proficiency. Although I adopted the
limited definition of accuracy and rate (Roehrig et al., 2008), I acknowledged that
prosody plays an important role in reading and fluency.
However, because prosody is difficult to measure (Fuchs et al., 2001;
Schwanenflugel et al., 2006), I chose to use the more measurable definition of oral
reading fluency. I also acknowledged that oral reading fluency is more than reading
quickly. Proficient readers are also able to comprehend what they read, as reflected in
the research question of whether a relationship exists between oral reading fluency and
reading proficiency, as measured by Grade 3 students’ DIBELS ORF and the Grade 3
Reading TAKS.
The Role of Oral Reading Fluency in the Context of Learning to Read
As readers begin to read, fluency has a significant role (Harn et al., 2008).
Fluency for educators may be compared to temperature readings for physicians
(Hasbrouck & Tindal, 2006). A physician considers a patient’s temperature, and if it is
above normal, the physician looks for possible causes and then determines an appropriate
treatment. In a similar way, educators examine oral reading fluency. If a student’s oral
reading fluency rate is below average, educators look for possible causes and then
determine appropriate interventions (Hasbrouck & Tindal).
Ehri (2005) studied the process by which students learn to read words. She
identified five phases: (a) the prealphabetic phase, (b) the partial-alphabetic phase, (c) the
full-alphabetic phase, (d) the consolidated-alphabetic phase, and (e) the automatic-
alphabetic phase. These phases describe the process that beginning readers engage in as
they develop the skill of reading.
Phonemic awareness is one of the first skills beginning readers must learn. Ehri
(2005) discussed phonemic awareness in her first two phases: the prealphabetic and
partial-alphabetic phases. Beginning readers must learn to recognize that words they use
in their spoken vocabulary are constructed of individual sounds (Ehri & McCormick,
1998). Katzir et al. (2006) demonstrated the role of phonemic awareness in fluency. They
concluded that readers need phonemic awareness the most when decoding words they
have not seen before. After readers have developed a certain level of fluency, the brain
focuses less on phonemic awareness and more on other components of fluency, such as
the speed at which they are able to recognize letter patterns or word units. Researchers
have also found phonemic awareness in kindergarten to be a predictor of oral reading
fluency in later years (Katzir et al.).
As students learn phonemes, they begin to learn phonics, learning to associate the
phonemes to letters or graphemes (Ehri, 2005; Ehri & McCormick, 1998). Although
beginning readers focus on phonemic awareness, they deal only with the sounds of
language. When they begin to understand that letters represent sounds, they use phonics.
In Ehri’s second phase, the partial-alphabetic phase, children use the first and last letters
when identifying words. Children then master the ability to sound out words.
Approximately halfway through first grade, most readers enter what Ehri described as
her third phase, the full-alphabetic phase. During the first two phases, readers
laboriously sound out words. As they move into the full-alphabetic phase, they begin to
recognize that certain letters work together to make certain sounds. Recognizing more of
these letter patterns, students are able to read faster. Over time, students learn to
recognize more letter patterns and words, allowing them to read more words correctly
each minute (Ehri & McCormick).
Researchers have established a relationship between the knowledge of phonetic
elements and fluency (Chard et al., 2008; Eldredge, 2005; Harn et al., 2008). Chard et al.
documented that students who had demonstrated mastery of the alphabetic principle by
the end of Grade 1 were more fluent readers than their struggling peers at the end of
Grades 2 and 3. The readers who struggled with the alphabetic principle in Grade 1 did
not progress at the same rate as their peers who had mastered the alphabetic principle.
Chard et al. emphasized that it is critical for teachers to ensure that students have a good
grasp of phonetic knowledge by the end of Grade 1, because phonetic knowledge serves
as a predictor of how fluently students will read in later years. Mastery at the full-alphabetic
phase is essential before moving to the next two stages, the consolidated-alphabetic and
the automatic-alphabetic phases.
Usually during Grade 2, students progress to Ehri’s (Ehri & McCormick, 1998)
fourth phase, the consolidated-alphabetic phase. Readers begin to recognize more
combinations of letters within words (Harn et al., 2008). They begin to read words
syllable by syllable as well as to recognize prefixes and suffixes in words. This ability to
identify groups of letters helps to facilitate fluent reading as readers recognize more
words by sight (Ehri, 2005). However, readers may not completely develop the concepts
of the full-alphabetic phase until Grade 8 (Ehri & McCormick). In fact, some students
learn to read and progress well until approximately Grade 4, when they begin to read
words with four or more syllables. As Ehri pointed out, educators may need to provide
further instruction to help these students see the syllabic units in larger words.
Fluency is the final phase, the automatic-alphabetic phase, in Ehri’s (Ehri &
McCormick, 1998) developmental sequence to early reading proficiency. During this
phase, students are able to read most words quickly and efficiently. When they encounter
a word they do not know, they use one or more of several methods they have developed
to ascertain meaning. Speece and Ritchey (2005) established that oral reading fluency
predicts reading comprehension. In their study with students in Grade 1 and Grade 2, a
significant gap was found between the oral reading fluency rates of students at risk for
reading difficulty and their peers who were not at risk. The researchers concluded that
growth in first-grade oral reading fluency accounted for the most unique variance in
students’ second-grade gains in reading mastery and state assessment achievement.
Fluency Instruction
Oral reading fluency has been correlated with overall reading competence (Fuchs
et al., 2001), and studies have confirmed several strategies to improve reading fluency.
When students repeatedly read a passage, their fluency increases (Begeny, Daly, &
Valleley, 2006; Hiebert, 2005; Martens et al., 2007; Therrien, Gormley, & Kubina, 2006;
Therrien & Hughes, 2008). The repeated reading strategies can be implemented in a
variety of ways, including readers’ theater, in which students repeatedly read a script as
they prepare to perform the reading for an audience (Corcoran & Davis, 2005; Rasinski,
2006). Although studies have found repeated readings effective, Kuhn (2005) found that
wide reading is just as effective, and Morgan and Sideridis (2006) found motivational
strategies were more effective than repeated readings. In addition, other strategies have
improved fluency. Nes Ferrera (2005) found that pairing struggling readers with more
experienced readers helped the struggling readers to read more fluently. Rasinski and
Stevenson (2005) found that parents were an effective resource when they worked daily
at home on reading with their at-risk children.
Reading Proficiency
Reading proficiency involves more than mechanical reading of a passage. Based
on their review of literature, the National Reading Panel (NICHHD, 2000) captured the
essence of reading by identifying its five most important components. These are
phonemic awareness, phonics, fluency, vocabulary, and reading comprehension.
Reading Proficiency and Automaticity Theory
Several researchers (Baker et al., 2008; Kuhn, 2005; Morgan & Sideridis, 2006;
Therrien & Hughes, 2008) have confirmed a relationship between reading proficiency
and automaticity. Baker et al. found that the DIBELS ORF rates of 4,696 Grade 3
students in Oregon significantly predicted their performance on the state-mandated
assessment. Students who read fluently also comprehended well enough to attain scores
of proficient on the state assessment.
Kuhn (2005) also confirmed the automaticity theory in a study of 24 Grade 2
students in a southeastern United States city, divided into three intervention groups and a
control group. The first group
participated in the repeated reading condition, in which students read the same story
repeatedly over a 3-day period using strategies such as modeling, repetition, choral
reading, and pair reading. Over the 6-week period, this procedure was followed with six
books.
The second group participated in the nonrepetitive reading condition, in which
students read each book one time. The students read the same six stories as the ones used
in the repeated reading condition, with an additional 12, so that a new story was available
at each session. The third group participated in the listening only condition, in which the
researcher expressively read the books aloud to the students. Over the 6-week period,
these students listened to the same 18 stories as the students in the nonrepetitive group
read. The fourth group was the control group, in which students received no interventions
outside of the normal classroom instruction (Kuhn, 2005).
Kuhn (2005) found that the repeated reading and nonrepetitive reading
interventions were more effective in helping students decode words than listening
alone or no intervention. Thus, Kuhn’s work confirmed the automaticity
theory. The students in the repeated reading and nonrepetitive reading groups
automatized the knowledge of words and sounds better than the
students in the listening only and control groups. Students in the first two groups
demonstrated greater gains in their ability to read both real and nonsense words and in
their oral reading fluency rates.
Therrien and Hughes (2008) studied the effects of repeated reading and question
generation on students’ reading fluency and comprehension. During a 2-week period,
students who read at Grade 2 or 3 instructional levels were randomly divided into two
groups. One group received the repeated reading intervention. Students were asked to
read a passage repeatedly until they reached the desired fluency level. On average,
students reached the fluency goal after 2.42 readings. The other group received the
question-generation intervention. Students read the passage once and received questions
to cue them to better comprehend the narrative passage, such as the following: (a) Who is
the main character?, (b) Where and when did the story take place?, and (c) What did the
main character do?
In both groups after the intervention was completed, tutors asked both factual and
inferential questions. Results documented that repeated reading improves fluency:
students in the repeated reading group read 22.5 more correct words per minute than
students in the question-generation group, who only read the passage one time.
Additionally, the students in the repeated reading group comprehended more factual
questions than the students in the question-generation group. There was no significant
difference between the two groups when they answered inferential questions (Therrien &
Hughes, 2008).
Therrien and Hughes (2008) concluded that repeated reading improved both
fluency and comprehension. However, they recommended that additional research be
conducted to determine the effects of text difficulty. Oral reading fluency rates are
important to consider when students are reading passages in which they cannot read a
significant percentage of the words. For example, in this study the researchers considered
instructional level to be one at which students correctly read 85% to 95% of the words.
Therefore, the students in the question-generation group may not have been able to
understand 5% to 15% of the words in the text. When students do not know 5% to 15%
of the words in a passage, their comprehension can be affected. The situation can be
further compounded when students read at their frustration level, at which they read
correctly fewer than 85% of the words in the passage.
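To make these accuracy thresholds concrete, the following minimal sketch in Python (illustrative only; the function name and the labeling of accuracy above 95% as independent level are my assumptions, not part of Therrien and Hughes’s study) classifies a passage by the percentage of words a student reads correctly.

    def reading_level(words_read_correctly, total_words):
        # Thresholds from the ranges described above: below 85% correct is
        # frustration level; 85% to 95% is instructional level. Labeling
        # accuracy above 95% as "independent" is a common convention and an
        # assumption here, not a finding of the study.
        accuracy = 100.0 * words_read_correctly / total_words
        if accuracy < 85.0:
            return "frustration"
        elif accuracy <= 95.0:
            return "instructional"
        return "independent"

    # Example: a student who reads 90 of 100 words correctly is at the
    # instructional level.
    print(reading_level(90, 100))  # prints "instructional"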
Therrien and Hughes (2008) recommended that other research studies focus on
how comprehension is affected by readers’ levels of difficulty. The researchers also
recommended that studies with longer intervention time be conducted to indicate whether
more than 2 weeks of intervention would enable the students in the question-generation
group to use cued questions to answer the inferential questions. This study demonstrated
that repeated reading does improve both fluency and comprehension.
In contrast to these and other studies that found repeated reading to be an effective
intervention (Begeny et al., 2006; Hiebert, 2005; Hudson et al., 2005; Rasinski, 2006;
Therrien et al., 2006; Therrien & Kubina, 2007), Morgan and Sideridis (2006) found that
repeated reading was not as effective as other strategies. They conducted a meta-analysis
of 30 studies that used single-subject research designs to determine “the effectiveness of
different types of interventions on fluency for students with or at risk for learning
disabilities” (p. 200). The researchers categorized the interventions by the following
strategies: (a) keywords and previewing, (b) listening and repeated readings, (c) goal
setting plus performance feedback, (d) contingent reinforcement, (e) goal setting plus
feedback and reinforcement, (f) word recognition, and (g) tutoring.
According to Morgan and Sideridis’ (2006) findings, the most effective
interventions were reinforcement, goal setting plus feedback, and goal setting plus
feedback and reinforcement. When the researchers analyzed the students’ improvements
over time, the goal setting interventions showed significant growth, and the listening and
repeated readings interventions did not. Although Morgan and Sideridis’ findings did not
concur with other researchers’ findings regarding repeated readings, their results
nevertheless substantiate the automaticity theory. They found that when students were
motivated to set goals and were positively reinforced, they were able to improve their
automaticity, as demonstrated by increases in their oral reading fluency rates.
Definition of Reading Proficiency
After the authorization of NCLB (2002), the U.S. Department of Education
formed the NARAP (2006) to define reading and reading proficiency. Since NCLB
mandated that all students be proficient in reading by 2013-2014, states and test
developers needed a definition of reading proficiency to create appropriate measurement
assessments. This task was not a simple one. The definition had to capture the essence of
reading and encompass the developmental attributes of reading across grade levels
(NARAP).
Each of the 50 states defines reading proficiency at each grade level in terms of
its curriculum expectations, as measured by the state’s reading assessment. NARAP’s
(2006) definition also had to include students with disabilities who access texts
differently from nondisabled students. For example, blind students access texts through
braille. Hearing-impaired students may not decode the words of a text in the same
manner as nonhearing-impaired students. Furthermore, students with learning disabilities
such as dyslexia may require accommodations, including extra time in order to read a
passage proficiently.
Thus, the national definition of reading proficiency had to allow for differences in
states’ definitions, curriculum differences, students’ ages and grades, and students’
disabilities. After convening a panel of experts and discussing working definitions with
focus groups, NARAP (2006) formulated the following definition of
reading proficiency:
Reading is the process of deriving meaning from text. For the majority of readers,
this process involves decoding written text. Some individuals require adaptations
such as braille or auditorization to support the decoding process. Understanding
text is determined by the purposes for reading, the context, the nature of the text,
and the readers’ strategies and knowledge. (Cline et al., 2006, para. 8)
States were able to use this definition as a guide to examine
their curriculum expectations at each grade level and design assessments to measure
reading proficiency. Under NCLB (2002), all 50 states assess reading proficiency at
Grades 3, 5, 8, and the exit level. Consequently, as many as 200 definitions of reading
proficiency could exist (four grades times 50 states).
Many states also give assessments of reading proficiency in more grades. In Texas, for
example, assessments are given in eight grades, Grades 3 through 10 (TEA, 2009d).
Definitions vary among states. TEA, for example, requires proficient readers in
Grade 3 to have a basic understanding of reading, apply knowledge of literary elements,
use a variety of strategies to analyze text, and apply critical-thinking skills to analyze the
passage (TEA, 2004). Florida’s state assessment that measures reading proficiency
contains four areas of assessment: (a) word phrases in context; (b) main idea, plot, and
purpose; (c) comparison and cause/effect; and (d) reference and research (Florida
Department of Education, 2005). In Colorado, the definition of reading proficiency
includes requiring students to “locate, select, and make use of relevant information from
a variety of media, reference, and technological sources” and to “read and recognize
literature as a record of human experience” (Colorado Department of Education, 2010,
para. 7). Even though NARAP (2006) provided states with a general definition of reading
proficiency, as these examples show, each state assesses reading proficiency with its
own definition, based on its curriculum standards.
As the definition of reading proficiency in Texas shows, reading proficiency is a
complex skill. Proficient readers, as noted earlier, must attach meaning to the words they
read to understand what the author is communicating (NARAP, 2006). Proficient readers
not only read the words on a page, but they use strategies to read between the lines and
beyond the lines. Thus, a review of literature relating to vocabulary and comprehension is
warranted.
Vocabulary and Matthew Effects
Vocabulary is one of the National Reading Panel’s five essential components for
effective reading programs (NICHHD, 2000). Vocabulary comprises the words an
individual knows (Rosenthal & Ehri, 2008). People may identify words by the way they
sound, the way they are used in context, or the way they are spelled. New words are
introduced through words that are either heard or read. Thus, readers can read more
proficiently when they decode unknown words and attempt to determine the meaning of
unknown words (NICHHD, 2000). Ouellette (2006) found that readers are more likely to
decode an unknown word if it exists in their spoken vocabulary. When readers know 90%
to 95% of the words in a text, they are able to figure out the meaning of the other
unknown words (Hirsch, 2003).
Struggling readers have difficulty decoding words. Because reading is difficult for
them, they become frustrated (Ehri & McCormick, 1998). They must work hard to read
and often find numerous ways to avoid reading. Consequently, they do not want to read
widely, and their vocabulary does not increase. Conversely, proficient readers have a
larger vocabulary than struggling readers because they have read more. Proficient readers
are able to use their vocabulary to decode and determine the meaning of even more
unknown words. The more they read, the more they know. Stanovich (1998) referred to
this gap between proficient readers and their struggling peers as Matthew effects—a
reference to the Biblical passage, Matthew 25:29, that refers to the rich becoming richer
and the poor becoming poorer (p. 10). For struggling readers, the gap widens as they
progress through the grades (Morgan et al., 2008; Stanovich, 1998).
Several studies support Matthew effects. Katz, Stone, Carlisle, Corey, and Zeng
(2008) studied the difference in growth of reading skills between Grade 1 proficient
readers and struggling readers. The researchers reported that not only did the struggling
readers have lower oral reading fluency rates than the proficient readers, but they
improved at a slower rate. A study with students in kindergarten to Grade 3 in Oregon
and Texas documented the presence of Matthew effects for students in all four grades
(Chard et al., 2008). Rosenthal and Ehri (2008) studied the acquisition of new vocabulary
with students in Grades 2 and 5. For students at both grade levels, the researchers
documented the gap between struggling readers and proficient readers and its effect on
students’ learning of new words.
Consequently, struggling readers’ vocabulary is limited when compared to their
proficient peers (Rosenthal & Ehri, 2008). Struggling readers’ limited vocabulary hinders
them from learning as many new words as their proficient fellow students. Struggling
readers who have a limited vocabulary have difficulty understanding texts that they read,
decoding and learning new words, and developing skills that allow them to become
proficient readers.
Comprehension
The final component for effective reading programs identified by the National
Reading Panel is comprehension (NICHHD, 2000). Comprehension and oral reading
fluency rates are related, and researchers continue to examine the relationship from
various perspectives. These include questions such as the following: Are oral reading
fluency rates and comprehension related only in the beginning stages of reading? Do all
readers with low reading fluency rates struggle with comprehension? Are there readers
with high oral reading fluency rates who are unable to comprehend? The following
researchers have asked these questions.
Fuchs et al. (2001) were among the first to establish oral reading fluency as an
accurate and reliable measure that served as a predictor of general reading
comprehension. Other researchers (Baker et al., 2008; Chard et al., 2008; Katz et al.,
2008; Simmons et al., 2008) have since correlated oral reading fluency rates and
comprehension at different grade levels and on a variety of assessments. As beginning
readers progress through the various phases of learning to read (Ehri, 2005), teachers
design instructional strategies to meet their needs and help them improve their reading
skills. By the middle of first grade, a connection is established between how many words
per minute students read and how much they comprehend (Hosp & Fuchs, 2005).
For students in Grades 1 through 4, researchers (Hosp & Fuchs, 2005; Simmons
et al., 2008) have found that oral reading fluency correlated with reading performance on
the Woodcock Reading Mastery Test-Revised (Woodcock, 1973). For these same grades,
other researchers have studied the relationship between oral reading fluency and
comprehension (Daane et al., 2005). Baker et al. (2008) and Chard et al. (2008) found a
relationship between oral reading fluency rates and performance on SAT-10. The results
of Pressley et al. (2005) confirmed a relationship between oral reading fluency and
reading performance on the TerraNova. Speece and Ritchey (2005) also demonstrated the
relationship between oral reading fluency and the Comprehensive Test of Phonological
Processing (CTOPP) and the Test of Word Reading Efficiency (TOWRE).
Researchers have also studied the relationship between oral reading fluency and
comprehension in older students. With students above Grade 4, Therrien and Hughes
(2008) as well as Therrien and Kubina (2007) found that oral reading fluency and
comprehension were significantly related. In work with middle and junior high school
students, Fuchs et al. (2001) established a correlation between oral reading fluency and
reading proficiency as measured by the Stanford Achievement Test. However, for
students at various grade levels, additional research should be conducted to examine
whether a statistically significant relationship exists between oral reading fluency and
reading proficiency as defined by specific state reading assessments, such as the TAKS.
The current study filled this gap for Grade 3 students in Texas.
Older struggling readers with low oral reading fluency rates can learn to use
certain learning strategies to improve their comprehension (Walczyk & Griffith-Ross,
2007). Walczyk and Griffith-Ross worked with students in Grades 3, 5, and 7 to
determine if struggling readers at these grade levels could develop compensatory
strategies to help them read text. The researchers found that struggling readers sometimes
slowed down in order to facilitate their comprehension. For example, to compensate for
difficulty with decoding, struggling readers would sound out unknown words or simply
skip them. When the students realized they were not comprehending, they might pause,
look back, or reread the text.
In addition, factors other than familiarity or understanding of words can affect
comprehension. These factors include degree of readers’ motivation or feelings of time
pressure. Walczyk and Griffith-Ross (2007) found that students who had low motivation
read slowly, but those who were interested in the text could comprehend by using
compensation strategies. Stage of maturation also seemed to play a role in the effect of
time pressure on comprehension. Students in Grade 3 comprehended less when under
time constraints than students in Grade 7. However, students in Grade 5 performed at
generally the same level with and without time constraints.
Walczyk and Griffith-Ross (2007) conjectured that the readers’ engagement made
the difference. The students in Grade 3 seemed to be engaged in the text as they tried to
decode the words, and time pressure negatively affected their comprehension. In contrast,
by Grade 7 many of the students had become less engaged with routine reading
assignments, but their engagement increased when they were timed. The students in
Grade 5 appeared to be in transition between the two modes of reading text. Thus,
Walczyk and Griffith-Ross demonstrated that students with low fluency rates can
comprehend text by using metacognitive strategies to compensate, such as sounding out
words, and engaging with greater motivation when experiencing time pressure.
Researchers indicate that oral reading fluency rates predict comprehension (Chard
et al., 2008; Daane et al., 2005) not because readers can read a passage quickly but
because fluent readers are able to use mental energy comprehending rather than
struggling with decoding (Fuchs et al., 2001; Samuels, 2006). Kuhn et al. (2005)
indicated that when the readers in their study focused on how fast they read the passage,
their comprehension was not as effective as that of readers who focused on the passage itself.
Samuels (2007) reiterated that when readers focus too intensely on how fast they are
reading, their ability to comprehend suffers.
Fluency is only one component of reading (NICHHD, 2000). When Hasbrouck
and Tindal (2006) developed the oral reading fluency norms, they observed that once
readers reached the 50th percentile, teachers should not encourage them to read faster.
The researchers recommended that teachers provide intervention only for students
who read 10 or more words per minute below the 50th percentile. This advice was based on
the principle that the goal of fluency intervention is to read fluently enough to focus on
comprehension, not to race to read the most words in a minute.
Some readers read fluently but have trouble comprehending (Harrison, 2008;
Samuels, 2006). If readers are not actively engaged in the reading process, they can
mechanically read a passage but not understand what the author is saying (Schnick &
Knickelbine, 2000). Readers who are intrinsically motivated to read can understand what
the author is saying because they are curious and want to read more. In contrast, readers
who are not motivated to read are often not engaged and fail to make the necessary
connections for comprehension.
Marcell (2007) reported on a student who was able to read a passage fluently at
120 words correct per minute but could not retell what he had read. This student needed
further instruction in comprehension strategies in order to become a proficient reader.
Samuels (2006) pointed out that students whose second language is English may
successfully decode passages although they have very little comprehension. His work
with students in St. Paul, Minnesota, revealed that about 20% could read
fluently but could not comprehend what they read. Furthermore, some high-
comprehending readers have low fluency rates (Samuels, 2006).
The goal of reading is to comprehend or understand text (Applegate et al., 2009;
NICHHD, 2000), and the studies in this section indicate a relationship between students’
oral reading fluency rates and comprehension at various stages of reading and grade
levels. Oral reading fluency rates are important not because of how fast students read but
because fluency demands little conscious attention and enables them to focus on
comprehension (Samuels, 2006).
Relationship Between Oral Reading Fluency and Reading Proficiency in State-
Mandated Assessments
Since LaBerge and Samuels’ (1974) development of the automaticity theory,
many studies (Daane et al., 2005; Kuhn, 2005; Riedel, 2007) have confirmed a
relationship between how fluently students read and how proficiently they comprehend
text at various grade levels. However, fewer studies have considered oral reading fluency
rates with the DIBELS ORF and reading proficiency as measured by state-mandated
assessments. Many school districts use the DIBELS ORF assessment to identify students
at risk for failing (Cusumano, 2007; Fletcher, Francis, Morris, & Lyon, 2005; Gersten &
Dimino, 2006; Jenkins et al., 2007; Katz et al., 2008; Stecker, Lembke, & Foegen, 2008;
Wood, Hill, Meyer, & Flowers, 2005). However, studies in only seven states have
documented positive correlations between students’ oral reading fluency rates, measured by
the DIBELS ORF, and reading proficiency, measured by the states’ mandated reading assessments.
These states are as follows: Arizona (Wilson, 2005), Colorado (Shaw & Shaw, 2002;
Wood, 2006), Florida (Buck & Torgesen, 2003; Roehrig et al., 2008), North Carolina
(Barger, 2003), Ohio (Vander Meer et al., 2005), Oregon (Baker et al., 2008), and
Pennsylvania (Shapiro et al., 2008).
Studies of Grade 3 Only
Researchers have conducted two studies with Grade 3 students only. In Colorado,
Shaw and Shaw (2002) conducted research using 52 Grade 3 students from a Colorado
elementary school to determine if the DIBELS ORF benchmarks were a predictor of
success on the Colorado State Assessment Program. The researchers found the DIBELS
ORF to be strongly correlated (.80) with the state assessment, the highest correlation of the studies reviewed here.
For Florida Grade 3 students, Buck and Torgesen (2003) compared the DIBELS
ORF rates of 1,102 students in 13 elementary schools to their performance on the Florida
Comprehensive Assessment Test. The researchers found a significant correlation, .70.
They thus concluded that for this sample the DIBELS ORF rate was a predictor of
success on the Florida Comprehensive Assessment Test.
Studies of Reading First Schools
For studies in three states, the researchers (Baker et al., 2008; Roehrig et al.,
2008; Wilson, 2005) used the large databases from Reading First schools to determine if
DIBELS ORF correlated with state-mandated assessments. To identify and monitor the
progress of struggling readers, over 90% of the Reading First schools used DIBELS oral
reading fluency (Glenn, 2007; U.S. Department of Education, 2010). In an Arizona
Reading First school, Wilson conducted a study with 241 Grade 3 students. He found a
moderately large correlation (.74) between the DIBELS ORF and the Arizona Instrument
to Measure Standards. Roehrig et al. conducted a similar study with Florida students, in
which the DIBELS ORF rates of 35,207 Grade 3 students were compared to their
performance on the Florida Comprehensive Assessment Test. Correlations were also
moderately large, ranging from .70 to .71. In Oregon, Baker et al. used 34 randomly
selected Oregon Reading First schools located in 16 different school districts. The
researchers compared the DIBELS ORF rates of 4,696 Grade 3 students with their
performance on the Oregon State Reading Assessment. The correlations ranged from
.58 to .68, somewhat lower than those found in the Arizona and Florida studies.
Pressley et al. (2005) suggested that studies on the DIBELS should be conducted
by researchers not associated with Reading First. Independently of Reading First, Barger (2003) compared
the DIBELS ORF rates of 38 students to the North Carolina Grade 3 End-of-Grade Test. Barger
also found a moderately large correlation (.73), similar to those found by Wilson (2005)
and Roehrig et al. (2008). These researchers documented the positive relationship
between the DIBELS ORF and four different state-mandated reading assessments for
Grade 3 students.
Studies of Grade 3 and Other Grades
However, Fuchs et al. (2001) suggested that the correlation between oral reading
fluency and comprehension may be stronger in Grade 3 than in other grades, for which
assessments measure higher levels of comprehension. Three groups of researchers
(Shapiro et al., 2008; Vander Meer et al., 2005; Wood, 2006) studied this issue by using
students in Grades 3, 4, and 5. For Colorado schools, Wood (2006) used 82 Grade 3
students, 101 Grade 4 students, and 98 Grade 5 students to determine if the DIBELS ORF
consistently predicted performance on the Colorado Student Assessment Program across
grade levels. Wood found significant correlations at all three levels: for Grade 3 at .70,
Grade 4 at .67, and Grade 5 at .75. These values were similar to those found by Wilson
(2005), Roehrig et al. (2008), and Barger (2003) for Reading First schools.
For two grade levels with students in Ohio, Vander Meer et al. (2005) correlated
the end-of-year Grade 3 DIBELS ORF with performance on the reading portion of the
Ohio Proficiency Test. A total of 318 Grade 4 students from three elementary schools
comprised the sample. Vander Meer et al. (2005) found a significant correlation, .65,
between the end-of-year Grade 3 ORF scores and the Grade 4 Ohio Proficiency Test
scores. Because this study considered the same students across two successive grades, the
researchers concluded that oral reading fluency was an accurate predictor of performance
on the reading portion of the Ohio Proficiency Test.
With students in Pennsylvania across grade levels, Shapiro et al. (2008) compared
the DIBELS ORF rates of 401 Grade 3 students, 394 Grade 4 students, and 205 Grade 5
students with their performance on the Pennsylvania System of School Assessment. The
researchers found significant correlations for all three grades: Grade 3 at .67, Grade 4 at
.64, and Grade 5 at .73. Again, these results were similar to those of the previous studies.
The findings of Wood (2006) and Shapiro et al. support the use of the DIBELS ORF as a
consistent predictor of performance on state-mandated reading assessments not only for
Grade 3 but also across grade levels.
Thus, for seven states of the 50 that require state-mandated assessments of
reading proficiency, studies have found a significant positive relationship between the
DIBELS ORF and the state measures. However, because definitions of reading
proficiency differ across states and grade levels, the need existed for researchers to
determine if DIBELS ORF correlates significantly with state-mandated assessments in
other states and at other grade levels (Roehrig et al., 2008). The present study focused on
Grade 3 students in Texas. No similar study has been conducted to determine if a
significant relationship existed for Grade 3 students between DIBELS ORF rates
(independent variable) and reading proficiency (dependent variable), as measured by the
scores on the Grade 3 Reading TAKS.
Limitations of State Studies
In the studies reviewed above, the researchers all reported significant
relationships between Grade 3 students’ DIBELS ORF rates and reading
proficiency as assessed on state-mandated assessments. However, the researchers also
recognized limitations of their studies. One limitation was sample composition and size.
For example, the Colorado sample (Wood, 2006) consisted of primarily White students.
The North Carolina sample (Barger, 2003) had only 38 students. On the other hand, the
Oregon sample (Baker et al., 2008) and one of the Florida samples (Roehrig et al., 2008)
had large statewide samples from students in Reading First schools. One of the criteria
for being a Reading First school was a high proportion of students from low
socioeconomic backgrounds, and thus these samples were not representative of the
general Grade 3 population. In all these studies, researchers recommended future research
that would include wider cross-sections of students as well as non-Reading First schools
for greater generalizability of results to other school settings.
Another limitation of these studies was the presence of other variables that might
affect the relationship between oral reading fluency and reading proficiency. In the
Oregon study, Baker et al. (2008) singled out diversity as a factor. The Reading First
sample schools had large poverty rates and low reading achievement, and Baker et al.
recommended additional research that focused on specific ethnic backgrounds as well as
subpopulations, such as special education students. In Arizona, Wilson (2005)
recommended future studies after Arizona implemented the 2005 Arizona’s Instrument to
Measure Standards Dual Purpose Assessment (AIMS DPA) measurement scale and
performance levels. Wilson posited that the relationships between scores would be
different with the new standards. When Wilson published his results, it was not clear
whether the expected performance levels would be higher or lower than the ones used in
his study.
For the Pennsylvania study, Shapiro et al. (2008) used data from the 4Sight
assessments in addition to DIBELS. They used the 4Sight assessments to provide further
information on students’ reading comprehension to predict reading proficiency. The
4Sight assessments are short tests similar in format and other aspects to state assessments
and are used throughout the school year to produce overall scores that may predict
students’ scores on state assessments. Many states use the 4Sight assessments, including
California, Pennsylvania, Tennessee, and Texas (Success for All Foundation, 2010).
In the Pennsylvania study (Shapiro et al., 2008), the 4Sight benchmark
assessments were administered to students as a group rather than individually. Whole
group administration enabled researchers to assess many students in approximately 1
hour. However, some reading skills, such as oral reading fluency, are best tested when
administered individually so educators can monitor the rate and accuracy with which a
student reads.
Shapiro et al. (2008) recommended future studies in other states to determine if
their findings could be replicated. Additional studies over time and in various locations
would help generalize the findings to larger populations. Shapiro et al. also noted that the
schools in their samples represented varying demographic characteristics but were similar
in their levels of poverty. The researchers recommended additional studies in other
schools with different poverty levels to eliminate poverty as a factor and for greater
generalization.
As a result of these limitations and recommendations, researchers indicate the
need for additional studies on the DIBELS ORF and state-mandated assessments, in
Grade 3 and other grades. Roehrig et al. (2008) pointed out this need especially in other
states. I responded to this need with this study to determine if a statistically significant
relationship existed between the DIBELS ORF and the TAKS for Grade 3 students in
Texas.
Summary
In section 2, literature regarding the automaticity theory, oral reading fluency, and
reading proficiency was reviewed. LaBerge and Samuels (1974) developed the
automaticity theory to explain why readers who struggle with low oral reading fluency
rates also struggle with comprehension. LaBerge and Samuels and other researchers who
followed (Baker et al., 2008; Kuhn, 2005; Morgan & Sideridis, 2006) demonstrated that
readers have a limited amount of attention to devote to reading. When too much attention
is focused on decoding words, readers have insufficient attention to allocate to other
aspects of reading, such as comprehending.
Next, I reviewed oral reading fluency, beginning with various definitions of oral
reading fluency (Allington, 2009; Deeney, 2010; Hudson et al., 2005; Kame’enui &
Simmons, 2001; Marcell, 2007; Roehrig et al., 2008; Worthy & Broaddus, 2002).
Researchers have pointed out the complexity of oral reading fluency and established
norms (Hasbrouck & Tindal, 2006; Kame’enui & Simmons, 2001). The role of oral
reading fluency was then reviewed in the context of learning to read (Ehri, 2005; Ehri &
McCormick, 1998).
Finally, I reviewed literature relating to reading proficiency. In NCLB (2002),
legislators required the development of a definition of reading proficiency that is flexible
enough to be measured by state reading assessments in all 50 states (NARAP, 2006).
Vocabulary (Ouellette, 2006) and comprehension (Samuels, 2006) are important aspects
of reading proficiency that generally cannot take place until students reach a certain level
of oral reading fluency. Once readers are able to fluently decode text, they can devote
their attention to other aspects of reading, such as vocabulary and comprehension.
Researchers in Arizona (Wilson, 2005), Colorado (Shaw & Shaw, 2002; Wood, 2006),
Florida (Buck & Torgesen, 2003; Roehrig et al., 2008), North Carolina (Barger, 2003),
Ohio (Vander Meer et al., 2005), Oregon (Baker et al., 2008), and Pennsylvania (Shapiro
et al., 2008) have established a relationship between oral reading fluency rates and
reading proficiency as measured on their state-mandated reading assessment.
However, the specific definition of reading proficiency varies from state to state
because the state-mandated reading assessments are based on the reading objectives
defined in the state curricula (NARAP, 2006). It is therefore necessary for researchers in
other states to determine if a relationship exists between oral reading fluency rates and
their state-mandated assessment (Roehrig et al., 2008). This study investigated whether a
statistically significant relationship (p < .05) existed between oral reading fluency rates as
measured by the middle-of-Grade 3 DIBELS benchmark rates and the reading
proficiency measured by the scale score of the Grade 3 Reading TAKS. In section 3, I
focus on the methodology of this study. In section 4, I report the findings of the study,
and in section 5, I discuss implications of the findings in comparison with previous
studies, application to social change, and recommendations for action and further
research.
Section 3: Research Design
Introduction
The purpose of this study was to determine whether a statistically significant
relationship (p < .05) existed between Grade 3 students’ oral reading fluency rates and
their reading proficiency. In this section, I explain the research design of this study,
followed by a description of the sample and population. Then, I discuss the reliability and
validity of the two instruments used in this study and finally the procedures for data
collection and analysis.
Research Design
I designed this quantitative study to determine if a statistically significant
relationship (p < .05) existed between oral reading fluency and reading proficiency.
Specifically, I used the middle-of-year DIBELS ORF rates of 155 students in a Texas
school district as the independent variable to measure oral reading fluency rates, and the
scale scores on the Grade 3 Reading TAKS as the dependent variable to measure reading
proficiency. I did not conduct experimental research to determine, for example, if one
group performed better than another group with regard to either oral reading fluency rates
or reading proficiency, because the study purpose was to determine if a relationship
existed between oral reading fluency and reading proficiency.
Neither a qualitative study nor a mixed study was appropriate because I focused
on quantitative statistical data to fulfill the study purpose. These data were used to
determine whether a statistically significant relationship (p < .05) existed between Grade
3 students’ oral reading fluency and their reading proficiency on a state-mandated
assessment. This purpose was in contrast to that of a qualitative study, which might ask
students about their attitudes, feelings, or self-efficacy during performance on the
examination.
I considered several nonexperimental research methods, including point-biserial
correlation, Spearman correlation, chi-square test for independence, and Pearson
correlation. The point-biserial correlation is a type of formula researchers use to
determine relationships between two variables when one variable can take only one
of two values (Gravetter & Wallnau, 2005). For example, in a study in which a
student can either pass or fail an examination, researchers assign each of the options of
the pass-fail dichotomous variable a number. A researcher might assign 1 if the student
passes the examination and 0 if the student fails it. However, I did not use a point-biserial
correlation because neither of the variables was dichotomous.
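As an illustration only (this method was not used in the present study), the following Python sketch shows the pass-fail coding described above with hypothetical scores; the scipy library’s pointbiserialr function computes the point-biserial correlation between the dichotomous variable (1 = passed, 0 = failed) and a continuous variable.

    from scipy import stats

    # Hypothetical data: 1 indicates the student passed the examination,
    # 0 indicates the student failed it.
    passed = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    # Hypothetical continuous scores for the same students
    # (e.g., words read correctly per minute).
    orf_rates = [110, 95, 52, 88, 60, 102, 91, 47, 120, 85]

    r_pb, p_value = stats.pointbiserialr(passed, orf_rates)
    print(f"point-biserial r = {r_pb:.2f}, p = {p_value:.3f}")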
Researchers also use Spearman correlations to determine relationships between
two variables in which the variables are rank ordered (Gravetter & Wallnau, 2005). For
example, this study could have used the students’ middle-of-year DIBELS ORF rates to
rank the students from the fastest to the slowest readers. I could have also ranked students
by the scale scores received on the Grade 3 Reading TAKS. I then would have used the
Spearman correlation to determine the relationship between the students’ performance on
both assessments.
However, I chose not to use a Spearman correlation because tied
scores could skew the data and render inaccurate results. Specifically, all the students
who reached a scale score of exactly 2100 on the Grade 3 Reading TAKS would have
received the same ranking. Because there are 36 questions on the Grade 3 Reading
TAKS, I could have used 36 ranks. However, the oral reading fluency rates could then
yield a greater number of ranks because the students could read anywhere from 0 to
150 or more words correct per minute. The Spearman correlation was additionally
inappropriate because the results would yield a different number of ranks for each
variable.
I also considered using a chi-square test for goodness of fit that uses frequencies
to determine if the frequencies in the sample fit the frequencies in the population
(Gravetter & Wallnau, 2005). For example, in this study a chi-square could have been
used to determine the percentage of Grade 3 students in each of three categories, that is,
at-risk, some-risk, and low-risk readers, who passed or failed the TAKS. I would then
have calculated and compared the frequencies in both the sample and the population.
However, I did not include the categorization of students in the study purpose.
I chose a Pearson correlation as the most appropriate nonexperimental statistical
method for this study. Pearson correlations “measure and describe a relationship between
two variables” (Gravetter & Wallnau, 2005, p. 412). In this study, I described and
measured the relationship between the middle-of-year Grade 3 students’ DIBELS ORF
rates and their scale scores on the Grade 3 Reading TAKS. If I had used the point-biserial
method, results would have simply indicated whether students passed or failed the
assessment. However, using the Pearson correlation, I obtained additional information
that described the relationship between the two variables. For example, instead of
indicating solely whether the students passed or failed, I could determine the range of
scores with greater precision, such as whether the students passed with a minimum scale
score of 2100 on the Grade 3 Reading TAKS or whether they passed with a high score of
2400 or more.
Although I could have obtained useful information from a chi-square analysis,
the Pearson correlation provided the most accurate data analysis. A chi-square test is
a nonparametric test, and the Pearson correlation is a parametric test. Researchers prefer
parametric tests because they require statistical data on individual participants (Gravetter
& Wallnau, 2005). However, some researchers use nonparametric tests in studies when
detailed data on individual participants are not available. Because detailed data of
individual participants were available for the present study, I chose the more reliable
parametric test, the Pearson correlation.
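As a minimal sketch of the analysis described in this section (using entirely hypothetical values, because the actual student records are confidential), the Pearson correlation between middle-of-year DIBELS ORF rates and Grade 3 Reading TAKS scale scores can be computed in Python with the scipy library.

    from scipy import stats

    # Hypothetical paired observations for a handful of students:
    # middle-of-year DIBELS ORF rates (words correct per minute) and
    # Grade 3 Reading TAKS scale scores (2100 is the minimum passing score).
    orf_rates = [45, 62, 70, 85, 92, 104, 118, 130]
    taks_scale_scores = [1980, 2050, 2090, 2150, 2210, 2280, 2350, 2420]

    r, p_value = stats.pearsonr(orf_rates, taks_scale_scores)
    print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
    # A p value below .05 would indicate a statistically significant relationship.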
Setting and Sample
Setting
This study took place in a West Texas town with a population of 6,821.
According to the community’s economic development corporation, the average annual
income of the 2,946 households at the time of the study was $37,000. For reasons of
confidentiality, the source for this information was not referenced in this study.
Approximately 38% of the households had school-age children, and 160 students in the
district were in Grade 3. All of these students attended the same elementary school.
Characteristics of the Sample
In Table 1, I show the characteristics of the sample and compare them to the
population of Grade 3 students in Texas who took the Grade 3 Reading TAKS in March
2009. Of the students in Grade 3 in the district, 52% were male and 48% were female. In
terms of ethnicity, 54% of the students were Hispanic, 38% were White, 6% were Black,
and 1% were Native American. The district listed 56% of the students as economically
disadvantaged, with 10% in special education.
Table 1
Characteristics of Grade 3 Students in Texas Overall and the Research Site
Characteristic Texas Overall (%) Local District (%)
Male 50 52
Female 50 48
White 37 38
Hispanic 44 54
Black 15 6
Native American <1 1
Asian 4 0
Economically disadvantaged 56 56
Special education 5 10
Total participants 316,319 159
Note. From Texas Education Agency (TEA, 2009c, 2009d).
As shown in Table 1, the characteristics of the district population were similar to
those of the state overall. Consequently, the sample of Grade 3 students from this school
district was representative of the population of Grade 3 Texas students overall. In 2009,
all 316,319 Texas students took the first administration of the Grade 3 Reading TAKS
(TEA, 2009c). In the district at the research site, all 155 students took the Grade 3
Reading TAKS (TEA, 2009d).
Sampling Method
The sampling method for this study was a nonprobability convenience sample of
Grade 3 students. Since all Grade 3 students attended the same elementary school, all
these students comprised the study sample. Although random selection would have more
closely ensured that participants represented the population (Creswell, 2009), a
convenience sample, such as that used in the present study, relies on an already formed group.
However, because the total number of students was 155 (Table 1), I chose the larger
convenience sample rather than a randomly selected smaller sample to increase
representativeness.
Sample Size
The sample comprised 155 students. I considered a random sample size of
50 but recognized concerns of sample variance in inferring the results of a sample to an
entire population. When researchers use a sample of 20 or 50 to make inferences to a
population of thousands, the sample variance is generally higher than with a sample of
100 or more (Gravetter & Wallnau, 2005).
In addition, I conducted an a priori power analysis to determine a statistically
acceptable sample size. For one-tailed (unidirectional) bivariate correlation analysis, the
power indicated was .80, a power acceptable for research studies. The effect size
indicated for correlation was medium, .30. The population effect size is the measure of
the strength (effect) of the alternative hypothesis. The alpha level was set at .05, widely
acceptable in educational research (Creswell, 2009). This value indicates a 5%
probability of committing a Type I error, in which a researcher would reject the null
hypothesis when it was actually true (Gravetter & Wallnau, 2005). I used the G*Power
statistical program (Faul, Erdfelder, Lang, & Buchner, 2007), and results showed a
minimum required sample size of 64. The power analysis provided additional support for
the nonprobability convenience sample of 155 students.
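The a priori power analysis can be approximated outside of G*Power with the Fisher z transformation. The sketch below is only an approximation of, not a substitute for, G*Power's exact routine, which produced the minimum sample size of 64 reported above; the approximation yields a slightly more conservative estimate.

    from math import atanh, ceil
    from scipy.stats import norm

    alpha, power, r = 0.05, 0.80, 0.30  # one-tailed alpha, power, effect size

    # Fisher z approximation: n = ((z_alpha + z_beta) / arctanh(r))**2 + 3
    z_alpha = norm.ppf(1 - alpha)  # one-tailed critical value
    z_beta = norm.ppf(power)
    n = ((z_alpha + z_beta) / atanh(r)) ** 2 + 3

    print(ceil(n))  # approximately 68; G*Power's exact calculation gives 64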
Eligibility Criteria for Study Participants
I set three criteria for students to participate in this study. First, students must
have attended Grade 3 in the district during the 2008-2009 school year. Second, teachers
would have screened these students with the middle-of-year DIBELS benchmark in
January 2009. Third, students must have taken the Grade 3 TAKS Reading assessment in
March 2009.
Instrumentation and Materials
I used two instruments in this study to determine if a statistically significant
relationship (p < .05) existed between Grade 3 students’ oral reading fluency and reading
proficiency. The first instrument was the DIBELS ORF, the independent variable, which
teachers administered to students in the middle of the school year. The second instrument
was the Grade 3 Reading TAKS, the dependent variable, which students took at the end
of the school year.
DIBELS ORF
Researchers have identified oral reading fluency as a predictor of reading
proficiency (Fuchs et al., 2001; Hasbrouck & Tindal, 2006; Pikulski & Chard, 2005).
Educators use oral reading fluency benchmarks to measure the number of words a
student reads correctly in 1 minute in three passages. The original study for DIBELS was
conducted in Lane County, Oregon, with a sample of 673 students in kindergarten
through Grade 3 (DIBELS, 2002). For Grade 3, Good and Kaminski (2002a) used the
Spache Readability Formula and determined the readability of the passages at 2.9, 3.0,
and 3.1 grade levels, respectively. The developers selected these passages for Grade 3 so
that the first one, “The Field Trip,” represented the easiest passages (2.9). The second
one, “Keiko the Killer Whale,” represented passages of medium difficulty (3.0). The third
one, “Getting Email,” represented passages of greatest difficulty (3.1).
The developers of the DIBELS (University of Oregon Center on Teaching and
Learning, 2008) established categories that educators can use to determine if students are
at low risk, some risk, or at risk for poor language and reading outcomes. Students who
read less than 67 correct words per minute on the middle-of-year ORF benchmark in
Grade 3 are at risk for poor language and reading outcomes; students who read between
67 and 91 correct words per minute are at some risk; and students who read 92 or more
words per minute are at low risk for poor language and reading outcomes. From their
sample, the University of Oregon Center on Teaching and Learning determined that, based
on the number of words a student reads correctly in 1 minute, students in the low-risk
category are more likely to pass measures of reading outcomes, and conversely, students
in the at-risk category are more likely to fail such measures than to pass them.
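The middle-of-year Grade 3 cut scores described above amount to a simple lookup. The following Python sketch (the function name is mine; the cut scores are those listed above) assigns the DIBELS risk category from a student's correct words per minute.

    def dibels_risk_category(words_correct_per_minute):
        # Middle-of-year Grade 3 DIBELS ORF cut scores as described above.
        if words_correct_per_minute < 67:
            return "at risk"
        elif words_correct_per_minute <= 91:
            return "some risk"
        return "low risk"

    # Example: a student who reads 75 correct words per minute is at some risk.
    print(dibels_risk_category(75))  # prints "some risk"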
Barger (2003) found these categories to be accurate in correlating the end-of-year
DIBELS ORF of 38 students to the North Carolina End-of-Grade Test. In his study,
100% of the students in the low-risk category passed the assessment, and 67% of the
students in the at-risk category failed the assessment. Baker et al. (2008) also confirmed
the findings of the University of Oregon Center on Teaching and Learning (2008) in a
study of 9,602 Oregon students in Grades 1, 2, and 3. As discussed in section 2, other
researchers found similar results (Buck & Torgesen, 2003; Shaw & Shaw, 2002; Vander
Meer et al., 2005; Wilson, 2005).
In addition to the number of words a student reads correctly in 1 minute,
educators use another component of the DIBELS ORF to assess students on retelling
what they read (Good & Kaminski, 2002b). The purpose of the retell fluency assessment
is to verify that students understand what they read, as opposed to simply word calling.
The developers of the DIBELS, Good and Kaminski, established that students should use
at least 25% of the number of words read when they retell a passage. Students who read
100 correct words per minute should use at least 25 words when they retell a passage.
Furthermore, Good and Kaminski specified that students who are progressing
appropriately toward their reading goal must read at their determined goal and use at least
25% of the words read correctly in 1 minute when retelling a passage.
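The 25% retell guideline can likewise be written as a simple check; the sketch below is illustrative, and the function name is an assumption.

# Illustrative check of the retell guideline described above: a retell should contain
# at least 25% of the words read correctly in 1 minute.
def meets_retell_guideline(words_correct_per_minute, retell_word_count):
    return retell_word_count >= 0.25 * words_correct_per_minute

print(meets_retell_guideline(100, 25))   # True: 25 retell words for 100 words correct
print(meets_retell_guideline(100, 20))   # False: fewer than 25% of the words read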
Reliability and validity of the DIBELS ORF. The developers of the DIBELS
ORF probes established their reliability and validity. Researchers use reliability to
indicate whether an assessment consistently provides a score that researchers can trust
(Creswell, 2009). When the assessment committee analyzed DIBELS (DIBELS, 2002),
they found test-retest coefficients between .91 and .96 when the DIBELS ORF was
compared to other comparable assessments (DIBELS). The 1-minute fluency probes
provided reliable results. However, the developers recommended administering three to
five 1-minute probes to improve reliability.
Researchers use validity to indicate whether an assessment measures what it is
intended to measure (Creswell, 2009). They use concurrent validity to establish whether
the scores from one assessment compare to the scores from similar assessments
(Creswell, 2008). Hasbrouck and Tindal (1992) developed national oral reading fluency
norms by comparing the oral reading fluency rates of students throughout the country.
The DIBELS Assessment Committee (2002) indicated that Hasbrouck and Tindal's
(1992) research corresponded with the DIBELS benchmark goals.
In 2006, Hasbrouck and Tindal updated the national norms of ORF and expanded
them to include Grades 1 through 8. Their study included the DIBELS ORF probes along
with the Texas Primary Reading Inventory (University of Houston, 1999); AIMSweb
(Edformation, 2004); and the Reading Fluency Monitor (Read Naturally, 2002).
Hasbrouck and Tindal found similar results to the developers of DIBELS (Good &
Kaminski, 2002b). Examining the middle-of-year oral reading fluency rates of 17,383
Grade 3 students, Hasbrouck and Tindal found that students at or below the 25th
percentile read fewer than 63 words correctly in a minute. The top half of the Grade 3
students in the middle of the year read 92 or more words correctly per minute.
Jenkins et al. (2007) reviewed studies that used ORF screenings during the
previous 10 years and concluded that the DIBELS ORF screenings were comparable to
other fluency assessments. This conclusion applied especially to students scoring in the
lowest category, at risk, and those scoring in the highest category, low risk. Jenkins et al.
found that the DIBELS cut scores tended to overestimate students in the middle category,
some risk. For comparison of results, they referred to Buck and Torgesen’s (2003) study
conducted in Florida Reading First schools, in which students in the middle group were
equally likely to perform satisfactorily or unsatisfactorily on other assessments.
A study conducted in Oregon and Texas (Harn et al., 2008) confirmed that the
DIBELS ORF rates provided comparable results. Harn et al. based their
study on Ehri’s (2005) phase theory, explained in section 2. In accordance with the
model, students in the prealphabetic phase progress from painstakingly sounding out
every letter in a word to recognizing units of sound within words. Eventually, they
progress to the more advanced consolidated alphabetic phase, in which they can fluently
read passages.
Jenkins et al. (2007) confirmed Ehri’s (2005) phase theory. The researchers
measured entering first-grade students' knowledge of letter sounds using the DIBELS
Nonsense Word Fluency assessment and compared it with their oral reading fluency rate
at the end of Grade 1 using the DIBELS oral reading fluency assessment. DIBELS
ORF rates produced reliable results, confirming that first-grade students at the beginning
of the year in the prealphabetic phase who were able to recognize letter sounds
progressed to the consolidated alphabetic phase and were able to fluently read connected
text by the end of Grade 1. Thus, these researchers, as well as others, have established the
concurrent validity of DIBELS ORF (Begeny et al., 2006; Chard et al., 2008; Francis et
al., 2005; Katz et al., 2008; Pressley et al., 2005; Riedel, 2007).
Studies have also correlated the DIBELS ORF to reading proficiency on state-
mandated assessments in Arizona (Wilson, 2005), Colorado (Shaw & Shaw, 2002;
Wood, 2006), Florida (Buck & Torgesen, 2003; Roehrig et al., 2008), North Carolina
(Barger, 2003), Ohio (Vander Meer et al., 2005), Oregon (Baker et al., 2008), and
Pennsylvania (Shapiro et al., 2008). In addition, Pressley et al. (2005) found that
DIBELS ORF correlated with the reading portion of the Terra Nova assessment to a
greater degree than did the Qualitative Reading Inventory (Leslie &
Caldwell, 1995) or teachers' predictions. Researchers (Baker et al., 2008; Chard et al.,
2008) have also correlated the DIBELS ORF probes to the Stanford Achievement Test
(Harcourt Brace Educational Measurement, 2002) and the Iowa Test of Basic Skills (Katz
et al., 2008). The results of these studies further indicate the validity of the DIBELS ORF
benchmarks.
However, other researchers have questioned the validity of the DIBELS ORF
(Allington, 2009; Goodman, 2006; Pearson, 2006; Pressley et al., 2005; Samuels, 2006).
Goodman criticized DIBELS ORF for not predicting reading proficiency. Pressley et al.
and Riedel (2007) claimed that the DIBELS ORF mispredicts reading comprehension on
other assessments. Concerned about bias, Pressley et al. called for researchers other than
the developers of DIBELS or those associated with Reading First to perform research
regarding the relationship of DIBELS ORF and reading proficiency.
In a study with 17,409 students, Roehrig et al. (2008) confirmed a significant
relationship between DIBELS ORF and reading proficiency as measured by the Florida
Comprehensive Assessment Test. However, the researchers recommended that the
developers of DIBELS ORF change the cut points. Roehrig et al. determined that
students reading 45 or fewer words correct per minute were at risk for unsatisfactory
performance on other measures of language and reading outcomes. The category of some
risk would consist of students reading between 46 and 75 words correct per minute.
Those reading 76 or more words correct per minute would be in the low-risk category.
When Roehrig et al. evaluated the data using the cut scores, they found that 86% of the
students in the low-risk category, 62% of students in the some-risk category, and 20% of
the students in the at-risk category met grade level expectations at the end of the year.
Currently, the developers of the DIBELS categorize students who read 77 or more
words correct per minute at the beginning of Grade 3 as at low risk for not meeting grade
level expectations in other measures of language and reading outcomes. The developers
categorized students who read less than 53 words correct per minute as at high risk and
students who read between 53 and 76 words correct per minute as at some risk
(University of Oregon Center on Teaching and Learning, 2008). Based on these cut
scores, the University of Oregon Center on Teaching and Learning found that 90% of the
students in their low risk category, 34% of those in the some risk, and 3% of those in the
at-risk category met the goals expected of them at the end of Grade 3.
Roehrig et al. (2008) recommended “improved efficiency of classifications” (p.
353), and the educators in their study screened students four times a year. The cut scores developed
by the University of Oregon Center on Teaching and Learning (2008) used in the current
study are based on studies that benchmarked students three times a year. Consequently,
the Roehrig et al. middle-of-year benchmark goals are different from those discussed
above in section 3. However, the beginning of year benchmark goals from the Roehrig et
al. study and the DIBELS benchmark goals are comparable.
Similar to Roehrig et al. (2008), Jenkins et al. (2007), after conducting a meta-
analysis of universal screening studies, recommended a reexamination of the cut points.
They pointed out that although many studies have found the DIBELS ORF valid, a
change in cut points might improve the assessment. More precise cut points should allow
educators to more accurately identify students who are at risk for not meeting grade level
language arts and reading outcomes. Such improved information will allow educators to
provide interventions only for the students who need them the most (Jenkins et al., 2007;
Roehrig et al., 2008).
Administration of the DIBELS ORF. In the administration of the Grade 3
middle-of-year DIBELS ORF, each student read three passages (Good & Kaminski,
2002b). Trained teachers individually administered the middle-of-year DIBELS ORF
benchmark, timing students as they read three passages: (a) "The Field Trip," (b) "Keiko
the Killer Whale," and (c) "Getting Email" (Good & Kaminski). The passages were
arranged in order from the easiest to the most difficult. The student read each passage for
1 minute while the teacher recorded substitutions and omissions as errors. In addition, if
the student hesitated longer than 3 seconds, the teacher read the word and recorded it as
an error. Conversely, the teacher did not consider as errors words that the student self-
corrected within 3 seconds.
At the end of each passage, the teacher recorded the total number of words the
student read correctly in 1 minute as the oral reading fluency rate. This number was
determined by subtracting the number of errors from the total number of words the
student read. Then the teacher asked the student to retell what was read and recorded the
number of words the student used as the retell fluency. After the student read all three
passages, the teacher examined the oral reading fluency rates of the three passages and
recorded the median rate as the middle-of-year oral reading fluency rate.
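Expressed procedurally, this scoring amounts to subtracting errors from the total words read for each passage and taking the median of the three passage scores. The sketch below is illustrative; the passage counts shown are hypothetical.

# Illustrative sketch of the scoring described above, with hypothetical counts:
# words correct per minute for each passage is total words read minus errors, and
# the middle-of-year rate is the median of the three passage scores.
from statistics import median

passages = [
    {"words_read": 95, "errors": 4},   # "The Field Trip" (hypothetical counts)
    {"words_read": 88, "errors": 6},   # "Keiko the Killer Whale"
    {"words_read": 81, "errors": 5},   # "Getting Email"
]
wcpm = [p["words_read"] - p["errors"] for p in passages]
middle_of_year_rate = median(wcpm)
print(wcpm, middle_of_year_rate)       # [91, 82, 76] -> 82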
Scoring of the DIBELS ORF. The developers of DIBELS (Good & Kaminski,
2002b) categorized oral reading fluency rates as at risk, some risk, or low risk. On the
middle-of-year Grade 3 DIBELS ORF, students who read fewer than 67 words correct in
1 minute were considered at risk for poor language and reading outcomes. The
developers recommended that teachers provide these students with intensive intervention
for 60 minutes daily to target basic reading skills. Students who read between 67 and 91
correct words in 1 minute were considered at some risk for poor language and reading
outcomes. The developers recommended that teachers provide these students with
strategic intervention for 30 minutes daily to address their reading skills. Students who
read 92 or more words correctly in 1 minute were in the highest category and
considered at low risk for difficulties with language and reading outcomes (Good &
Kaminski). After administration and scoring of the DIBELS by the teachers, as described
above, student information was stored on a secured website.
The developers of DIBELS (Good & Kaminski, 2002b) did not establish
benchmark goals for the retell fluency portion. However, they explained that students
should use at least 25% of the words read correctly in 1 minute when they retell what
they read. For example, students who read 100 words correctly in 1 minute should use at
least 25 words when they retell the passage they read. Although benchmark goals have
not been established, the developers indicated that students who use less than 25% of the
words they read when retelling a passage are not considered developed readers (Good &
Kaminski).
TAKS
The purpose of the TAKS is to determine which students are reading proficiently
at their grade level. The TAKS was developed by teams of educational specialists in
consultation with many educators (TEA & Pearson, 2008). Advisory committees
composed of teachers of each subject at each grade level worked with test developers and
TEA staff to review and assess the TAKS at several points in the development process.
The advisory committees were composed of educators from throughout Texas and
represented diverse ethnicities (TEA & Pearson).
The first advisory committee convened to identify which student expectations in
the state curriculum should be tested on the TAKS (TEA & Pearson, 2008). After testing, the
specialists and TEA staff compiled an assessment of the outcome. Subsequently, a second
advisory committee met to examine whether the wording of the passages, questions, and
answer choices measured the assigned student expectations. Based on this committee’s
report, test writers made the recommended changes, and field testing followed this
process.
The team at TEA and Pearson then analyzed the field testing results, and another
advisory committee met and evaluated the passages, questions, and answer choices to
identify any biases (TEA & Pearson, 2008). This committee also recommended
elimination of specific questions that appeared to reflect biases. In addition to these
processes, members of TEA and Pearson frequently consulted with national testing
experts during development of the TAKS. These detailed procedures resulted in an
assessment instrument that reflected the state curriculum and measured specific student
expectations.
Reliability and validity of the TAKS. Researchers have established both the
reliability and validity of the TAKS. The TEA and Pearson (2008) established reliability
and verified the internal consistency of the TAKS with the Kuder-Richardson Formula
20. With perfect reliability represented by a value of 1.0 (Gravetter & Wallnau, 2005),
reliabilities for dichotomously scored TAKS assessments ranged from .87 to .90.
The TEA and Pearson (2008) also established concurrent validity of the TAKS.
Researchers use concurrent validity to establish how an assessment compares with other
similar assessments (Creswell, 2009). The TEA and Pearson conducted a grade
correlation study to determine how scores on the TAKS compared with performance in
respective grades or courses. The grade correlation study included all TAKS assessments
from Grade 3 to the Exit Level. TEA and Pearson reported the outcome indicated high
concurrent validity:
Results indicated that a high percentage of students who pass the TAKS tests also
pass their related courses. Small percentages of students passed the TAKS tests
but did not pass their related courses, passed their related courses but did not pass
the TAKS tests, or failed to pass the TAKS test or their related courses. (p. 166)
Administration of the Grade 3 Reading TAKS. The Grade 3 Reading TAKS
was a criterion-referenced assessment that measured four objectives:
1. The student will demonstrate a basic understanding of culturally diverse
written texts.
2. The student will apply knowledge of literary elements to understand culturally
diverse written texts.
3. The student will use a variety of strategies to analyze culturally diverse
written texts.
4. The student will apply critical-thinking skills to analyze culturally diverse
written texts. (TEA, 2004, p. 4)
To measure these objectives, students completed testing booklets containing three
passages of between 500 and 700 words. The passages were accompanied by a total of 36
multiple-choice questions, with the number of questions for each passage varying
between 11 and 13. The TAKS was an untimed assessment, and students took as much
time as they needed during one school day to answer the questions. Students answered the
questions directly in the test booklet.
School administrators trained Grade 3 teachers to ensure the fidelity of testing
administration. The teachers administered the first Grade 3 Reading TAKS in March.
Students who failed this round (scored under 2100; see scoring explanation below) were
offered intensive intervention, developed by student success teams composed of the
students' parents, teachers, and administrators. Then the students were offered additional
opportunities in April and June to retake the Grade 3 Reading TAKS. Students unable to
score 2100 or above after three attempts were retained in Grade 3 or provided intensive
intervention determined by the student support teams in Grade 4.
After administration of the TAKS by teachers, as described above, school
administrators collected students’ test booklets and kept them in a secure location. The
administrators then sent the booklets to the TEA staff in the headquarters in Austin,
Texas (TEA, 2009f). Staff members scored the booklets and returned the results to the
district. Educators who have access to the TAKS information signed oaths to maintain
confidentiality; those who break this confidentiality are subject to consequences that can
include loss of certification.
Scoring of the TAKS. The possible range of scores for the TAKS is from 1381 to
2630 (TEA, 2009b). Students were required to answer 24 questions correctly to obtain
the minimum expectancy scale score of 2100, indicating they were reading proficiently at
Grade 3 level. Students who answered 34 questions correctly received the commended
score of 2400, indicating they had demonstrated high academic achievement and
possessed a thorough understanding of the reading curriculum (TEA, 2006).
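The scale-score thresholds described above amount to a short classification rule; the sketch below is illustrative, and the function name is an assumption.

# Illustrative sketch of the Grade 3 Reading TAKS performance levels described above:
# 2100 is the minimum scale score for proficiency and 2400 is the commended level
# (the function name is hypothetical).
def taks_performance(scale_score):
    if scale_score >= 2400:
        return "commended"
    elif scale_score >= 2100:
        return "proficient"
    return "not proficient"

print(taks_performance(2050), taks_performance(2268), taks_performance(2415))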
Data Collection and Analysis
The data were collected and analyzed to test the null and alternative hypotheses
for this study. They were the following:
H0: There is no statistically significant relationship between students’ oral reading
fluency rates, as measured by the students’ middle-of-year DIBELS ORF rates, and their
reading proficiency, as measured by their scale scores on the Grade 3 Reading TAKS.
H1: There is a statistically significant relationship between students’ oral reading
fluency rates, as measured by the students’ middle-of-year DIBELS ORF rates, and their
reading proficiency, as measured by their scale scores on the Grade 3 Reading TAKS.
To test the null hypothesis, after appropriate permissions were granted, I collected
archival data from Grade 3 students’ DIBELS ORF and TAKS. The school district at the
research site screened all students in Grade 3 three times a year with DIBELS ORF
benchmarks, and I used the middle-of-year DIBELS ORF for the 2008-2009 school year.
All Grade 3 students in Texas were required to take the Reading TAKS during the spring
semester, and I used the TAKS results for the 2008-2009 school year.
Data Collection
First I sought permission to conduct the study from Walden University’s
Institutional Review Board (IRB), and permission was granted (IRB Approval Number
06-06-10-0334942). I requested and was granted permission from the superintendent of
the school district to use the Grade 3 students’ DIBELS ORF and TAKS scores for the
2008-2009 school year (Appendix A).
For data collection of the DIBELS ORF scores, the students’ scores were stored
on a secure district website. Only individuals who signed an agreement of confidentiality
could obtain access to the site. The district involved in this study purchased and agreed to
the terms of confidentiality, allowing me access during the 2008-2009 school year. When
I became the district dyslexia teacher in 2009, I obtained access to the DIBEL’s ORF
rates for all the students in the district. The district superintendent granted me permission
to use these data for the study (Appendices A and B).
For data collection of the TAKS scores, the TEA sent reports to the district listing
individual test scores. After return of the students’ TAKS scores by the TEA to the
district, school administrators kept students’ individual information confidential,
accessible only to the students, their parents, and teachers. I had access to these data to
carry out my duties as the district dyslexia teacher. In addition, the district superintendent
granted me approval to use the data. I used the TAKS scores in the aggregate for the
2008-2009 school year.
With access to the school records, I used the Grade 3 students’ middle-of-year
DIBELS ORF rates and the end-of-year TAKS scale scores. To preserve students’
anonymity, I assigned code numbers to all students’ scores instead of using their names.
Then I entered the DIBELS benchmark rates and TAKS scale scores into the chosen
software program.
Data Analysis
I employed SPSS, version 17.0 (SPSS, 2009) for data analysis. SPSS is a widely
used statistical software package in educational research (Creswell, 2009). Following the
SPSS instructions, I entered the data into the computer.
I performed a Pearson correlation to answer the research question. Pearson
correlations “measure and describe a relationship between two variables” (Gravetter &
Wallnau, 2005, p. 412). I entered into the SPSS program the DIBELS ORF scores for
each student as the independent variable and the TAKS scores for each student as the
dependent variable. The level of significance was .05, a level widely used in educational
research (Creswell, 2009). The resulting statistic indicated whether a significant
relationship existed for Grade 3 students between their DIBELS ORF scores and TAKS
scores for the 2008-2009 school year.
With regard to the direction of the relationship, a correlation coefficient (Pearson
r) of 0 would indicate no linear relationship between two variables. A positive correlation
of 1.0 would indicate a maximally strong positive relationship, in which the scores of the
two variables increase together. A negative correlation of -1.0 would indicate a maximally
strong negative relationship, in which the scores of one variable increase as the scores of
the other decrease (Creswell, 2008; Gravetter & Wallnau, 2005). With regard to the
strength of the relationship, whether negative or positive as represented by Pearson's r,
values of approximately .20 to .35 indicate a slight relationship, values of .35 to .65
indicate a somewhat strong relationship, values of .66 to .85 indicate a very good
relationship, and values of .86 and above (to the perfect 1.0) indicate an excellent
relationship (Creswell, 2008).
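As a minimal sketch of this analysis outside of SPSS, Pearson's r and a one-tailed p value can be computed with a standard statistical library. The arrays below are hypothetical stand-ins for the DIBELS ORF rates and TAKS scale scores, not the study data.

# Minimal sketch using hypothetical data: compute Pearson's r between ORF rates and
# TAKS scale scores, then derive a one-tailed p value (half of the two-tailed value
# when r is positive).
import numpy as np
from scipy.stats import pearsonr

orf_rates = np.array([45, 60, 72, 85, 90, 104, 118, 130])                  # hypothetical wcpm
taks_scores = np.array([1990, 2075, 2110, 2180, 2230, 2310, 2390, 2450])   # hypothetical scale scores

r, p_two_tailed = pearsonr(orf_rates, taks_scores)
p_one_tailed = p_two_tailed / 2 if r > 0 else 1 - p_two_tailed / 2
print(round(r, 2), p_one_tailed < .05)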
Researcher’s Role
During the 2008-2009 school year, I was a teacher of Grade 2 at the research site
of this study. In this position, I volunteered to tutor four dyslexic Grade 3 students, the
only ones identified as such in the third grade. In March 2009, I administered the Grade 3
Reading TAKS to these students. As of the 2009-2010 school year, I became the district
dyslexia teacher. In this capacity, I reviewed all the assessment data from the students
on the campus and recommended which students needed additional reading instruction. I
teach some of these students in small groups, addressing their dyslexic tendencies. In
addition, when principals in the district request assessments for dyslexia, I individually
administer a battery of assessments and report the information to a committee that
decides whether the student is dyslexic. I provide direct, explicit, systematic, sequential
instruction in all aspects of reading to most of the dyslexic students in the district.
Because I was a second-grade teacher during the 2008-2009 school year, to
remain neutral and removed from the testing administration, I did not administer any of
the DIBELS benchmarks. However, as described above, I worked with four identified
dyslexic students in an afterschool tutorial program and administered the Reading TAKS
to them. Because of my involvement in the administration of their TAKS, I did not
include the data from these four students in this study.
Protection of Participants’ Rights
I protected the rights of participants in several ways, first seeking and obtaining
approval for the study from Walden University’s IRB. The district granted me permission
to use the archival data of Grade 3 students’ DIBELS ORF and TAKS scores from the
2008-2009 school year (Appendices A and B). In addition, the Dynamic Measurement
Group granted me permission to use DIBELS in the study (Appendix C). The TEA
granted me copyright permission to use the 2009 Grade 3 Reading TAKS in this study
with the understanding that the TEA and Pearson Corporation are the owners of the
TAKS and with the understanding that individual student data were to be kept
confidential (Appendix D).
I did not include the children’s names or other information that could identify the
children in any reports of the study, and individual children’s rates and scores were not
reported. All oral reading fluency rates and scale scores were assigned code numbers and
were reported in group form only. Letters of consent or assent were not needed for the
study participants because the data were archival. The DIBELS ORF and TAKS are
routine assessments all students are required to take.
Summary
The purpose of this nonexperimental, quantitative study was to determine whether
the middle-of-year DIBELS ORF rates correlated with the scale scores of the Grade 3
Reading TAKS for students in a West Texas school district during the 2008-2009 school
year. I used a quantitative correlational design to determine whether a statistically
significant relationship (p < .05) existed between the independent variable, the DIBELS
ORF rates, and the dependent variable, the scale scores of the Grade 3 Reading TAKS. A
nonprobability convenience sample of 155 Grade 3 students participated in the study, and
their demographic composition was representative of Grade 3 students in Texas overall.
In this retrospective study, archival records were used for two instruments, the
DIBELS ORF and the TAKS for Grade 3 students. Both instruments have been reported
to have good reliability and validity in many studies. Data collection took place with the
permission of Walden University’s IRB and the district superintendent. The
superintendent provided a letter of cooperation and data use agreement for my use of
archival data (Appendices A and B), and I accessed archival records of Grade 3 students’
DIBELS ORF rates and TAKS scores for the 2008-2009 school year. In addition, the
Dynamic Measurement Group granted me permission to use DIBELS (Appendix C). The
TEA granted me copyright permission to use the 2009 Grade 3 Reading TAKS with the
understanding that the TEA and Pearson Corporation were the owners of the TAKS and
that individual student information would be kept confidential (Appendix D). Data
analysis took place with the SPSS software package, and I used Pearson correlation to
test, at the .05 level of significance, the null hypothesis that there is no statistically significant relationship
between Grade 3 students’ middle-of-year DIBELS ORF rates and TAKS scores.
Although I taught second grade at the research site school, I have not taught third
grade, and I am presently the district dyslexia teacher. My professional role did not
compromise data collection or analysis. Protection of participants was assured, and
potential harm to participants was minimal. Data were coded by numbers only, and
students’ names did not appear. Further, results were presented solely in group form. I
report the results of the study in section 4 and discuss the findings in section 5.
Section 4: Results of the Study
Introduction
The purpose of this quantitative, retrospective study was to determine if a
statistically significant relationship (p < .05) existed between students’ oral reading
fluency and reading proficiency. Pearson correlation analysis was used to examine 155
Grade 3 students’ oral reading fluency rates and reading proficiency scores in a West
Texas school district. The data were analyzed to investigate whether a significant
relationship existed between the students’ middle-of-year Grade 3 DIBELS ORF rates
and their scale scores on the Grade 3 Reading TAKS during the 2008-2009 school year.
Research Question and Hypothesis
The following research question guided this study: Is there a statistically
significant relationship between Grade 3 students’ oral reading fluency rates and their
reading proficiency? From this question, the following null and alternative hypotheses
were formulated:
H0: There is no statistically significant relationship between students’ oral reading
fluency rates, as measured by the students’ middle-of-year DIBELS ORF rates, and their
reading proficiency, as measured by their scale scores on the Grade 3 Reading TAKS for
the 2008-2009 school year.
H1: There is a statistically significant relationship between students’ oral reading
fluency rates, as measured by the students’ middle-of-year DIBELS ORF rates, and their
reading proficiency, as measured by their scale scores on the Grade 3 Reading TAKS for
the 2008-2009 school year.
Research Tools
The independent variable, oral reading fluency, was measured by the middle-of-
year 2009 Grade 3 DIBELS ORF benchmark rates. The dependent variable, reading
proficiency, was measured by the scale score on the 2009 Grade 3 Reading TAKS. In this
section, I describe how these research tools were used in this study.
The Grade 3 teachers in the small West Texas school district were trained in the
administration of DIBELS. During January 2009, they listened to each student read three
passages: (a) "The Field Trip," (b) "Keiko the Killer Whale," and (c) "Getting Email"
(Good & Kaminski, 2002b). As the student read a passage for exactly 1 minute, the
teacher recorded the number of words read. The teacher marked substitutions and
omissions as errors and subtracted the number of errors made from the total number of
words read to ascertain the words read correctly in 1 minute. After the student read the
passage, the teacher asked the student to retell what was read (Good & Kaminski).
The teacher used the same procedure for all three passages and recorded the
median oral reading fluency score as the student’s middle-of-year DIBELS ORF
benchmark rate. These rates were stored on a secure website to which only authorized
personnel had access. As district dyslexia teacher, I was authorized to view the rates
(Appendix A).
The Grade 3 Reading TAKS was administered according to the guidelines set by
the TEA in compliance with the NCLB (2002) mandates for determining adequate yearly
progress. The teachers were trained to administer the Grade 3 Reading TAKS before
March 2009. The students were given 1 school day to read three passages and answer 36
questions. Upon completion, the booklets were securely transported to Austin for scoring
by the TEA, and the scale scores were electronically sent back to the district. The district
stored the scale scores on a secured website accessible only by authorized personnel. As
the district dyslexia teacher, I had authorized access to the scale scores
(Appendix A).
The district superintendent signed a letter of cooperation and a data use agreement
(Appendices A and B). These gave me permission to use the middle-of-year DIBELS
benchmark rates and the Grade 3 Reading TAKS scale scores from the 2008-2009 school
year. In addition, Dynamic Measurement Group granted me permission to use the
DIBELS (Appendix C). The TEA granted me copyright permission with the
understanding that the TEA and Pearson Corporation are the owners of TAKS and no
individual student information would be used in the study (Appendix D).
Data were collected with the following procedure. On an Excel spreadsheet, I
matched the DIBELS ORF benchmark rates and the Grade 3 Reading TAKS scale scores
for each individual student. Although 160 students were enrolled in the district’s Grade 3
during the 2008-2009 school year, the data for four students were not included because I
had administered the TAKS to them. Additionally, one student was excluded because no
score was available for the Grade 3 Reading TAKS. Thus, the oral reading fluency rates
and scale scores of the 155 remaining students were used in the study.
The DIBELS ORF benchmark rates and Grade 3 Reading TAKS scale scores
were imported into SPSS 17.0. To verify the accuracy of data entry, I compared the sum
of the benchmark rates on the Excel spreadsheet to the sum of the benchmark rates
entered into SPSS 17.0. The same procedure was followed with the Grade 3 Reading
TAKS scale scores. To verify accuracy and obtain the means and standard deviations for
both variables, I used an Excel spreadsheet to compute the formula for the Pearson
correlation and obtained the same results.
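This cross-check can be scripted as well. In the sketch below, the file name and column names are assumptions; the recomputation uses the definitional formula for Pearson's r as an independent verification of the packaged result.

# Illustrative cross-check (the file and column names are assumptions): compare column
# sums against the values recorded in the spreadsheet, then recompute Pearson's r from
# its definitional formula as an independent verification.
import numpy as np
import pandas as pd

df = pd.read_csv("grade3_scores.csv")            # hypothetical export of the matched data
assert len(df) == 155                            # expected number of students

orf = df["dibels_orf"].to_numpy(dtype=float)
taks = df["taks_scale_score"].to_numpy(dtype=float)
print(orf.sum(), taks.sum())                     # compare with the spreadsheet column sums

# Definitional formula: r = covariance / (SD of x * SD of y)
r_manual = np.cov(orf, taks)[0, 1] / (orf.std(ddof=1) * taks.std(ddof=1))
r_library = np.corrcoef(orf, taks)[0, 1]
print(round(r_manual, 3), round(r_library, 3))   # the two values should agree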
Data Analysis
A Pearson correlation analysis was conducted to determine if a statistically
significant relationship (p < .05) existed between the middle-of-Grade 3 DIBELS ORF
rates and the scale scores from the Grade 3 Reading TAKS for the sample of 155
students. The bivariate Pearson correlation analysis was conducted with the two
variables, with a one-tailed test of significance. Table 2 shows the results.
Table 2
Correlation of DIBELS ORF Rates and TAKS Scores for Grade 3 Students

Measure          1        2        Mean    Standard deviation
1. DIBELS ORF    ---      .655*    86.9    33
2. TAKS          .655*    ---      2268    187

*p < .01.
As Table 2 shows, for the sample of 155 students, the mean for DIBELS ORF
was 86.9 (SD = 33), and the mean for the TAKS was 2268 (SD = 187). A statistically
significant correlation was found, r = .66 (p < .01), between the students' DIBELS ORF
rates and TAKS scale scores for the 2008-2009 school year. This correlation was strong
enough to reject the null hypothesis and was in the very good range (.66-.85; Creswell,
2008). This correlation indicated a linear relationship between the two variables. Students
who had a high DIBELS ORF rate tended to have a high Grade 3 Reading TAKS scale
score and vice versa (Gravetter & Wallnau, 2005). Thus, Null Hypothesis 1 was rejected.
A statistically significant relationship was found between students’ oral reading fluency
rates, as measured by the students’ middle-of-year DIBELS ORF rates, and their reading
proficiency, as measured by their scale scores on the Grade 3 Reading TAKS for the
2008-2009 school year.
Although the stated level of significance was p < .05, the finding of p < .01 is a
stronger level of significance and also results in rejection of the null hypothesis. For
example, in a study with 100 participants, an r greater than approximately .16 would be
needed to reject the null hypothesis at an alpha level of .05 for a one-tailed Pearson
correlation, whereas an r greater than approximately .23 would be needed to reject the
null hypothesis at an alpha level of .01 (Gravetter & Wallnau, 2005).
The study findings confirmed LaBerge and Samuels’ (1974) proposal regarding
fluency in the automaticity theory. LaBerge and Samuels (1974) posited that when
readers are able to read fluently, they have more attention to focus on other areas, such as
reading proficiency. The present results indicate a relationship between oral reading
fluency, as measured by the DIBELS ORF rates, and reading proficiency, as measured by
the scale score on the Grade 3 Reading TAKS. Thus, the results support H1: There was a
statistically significant relationship between students’ oral reading fluency rates, as
measured by middle-of-year DIBELS ORF rates, and their reading proficiency, as
measured by scale scores on the Grade 3 Reading TAKS for the 2008-2009 school year.
In addition, I investigated how the students in each of the DIBELS ORF
categories performed on the Texas Grade 3 Reading TAKS, although this aspect was not
part of the study purpose. This investigation is similar to Buck and Torgesen's
(2003) classification of information in their correlational study comparing DIBELS ORF
rates to performance on the Florida Comprehensive Assessment Test (FCAT) to show
how the students in each of the DIBELS ORF categories performed on the FCAT.
To understand the present results, the ranges for the sample are reported for both
the DIBELS ORF and TAKS. Ranges are reported in relation to the cut points and
performance categories of the DIBELS ORF and TAKS. Table 3 displays these values.
Table 3
Ranges and Categorization Cut Points of DIBELS ORF and Grade 3 TAKS

DIBELS ORF
Possible range   Sample range   At risk   Some risk   Low risk
0-92+            3-169          <67       67-91       92+

TAKS
Possible range   Sample range   Not proficient   Proficient   Commended
1381-2630        1835-2630      <2100            2100+        2400+

Note. All DIBELS ORF scores are expressed in words read correctly per minute.
As Table 3 shows, for the present sample, the DIBELS ORF scores ranged from 3
words read correctly per minute (wcpm) to 169 wcpm. As mentioned in section 3, those
students whose oral reading fluency rates are less than 67 wcpm are considered to be at
risk for poor learning outcomes on other reading and language arts assessments. Students
whose oral reading fluency rates are between 67 and 91 wcpm are considered to be at
some risk, and students whose oral reading fluency rates are 92 or more wcpm are
considered at the lowest risk for poor outcomes on other reading and language arts
assessments. Table 3 shows that the range of Grade 3 Reading TAKS scale scores in this
sample of 155 students was 1835-2630. As stated in section 3, students are expected to
have a scale score of 2100 or more to be considered proficient. Students with scale scores
of 2400 or greater receive a rating of commended.
With this summary of the ranges and cut score categorizations of the DIBELS
ORF and TAKS, the results of the study sample per category can be better understood.
These were calculated with frequencies and percentages. Table 4 displays the Grade 3
DIBELS ORF and TAKS performance of the study sample.
As Table 4 shows, of the 155 students, 44 were considered to be at risk. About
half of the at-risk students (52%) failed the Grade 3 Reading TAKS with scale scores less
than 2100. About half (48%) scored proficient on the assessment. Five (11%) of the at-
risk students scored in the commended range.
Table 4
Comparison of Students' Performance by Categories for DIBELS ORF and Grade 3
Reading TAKS

Number of   DIBELS ORF    TAKS performance categories
students    category      Not proficient   Proficient   Commended
44          At risk       23 (52%)         21 (48%)     5 (11%)
47          Some risk     9 (19%)          39 (79%)     14 (30%)
64          Low risk      0 (0%)           64 (100%)    41 (64%)
155         Total         32 (21%)         123 (79%)    60 (39%)
As Table 4 also shows, the 47 students in the some-risk category performed much
better than the 44 students in the at-risk category on the Grade 3 Reading TAKS. Only
19% of the some-risk students did not score proficient on the Grade 3 Reading TAKS,
and 79% scored proficient. About three times as many of the some-risk students (30%)
scored in the commended range compared to those in the at-risk category (11%). In the
low-risk category, all (100%) of the 64 students passed the Grade 3 Reading TAKS, and
over half (64%) scored in the commended range.
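A tabulation like Table 4 can be reproduced from the matched scores by assigning each student a DIBELS ORF risk category and counting outcomes against the TAKS cut scores. In the sketch below, the file and column names are assumptions; as in Table 4, commended students are counted as a subset of proficient students.

# Illustrative sketch of producing a Table 4-style tabulation from matched scores
# (the file and column names are assumptions).
import pandas as pd

def orf_category(wcpm):
    return "at risk" if wcpm < 67 else ("some risk" if wcpm <= 91 else "low risk")

df = pd.read_csv("grade3_scores.csv")    # hypothetical export of the matched data
df["orf_cat"] = df["dibels_orf"].apply(orf_category)

summary = df.groupby("orf_cat").apply(
    lambda g: pd.Series({
        "n": len(g),
        "not_proficient": int((g["taks_scale_score"] < 2100).sum()),
        "proficient": int((g["taks_scale_score"] >= 2100).sum()),
        "commended": int((g["taks_scale_score"] >= 2400).sum()),
    })
)
print(summary)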
The results as shown in Table 4 confirmed the values used by the developers of
DIBELS ORF to define their categories (University of Oregon Center on Teaching and
Learning, 2008). Such information as that in Table 4 could additionally help educators
and reading specialists identify which students could be at risk of failing the Grade 3
Reading TAKS before it is administered. This information could aid educators especially
in benchmarking, setting oral reading fluency goals for students, and monitoring their
progress (Baker et al., 2008; Buck & Torgesen, 2003; Roehrig et al., 2008).
The results of the current study also confirmed the findings of Hasbrouck and
Tindal’s (2006) oral reading fluency norms. Hasbrouck and Tindal found that Grade 3
students at the 50th percentile at the middle of the year were reading 92 wcpm. They
recommended that students who read more than 10 wcpm below the 50th percentile (i.e.,
fewer than 82 wcpm) be considered for an intervention program. In the current study, all but 2 of the 86 students
(98%) who read 82 or more wcpm scored proficient on the Grade 3 Reading TAKS. One
hundred percent of the students who read 84 or more wcpm scored proficient. These
findings seem to support Hasbrouck and Tindal’s research.
Although the study results indicated a strong positive correlation between
students' DIBELS ORF rates and Grade 3 Reading TAKS scores, the findings
demonstrate only a relationship between oral reading fluency rates and reading
proficiency; they do not prove a causal relationship (Gravetter & Wallnau, 2005). In this study, the
significant linear relationship indicated that when oral reading fluency rates increased, the
Grade 3 Reading TAKS scale score tended to increase, with the converse also true. The
results do not, however, prove that high oral reading fluency rates cause high scores on
the Grade 3 Reading TAKS. Other variables could be contributing factors, such as
comprehension strategies students may use to answer questions on the Grade 3 Reading
TAKS, and such strategies could affect their performance.
This caution is supported by the study results. They do not prove that good oral
reading fluency always results in proficient reading. For example, 5 of the 44 at-risk
students not only scored proficient on the Grade 3 Reading TAKS but scored in the
commended range. Similarly, Buck and Torgesen (2003) reported that 42 of the 220
students in their high-risk category scored adequate on the FCAT. Another factor that
could affect reading proficiency is students’ knowledge of the English language. As
noted earlier, Samuels (2006) documented cases in which Hmong students whose second
language was English could fluently read English but could not read proficiently.
The use of compensatory strategies could also explain how students with
low fluency rates score proficient on reading assessments. In the current study, 21
(48%) of the at-risk students scored proficient on the Grade 3 Reading TAKS. In fact, 5
(11%) scored commended. In the some-risk category, 38 (79%) scored in the proficient
range and 14 (30%) scored in the commended range. These scores do not indicate that
students with low fluency rates cannot comprehend text but rather that they find it more
difficult to read proficiently.
Walczyk and Griffith-Ross (2007) reported that struggling readers in Grades 3, 5,
and 7 can compensate by applying strategies such as pausing, slowing down, reading
aloud, and looking back in the text. The Grade 3 students read at a constant rate and were
more affected by constraints, such as time pressure, than the Grade 7 students in the
study. Walczyk and Griffith-Ross conjectured that the Grade 3 students read at a slower
rate and were more likely to take the time to sound out words they did not know or use
context clues to read than the Grade 7 students. By using these skills, the struggling
readers in Grade 3 were able to comprehend.
Walczyk and Griffith-Ross’s (2007) conclusions may explain why 48% of the at-
risk students in the present study and 79% of the some-risk students were able to score
proficient despite their low fluency rates. Although the use of compensation strategies
may account for struggling readers’ ability to score in the proficient range, it should be
noted that the faster students read, the higher their chances of scoring proficient (at-risk
students, 48%; some-risk students, 79%; low-risk students, 100%). In addition, the faster
students read, the higher their chance of scoring commended (at-risk students, 11%;
some-risk students, 30%; low-risk students, 64%).
Summary
The purpose of this study was to determine if a statistically significant
relationship existed between oral reading fluency and reading proficiency for Grade 3
students in a West Texas school district. I used the following research question to guide
the study: Is there a statistically significant relationship between Grade 3 students’ oral
reading fluency rates and their reading proficiency? The resulting null hypothesis was as
follows: There is no statistically significant relationship between students’ oral reading
fluency rates, as measured by the students’ middle-of-year DIBELS ORF rates, and their
reading proficiency, as measured by their scale scores on the Grade 3 Reading TAKS for
the 2008-2009 school year.
To test the hypothesis, I matched the students’ DIBELS ORF rate with their
Grade 3 Reading TAKS scale scores on an Excel spreadsheet and then imported them
into SPSS 17.0 (SPSS, 2009), after which a Pearson correlational analysis was
performed. Results indicated a strong significant correlational relationship (r = .66,
p < .01) between oral reading fluency and reading proficiency. Thus, the null hypothesis
was rejected. The study results confirmed LaBerge and Samuels’ (1974) proposal in the
automaticity theory: Students who read fluently can devote their attention to other
reading skills, such as comprehension.
This research indicated a linear relationship between oral reading fluency and
reading proficiency; as students’ oral reading fluency rates increased, reading proficiency
also tended to increase. Although a positive correlational relationship was found, this
result did not prove that oral reading fluency rates cause reading proficiency. Other
factors such as comprehension strategies and knowledge of English as a second language
may affect reading proficiency. Nevertheless, the strong positive correlation found for
Grade 3 students’ DIBELS ORF scores and Reading TAKS performance is important for
interpretation of the findings in terms of practical applications and implications for social
change. In section 5, I discuss these issues, as well as recommendations for action and
further study.
Section 5: Discussion, Conclusions, and Recommendations
Overview
The purpose of this study was to determine if a statistically significant (p < .05)
relationship existed between oral reading fluency and reading proficiency. The middle-
of-Grade 3 DIBELS ORF was used to measure oral reading fluency and the Grade 3
Reading TAKS scale scores was used to measure reading proficiency with 155 students
in a West Texas school district. In this section, the findings are reviewed and the
correlations found in this study are compared with the correlations of nine other similar
previous studies. The implications for social change are discussed and recommendations
are made both for future actions and further studies.
Interpretation of Findings
Based on the results of the study, I concluded that for Grade 3 students, oral
reading fluency rates are significantly positively related to their performance on state-
mandated assessments. These findings specific to students in Texas supported those of
previous studies correlating state-mandated assessments to the DIBELS ORF in other
states (Baker et al., 2008; Barger, 2003; Roehrig et al., 2008; Shapiro et al., 2008; Vander
Meer et al., 2005; Wilson, 2005; Wood, 2006). For ease of comparison, Table 5 shows
the other studies by state, grade levels, number of students in the sample, and the
correlation statistic. Some researchers studied more than one grade level.
Table 5
Studies Correlating State-Mandated Assessments to the DIBELS ORF

Authors                  State assessment                             Grade level   Number in sample   Correlation found
Baker et al., 2008       Oregon State Reading Assessment              3             4,696              .58 to .68
Buck & Torgesen, 2003    Florida Comprehensive Assessment Test        3             1,102              .70
Roehrig et al., 2008     Florida Comprehensive Assessment Test        3             5,207              .70-.71 (moderately strong)
Shapiro et al., 2008     Pennsylvania System of School Assessments    3             401                .67
Shapiro et al., 2008     Pennsylvania System of School Assessments    4             394                .64
Shapiro et al., 2008     Pennsylvania System of School Assessments    5             205                .73 (moderately strong)
Shaw & Shaw, 2002        Colorado State Assessment Program            3             52                 .80 (strong)
Wilson, 2005             Arizona Instrument to Measure Standards      3             241                .74 (moderately strong)
Wood, 2006               Colorado Student Assessment Program          3             82                 .70 (moderately strong)
Wood, 2006               Colorado Student Assessment Program          4             101                .67
Wood, 2006               Colorado Student Assessment Program          5             98                 .75 (moderately strong)
The current study, in relation to these studies, compared the DIBELS ORF and
the TAKS with 155 Grade 3 students, and the Pearson correlation showed a significant
relationship (r = .66, p < .01). These results confirmed the findings of the nine other
studies, as Table 5 illustrates, all of which reported significant correlations between
DIBELS ORF and state-mandated assessments. The lowest correlations ranged from .58 to .68 (Baker
et al., 2008, .58 to .68; Shapiro et al., 2008, Grade 3 .67, Grade 4 .64; Vander Meer et al.,
2005, .65; Wood, 2006, Grade 4, .67).
Some of the researchers found higher correlations. Moderately strong
correlations, .70-.75, were found by Wood (2006, Grade 3, .70, Grade 5, .75), Barger
(2003, .73), and Shapiro et al. (2008, .73). The strongest correlation was found by Shaw
and Shaw (2002), .80.
In the context of these studies, the present results confirmed a significant
relationship between the Texas state-mandated assessments and the DIBELS ORF.
However, the present correlation was lower, .66, than those reporting moderate or strong
correlations (Barger, 2003; Shapiro et al., 2008; Shaw & Shaw, 2002; Wilson, 2005;
Wood, 2006). Several reasons may be cited for this difference.
There were 155 students in the current sample. Several of the studies with higher
correlations had smaller samples. The smallest sample of 38 was that of Barger (2003),
with a moderately strong correlation of .73. Shaw and Shaw (2002), with a sample of 52
students, had a strong correlation, the highest correlation of all the studies (.80). Two of
the grade levels in Wood (2006) had smaller sample sizes: the Grade 3 sample of 82
students had a moderately strong correlation of .70, and the Grade 5 sample of 98
students had a moderately strong correlation of .75. Wood's Grade 4 sample, with a
sample size (101) similar to the one in this study, also had a similar significant
correlation (.67).
The other previous studies with correlations closest to the one found in the present
study had larger sample sizes. In the Vander Meer et al. (2005) study, the sample was 318
students, with a .65 correlation. Likewise, in the Shapiro et al. (2008) study, the Grade 3
sample was 401 and a similar significant correlation was found, .67. One of the Reading
First studies, that of Baker et al. (2008), had a large sample (4,696) and found significant
correlations ranging from .58 to .68.
Demographics could also have been a factor that contributed to varying
correlation values. The student sample used in the Wood (2006) study was mostly White.
Wood recommended replication with samples that represented other demographic
characteristics. The studies conducted with Reading First schools (Baker et al., 2008;
Buck & Torgesen, 2003; Roehrig et al., 2008; Wilson, 2005) had large numbers of
students from low socioeconomic backgrounds. Baker et al. recommended other studies
with samples of varying socioeconomic backgrounds to rule out poverty as a factor. The
present sample of the district's Grade 3 students was approximately half Hispanic, one
third White, and half economically disadvantaged. However, these characteristics
reflected those of all Grade 3 students in Texas (Table 1).
Another factor that may explain differences in study results was the rigor of the
assessment used to measure reading proficiency and the degree to which it tested
higher-level thinking skills. Although all state-mandated assessments are designed to assess reading
proficiency as required by the NCLB (2002) legislation, they differ from state to state and
from grade to grade. State curricula determine how reading proficiency is defined and
assessed. For example, Wilson (2005) studied the relationship between DIBELS ORF
and performance on the Arizona Instrument to Measure Standards. With his sample of
241 students, he reported a moderately strong correlation (.74). However, in Wilson’s
discussion of limitations, he recommended that the study be repeated after Arizona began
using the next reading assessment designed to assess higher reading comprehension
skills.
In the study by Baker et al. (2008), when the SAT-10 was used to measure
reading proficiency, the researchers found a stronger correlation (.63 to .80) in their
Grade 2 sample than in their Grade 3 sample (.58 to .68), in which the Oregon State
Reading Assessment was used to measure reading proficiency. The authors speculated
that the rigor of the two reading proficiency assessments might have affected the
correlational differences. Fuchs et al. (2001) recommended additional studies beyond
Grade 3 that measured higher-level thinking. In the exploration by Applegate et al.
(2009) of state-mandated reading assessments in several states, the researchers observed
that the TAKS contained questions which required higher-level thinking than many of the
assessments in other states.
Another factor influencing results may be whether the assessment is timed. In
Texas, students are allowed as long as needed in 1 school day to answer the questions on
the Grade 3 Reading TAKS. However, in the North Carolina study (Barger, 2003), in
which a moderately strong correlation (.73) was found, students had 115 minutes to
answer 56 questions. Walczyk and Griffith-Ross (2007) found that Grade 3 struggling
readers were negatively affected by time constraints because they were struggling to
decode words. In the same study, however, time constraints positively affected Grade 7
struggling readers. These readers seemed to be more engaged when the reading was
timed, in contrast to untimed routine reading assignments. Time constraints should be
considered in comparisons of studies and future studies of oral reading fluency rates and
reading proficiency.
All of these correlational studies used DIBELS ORF to measure oral reading
fluency and compared the DIBELS ORF to a state-mandated assessment, but not all these
studies used the same DIBELS ORF benchmark. DIBELS ORF benchmarks are
administered three or four times a year. In the current study, I found a correlation of .66
between the middle-of-year DIBELS ORF benchmark administered in January 2009 and
the Grade 3 Reading TAKS administered in March 2009. Barger (2003) found a
moderately strong correlation (.73) between the North Carolina End-of-Grade 3
assessment and the DIBELS ORF benchmark administered 1 week prior to the state-
mandated assessment. Vander Meer et al. (2005) found a .65 correlation when they
compared the end-of-Grade 3 DIBELS ORF to performance on the Ohio Proficiency Test
administered in Grade 4. Baker et al. (2008) took a more comprehensive approach by
determining the correlational relationship between each of the DIBELS ORF benchmarks
over a 2-year period and the Oregon State Reading Assessment at the end of Grade 3
(beginning of Grade 2, .58; middle of Grade 2, .63; end of Grade 2, .63; beginning of
Grade 3, .65; middle of Grade 3, .68; end of Grade 3, .67). Thus, the varying time periods
between the administration of the DIBELS ORF and the administration of state-mandated
assessments could influence the different strengths of correlational values.
Several factors could explain the varying correlational values between this study
and the other nine studies conducted in seven other states. The sample sizes varied from
38 (Barger, 2003) to 5,207 (Roehrig et al., 2008). The demographic representation
differed widely. The Wood (2006) sample consisted of mostly White students and the
studies conducted in Reading First schools (Baker et al., 2008; Buck & Torgesen, 2003;
Roehrig et al., 2008; Wilson, 2005) had large proportions of students from low socioeconomic backgrounds. Another
factor is the rigor of the state-mandated assessments. Although all are designed to follow
the stringent standards of NCLB (2002), the assessments differ in how reading
proficiency is defined and assessed. A salient factor is whether the assessment is timed;
differences in timing conditions can affect correlational relationships. Additionally, the time between the
administration of the DIBELS ORF benchmark and administration of state-mandated
assessments can account for varying correlational levels.
Based on the present study findings, several practical applications for
administrators, teachers, parents, and students can be suggested. Drawing on longitudinal
studies, such as those by Baker et al. (2008) and Vander Meer et al. (2005),
administrators can use oral reading fluency rates to predict which students are at risk for
failing a state-mandated assessment in years prior to the year of assessment
administration. According to Baker et al., oral reading fluency rates at the beginning of
Grade 2 showed a significant correlation (.63) to the state-mandated assessment that was
administered almost 2 years later. Vander Meer et al. showed similar results (.65)
between the end-of-Grade 3 DIBELS ORF and the state-mandated assessment
administered in Grade 4. Wood (2006) demonstrated that a significant correlational
relationship continues to exist between oral reading fluency and reading proficiency
through Grade 5 (Grade 3, .70; Grade 4, .67; Grade 5, .75). As Wood recommended,
administrators could use such information to identify students at risk for failing
state-mandated assessments early and in a timely manner and to implement intervention
programs that address the needs of at-risk readers before they fail.
Teachers can benefit from the study results showing that oral reading fluency is
significantly related to reading proficiency. Hasbrouck and Tindal (2006) have developed
oral reading fluency norms that identify national percentiles for the beginning, middle,
and end of Grades 1 to 8. For example, the 50th percentile for Grade 3 students at the
middle of the year is 92 wcpm. According to Hasbrouck and Tindal's guidelines, Grade 3
students who are reading fewer than 82 wcpm should be receiving interventions. In the
current study, 84 of the 86 (98%) students who read more than 82 wcpm on the middle-of-
year DIBELS ORF scored proficient on the Grade 3 Reading TAKS. Of the remaining 69
students, 43% failed the Grade 3 Reading TAKS.
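To illustrate how such cutoffs might be applied in practice, the following brief Python sketch flags hypothetical students against the middle-of-year norms discussed above (50th percentile of 92 wcpm, intervention recommended below 82 wcpm). The student names, rates, and risk labels are illustrative assumptions, not data or procedures from the present study.

# A minimal sketch, assuming hypothetical student records; the 92 wcpm norm and
# the 82 wcpm intervention cutoff come from the discussion above.
GRADE3_MOY_50TH_PERCENTILE = 92                          # wcpm (Hasbrouck & Tindal, 2006)
INTERVENTION_CUTOFF = GRADE3_MOY_50TH_PERCENTILE - 10    # 82 wcpm

def flag_reader(wcpm):
    """Return a simple risk label for a middle-of-year Grade 3 oral reading fluency rate."""
    if wcpm < INTERVENTION_CUTOFF:
        return "intervention recommended"
    if wcpm < GRADE3_MOY_50TH_PERCENTILE:
        return "monitor progress"
    return "at or above the 50th percentile"

students = {"Student A": 68, "Student B": 85, "Student C": 101}  # hypothetical rates
for name, rate in students.items():
    print(name, rate, "wcpm:", flag_reader(rate))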
Teachers could act on these data in several ways. Students reading within 10
wcpm of the 50th percentile of Hasbrouck and Tindal’s (2006) oral reading fluency
norms have a very good chance of passing the state-mandated reading assessment.
Hasbrouck and Tindal’s oral reading fluency norms were similar to the cut points
established by DIBELS for the at-risk, some-risk, and low-risk categories (University of
Oregon Center on Teaching and Learning, 2008).
Researchers in several of the other state studies (Baker et al., 2008; Barger, 2003;
Buck & Torgesen, 2003; Roehrig et al., 2008; Shaw & Shaw, 2002) confirmed the
reliability of the DIBELS ORF categories in accurately identifying at-risk readers. Thus,
regardless of the grade level, teachers can use the DIBELS ORF benchmarks and
progress monitoring probes to set goals to ensure that as many children as possible score
within 10 wcpm of the 50th percentile of Hasbrouck and Tindal's (2006) oral reading
fluency norms or in the low-risk category of DIBELS (University of Oregon Center on
Teaching and Learning, 2008).
Parents can benefit from the present study results as well. Once parents realize the
importance of their children’s ability to read fluently in the current as well as subsequent
grades, parents can be trained to work with their children at home. For example, teachers
can show parents how to administer a 1-minute oral reading fluency probe during parent-
teacher conferences or through written instructions. Schools can also hold meetings for
training purposes. Rasinski and Stevenson (2005) demonstrated that positive changes in
students’ oral reading fluency resulted from parents working with their children daily at
home.
Students can also be encouraged by the present study results. Morgan and
Sideridis (2006) found that when students set goals, they were motivated to improve their
oral reading fluency scores. Once students understand the importance of oral reading
fluency and its relationship to their success on state-mandated assessments, they can set
their own goals to read more fluently. Students can be made aware of a variety of
strategies, such as repeatedly reading text (Begeny et al., 2006; Hiebert, 2005; Hudson et
al., 2005; Rasinski, 2006; Therrien et al., 2006; Therrien & Kubina, 2007); reading a
wide variety of texts (Kuhn, 2005); reading with a partner (Nes Ferrara, 2005); and
participating in reader’s theatre (Young & Rasinski, 2009).
Implications for Social Change
With the implementation of the NCLB (2002) legislation, schools have been
charged with demonstrating adequate yearly progress. Each year a specified percentage
of students must pass the state-mandated assessment. For example, for schools and
districts to meet the NCLB requirements for academic acceptability in 2009, the schools
and districts had to have certain percentages of students in each content area scoring
proficient on the state assessments: 70% in reading/English language arts, writing, and
social studies assessments; 55% in mathematics; and 50% in science. Four levels were
possible: Academically Unacceptable, Academically Acceptable, Recognized, and
Exemplary. For schools and districts to receive a designation of Recognized, 75% of
students in all subjects had to score proficient; for schools and districts to be considered
Exemplary, 90% of students in all subjects had to score proficient (TEA, 2009f).
To meet such goals, elementary educators need to be able to identify struggling
readers before they take assessments such as the Grade 3 Reading TAKS so that
interventions to improve their academic skills can take place. For elementary educators in
Texas, the findings of the present study provide useful information with which to identify
struggling readers. Once they are identified, interventions targeting basic literacy skills,
improvement of oral reading fluency skills, and teaching of decoding strategies can be
provided (Jenkins et al., 2007).
This identification of struggling readers is important for several reasons. In
kindergarten through Grade 3, students learn to read (Ehri, 2005). As they progress
through later years of elementary school, they make the transition to reading to learn in
other academic areas (Musti-Rao et al., 2009). When struggling readers are identified and
given remediation, the possibilities of their passing state-mandated reading assessments
in Grade 3 and above improve, as well as their success in other academic areas (Ehri,
2005; Shapiro et al., 2008). If struggling readers are able to improve their reading skills
by the end of Grade 3, they will be more likely to close the academic gap with their peers who
are more proficient in reading (Simmons et al., 2008). Furthermore, if the needs of
struggling readers are not met by the end of Grade 3, they are more likely to fall farther
and farther behind their more proficient peers (Morgan et al., 2008).
With reading mastery, Grade 3 students are more likely to improve academically
as they progress through the grades. They are also more likely to graduate from high
school (Houge et al., 2007; Rumberger & Palardy, 2005). When students graduate from
high school with proficient reading skills, they are more likely to obtain employment and
become contributing members of society.
Recommendations for Action
In this study, my results supported those of nine other studies (Baker et al., 2008;
Barger, 2003; Buck & Torgesen, 2003; Roehrig et al., 2008; Shapiro et al., 2008;
Shaw & Shaw, 2002; Vander Meer et al., 2005; Wilson, 2005; Wood, 2006), confirming a
correlational relationship between oral reading fluency and reading proficiency. Thus, the
first recommended action is for administrators and teachers to recognize the importance
of oral reading fluency to the development of reading proficiency. Second, administrators
should cultivate learning communities (Senge, 2000) of district administrators, teachers,
parents, and students in which at-risk readers are identified early, provided interventions
that address their needs, and then monitored to determine if they are progressing
academically.
Third, the importance of the correlation between oral reading fluency and reading
proficiency should be communicated not only to educators but to students and parents.
By graphing oral reading fluency rates, students can quickly and easily see whether they
are improving. When these graphs include a goal line, students can see how fast they
need to be reading by the end of the year. Once students have a visual representation of
where they need to be, they can also participate in setting their oral reading fluency goals.
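A short sketch of such a graph follows; it assumes hypothetical weekly probe results and a hypothetical year-end goal and uses Python's matplotlib library, which was not part of the present study.

# A minimal sketch with hypothetical data: a student's weekly oral reading fluency
# rates plotted against a goal line, as described above.
import matplotlib.pyplot as plt

weeks = list(range(1, 11))                               # 10 weeks of progress monitoring
wcpm = [58, 60, 63, 62, 67, 70, 72, 75, 78, 80]          # hypothetical 1-minute probe results
goal_start, goal_end = 58, 110                           # hypothetical starting rate and year-end goal

plt.plot(weeks, wcpm, marker="o", label="Measured wcpm")
plt.plot([weeks[0], weeks[-1]], [goal_start, goal_end], linestyle="--", label="Goal line")
plt.xlabel("Week")
plt.ylabel("Words correct per minute")
plt.title("Oral reading fluency progress toward the year-end goal")
plt.legend()
plt.show()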
Parents can also be trained to listen to their child read for exactly 1 minute and
then count the number of words their child read correctly. Parental training can be
provided through written instructions, parent-teacher conferences, and group training
sessions. Once the methods are established, oral reading fluency probes can be
administered daily or weekly. Rasinski and Stevenson (2005) demonstrated that parents
can be trained to be effective partners in programs designed to improve oral reading
fluency. Morgan and Sideridis (2006) demonstrated that students were motivated to
improve their oral reading fluency skills when they set goals to improve them.
Fourth, teachers in Grades 1 to 3 can be encouraged by administrators and in
professional development workshops to track oral reading fluency rates for their students.
Regardless of the grade level, elementary teachers can use oral reading fluency norms to
determine the developmental level of their students, monitor students’ progress, set goals
to motivate growth, and include instructional activities to build oral reading fluency. With
regard to monitoring students' progress and setting goals, teachers can graph the oral
reading fluency of their students with charts and graphic organizers and set goals for
students to reach, with the students contributing their own goals. Morgan and
Sideridis (2006) found that setting goals improves oral reading fluency. Further, during
reading periods, teachers can pair struggling readers with their more proficient peers so
the struggling students benefit from the modeling of the proficient students (Nes Ferrera,
2005).
With regard to activities, several studies (Begeny et al., 2006; Hiebert, 2005;
Martens et al., 2007; Therrien et al., 2006; Therrien & Hughes, 2008) demonstrated that
repeated reading of passages can improve oral reading fluency rates. Teachers can
motivate students to read a text repeatedly by organizing such activities as a reader’s
theatre. In this activity, students practice reading scripts to perform them before an
audience (Corcoran & Davis, 2005; Rasinski, 2006).
Researchers have shown that wide reading improves oral reading fluency (Kuhn,
2005). Based on such findings, teachers could organize instructional programs that track
the number of books students read and reward them for reading certain numbers of books.
Students could also be introduced to software that keeps track of the books they have read
and lets them update their records themselves.
When students are reading books at their instructional levels, they are more likely
to be actively engaged and understand what authors are saying (Schnick & Knickelbine,
2000). Lexiles are one tool teachers can use to determine either a reader's ability or the
difficulty of text (MetaMetrics, 2010). Texts are analyzed and assigned Lexile numbers,
with lower numbers (from -200L) indicating lower reading ability and higher numbers (to
+1700L) indicating higher ability (MetaMetrics, 2010). Educators can assign students
Lexile levels based on their performance on assessments such as TAKS. For example,
students with a scale score of 2100 on the 2009 Grade 3 Reading TAKS had a Lexile level
of 380L (TEA, 2009b). The Lexile range for that student would be from 280L to 480L.
Once students’ Lexile levels and ranges are determined, students’ abilities to read can be
matched with books that they are likely to be able to read. However, Lexile levels are
not fixed, and many factors can influence readability formulas. For example, interest can
affect Lexile levels. When students are interested in a subject, they may be able to read
text above their Lexile range (Schnick & Knickelbine, 2000). Therefore, a student with a
strong interest in airplanes may be able to read a book about airplanes with a Lexile level
above his or her Lexile range.
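The 100L band on either side of a student's measure, implied by the 380L example above (280L to 480L), can be expressed as simple arithmetic. The following Python sketch is illustrative only; the book titles and Lexile measures are hypothetical assumptions rather than data from the study.

# A minimal sketch: computing a reader's Lexile range and matching hypothetical books.
def lexile_range(measure, band=100):
    """Return the (lower, upper) bounds of a reader's Lexile range."""
    return measure - band, measure + band

books = {"Book about airplanes": 520, "Animal stories": 350, "Chapter book": 460}  # hypothetical
low, high = lexile_range(380)            # 380L measure from the example above
matches = [title for title, lexile in books.items() if low <= lexile <= high]
print("Range:", str(low) + "L to " + str(high) + "L;", "suggested books:", matches)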
When assigning reading in content-area subjects, such as social studies and science,
teachers can also make books available for students within their Lexile range. Students
can then feel confident that they will be able to understand the text and will feel more
comfortable and motivated to read. Readers who are intrinsically motivated to read
can understand what the author is saying because they are curious and want to read more.
In contrast, readers who are not motivated to read are often not engaged and fail to make
the necessary connections for comprehension (Schnick & Knickelbine, 2000).
In addition, teacher effectiveness has been identified as a crucial variable in
improved student test scores (Lane et al., 2009; Luster, 2010). Lane et al. found that
students with teachers who had a greater knowledge of the definition of oral reading
fluency and the skills required to test it demonstrated greater gains in oral reading fluency
than students with teachers who had less knowledge and skill. Professional development
workshops can educate and train teachers on the roles of oral reading fluency and reading
proficiency. After training in how to administer oral reading fluency assessments, teachers
can be introduced to instructional activities that have been shown to improve both oral
reading fluency and reading proficiency and enhance students’ skills and motivation,
such as repeated reading (Therrien & Hughes, 2008), reader’s theatre (Young & Rasinski,
2009), goal setting (Morgan & Sideridis, 2006), and wide reading (Kuhn, 2005).
Finally, greater parental involvement is recommended. Although many parents
are involved daily with their children’s learning, parents often need support to become
involved (Persampieri, Gortmaker, Daly, Sheridan, & McCurdy, 2006; Senge, 2000).
School partnerships with parents have been shown to have strong positive effects on
children's educational experiences. Persampieri et al. documented dramatic growth in oral
reading fluency rates with two struggling readers whose parents were trained to work
with them for 10 to 15 minutes 3 times a week for 3 weeks. In a training session with
each individual parent, one of the researchers described the intervention, modeled it, and
observed the parent implementing the routine. Parents were then given a calendar with
stickers to track intervention dates and reward the child. In addition, each child was
assessed 3 times at school each week. One student’s initial reading of a passage on his
reading level went from 43 wcpm to 61 wcpm 3 weeks later. The other student’s initial
reading of a passage went from 36 wcpm to 60 wcpm.
Rasinski and Stevenson (2005) studied two groups of 30 first graders. In one
group, the parents worked with the children on reading assignments an average of 10
minutes daily for 11 weeks. In the other group, parents did not assist. The results showed
significantly greater gains for the experimental group in oral reading fluency rates than
for the control group.
Parents can be trained to help their children through parent-teacher conferences
and workshops specifically for parents on activities at home that encourage their children
to read. Activities may include small-group demonstrations, written instructions, and
modeling (Persampieri et al., 2006). To verify that parents are correctly implementing the
method and encourage them, teachers can follow up with procedural checklists, audio
tapes, video tapes, and phone calls (Persampieri et al.).
Recommendations for Further Study
Based on the results of this study, I present several recommendations for further
studies with quantitative, qualitative, and mixed methods. Some of these studies could
replicate the current research to test generalizability. Other researchers could extend the
present results for greater understanding of the relationship between students’ ratings on
the DIBELS ORF and state-mandated assessments as diagnostic tools to improve and
accelerate students’ reading proficiency.
Quantitative Studies
In this study and others (Baker et al., 2008; Barger, 2003; Roehrig et al., 2008;
Shapiro et al., 2008; Vander Meer et al., 2005; Wilson, 2005; Wood, 2006), a
correlational relationship was established between oral reading fluency and reading
proficiency as measured by state-mandated reading assessments. However, because the
definition of reading proficiency differs from state to state and grade to grade, it is
important that additional studies be conducted to determine if a statistically significant
relationship exists between oral reading fluency and specific state-mandated reading
assessments (Roehrig et al.).
Thus, I suggest quantitative studies replicating the present study in other states
and at grade levels other than Grade 3. Further, beginning in the 2011-2012 school year, the
TEA will replace the TAKS with the State of Texas Assessments of Academic Readiness
(STAAR; TEA, 2009f). An additional study is suggested to determine if a statistically
significant relationship exists between the DIBELS ORF benchmark rates and the Grade
3 STAAR assessment. Results could be compared with those of the present study not
only for additional understanding but also for determination of which assessment is more
strongly related to the DIBELS ORF.
Other quantitative studies could be conducted to explore the correlational
relationship of the DIBELS ORF to other types of reading proficiency assessments which
may measure various comprehension skills. Specifically, within the district in which this
study was conducted, a study could explore whether a statistically significant relationship
existed between DIBELS ORF benchmark rates and reading comprehension measured
with other assessments, such as istation, which assesses students’ reading ability through
Internet-delivered tests (istation, 2010). Additionally, a study could be conducted to
determine whether the DIBELS ORF or the istation reading comprehension score had the
stronger correlational relationship with the state-mandated reading assessment.
Each spring, the district involved in this study also administers the
Comprehensive Test of Basic Skills (CTBS; Kids IQ Test Center, 2010) to students in
Grades 1 and 2. A quantitative study could be conducted to determine if a statistically
significant relationship existed between DIBELS ORF rates and the comprehensive
reading score on the CTBS. A study could also compare the Grade 2 CTBS
comprehensive reading scores with the Grade 3 DIBELS ORF benchmark rates to
determine which had the stronger correlational relationship to the Grade 3 state-mandated
reading assessment.
Students are motivated to read books when using programs such as the
Accelerated Reader (AR) program (Renaissance Learning, 2010). With AR, students
receive AR points after they read a book and take a test on the computer. Such
assessments frequently contain many explicit questions for which students can find the
specific answers in the book; few implicit questions are asked that require students to
infer or draw conclusions by reading between or beyond the lines. Students who take the
Grade 3 Reading TAKS, in contrast, must demonstrate that they can use the higher-order
comprehension skills of making inferences and drawing conclusions.
Following from these observations, a study could be conducted on students’ use
of higher-order and lower-order comprehension skills. A correlational analysis could
investigate whether the DIBELS ORF and a reading proficiency assessment demonstrate
a greater statistically significant relationship if the assessment used required only lower-
order comprehension skills. Such a study would help teachers determine if instruction in
higher-order comprehension skills needs to be included in the instructional program.
Students could be more strongly motivated to learn if they recognized that the ability to
read and answer explicit questions could help them on the state-mandated reading
assessment. Furthermore, they could be encouraged to learn additional strategies to
improve their chances of answering higher-order comprehension questions on the
state-mandated reading assessment. In addition, struggling readers could benefit from the
development of metacognitive strategies to improve their scores on both the explicit and
implicit questions.
Finally, as Baker et al. (2008) suggested, researchers could use quantitative
studies to focus on students of specific ethnic backgrounds and subpopulations, such as
economically disadvantaged and special education students. In the current study, 56% of
the sample was economically disadvantaged and 54% was Hispanic. Although this
demographic representation was similar to that of all Grade 3 students in Texas, a study
could be conducted in which subpopulations, such as economically disadvantaged,
Hispanic, or special education students, were larger than 64, the minimum sample size
determined by the G*Power Statistical Program for a study of this nature (Faul et al.,
2007). Results could help educators identify factors such as poverty and ethnicity that
might affect test scores. Once factors were identified, intervention programs could be
specifically designed and implemented for students in these subpopulations.
Qualitative Studies
I also recommend qualitative studies to follow up the present research. Using
qualitative studies, researchers could explore factors such as participants’ experiences,
attitudes, perceptions, and understanding of a phenomenon (Creswell, 2009). In the
present study, I showed a statistically significant relationship between oral reading
fluency and reading proficiency for Grade 3 students. A natural next step could be for
researchers to explore teachers’ attitudes toward oral reading fluency and reading
proficiency. Teachers could be asked whether and why they felt that oral reading fluency
and reading proficiency were related, whether and why they felt that time spent improving
oral reading fluency was effective, and whether and why they felt that periods spent
teaching other reading strategies were more effective. Researchers could also investigate
teachers’ perceptions of effective teaching strategies that positively impact oral reading
fluency or reading proficiency. Teachers’ experiences with severely struggling readers in
relation to such strategies could be documented as well.
Attitudes and practices of parents could also be studied. Studies (Persampieri et
al., 2006; Rasinski & Stevenson, 2005) have found that when parents were trained to
work with their children, the students’ oral reading fluency increased. Researchers could
use qualitative studies to explore the attitudes and strategies of parents in assisting their
children with reading, as well as the children’s attitudes and experiences working with
their parents to improve oral reading fluency and/or reading proficiency.
Mixed-Method Studies
Mixed-method studies are effective because they combine quantitative and
qualitative designs. Use of the combination may provide a more comprehensive
understanding of the research problem and answers to the research questions than either
single method (Creswell, 2009). A mixed-methods study could be conducted to
determine how the administration of each of the assessments affects students’ scores. In
this study and the other state studies (Baker et al., 2008; Barger, 2003; Roehrig et al.,
2008; Shapiro et al., 2008; Vander Meer et al., 2005; Wilson, 2005; Wood, 2006), no
one-to-one correspondence was found between oral reading fluency and the state-
mandated reading assessment. That is, none of the studies found that all the students with
a specific rate on the DIBELS ORF benchmark also achieved the same specific score on
the reading assessment.
Although a perfect one-to-one correspondence would be rare, studies could
investigate the factors that may account for the less-than-perfect results. For example, do
White students have a higher chance of passing the state-mandated assessment than
Hispanic students? In situations in which more White students pass than Hispanic
students, are there indications of bias? Researchers could conduct a mixed-method study
to determine if there were statistical differences in demographic characteristics and
passing rates (quantitative) and the views of teachers and administrators as to the reasons
for the results (qualitative).
Researchers could also conduct a mixed-method study to explore the effect of
testing environments on student performance in reading assessments. The DIBELS ORF
benchmarks are individually administered (Good & Kaminski, 2002b). When the
DIBELS ORF is administered, various factors could affect the results, such as the
presence of other students if the assessment was conducted in a corner of the classroom,
or administration in a quiet room with only the teacher and the student. In the first case, a
student could be distracted by the other students in the room and might not read as
fluently as he or she would have in a quiet room alone with the teacher. The TAKS is most
frequently administered in a group setting (TEA, 2009e). Schools and teachers generally
make every possible effort to maintain test security, but factors such as the behavior of
other students in the classroom and eye contact with the teacher could have an effect on a
student’s performance. Using a mixed-method study, researchers could quantitatively
measure variables, such as student scores from both individual administration and group
administration, and then qualitatively explore participants’ and facilitators’ opinions of
the effects on the students of each variable.
Further, researchers could explore teachers' attitudes toward students' reading
levels and toward administration of the test. Although test directions specifically guide
the teacher's verbal expressions, students are often aware of the teacher's body
language. The teacher's body language and assumptions about the
student’s difficulties in reading may affect the student’s oral reading fluency (Childs &
Fung, 2009; Singer & Goldin-Meadow, 2005).
Researchers could conduct mixed-method studies to determine how such factors
might influence the results of the assessments. Quantitative explorations could include
surveys on teachers’ attitudes, as well as correlational analyses between the two tests. In
addition, researchers could ask open-ended questions during individual interviews with
participants on factors such as the tests themselves, distractions during administration,
and perceptions about students’ reading levels.
Conclusion
The purpose of this study was to determine if a statistically significant
relationship existed between oral reading fluency and reading proficiency. The middle-
of-year DIBELS oral reading fluency rates for 155 Grade 3 students in a West Texas
school district were used to measure oral reading fluency. Reading proficiency was
measured using their Reading TAKS scale scores. In this archival study, I used the data
from the 2008-2009 school year to conduct a Pearson correlation analysis. The analysis
demonstrated a statistically significant positive correlation of .66 (p < .01).
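For readers who wish to see the form of this analysis, the following Python sketch computes a Pearson correlation for a small set of paired values. The oral reading fluency rates and scale scores shown are hypothetical; the study's archival data are not reproduced here.

# A minimal sketch of the analysis procedure described above: a Pearson correlation
# between middle-of-year oral reading fluency rates and TAKS reading scale scores.
from scipy.stats import pearsonr

orf_wcpm = [45, 62, 70, 88, 95, 104, 118, 130]                  # hypothetical DIBELS ORF rates
taks_scale = [1980, 2050, 2090, 2150, 2180, 2230, 2310, 2360]   # hypothetical scale scores

r, p_value = pearsonr(orf_wcpm, taks_scale)
print("r =", round(r, 2), "p =", round(p_value, 4))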
My results were comparable to those of nine other studies conducted in seven
other states (Baker et al., 2008; Barger, 2003; Buck & Torgesen, 2003; Roehrig et al.,
2008; Shapiro et al., 2008; Shaw & Shaw, 2002; Vander Meer et al., 2005; Wilson, 2005;
Wood, 2006). All of these studies found a significant correlation between oral reading
fluency and reading proficiency (range .58 to .80). Shaw and Shaw showed a strong
correlation, .80. Researchers in five other studies (Barger, Roehrig et al., Shapiro et al.,
Vander Meer et al., Wilson) reported moderately strong correlations. The differences in
the strength of the correlations might be attributed to factors such as sample size, the
rigor of the assessment used to measure reading proficiency, and the sample demographic
composition.
This study is the first to investigate the relationship in Texas between students’
oral reading fluency and reading proficiency. The study fills a gap in the literature by
investigating whether a statistically significant relationship existed between oral
reading fluency and reading proficiency as defined by the state-mandated assessment in
Texas (TAKS). Positive social change can occur when educators in Texas use the results
of this study to provide them with additional information and tools for helping struggling
readers gain greater fluency and proficiency. Because of the positive correlation between
the DIBELS ORF and the Grade 3 Reading TAKS, educators can use oral reading
fluency as a means of identifying struggling readers as early as Grade 1. Educators can
then provide scientifically based interventions designed to improve students' basic
literacy skills before they take the high-stakes Grade 3 Reading TAKS.
If the needs of struggling readers are addressed in the early grades, they have a
better chance of learning to read (Jenkins et al., 2007) and of academic success on
assessments such as TAKS and in later course work throughout school (Ehri, 2005;
Shapiro et al., 2008). Struggling readers whose needs have been addressed also improve
their chances of graduating from high school (Houge et al., 2007; Rumberger & Palardy,
2005), as well as obtaining employment after they graduate (Katsiyannis et al., 2007).
I made several recommendations for action in the field to improve oral reading
fluency rates and to help students to become more proficient readers. Teachers, especially
in Grades 1 to 3, can use oral reading fluency rates to monitor their students’ progress.
Teachers can also employ teaching strategies such as repeated reading (Martens et al.,
2007), reader’s theatre (Corcoran & Davis, 2005), motivational strategies (Morgan &
Sideridis, 2006), and pair reading (Nes Ferrara, 2005). Teachers can also enlist parental
support, with school-designed training programs for parents (Rasinski & Stevenson,
2005). Further, teachers can encourage students’ wide reading (Kuhn, 2005) and help
them monitor their progress. Lexile levels (Schnick & Knickelbine, 2000) can be used to
determine students’ instructional reading ranges, and teachers can then help students
select reading material on their level, increasing students’ motivation to read.
I recommended several possibilities for further study with quantitative,
qualitative, and mixed-method research. These recommendations included additional
quantitative studies replicating the present research in other states and grade levels, as
well as quantitative studies on the relationship between the DIBELS ORF and
assessments other than the TAKS, such as the new STAAR assessment to be
implemented in Texas in the 2011-2012 school year (TEA, 2009f). I suggested that researchers use qualitative
studies to explore the attitudes of students, teachers, and parents regarding oral reading
fluency and reading proficiency. I suggested further that researchers use mixed-method
studies to explore the differences in the performance of various subgroups and the
reasons and thoughts of teachers and administrators regarding those differences.
Researchers could also conduct mixed-method studies to explore the possible impact of
testing environments on results.
In the current study, I confirmed a statistically significant relationship between
oral reading fluency rates and reading proficiency. Although researchers in other states
(Baker et al., 2008; Barger, 2003; Buck & Torgesen, 2003; Roehrig et al., 2008; Shaw &
Shaw, 2002; Shapiro et al., 2008; Vander Meer et al., 2005; Wilson, 2005; Wood, 2006)
confirmed similar relationships, I was the first to document the relationship between oral
reading fluency rates and reading proficiency for Grade 3 students on the state-mandated
assessment in Texas. Administrators and teachers in districts and schools can use the
study findings to identify struggling readers in the early grades and provide immediate
and appropriate interventions to address these students’ reading needs. Through
identification and remediation, students can not only improve their scores on state-
mandated assessments but also learn to read more proficiently and achieve greater
academic success in the elementary grades and in subsequent grades.
References
Allington, R. L. (2009). What really matters in fluency: Research-based practices across
the curriculum. Boston, MA: Pearson.
Applegate, A. J., Applegate, M. D., McGeehan, C. M., Pinto, C. M., & Kong, A. (2009).
The assessment of thoughtful literacy in NAEP: Why the states aren’t measuring
up. Reading Teacher, 62(5), 372-381. doi:10.1598/RT.62.5.1
Baker, S. K., Smolkowski, K., Katz, R., Fien, H., Seeley, J. R., Kame’enui, E. J., & Beck,
C. T. (2008). Reading fluency as a predictor of reading proficiency in low-
performing, high-poverty schools. School Psychology Review, 37(1), 18-37.
Barger, J. (2003). Comparing the DIBELS oral reading fluency indicator and the North
Carolina end of grade reading assessment (Technical Report). Asheville, NC:
North Carolina Teacher Academy.
Begeny, J., Daly III, E., & Valleley, R. (2006). Improving oral reading fluency through
response opportunities: A comparison of phrase drill error correction with
repeated readings. Journal of Behavioral Education, 15(4), 229-235.
doi:10.1007/s10864-006-9028-4
Buck, J., & Torgesen, J. (2003). The relationship between performance on a measure of
oral reading fluency and performance on the Florida Comprehensive Assessment
Test, I. Tallahassee, FL: Florida Center for Reading Research.
Chard, D. J., Stoolmiller, M., Harn, B. A., Wanzek, J., Vaughn, S., Linan-Thompson,
S., et al. (2008). Predicting reading success in a multilevel schoolwide reading
model. Journal of Learning Disabilities, 41(2), 174-188.
doi:10.1177/0022219407313588
Childs, R. A., & Fung, L. (2009). “The first year, they cried”: How teachers address test
stress. Canadian Journal of Educational Administration and Policy, 96, 1-14.
Cline, F., Johnstone, C., & King, T. (2006). Focus group reactions to three definitions of
reading (as originally developed in support of NARAP goal 1). Minneapolis, MN:
National Accessible Reading Assessment Projects.
Colorado Department of Education. (2009). Colorado model content standards for
reading and writing. Retrieved August 18, 2010, from http://www.cde.state.co.us/
cdeassess/documents/OSA/standards/read.htm
Corcoran, C. A., & Davis, A. D. (2005). A study of the effects of readers’ theatre on
second and third grade special education students’ fluency growth. Reading
Improvement, 42(2), 105-111.
Creswell, J. W. (2008). Educational research: Planning, conducting, and evaluating
quantitative and qualitative research (3rd ed.). Upper Saddle River, NJ: Pearson
Merrill Prentice-Hall.
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods
approaches (3rd ed.). Los Angeles, CA: Sage.
Cusumano, D. L. (2007). Is it working? An overview of curriculum based measurement
and its uses for assessing instructional, intervention, or program effectiveness.
Behavior Analysis Today, 8(1), 24-34.
Daane, M. C., Campbell, J. R., Grigg, W.S., Goodman, M. J., & Oranje, A. (2005).
Fourth-grade students reading aloud: NAEP 2002 special study of oral reading
(NCES Report No. 2006-469). U.S. Department of Education. Washington, DC:
U.S. Department of Education.
Deeney, T. A. (2010). One-minute fluency measures: Mixed messages in assessment and
instruction. Reading Teacher, 63(6) 440-450. doi:10.1598/RT.63.6.1
Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Assessment Committee.
(2002). Analysis of reading assessment measures. Retrieved August 20, 2010,
from https://dibels.uoregon.edu/techreports
Edformation. (2004). AIMSweb progress monitoring and assessment system. Retrieved
August 18, 2010, from http://www.edinformation.com
Ehri, L. C. (2005). Learning to read words: Theory, findings, and issues. Scientific
Studies of Reading, 9(2), 167-188.
Ehri, L. C., & McCormick, S. (1998). Phases of word learning: Implications for
instruction with delayed and disabled readers. Reading and Writing Quarterly,
14(2), 135-164.
Eldredge, J. (2005). Foundations of fluency: An exploration. Reading Psychology, 26(2),
161-181. doi:10.1080/02702710590930519
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible
statistical power analysis for the social, behavioral, and biomedical sciences.
Behavior Research Methods, 39, 175-191.
Fletcher, J. M., Francis, D. J., Morris, R. D., & Lyon, G. R. (2005). Evidence-based
assessment of learning disabilities in children and adolescents. Journal of Clinical
Child and Adolescent Psychology, 34(3), 506-522.
Florida Department of Education. (2005). FCAT handbook: A resource for educators.
Tallahassee, FL: Author.
Francis, D. J., Fletcher, J. M., Stuebing, K. K., Lyon, G. R., Shaywitz, B. A., &
Shaywitz, S. E. (2005). Psychometric approaches to the identification of LD:
IQ and achievement scores are not sufficient. Journal of Learning Disabilities,
38(2), 98-108.
Fuchs, L., Fuchs, D., Hosp, M., & Jenkins, J. (2001). Oral reading fluency as an indicator
of reading competence: A theoretical, empirical, and historical analysis. Scientific
Studies of Reading, 5(3), 239-256.
Gersten, R., & Dimino, J. A. (2006). RTI (response to intervention): Rethinking special
education for students with reading difficulties (yet again). Reading Research
Quarterly, 41(1), 99-108.
Glenn, D. (2007, February 2). Reading for profit. Chronicle of Higher Education, 53
(22), p. A8. Retrieved August 18, 2010, from
http://chronicle.com/weekly/v53/i22/22a00801.htm
Goffreda, C. T., Diperna, J. C., & Pedersen, J. A. (2009). Preventive screening for early
readers: Predictive validity of the Dynamic Indicators of Basic Early Literacy
skills (DIBELS). Psychology in the Schools, 46(6), 539-552.
doi:10.1002/pits.20396
Good, R. H., & Kaminski, R. A. (2002a). DIBELS oral reading fluency passages for first
through third grades (Technical Report No. 10). Eugene, OR: University of
Oregon.
Good, R. H., & Kaminski, R. A. (Eds.). (2002b). Dynamic indicators of basic early
literacy skills (6th ed.). Eugene, OR: Institute for the Development of Educational
Achievement. Retrieved August 20, 2010, from http://dibels.uoregon.edu/
Goodman, K. S. (2006). The truth about DIBELS: What it is, what it does. Portsmouth,
NH: Heinemann.
Gravetter, F. J., & Wallnau, L. B. (2005). Essentials of statistics for the behavioral
sciences (5th ed.). Belmont, CA: Wadsworth/Thomson Learning.
Harcourt Brace Educational Measurement. (2002). Stanford Achievement Test (10th ed.).
San Antonio, TX: Psychological Corporation.
Harn, B. A., Stoolmiller, M., & Chard, D. J. (2008). Measuring the dimensions of
alphabetic principle on the reading development of first graders. Journal of
Learning Disabilities, 41(2), 143-157. doi:10.1177/0022219407313585
Harrison, C. (2008). The brown bear and the American eagle of reading research: Steven
A. Stahl and Michael Pressley remembered. Reading Research Quarterly, 43(2),
188-198. doi:10.1598/RRQ.43.2.5
Hasbrouck, J., & Tindal, G. A. (1992). Curriculum-based oral reading fluency norms for
students in grades 2-5. Teaching Exceptional Children, 24(3), 41-44.
Hasbrouck, J., & Tindal, G. A. (2006). Oral reading fluency norms: A valuable
assessment tool for reading teachers. Reading Teacher, 59(7), 636-644.
Hiebert, E. (2005). The effects of text difficulty on second graders' fluency
development. Reading Psychology, 26(2), 183-209.
doi:10.1080/02702710590930528
Hirsch, E. D. (2003). Reading comprehension requires knowledge—Of words and the
world. American Educator, 27(1), 10-13, 16-22, 28-29, 48.
Hosp, M. K., & Fuchs, L. S. (2005). Using CBM as an indicator of decoding, word
reading, and comprehension: Do the relations change with grade? School
Psychology Review, 34(1), 9-26.
Houge, T. T., Peyton, D., Geier, C., & Petrie, B. (2007). Adolescent literacy tutoring:
Face-to-face and via webcam technology. Reading Psychology, 28(3), 283-300.
Hudson, R. F., Lane, H. B., & Pullen, P. C. (2005). Reading fluency assessment and
instruction: What, why, and how? Reading Teacher, 58(8), 702-714.
doi:10.1598/RT.58.8.1
istation. (2010). ISIP: istation's Indicators of Progress. Retrieved August 18,
2010, from http://www2.istation.com/products/isip.asp
Jenkins, J. R., Hudson, R. F., & Johnson, E. S. (2007). Screening for at-risk readers in a
response to intervention framework. School Psychology Review, 36(4), 582-600.
Jimerson, S. R., Anderson, G. E., & Whipple, A. D. (2002). Winning the battle and losing
the war: Examining the relation between grade retention and dropping out of high
school. Psychology in the Schools, 39(4), 441-457. doi:10.1002/pits.10046
Kame’enui, E. J., & Simmons, D. C. (2001). Introduction to this special issue: The DNA
of reading fluency. Scientific Studies of Reading, 5(3), 203-210.
Katsiyannis, A., Zhang, D., Ryan, J. B., & Jones, J. (2007). High-stakes testing and
students with disabilities: Challenges and promises. Journal of Disability Policy
Studies, 18(3), 160-167.
Katz, L. A., Stone, C. A., Carlisle, J. F., Corey, D. L., & Zeng, J. (2008). Initial progress
of children identified with disabilities in Michigan’s Reading First schools.
Council for Exceptional Children, 74(2), 235-256.
Katzir, T., Kim, Y., Wolf, M., O’Brien, B., Kennedy, B., Lovett, M., et al. (2006).
Reading fluency: The whole is more than the parts. Annals of Dyslexia, 56(1),
51-82.
Kids IQ Test Center. (2010). Comprehensive Test of Basic Skills. Retrieved August 20,
2010, from http://www.kids-iq-tests.com/CTBS.html
Kuhn, M. (2005). A comparative study of small group fluency instruction. Reading
Psychology, 26(2), 127-146. doi:10.1080/02702710590930492
LaBerge, D., & Samuels, S. J. (1974). Toward a theory of automatic information
processing in reading. Cognitive Psychology, 6, 293-323.
Lane, H. B., Hudson, R. F., Leite, W. L., Kosanovich, M. L., Strout, M. T., Fenty, N. S.,
& Wright, T. L. (2009). Teacher knowledge about reading fluency and indicators
of students’ fluency growth in Reading First schools. Reading and Writing
Quarterly, 25, 57-86. doi:10.1080/10573560802491232
Leslie, L., & Caldwell, J. (1995). Qualitative reading inventory (2nd ed.). Reading, MA:
Addison-Wesley Longman.
Luster, J. (2010). Why states should require a teaching performance assessment and a
subject matter assessment for a preliminary teaching credential. Research in
Higher Education Journal, 8, 1-16.
Manzo, K. (2005). National clout of DIBELS test draws scrutiny. Education Week,
25(5), 1-12. Retrieved August 18, 2010, from
http://www.edweek.org/ew/articles/2005/09/28/05dibels.h25.html
Marcell, B. (2007). Traffic light reading: Fostering the independent usage of
comprehension strategies with informational text. Reading Teacher, 60(8),
778-781. doi:10.1598/RT.60.8.8
Martens, B., Eckert, T., Begeny, J., Lewandowski, L., DiGennaro, F., Montarello, S., et
al. (2007). Effects of a fluency-building program on the reading performance of
low-achieving second and third grade students. Journal of Behavioral Education,
16(1), 38-53. doi:10.1007/s10864-006-9022-x
MetaMetrics. (2010). What is a Lexile measure? Retrieved August 18, 2010, from
http://www.lexile.com/ about-lexile/lexile-overview/
Miller, J., & Schwanenflugel, P. (2006). Prosody of syntactically complex sentences in
the oral reading of young children. Journal of Educational Psychology, 98(4),
839-853. doi:10.1037/0022-0663.98.4.839
Morgan, P. I., Farkas, G., & Hibel, J. (2008). Matthew effects for whom? Learning
Disability Quarterly, 31, 187-198.
Morgan, P. L., & Sideridis, G. D. (2006). Contrasting the effectiveness of fluency
interventions for students at risk for learning disabilities: A multilevel random
coefficient modeling meta-analysis. Learning Disabilities Research and Practice,
21(4), 191-210.
Musti-Rao, S., Hawkins, R., & Barkley, E. (2009). Effects of repeated reading on the oral
reading fluency of urban fourth-grade students: Implications for practice.
Preventing School Failure, 54(1), 12-23.
National Accessible Reading Assessment Project. (NARAP). (2006). Defining reading
proficiency for accessible large-scale assessments: Some guiding principles and
issues. Minneapolis, MN: Author.
National Institute of Child Health and Human Development. (NICHHD). (2000). Report
of the National Reading Panel. Teaching children to read: An evidence-based
assessment of the scientific research literature on reading and its implications for
reading instruction (NIH Publication No. 00-4769). Washington, DC: Author.
Nes Ferrara, S. (2005). Reading fluency and self efficacy: A case study. International
Journal of Disability, Development and Education, 52(3), 215-231.
No Child Left Behind Act of 2001, Pub. L. No. 107-110, § 115 Stat. 1425 (2002).
Ouellette, G. P. (2006). What’s meaning got to do with it: The role of vocabulary in word
reading and reading comprehension. Journal of Educational Psychology, 98(3),
554-566.
Pearson, P. D. (2006). Foreword. In K. S. Goodman (Ed.), The truth about DIBELS:
What it is, what it does (pp. v-xix). Portsmouth, NH: Heinemann.
Persampieri, M. P., Gortmaker, V., Daly, E. J., Sheridan, S. M., & McCurdy, M. (2006).
Promoting parent use of empirically supported reading interventions: Two
experimental investigations of child outcomes. Behavioral Interventions, 21,
31-57. doi:10.1002/bin.210
Pikulski, J., & Chard, D. (2005). Fluency: Bridge between decoding and reading
comprehension. Reading Teacher, 58, 510-519. doi:10.1598/RT.58.6.2
Pressley, M., Gaskins, I. W., & Fingeret, L. (2006). Instruction and development of
reading fluency in struggling readers. In S. J. Samuels & A. E. Farstrup (Eds.),
What research has to say about fluency instruction (pp. 24-46). Newark, DE:
International Reading Association.
Pressley, M., Hilden, K., & Shankland, R. (2005). An evaluation of end grade-3 Dynamic
Indicators of Basic Early Literacy Skills (DIBELS): Speed reading without
comprehension, predicting little. East Lansing, MI: Literacy Achievement
Research Center.
Rasinski, T. V. (2006). Reading fluency instruction: Moving beyond accuracy,
automaticity, and prosody. Reading Teacher, 59(7), 704-706.
Rasinski, T. V., & Stevenson, B. (2005). The effects of fast start reading: A fluency
based home involvement reading program, on the reading achievement of
beginning readers. Reading Psychology, 26(2), 109-125.
doi:10.1080/02702710590930483
Read Naturally. (2002). Reading fluency monitor. Retrieved August 18, 2010, from
www.readnaturally.com
Renaissance Learning. (2010). Accelerated Reader. Retrieved August 18, 2010, from
http://www.renlearn.com/ar/
Riedel, B. (2007). The relation between DIBELS, reading comprehension, and
vocabulary in urban first-grade students. Reading Research Quarterly, 42(4),
546-562.
Roehrig, A., Petscher, Y., Nettles, S., Hudson, R., & Torgesen, J. (2008). Accuracy of the
DIBELS oral reading fluency measure for predicting third grade reading
comprehension outcomes. Journal of School Psychology, 46(3), 343-366.
doi:10.1016/j.jsp.2007.06.006
Rosenthal, J., & Ehri, L. C. (2008). The mnemonic value of orthography for vocabulary
learning. Journal of Educational Psychology, 100(1), 175-191.
Rumberger, R. W., & Palardy, G. J. (2005). Test scores, dropout rates, and transfer rates
as alternative indicators of high school performance. American Educational
Research Journal, 42(1), 3-42.
Samuels, S. J. (2006). Toward a model of reading fluency. In S. J. Samuels & A. E.
Farstrup (Eds.), What research has to say about fluency instruction (pp. 24-46).
Newark, DE: International Reading Association.
Samuels, S. (2007). The DIBELS tests: Is speed of barking at print what we mean by
reading fluency? Reading Research Quarterly, 42(4), 563-566.
Samuels, S. J., Ediger, K. M., Willcutt, J. R., & Palumbo, T. J. (2005). In S. E. Israel, C.
C. Block, K. L. Bauserman, & K. Kinnucan-Welsch (Eds.), Metacognition in
literacy learning: Theory, assessment, instruction, and professional development
(pp. 41-59). Mahwah, NJ: Lawrence Erlbaum Associates.
Schnick, T., & Knickelbine, M. (2000). The Lexile framework: An introduction for
educators. Durham, NC: MetaMetrics.
Schwanenflugel, P., Meisinger, E., Wisenbaker, J., Kuhn, M., Strauss, G., & Morris, R.
(2006). Becoming a fluent and automatic reader in the early elementary school
years. Reading Research Quarterly, 41(4), 496-522.
Senge, P. (2000). Schools that learn: A fifth discipline fieldbook for educators, parents,
and everyone who cares about education. New York, NY: Doubleday.
Shapiro, E. S., Solari, E., & Petscher, Y. (2008). Use of a measure of reading
comprehension to enhance prediction on the state high stakes assessment.
Learning and Individual Differences, 18(3), 316-328.
Shaw, D. M., & Berg, M. A. (2009). Jail participants actively study words. Journal of
Correctional Education, 60(2), 100-119.
Shaw, R., & Shaw, D. (2002). DIBELS’ oral reading fluency-based indicators of third
grade reading skills for Colorado State Assessment Program (Technical Report).
Eugene, OR: University of Oregon.
Shaywitz, S. (2003). Overcoming dyslexia: A new and complete science-based program
for reading problems at any level. New York, NY: Vintage Books.
Simmons, D. C., Coyne, M. D., Kwok, O., McDonagh, S. Harn, B. A., & Kame’enui, E.
J. (2008). Indexing response to intervention: A longitudinal study of reading risk
from kindergarten through grade 3. Journal of Learning Disabilities, 41(2), 158-
173. doi:10.1177/0022219407313587
Singer, M. A., & Goldin-Meadow, S. (2005). Children learn when their teacher's gestures
and speech differ. Psychological Science, 16(2), 85-89. doi: 10.1111/j.0956-
7976.2005.00786.x
Speece, D., & Ritchey, K. (2005). A longitudinal study of the development of oral
reading fluency in young children at risk for reading failure. Journal of Learning
Disabilities, 38(5), 387-399.
Stanovich, K. E. (1998). Progress in understanding reading: Scientific foundations and
new frontiers. New York, NY: The Guilford Press.
Statistical Package for the Social Sciences. (SPSS). (2009). Predictive analysis software
17.0. Chicago, IL: Author.
Stecker, P. M., Lembke, E. S., & Foegen, A. (2008). Using progress-monitoring data to
improve instructional decision making. Preventing School Failure, 52(2), 48-58.
Success for All Foundation. (2010). About SFAF. Retrieved August 18, 2010, from
http://www.successforall.com/
Texas Education Agency. (TEA). (2004). Grade 3 reading Texas Assessment of
Knowledge and Skills information booklet. Austin, TX: Author.
Texas Education Agency. (TEA). (2006). TAKS performance level descriptors. Retrieved
August 18, 2010, from http://www.tea.state.tx.us/
index3.aspx?id=3222&menu_id=793
Texas Education Agency. (TEA). (2008). Grade placement committee manual: For
grade advancement requirements of the student success initiative. Retrieved
March 18, 2010, from http://ritter.tea.state.tx.us/student.assessment/
resources/ssi/GPCManual.pdf
Texas Education Agency. (TEA). (2009a). TAKS campus aggregate results page.
Retrieved August 18, 2010, from http://www.tea.state.tx.us/index3.aspx?
id=3216&menu_id=793
Texas Education Agency. (TEA). (2009b). Texas Assessment of Knowledge and Skills
raw score conversion table reading—March 2009 administration grade 3.
Retrieved August 18, 2010, from http://ritter.tea.state.tx.us/student.assessment/
scoring/convtables/2009/taks/mar09_g03_read.pdf
Texas Education Agency. (TEA). (2009c). Texas Assessment of Knowledge and Skills
statewide summary report. Retrieved August 18, 2010, from
http://ritter.tea.state.tx.us/student.assessment/reporting/ results/summary/
2009/taks_mar09_g03.pdf
Texas Education Agency. (TEA). (2009d). Texas Assessment of Knowledge and Skills—
Summary report, group performance: District. Austin, TX: Author.
Texas Education Agency. (TEA). (2009e). Texas Student Assessment Program,
Accommodations manual, 2009-2010. Retrieved August 18, 2010, from
http://ritter.tea.state.tx.us/ student.assessment/resources/accommodations/
AccommManual_2009_10.pdf
Texas Education Agency. (TEA). (2009f). 2009 accountability manual. Retrieved August
18, 2010, from http://ritter.tea.state.tx.us/perfreport/account/2009/
manual/index.html
Texas Education Agency (TEA) & Pearson. (2008). Technical digest for the academic
year 2007-2008. Retrieved August 18, 2010, from
http://www.tea.state.tx.us/index3.aspx? id=4326&menu_id=793
Therrien, W., Gormley, S., & Kubina, R. (2006). Boosting fluency and comprehension to
improve reading achievement. Teaching Exceptional Children, 38(3), 22-26.
Therrien, W., & Hughes, C. (2008). Comparison of repeated reading and question
generation on students’ reading fluency and comprehension. Learning
Disabilities: A Contemporary Journal, 6(1), 1-16.
Therrien, W., & Kubina, R. (2007). The importance of context in repeated reading.
Reading Improvement, 44(4), 179-188.
Torgesen, J. K., & Hudson, R. F. (2006). Reading fluency: Critical issues for struggling
readers. In S. J. Samuels and A. Farstrup (Eds.), What research has to say about
fluency instruction (pp. 130-158). Newark, DE: International Reading
Association.
U. S. Department of Education. (2007). The nation’s report card in reading 2007:
National assessment of educational progress at grades 4 and 8. Institute of
Education Sciences, National Center for Education Statistics (NCES Publication
No. 2007-496). Retrieved August 20, 2010, from http://nationsreportcard.gov/
reading_2007/
U.S. Department of Education. (2010). Reading First. Retrieved August 18, 2010, from
http://www2.ed.gov/programs/readingfirst/index.html
University of Houston. (1999). Technical report: Texas Primary Reading Inventory.
Retrieved August 18, 2010, from http://www.tpri.org/ Researcher%5FInformation
University of Oregon Center on Teaching and Learning. (2008). DIBELS data
system: DIBELS benchmark goals three assessment periods per year. Eugene,
OR: University of Oregon.
Vander Meer, C. D., Lentz, F. E., & Stollar, S. (2005). The relationship between oral
reading fluency and Ohio proficiency testing in reading (Technical Report).
Eugene, OR: University of Oregon.
Walczyk, J., & Griffith-Ross, D. (2007). How important is reading skill fluency for
comprehension? Reading Teacher, 60(6), 560-569. doi:10.1598/RT.60.6.6
Watson, J. D. (1968). The double helix: A personal account of the discovery of the
structure of DNA. New York, NY: Atheneum.
Wilson, J. (2005). The relationship of Dynamic Indicators of Basic Early Literacy Skills
(DIBELS) oral reading fluency to performance on Arizona Instrument to Measure
Standards (AIMS). Tempe, AZ: Tempe School District No. 3.
Wood, D. E. (2006). Modeling the relationship between oral reading fluency and
performance on a statewide reading test. Educational Assessment, 11(2), 85-104.
Wood, F., Hill, D., Meyer, M., & Flowers, D. (2005). Predictive assessment of reading.
Annals of Dyslexia, 55(2), 193-216.
Woodcock, R. M. (1973). Woodcock Reading Mastery Test-Revised. Circle Pines, MN:
American Guidance Corp.
Worthy, J., & Broaddus, K. (2002). Fluency beyond the primary grades: From group
performance to silent, independent reading. Reading Teacher, 55(4), 334-343.
Young, C., & Rasinski, T. (2009). Implementing reader’s theatre as an approach to
classroom fluency. Reading Teacher, 63(1), 4-13. doi: 10.1598/RT.63.1.1
Appendix A: Permission to Collect Data
Appendix B: Data Use Agreement
Appendix C: Permission to Use DIBELS
Appendix D: Permission to Use TAKS
TEXAS EDUCATION AGENCY - COPYRIGHT LICENSE AND PERMISSION
FORM
Applicant Information (to be completed by applicant):
Name: Kathy Jones
Title: Doctoral Student
City: Odessa
Zip: 79764
Email: jonesks1956@yahoo.com
Company: Walden University
Address: 9060 W. University
State/Province: TX
Country: USA
Phone: 432.230.0130
Details of Request (to be completed by applicant):
1. Title/Author of Works (the “Works”) for Which License is Sought (for example,
Texas Assessment of Knowledge and Skills (TAKS), G/T Teacher Toolkit II, Videos
from series “Accommodations and Modifications in CTE Classroom Instruction”):
Use of TAKS as a resource for completing doctoral study at Walden University
2. Indicate How Works will be Used: Business/Commercial  Non-Profit/Government  Personal Use
3. Briefly State Purpose for Which License is sought:
I am a doctoral student at Walden University. My research will determine if a statistically
significant relationship exists between oral reading fluency (as measured by the middle-
of-year Grade 3 DIBELS benchmark) and reading proficiency (as measured by the scale
score on the February 2009 Grade 3 Reading TAKS). I would like to obtain permission
from TEA to use TAKS in my study.
As you requested in our phone conversation, I am attaching a copy of my proposal with
all the references to TAKS highlighted in green. The specific websites and resources that
I referenced are listed below:
Texas Education Agency. (2004). Grade 3 reading Texas Assessment of Knowledge and
Skills information booklet. Austin, TX: Author.
Texas Education Agency. (TEA). (2006). TAKS performance level descriptors.
Retrieved from http://www.tea.state.tx.us/index3.aspx?id=3222&menu_id=793
Texas Education Agency. (TEA). (2009a). TAKS campus aggregate results page.
Retrieved from http://ritter.tea.state.tx.us/cgi/sas/broker
Texas Education Agency. (TEA). (2009b). Texas Assessment of Knowledge and Skills
raw score conversion table reading—March 2009 administration grade 3. Retrieved from
http://ritter.tea.state.tx.us/student.assessment/scoring/
convtables/2009/taks/mar09_g03_read.pdf
Texas Education Agency. (2009c). Texas Assessment of Knowledge and Skills statewide
summary report. Retrieved from http://ritter.tea.state.tx.us/student.assessment/
reporting/results/summary/2009/taks_mar09_g03.pdf
Texas Education Agency. (TEA). (2009d). Texas Assessment of Knowledge and Skills—
Summary report, group performance: District. Austin, TX: Author.
Texas Education Agency. (TEA). (2009e). Texas Student Assessment Program,
Accommodations manual, 2009-2010. Retrieved from http://ritter.tea.state.tx.us/
student.assessment/resources/accommodations/AccommManual_2009_10.pdf
Texas Education Agency (TEA) & Pearson. (2008). Technical digest for the academic
year 2007-2008. Retrieved from http://ritter.tea.state.tx.us/student.assessment/
resources/techdigest/2008/table_of_contents.pdf
My committee has approved my proposal. Several people have indicated that many
doctoral students complete all the requirements of their dissertation within six months of
this point. Consequently, my current goal for completion is December 2010. On the
phone you indicated that you would probably extend permission to use TAKS in my
study until March 2011.
Thank you for your time!
Kathy Jones
4. Identify the specific URL(s) or website(s) where the “Works” can be located:
http://www.tea.state.tx.us/index3.aspx?id=3222&menu_id=793
http://ritter.tea.state.tx.us/cgi/sas/broker
The Texas Education Agency is dedicated to improving educational performance. It is
the owner of various proprietary materials such as the Works listed above. Any use of
the Works is subject to the attached Terms of Use and the provisions listed below. Any
use shall include the following notice: Copyright © 2010. Texas Education Agency.
All Rights Reserved.
Terms of Copyright License Authorized, if any (to be completed by Texas Education
Agency)
A. Authorized Use (to be completed by Texas Education Agency): Authorized to use,
reproduce, display, publish, and distribute TEA materials in conjunction with Ms. Jones'
doctoral archival study at Walden University.
B. Additional Restrictions: Use by applicant of TEA copyrighted material is limited to
graduate study only. You may not charge a fee for your study, nor market or sell your
study containing TEA copyrighted materials without a license agreement from TEA.
C. Fee for Use of Works: None
D. Term (check one): From date of issue and ending on March 31, 2011
E. Licensed Territory: Worldwide Country/Province U.S. & its possessions and
territories State(s) (please specify): Texas
F. Payment Schedule: None
G. Reporting: None
COPYRIGHT LICENSE AFFIRMATION OR DENIAL (To be completed by the
Texas Education Agency)
I, the undersigned, on behalf of the Texas Education Agency, grant a license for
the person or entity identified above to use the Works on a non-exclusive,
nontransferable, non-assignable basis pursuant to the above and the Terms of Use
set forth below.
I am unable to grant a copyright license for use of the specified Works.
By Texas Education Agency
Printed Name: Robert N. Jocius
Title: Manager, Office of Intellectual Property
Date: 6/23/2010
By Applicant
Printed Name: Kathy Jones
PLEASE RETURN THIS FORM TO: Robert N. Jocius
Manager, Office of Intellectual
Property, Room 5-125C
Texas Education Agency
1701 N. Congress Avenue
Austin, TX 78701
Copyrights@tea.state.tx.us
Ph.: (512) 463-9270
TERMS OF USE
1. Definitions.
“Agreement” means the above Copyright License and Permission Form and these
Terms of Use.
“Authorized Use” means the purpose for which the Works are to be used and the
approved use of the Works granted by TEA.
“Intellectual property rights” means the worldwide intangible legal rights or
interests evidenced by or embodied in: (a) any idea, design, concept, method,
process, technique, apparatus, invention, discovery, or improvement, including any
patents, trade secrets, and know-how; (b) any work of authorship, including any
copyrights, moral rights or neighboring rights; (c) any trademark, service mark,
trade dress, trade name, or other indicia of source or origin; (d) domain name
registrations; and (e) any other similar rights. The intellectual property rights of a
party include all worldwide intangible legal rights or interests that the party may
have acquired by assignment or license with the right to grant sublicenses.
“Licensee” means the applicant specified above, if applicant’s Copyright License
and Permission Form is approved by TEA.
“Licensed Territory” means the specific Territory (district, area or geographic
location) in which Licensee is located and for which the license, if any, is granted by
TEA.
“TEA” means the Texas Education Agency.
“Works” means the works of authorship, written materials or other tangible items
specifically set forth above.
2. Grant of License.
For the consideration set forth above, TEA grants to Licensee, and Licensee accepts
from TEA, a revocable, non-exclusive, non-transferable, non-assignable license to
utilize the Works on or in connection with the Authorized Use for educational
purposes in the Licensed Territory for the Term specified above.
3. Term and Termination.
(a) The license granted herein will be effective from the date the Agreement is
signed by TEA, and shall continue in effect for the Term identified above, unless
sooner terminated in TEA’s sole discretion.
(b) If Licensee breaches any of its obligations under the terms of this Agreement,
TEA may terminate this Agreement effective immediately, without prejudice to any
other rights or remedies TEA may have, upon giving written notice of termination
to Licensee.
(c) If Licensee attempts to assign, sublicense or subcontract any of its rights under
this Agreement, without the prior written permission of TEA, TEA may terminate
this Agreement effective immediately, without prejudice to any other rights or
remedies TEA may have, upon giving written notice of termination to Licensee.
Notwithstanding any of the foregoing, TEA has the right to terminate this
Agreement, with or without cause, upon giving thirty (30) days written notice of its
intent to terminate the Agreement.
(d) To the extent permitted by law, Licensee agrees to protect and treat all
information provided by TEA to Licensee relating to the Works or this Agreement
as confidential, proprietary and trade secret information. Licensee will not use,
disclose, or cause to be used or disclosed, such confidential and trade secret
information, or otherwise impair or destroy the value of such confidential and trade
secret information. Licensee agrees not to disclose or cause to be disclosed the terms
of this Agreement, except as required by law.
4. Compensation.
(a) Licensee will furnish to TEA a full, complete and accurate report showing all
gross and net revenue received regarding the Works according to the reporting
requirements found in Section G. of this license.
(b) Licensee will keep accurate books of accounts and records covering all
transactions relating to this Agreement and the Works for all years during which
this Agreement is in effect, and will keep such books for a period of five (5) years
after the expiration or termination of this Agreement. TEA and the State of Texas
auditor, or their representatives, will have the right to examine such books of
account and records and other documents and material in Licensee’s possession or
under its control insofar as they relate to the Agreement or the Works.
5. Indemnification.
For local educational agencies (LEAs), regional education service centers (ESCs),
and institutions of higher education (IHEs): Licensee, to the extent permitted by
law, shall hold TEA harmless from and shall indemnify TEA against any and all
claims, demands, and causes of action of whatever kind or nature asserted by any
third party and occurring or in any way incident to, arising from, related to, or in
connection with, any acts of Licensee in performance of the Agreement or in
connection with Licensee’s use of the Works.
For all other grantees, subgrantees, contractors, and subcontractors, including
nonprofit organizations and for-profit businesses: Licensee shall hold TEA harmless
from and shall indemnify TEA against any and all claims, demands, and causes of
action of whatever kind or nature asserted by any third party and occurring or in
any way incident to, arising from, related to, or in connection with, any acts of
Licensee in performance of the Agreement or in connection with Licensee’s use of
the Works.
6. Intellectual Property Rights.
(a) As between TEA and Licensee, TEA retains all right, title and interest in and to
the Works, and any derivative works thereof, and any intellectual property rights
associated therewith, including goodwill, and regardless of whether registered or
not. Licensee’s use of the Works, or the intellectual property associated therewith,
and the goodwill therein, inures to the benefit of TEA. Licensee has no rights in or
to the Works or the intellectual property associated therewith, other than the right
of use as expressly granted herein.
(b) To the extent that Licensee adds any additional materials to the Works, or
creates any derivative works to the Works, Licensee agrees that the additional
materials or derivative works are, upon creation, works made for hire and the sole
property of TEA. If the additional materials or derivative works are, under
applicable law, not considered works made for hire, Licensee hereby assigns to TEA
all worldwide ownership of all rights, including all intellectual property rights, in
and to the additional materials or derivative works to the Works, without the
necessity of any further consideration, and TEA can obtain and hold in its own
name all such rights. Licensee agrees to maintain written agreements with all
officers, directors, employees, agents, representatives and subcontractors engaged
by Licensee regarding this Agreement, granting Licensee rights sufficient to support
the performance and grant of rights to TEA by Licensee. Copies of such agreements
shall be provided to TEA promptly upon request.
(c) TEA, in its sole discretion, may procure registration of the intellectual property
rights in and to the Works, or any derivative works thereof. Licensee will not seek to
register or secure any intellectual property rights in and to the Works, or any
derivative works thereof. Licensee shall notify TEA in writing of any infringements
or imitations by others of the Works to which it becomes aware. TEA may then, in
its sole discretion, commence or prosecute in its own name any claims or suits for
infringement of the Works or the intellectual property rights associated therewith,
or any derivative works thereto.
(d) Licensee shall not, during the Term of this Agreement, or at any time thereafter,
dispute or contest, directly or indirectly, the right, title or interest of TEA in or to
the Works, the intellectual property rights therein, the goodwill reflected thereby, or
as to the validity of this Agreement or the license terms therein. The provisions of
this section 6 shall survive the expiration or termination of the Agreement.
(e) Licensee will legibly and prominently display the following copyright notice in
connection with all Authorized Use of the Works: Copyright © 2010. Texas
Education Agency. All Rights Reserved.
7. Quality Standards.
(a) Licensee acknowledges that if the Authorized Use of the Works by Licensee were
of inferior quality in design, material or workmanship, the substantial goodwill
which TEA has built up and now possesses in the Works would be impaired.
Accordingly, Licensee will ensure that its use of the Works will meet or exceed any
and all relevant industry standards, and are of such style, appearance and quality as
will be reasonable, adequate and suited to their exploitation and to protect and
enhance such goodwill.
(b) To ensure that appropriate standards of style, appearance, and quality are
maintained, Licensee will provide samples to TEA of Licensee’s proposed use of the
Works prior to distribution to the intended recipient, for TEA’s approval.
8. Student Information.
Licensee understands that any unauthorized disclosure of confidential student
information is illegal as provided in the federal Family Educational Rights and
Privacy Act of 1974 (FERPA), 20 USC, Section 1232g, and implementing federal
regulations found in 34 CFR, Part 99.
9. Miscellaneous.
(a) All notices and statements provided for herein will be in writing and together
with all payments provided for herein will be mailed to the addresses set forth above
or such other address as may be designated in writing by TEA or Licensee from
time to time.
(b) This Agreement does not constitute and will not be construed as constituting an
agency, partnership, joint venture, master and servant relationship, employer-
employee relationship, or any other similar relationship between TEA and Licensee,
and no representation to the contrary shall be held binding on TEA.
(c) This Agreement will be construed in accordance with the laws of the State of
Texas (except to the extent that federal patent, copyright, or trademark laws apply,
in which case federal law shall govern), without reference to its choice of law
principles, and entirely independent of the forum in which construction,
interpretation or enforcement of the Agreement or any part of it may occur.
(d) The parties shall use the dispute resolution process provided for in Chapter 2260
of the Texas Government Code to attempt to resolve any claim for a breach of this
Agreement. All reference in this subparagraph to “subchapters” is to the
subchapters referenced in the Tex. Govt. C. Chapter 2260. Any claim for a breach
of this Agreement that the parties cannot resolve in the ordinary course of business
shall be submitted to the negotiation process provided in subchapter B. To initiate
the process, Licensee shall submit written notice to the Texas Commissioner of
Education. Such notice shall specify that the provisions of subchapter B are being
invoked. The contested case process provided in subchapter C is Licensee’s sole and
exclusive process for seeking a remedy for an alleged breach of this Agreement if the
parties are unable to resolve their disputes in accordance with the negotiation
process provided in subchapter B. Compliance with the contested case process
provided in subchapter C is a condition precedent to seek consent to sue from the
Texas Legislature under Chapter 107 of the Tex. Civ. Prac. & Rem. C. In the event
that consent to sue is granted by the Texas Legislature, then venue for any action or
claim brought against TEA regarding this Agreement shall be in the state and/or
federal courts located in Austin, Travis County, Texas, and the parties expressly
submit themselves to the personal jurisdiction of the state and/or federal courts
located in Austin, Travis County, Texas.
(e) If, but only to the extent that, any provision of this Agreement is declared or
found to be illegal, unenforceable or void, then TEA and Licensee shall be relieved
of all obligations arising under such provision, it being the intent and agreement of
the parties that this Agreement shall be deemed amended by modifying such
provision to the minimum extent necessary to make it legal and enforceable while
preserving its intent. It is the specific intent and request of TEA and Licensee that
the court, arbitrator or other adjudicative body called upon to interpret or enforce
this Agreement modify such provision to the minimum extent necessary so as to
render it enforceable. If such amendment is not possible, another provision that is
legal and enforceable and achieves the same objectives shall be substituted therefor.
If the remainder of this Agreement is not affected by such declaration or finding
and is capable of substantial performance by the parties, then the remainder shall
be enforced to the extent permitted by law.
(f) This Agreement contains the entire understanding of the parties with respect to
the subject matter hereof, and supersedes in all respects all prior oral or written
agreements or understandings between any of them pertaining to the transactions
contemplated by this Agreement. There are no representations, warranties,
promises, covenants or undertakings other than those hereinabove contained.
(g) No waiver or modification of any of the terms of this Agreement will be valid
unless in writing and signed by both parties. No waiver by any party of a breach
hereof or a default herein will be deemed a waiver by such party of a subsequent
breach or default of like or similar nature.
Curriculum Vitae
Kathy Jones
jonesks1956@yahoo.com
EDUCATION
2010 Doctor of Education in Administration, Walden University
• Dissertation Topic: “The Relationship Between Oral Reading Fluency and
Reading Proficiency”
o Conducted research to determine if there was a relationship between
the middle-of-year oral reading fluency rates of Grade 3 students and
performance on the state-mandated assessment for reading proficiency
• 4.0 GPA
• Served on the Walden Advisory Committee
• Completed residencies in Dallas, TX and Lansdowne, VA
2001 Master of Education, Reading Specialist
Abilene Christian University, Abilene, Texas
• 4.0 GPA
• Participant in the education service center program for training professionals
in other fields to become educators
• Passed all teacher examinations the first time they were attempted, including
the prestigious Master Reading Teacher
1997 Bachelor of Applied Studies Degree
Abilene Christian University, Abilene, Texas
• 4.0 GPA
• Member of W-Club, an honor society for Christian women at ACU
• Member of Kappa Delta Pi
• Secretary for the Bachelor of Applied Studies (a degree completion program
for older adult students)
o Motivated adult learners to achieve their educational goals
o Helped prospective adult students realize how their life experiences
could be used to further their education
• Secretary for the McNair Scholars Program
o Received inspiration from the story of Ronald McNair, an African
American who came from a segregated school, attended MIT, obtained
a doctorate in physics, became the second African American
astronaut, and died on the Space Shuttle Challenger
o Encouraged first-generation, low-income, undergraduate college
students to pursue graduate programs
1995 Third Year Teaching Certificate
University of Micronesia/Federated States of Micronesia, Kolonia, Pohnpei, FM
• 4.0 GPA
• Elected as the Class Secretary
• Student Teacher, Kolonia Elementary School
• Conducted research in which 100 native children were assessed to determine
which English sounds the students struggled to learn
• Attended the local university in this country in which my family served as
missionaries
• Only American student in classes
• Obtained the highest degree available in education at that time in that location
TEACHING CERTIFICATES
• Standard
o Master Reading Teacher (Grades EC-12), 9/12/2003-2/28/2015
o Classroom Teacher English as a Second Language (Grades EC-12),
9/12/2003-2/28/2015
o Reading Specialist (Grades PK-12), 9/12/2003-2/28/2015
• Provisional
o Elementary Reading (Grades 1-8), 2/27/1998-Life
o Elementary Self-Contained (Grades 1-8), 2/27/1998-Life
o Generic Special Education (Grades PK-12), 5/22/1998-Life
EMPLOYMENT EXPERIENCE
District Dyslexia Teacher
Monahans-Wickett-Pyote Independent School District, June 2009 to present
• Devised process for identifying and progress monitoring dyslexic students
• Worked collaboratively with teachers, principals, and district administrators to
meet the needs of dyslexic students
• Provided dyslexia therapy for dyslexic students in Grades 1 to 6
• Trained other teachers and aides to teach dyslexic students
• Analyzed data from students’ standardized assessments to group students for
intervention
• Conducted Section 504 meetings in which eligible students were identified and plans were
made for accommodations to address their educational needs
Teacher
Tatom Elementary, Monahans-Wickett-Pyote Independent School District, January 2008
to June 2009
• Taught second graders
• Volunteered to work with dyslexic students
o Taught six dyslexic students after school
All of these students who took the state-mandated reading
assessment passed the first time the exam was administered
o Made recommendations to the district for assessments to use when
determining if students are dyslexic
o Trained counselors to administer assessments
o Administered dyslexia assessments
• Taught summer school for students who had failed the state-mandated reading
assessment twice
o Disaggregated data to determine students’ strengths and weaknesses
o Targeted instruction to meet students’ needs
o 75% of these students passed on their third attempt to take the state-
mandated reading assessment
504/Dyslexia Coordinator
Ector County Independent School District, August 2006 to December 2007
• Supervised six teachers who assessed students referred for dyslexia
• Evaluated district process for identifying dyslexics
• Improved record-keeping process for maintaining district records
• Collaborated with principals and other administrators to serve the needs of 504
students
Reading Coordinator
Ector County Independent School District, January 2005 to August 2006
• Wrote scope and sequence for reading, Grades K to 8
• Wrote district benchmarks for reading, Grades K to 8
• Trained teachers to administer state-mandated early reading inventory
• Organized staff development days to provide a wide range of professional
development activities to enhance instruction in reading
• Evaluated effectiveness of district reading programs
• Conducted professional development workshops in reading and dyslexia
Teacher
Bowie Junior High School, Ector County Independent School District, August 2000 to
December 2004
• Taught reading, Grades 7 and 8
• Assessed and provided services for ESL students
• At principal’s request, mentored first-year teachers
• Received grants from local education foundation designed to encourage
innovative teaching projects
o Hidden in Plain View, $3,000.00
o Walking a Mile in Your Moccasins, $2,970.01
o Be Proud of Your Heritage, $860.00
o Because We Can Change the World, $3,790.25
o Wordplay at Work, $999.74
o Empowering Parents and Teens to be Practically Perfect, $4,874.90
o Reaching Out, $4,745.26
• Secretary and President-elect of local chapter of Texas Classroom Teachers
Association
o Met with superintendent each month to discuss teachers’ concerns
o Attended conferences in the state capitol to keep informed on political
issues relating to teachers
o Did not serve as president because the district hired me as an administrator
(TCTA membership does not include administrators)
• Featured in several education spotlights on local television stations and in
newspapers
• Invited to join Professional Development Cadre, in which outstanding teachers
were trained to present professional development workshops
Special Education Teacher
Franklin Middle School, Abilene Independent School District, December 1997 to May
2000
• Taught a self-contained unit of mentally challenged students
• Chosen as Teacher of the Month by student organization
• Attended two-year dyslexia training program
• Taught mentally challenged students how to read
Teacher
Adult Education, Abilene Independent School District, September 1996 to May 1997
• Taught adult students whose reading level was less than Grade 3
• Attended workshops for teachers of adult students
Missionary
Church of Christ, Kolonia, Pohnpei, Federated States of Micronesia, May 1982 to June
1995
• Served as a missionary in a developing country
• Adopted into a local clan
• Used the Bible to teach English to over 1,000 individuals
• Home-schooled my children over a 10-year period
o Taught and inspired one daughter, who was designated as gifted and
talented and later became a lawyer
o Taught one son, identified as dyslexic, who later graduated with a
master’s degree in international development
Noticed he was not learning to read proficiently
Read numerous books to gather information
Implemented teaching strategies that resulted in his academic
growth from a 2nd-grade reading and writing level to a 7th-grade
reading and writing level during his 6th-grade year
o Taught one son with ADHD who is currently enrolled in a medical physics
Ph.D. program
• Organized a network of home school families
o Taught other home schooled students who were struggling with reading
o Taught a Social Studies unit on Australia for all the home schooled
children and culminated the unit with an afternoon spent with the
Australian ambassador
• Worked collaboratively with native leaders and expatriates to open a library
o Joined other interested individuals during the first meeting to discuss the
idea
o Recruited and organized volunteers to work in the library
o Served on the Friends of the Library Board
• Conducted a summer school
o Wrote curriculum for the summer school
o Trained my husband and children to teach in summer school
o Invited 40 children who could not read to attend summer school
o Taught a boy, whom local teachers had declared uneducable, how to write
his name
o Taught a 17-year-old boy to read for the first time
o Taught one boy who claimed that he had learned more in summer school
than in the previous 9 months of school
• Accepted principals’ requests to train their teachers how to teach reading
COMMUNITY SERVICE
Visiting Committee, Brown Library, Abilene Christian University, 2004 to 2006
• Accepted appointment by the Director of Library Services to serve on committee
• Served with other experts to
o Review library’s operations
o Report to provost of the university on the library’s status
School Board Member, Odessa Christian Schools, 2007 to present
REFERENCES
Dr. Thomas Schnick
Chairman of Doctoral Committee
Walden University
608-242-4629
Thomas.Schnick@waldenu.edu
Dr. Barbara Bailey
Member of Doctoral Committee
404-272-2222
baileyba@mindspring.com
Susan Myers
Adjunct Professor
Grand Canyon University
(928) 333-4190
(928) 245-3101
susanmyers@frontiernet.net
Doug Doege
Supervisor
Tatom Elementary School
Monahans-Wickett-Pyote Independent School District
432-943-2769
ddoege@mwpisd.esc18.net

The Relationship between Oral Reading Fluency and Reading Proficiency.

  • 1.
    Walden University COLLEGE OFEDUCATION This is to certify that the doctoral study by Kathy Jones has been found to be complete and satisfactory in all respects, and that any and all revisions required by the review committee have been made. Review Committee Dr. Thomas Schnick, Committee Chairperson, Education Faculty Dr. Barbara Bailey, Committee Member, Education Faculty Dr. Brett Welch, University Reviewer, Education Faculty Chief Academic Officer David Clinefelter, Ph.D. Walden University 2010 Abstract
  • 2.
    The Relationship BetweenOral Reading Fluency and Reading Proficiency by Kathy S. Jones MA, Abilene Christian University, 2001 BS, Abilene Christian University, 1997 Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Education Administration Walden University August 2010
  • 3.
    Abstract Students who strugglewith reading in Grade 3 often fall behind their peers in reading proficiency. Failure to meet minimum reading proficiencies of state-mandated tests can negatively affect children’s success in subsequent grades. Educators have used oral reading fluency tests as reliable indicators of students’ later reading proficiency. Studies in 7 states found that oral reading fluency predicted performance on state reading assessments. One aspect of the automaticity theory was used to explain why struggling readers have insufficient attention to devote to reading proficiently. This nonexperimental, quantitative study investigated whether a statistically significant relationship existed between Grade 3 students’ oral reading fluency rates and their reading proficiency when assessed by the state-mandated assessment. Pearson correlation was used to compare the middle-of-year oral reading fluency rates, measured by the Dynamics Indicators of Basic Literacy Skills oral reading fluency, and reading proficiency, measured by the scale score Grade 3 Reading Texas Assessment of Knowledge and Skills, of 155 Grade 3 students for the 2008-2009 school year in a school district. The results indicated a relationship between oral reading fluency and reading proficiency. Study results may help elementary school administrators, teachers, and reading specialists to identify at-risk readers and implement interventions to enable students to gain greater reading proficiency and improve their performance on state- mandated assessments.
  • 5.
    The Relationship BetweenOral Reading Fluency and Reading Proficiency by Kathy S. Jones MA, Abilene Christian University, 2001 BS, Abilene Christian University, 1997 Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Education Administration Walden University August 2010
  • 6.
    UMI Number: 3423184 Allrights reserved INFORMATION TO ALL USERS The quality of this reproduction is dependent upon the quality of the copy submitted. In the unlikely event that the author did not send a complete manuscript and there are missing pages, these will be noted. Also, if material had to be removed, a note will indicate the deletion. UMI 3423184 Copyright 2010 by ProQuest LLC. All rights reserved. This edition of the work is protected against unauthorized copying under Title 17, United States Code. ProQuest LLC 789 East Eisenhower Parkway P.O. Box 1346 Ann Arbor, MI 48106-1346
  • 7.
    Dedication This dissertation isdedicated to God. Throughout my life, He has prepared me for what was coming next. At times, He prepared me for something before I even dared to dream about it. My doctoral journey has been no exception. God first planted the dream in me of pursuing a doctorate 15 years ago. I did not even have a bachelor’s degree at the time, and the dream seemed impossible. Every time it was in danger of fading, God would use someone to nudge me forward and place me back on track. He has been with me through all the twists and turns of these past 5 years that I have been enrolled in the doctoral program. He has molded me and made me into the person I have become. I dedicate this honor to Him and trust that He will guide me as He continues to use me to serve others with the talents with which He has blessed me.
  • 8.
    Acknowledgments God has usedmany people to encourage and inspire me on this doctoral journey. I am thankful for the support of my family. My husband, Roy, has stood by my side through all the ups and downs. He celebrated with me as I conquered hurdles and encouraged me when I was discouraged. My adult children, Rachel, Benjamin, Timothy, and Daniel, have showered me with comments, and I have especially been bolstered and encouraged with their cries of “Way to go, Mom!” My parents, Bill and Anita Love, planted the vision of setting goals and working hard to achieve them. My father has gone on, but I treasure the memory of the crack in his voice as he proudly told others I was working on my doctorate. People in the field of academia have guided me on my journey as well. I appreciate the support of my chair, Dr. Thomas L. Schnick. He has promptly reviewed my work each time it was submitted and provided me with constructive criticism that improved the quality of my dissertation. Dr. Barbara Bailey served as my methodologist. I thank her for helping me see how the problem, purpose, and research questions provided the foundation for the dissertation. I am especially appreciative of my editor, Dr. Noelle Sterne, who coached me from where I was to where I needed to be. She had a clear vision of what my dissertation had to be and how to get me there. In addition to editorial changes, she encouraged me to clearly describe my thoughts, to tell her more about what I was writing, and to find additional sources to support my statements. This coaching was invaluable in my desire to deepen and expand the substance of this dissertation.
  • 9.
    I am gratefultoo for the bonds I have forged with fellow students. Because Walden is an online university, I have not had the pleasure of meeting many of my peers. Nonetheless, I was amazed at how deeply the bond of friendship developed. I especially appreciate the friendship of Susan Myers. We met in our first online classes, roomed together at two residencies, exchanged many emails, and talked on the phone. I am confident that our professional relationship will continue even beyond graduation. I appreciate and thank Keith Richardson, the superintendent of the district involved in this study, for his cooperation. He supported me in signing the data use agreement and letter of cooperation so that I could use the district archival data for the study. I appreciate also the continued support and encouragement of Doug Doege, the principal of the school for the study. He began working with me when the logistics of the study were still a dream to me. As the details developed, he helped make them a reality. I am also grateful for the cooperation of the Grade 3 teachers and students. Finally, I give special thanks to Dynamics Measurement Group for giving me permission to use the DIBELS ORF benchmark in the study and to the Texas Education Agency for granting copyright permission for the use of the Grade 3 Reading TAKS in the study.
  • 10.
    i Table of Contents Listof Tables .......................................................................................................................v Section 1: Introduction to the Study ....................................................................................1 Problem Statement.........................................................................................................3 Nature of the Study........................................................................................................4 Research Question and Hypothesis................................................................................6 Purpose of the Study......................................................................................................6 Theoretical Framework..................................................................................................7 Operational Definitions..................................................................................................8 Assumptions, Limitations, Scope, and Delimitations..................................................10 Significance of the Study.............................................................................................11 Transition Statement....................................................................................................13 Section 2: Literature Review .............................................................................................16 Introduction..................................................................................................................16 Automaticity Theory....................................................................................................16 Oral Reading Fluency ..................................................................................................21 Definition of Oral Reading Fluency ..................................................................... 22 The Role of Oral Reading Fluency in the Context of Learning to Read .............. 26 Fluency Instruction ............................................................................................... 29 Reading Proficiency.....................................................................................................30 Reading Proficiency and Automaticity Theory .................................................... 30
  • 11.
    ii Definition of ReadingProficiency........................................................................ 34 Vocabulary and Matthew Effects.......................................................................... 36 Comprehension ..................................................................................................... 38 Relationship Between Oral Reading Fluency and Reading Proficiency in State-Mandated Assessments...........................................................................43 Studies of Grade 3 Only........................................................................................ 44 Studies of Reading First Schools.......................................................................... 44 Studies of Grade 3 and Other Grades ................................................................... 45 Limitations of State Studies.................................................................................. 47 Summary......................................................................................................................49 Section 3: Research Design ...............................................................................................52 Introduction..................................................................................................................52 Research Design...........................................................................................................52 Setting and Sample ......................................................................................................55 Setting ................................................................................................................... 55 Characteristics of the Sample................................................................................ 55 Sampling Method.................................................................................................. 57 Sample Size........................................................................................................... 57 Eligibility Criteria for Study Participants ............................................................. 58 Instrumentation and Materials .....................................................................................58 DIBELS ORF........................................................................................................ 59 TAKS ................................................................................................................... 67
  • 12.
    iii Data Collection andAnalysis.......................................................................................71 Data Collection ............................................................................................................71 Data Analysis...............................................................................................................73 Researcher’s Role ........................................................................................................74 Protection of Participants’ Rights................................................................................74 Summary......................................................................................................................75 Section 4: Results of the Study.........................................................................................77 Introduction..................................................................................................................77 Research Question and Hypothesis..............................................................................77 Research Tools.............................................................................................................78 Data Analysis...............................................................................................................80 Summary......................................................................................................................87 Section 5: Discussion, Conclusions, and Recommendations.............................................89 Overview......................................................................................................................89 Interpretation of Findings ............................................................................................89 Recommendations for Action ....................................................................................100 Recommendations for Further Study.........................................................................105 Quantitative Studies............................................................................................ 105 Qualitative Studies.............................................................................................. 108 Mixed-Method Studies........................................................................................ 109 Conclusion .................................................................................................................112 References........................................................................................................................116
  • 13.
    iv Appendix A: Permissionto Collect Data.........................................................................133 Appendix B: Data Use Agreement ..................................................................................135 Appendix C: Permission to Use DIBELS........................................................................139 Appendix D: Permission to Use TAKS ...........................................................................141 Curriculum Vitae .............................................................................................................151
  • 14.
    v List of Tables Table1. Characteristics of Grade 3 Students in Texas Overall and the Research Site .......................................................................................................................56 Table 2. Correlation of DIBELS ORF Rates and TAKS Scores for Grade 3 Students.....80 Table 3. Ranges and Categorization Cut Points of DIBELS ORF and Grade 3 TAKS ....82 Table 4. Comparison of Students’ Performance by Categories for DIBELS ORF and Grade 3 Reading TAKS.......................................................................................84 Table 5. Studies Correlating State-Mandated Assessments to the DIBELS ORF.............90
  • 15.
    1 Section 1: Introductionto the Study It is important for students to learn to read proficiently by the end of Grade 3. From kindergarten through Grade 2, students focus on learning to read (Ehri, 2005). The process of learning to read is a complex but systematic one. Many children learn to recognize letters and words even before beginning formal school (Ehri, 2005). By the end of Grade 3, students are expected not only to decode the words in a text, but also to understand what they read (National Institute of Child, Health and Human Development [NICHHD], 2000). Students who struggle with reading at the end of Grade 3 often continue to fall farther behind their peers in terms of reading proficiency (Morgan, Farkas, & Hibel, 2008). The failure to meet minimum reading proficiencies at the end of Grade 3 can have negative consequences on children’s success in subsequent grades (Jimerson, Anderson, & Whipple, 2002; Katsiyannis, Zhang, Ryan, & Jones, 2007). After Grade 3, students transition from learning to read to reading to learn in other subject areas (Musti-Rao, Hawkins, & Barkley, 2009). As a part of implementing the No Child Left Behind legislation (NCLB, 2002), the federal government commissioned the National Accessible Reading Assessment Project (NARAP, 2006) to define reading proficiency. Their definition described reading as a process in which readers decode words in order to make meaning. Readers understand text by using a variety of reading strategies to determine the purpose of a passage and understand the context and nature of the text. Once the NARAP established its working definition of reading proficiency, states used this definition to write and assess curriculum objectives for reading. In section
  • 16.
    2 2, I reviewliterature that discussed reading in the context of students’ learning to read and reading proficiency. In compliance with NCLB (2002), federal legislators mandated that school districts monitor students’ progress by identifying academic needs early in school, and providing scientifically-based interventions. As NCLB requires, states have designed curriculum standards and annual assessments to measure the number of students who read proficiently. For example, Texas has used performance on high-stakes tests in Grade 3 to determine promotion or retention of students (Texas Education Agency [TEA], 2008). Given the legal mandates and the commitment of NCLB to leave no child behind academically, it is important that local school districts identify nonproficient and struggling readers before they fail in reading and later grades (Jenkins, Hudson, & Johnson, 2007). To ascertain students’ reading proficiency in the early grades, researchers have established that oral reading fluency is a reliable predictor of reading proficiency (Fuchs, Fuchs, Hosp, & Jenkins, 2001; Simmons et al., 2008). Section 2 contains additional reports of researchers who found that oral reading fluency was a predictor of performance on seven state-mandated reading assessments (Baker et al., 2008; Barger, 2003; Roehrig, Petscher, Nettles, Hudson, & Torgesen, 2008; Shapiro, Solari, & Petscher, 2008; Vander Meer, Lentz, & Stollar, 2005; Wilson, 2005; Wood, 2006). Section 2 also contains reviews of literature in which reading experts disagree on the definition of fluency (Kame’enui & Simmons, 2001; Samuels, 2007). States define reading proficiency by using curriculum standards rather than the verbal definitions of
  • 17.
    3 NARAP (2006), andthe curriculum standards may differ from state to state (Colorado Department of Education, 2009; Florida Department of Education, 2005; TEA, 2004). Researchers have established positive correlations between oral reading fluency rates and reading proficiency (Hosp & Fuchs, 2005; Simmons et al., 2008). Additional research is needed to confirm the specific relationship between oral reading fluency assessments and other state-mandated tests for reading proficiency (Roehrig et al., 2008). The purpose of this research project was to determine if a relationship existed between oral reading fluency of Grade 3 students in a West Texas school district and their performance on the state-mandated assessment in Texas in the 2008-2009 school year. Problem Statement In Texas, the site of the current research, in 2009, 33,462 students, approximately 15% of all Grade 3 students, were unable to read proficiently enough to pass the Grade 3 Reading Texas Assessment of Knowledge and Skills (TEA, 2009c). According to the Nation’s Report Card in Reading (U. S. Department of Education, 2007), 67% of the fourth-grade students who took the National Assessment of Educational Progress (NAEP) in 2007 scored at the basic level or above. However, 33% of the tested students did not read well enough to score at the basic level. The majority of students learn to read proficiently (Chard et al., 2008; Ehri, 2005). Nevertheless, many students do not reach an acceptable level of reading by Grade 3 and are considered struggling readers (Applegate, Applegate, McGeehan, Pinto, & Kong, 2009; Pressley, Gaskins, & Fingeret, 2006; Torgesen & Hudson, 2006).
  • 18.
    4 Educators have usedoral reading fluency as a reliable indicator of students’ progress toward overall reading proficiency (Jenkins et al., 2007). The following researchers have found significant correlations between oral reading fluency rates and reading proficiency in seven states’ mandated reading assessments: Arizona (Wilson, 2005), Colorado (Shaw & Shaw, 2002; Wood, 2006), Florida (Buck & Torgesen, 2003; Roehrig et al., 2008), North Carolina (Barger, 2003), Ohio (Vander Meer et al., 2005), Oregon (Baker et al., 2008), and Pennsylvania (Shapiro et al., 2008). However, it is unknown whether a similar connection exists in the Texas school system (Roehrig et al.). An investigation of this possible relationship is important for providing evidence to aid elementary school administrators, teachers, and reading specialists to identify at-risk readers. Early identification of struggling readers in concurrence with research-driven interventions can close the gap between these readers and their proficient peers before the end of Grade 3 (Simmons et al., 2008). Nature of the Study To determine if a statistically significant (p < .05) relationship existed between students’ oral reading fluency rates and their reading proficiency, this nonexperimental quantitative study compared the middle-of-year oral reading fluency rates and reading proficiency of 155 Grade 3 students for the 2008-2009 school year in a West Texas school district. The students resided in a small West Texas town with a population of 6,821, and in which all Grade 3 students attended the same elementary school. The demographics of the school district are similar to those of all Grade 3 students in Texas
  • 19.
    5 (TEA, 2009c). Toobtain as wide a range of scores as possible, all of the Grade 3 students in this district comprised the nonprobability convenience sample. The Dynamic Indicators of Basic Early Literacy Skills Oral Reading Fluency (DIBELS ORF) comprised the independent variable to measure students’ oral reading fluency rates. The developers of DIBELS ORF found the assessments reliable and valid (Good & Kaminski, 2002a). Additionally, other researchers have found DIBELS ORF rates reliable (Baker et al., 2008; Buck & Torgesen, 2003; Roehrig et al., 2008; Shaw & Shaw, 2002; Vander Meer et al., 2005; Wilson, 2005, Wood, 2006). However, some researchers disagreed with the definition of fluency used by the developers of DIBELS (Hudson, Lane, & Pullen, 2005; Young & Rasinski, 2009) and the reliability and validity of the assessment (Samuels, 2007). Section 2 contains a review of the literature on DIBELS ORF, including the controversy. The Grade 3 Reading Texas Assessment of Knowledge and Skills (TAKS) scale scores (TEA, 2009b) comprised the dependent variable to measure students’ reading proficiency. The TEA in conjunction with Pearson Education established the reliability and validity of TAKS (TEA & Pearson, 2008). Staff members of these organizations have worked regularly with teachers in Texas and national testing experts to ensure that the TAKS is a quality assessment. Further information describing the reliability and validity of TAKS is included in section 3. To address the study problem, I used the middle-of-year 2008-2009 DIBELS ORF benchmark rates of the Grade 3 sample from January 2009 and the scale score of the 2009 Grade 3 Reading TAKS. I applied the Statistical Package for the Social Sciences
  • 20.
    6 (SPSS, 2009) softwareprogram, version 17.0, and conducted a Pearson correlation analysis to determine if there was a statistically significant relationship (p < .05) between oral reading fluency rates as measured by DIBELS ORF and reading proficiency as measured by the scale score of the Grade 3 Reading TAKS. Research Question and Hypothesis The following research question guided this study: Is there a statistically significant relationship between Grade 3 students’ oral reading fluency rates and their reading proficiency? From this question, the following null and alternative hypotheses were formulated: H0: There is no statistically significant relationship between students’ oral reading fluency rates, as measured by the students’ middle-of-year DIBELS ORF rates, and their reading proficiency, as measured by their scale scores on the Grade 3 Reading TAKS for the 2008-2009 school year. H1: There is a statistically significant relationship between students’ oral reading fluency rates, as measured by the students’ middle-of-year DIBELS ORF rates, and their reading proficiency, as measured by their scale scores on the Grade 3 Reading TAKS for the 2008-2009 school year. Section 3 contains additional information regarding the research question, hypotheses, methodology, and measurements. Purpose of the Study The purpose of this nonexperimental, quantitative study was to determine whether a statistically significant relationship (p < .05) existed between Grade 3 students’ oral
  • 21.
    7 reading fluency ratesand their reading proficiency. With a nonprobability convenience sample of 155 Grade 3 students in a West Texas school district, I sought to determine if a statistically significant relationship existed between oral reading fluency rates and students’ reading proficiency on the summative, high-stakes reading assessment results for the 2008-2009 school year. The independent variable was oral reading fluency, and the dependent variable was reading proficiency. The outcome of the study identified additional ways in which elementary teachers can screen struggling readers so that interventions can take place to help these students to increase their reading ability. Theoretical Framework The automaticity theory developed by LaBerge and Samuels (1974) and expanded upon by Samuels (2006) formed the theoretical basis of this study. The automaticity theory explains the relationship between fluency and comprehension. When initially proposing the automaticity theory, LaBerge and Samuels found that accurately reading words alone was not sufficient for reading proficiency. They posited that for students to devote attention to other aspects of reading, they must first accurately and fluently read words. In 2006, Samuels identified four components of the reading process: decoding, comprehension, metacognition, and attention; the last he viewed as limited. When students spend too much time trying to sound out words, they cannot comprehend what they have read by the end of a line or a page. Unless students are able to fluently decode the words in a text, they do not have sufficient attention to devote to comprehend what is read or to use metacognitive strategies to improve their comprehension. This theory
  • 22.
    8 explains why slowreaders who may accurately identify all or most of the words in a passage may still not read proficiently enough to pass state-mandated reading assessments. Several researchers have tested the automaticity theory and found a relationship between oral reading fluency and reading proficiency in general (Baker et al., 2008; Daane, Campbell, Grigg, Goodman, & Oranje, 2005; Kuhn, 2005; Morgan & Sideridis, 2006; Riedel, 2007). However, scholars disagree on the definition of oral reading fluency. Some define oral reading fluency as rate and accuracy alone (Kame’enui & Simmons, 2001; Riedel), and others include prosody and comprehension (Allington, 2009; Samuels, 2006). I will further discuss the automaticity theory, including controversies in the literature, in section 2. Operational Definitions The following terms as defined were used throughout this study. Automaticity theory: The four-component process by which proficient readers decode words fluently, enabling them to focus attention on higher comprehension skills of the passages read (Samuels, 2006). Dynamic Indicators of Basic Early Literacy Skills (DIBELS): Short, 1-minute assessments that educators use to measure the development of early literacy skills. These assessments include oral reading fluency, retell fluency, and nonsense word fluency (Good & Kaminski, 2002b). Matthew effects: The gap between readers and struggling readers. Matthew effects are based on the Biblical concept found in Matthew 25:29, a verse which refers to
Matthew effects are based on the Biblical concept found in Matthew 25:29, a verse which refers to the rich becoming richer and the poor becoming poorer. In the context of reading, this term indicates that students who read consistently increase their knowledge base and vocabulary, and readers who read less will learn less and continue to be struggling readers. The gap between proficient readers and struggling readers widens as they progress through school (Morgan et al., 2008; Stanovich, 1998).

Oral reading fluency (ORF): The ability to read words accurately and quickly (Fuchs et al., 2001; Roehrig et al., 2008) with proper expression (Eldredge, 2005; Hudson et al., 2005) and comprehension (Marcell, 2007; Samuels, 2007). In the current study, Grade 3 students’ DIBELS ORF rates were used as the independent variable (Good & Kaminski, 2002b).

Proficiency level: A scale score of 2100 or more on the Grade 3 Reading TAKS (TEA, 2009b). Out of the 36 questions, proficient readers answer at least 24 correctly to demonstrate proficiency.

Reading proficiency: The ability of a reader to decode the words in a passage and comprehend what it says. Proficient readers employ a variety of strategies to comprehend a passage, such as determining the purpose for reading, using context clues and the nature of the passage, and using background knowledge (Cline, Johnstone, & King, 2006). Proficient readers demonstrate a basic understanding of reading, apply knowledge of literary elements, use a variety of strategies to analyze a passage of text, and apply critical-thinking skills to analyze a passage by scoring 2100 or more on the Grade 3 Reading TAKS (TEA, 2004). In the current study, I used students’ scale scores from the Grade 3 Reading TAKS as the dependent variable to measure reading proficiency.
Scale score: In 2009, the TEA measured performance on the TAKS using a scale score. A raw score of zero had a scale score of 1399, and a perfect raw score of 36 had a scale score of 2630. Students who achieved a scale score of 2100 or above were considered proficient; students who received a scale score of 2400 or above received commended performance (TEA, 2009b). (These score thresholds are illustrated in the sketch following these definitions.)

Struggling readers: Struggling readers have at least average intelligence, but they read more slowly than other readers their age and may be at risk for long-term reading difficulties (Pressley et al., 2006).

Texas Assessment of Knowledge and Skills (TAKS): The TAKS is a state-mandated, summative, high-stakes assessment in Texas that assesses specific proficiencies in Grades 3 to 11 in accordance with the state curriculum (TEA, 2009a). The TAKS is scored with a scale score, defined above. For this study, I used the scale score of the Grade 3 Reading TAKS to measure the dependent variable of reading proficiency (TEA, 2006, 2009e).
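As a concrete illustration of how the two study variables were operationalized, the following minimal sketch labels a TAKS scale score against the 2100 and 2400 cutoffs defined above and computes an ORF rate as the number of words read correctly during a 1-minute probe. The function names are hypothetical, and the example values are illustrative only.

    # Illustrative sketch only: function names are hypothetical; the thresholds
    # (2100 proficient, 2400 commended) come from the TEA (2009b) definitions above.
    def orf_rate(words_attempted: int, errors: int) -> int:
        """Words correct per minute for a 1-minute DIBELS ORF probe."""
        return words_attempted - errors

    def taks_performance(scale_score: int) -> str:
        """Label a Grade 3 Reading TAKS scale score (scale range 1399-2630)."""
        if scale_score >= 2400:
            return "Commended performance"
        if scale_score >= 2100:
            return "Met standard (proficient)"
        return "Did not meet standard"

    # Example: a student who attempts 98 words with 6 errors reads 92 words
    # correct per minute; a scale score of 2180 meets the proficiency standard.
    print(orf_rate(98, 6))            # 92
    print(taks_performance(2180))     # Met standard (proficient)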
Assumptions, Limitations, Scope, and Delimitations

In this study, I made four assumptions. First, the DIBELS ORF was a reliable measure of oral reading fluency. Second, the Grade 3 Reading TAKS was a reliable measurement of reading proficiency. Third, the Grade 3 students’ DIBELS ORF rates and Grade 3 Reading TAKS scale scores reported for the target West Texas school district were accurate and reliable. Finally, the statistical method chosen for this study was appropriate.

I acknowledged four limitations for this study. The first was the use of a nonprobability convenience sample, and the second was the limitation of the study population to students in Grade 3. Because of these factors, the results may not generalize to students in other elementary grades. Third, the study took place in a single geographical location, a small town in West Texas, so the results may not generalize to other locations in Texas or other states. Finally, quantitative correlational analysis does not prove the existence of relationships between variables. Rather, in correlational studies, a statistical procedure may establish a relationship between two variables, but neither the method nor the results prove that one variable causes another variable (Gravetter & Wallnau, 2005). Thus, study results do not prove the existence of a relationship between oral reading fluency rates and reading proficiency or that low fluency rates cause poor reading comprehension. Rather, results indicate whether a statistically significant relationship exists between the two variables.

The scope focused on the oral reading fluency rates and reading proficiency of Grade 3 students in a West Texas school district. I limited the study to 155 students in Grade 3 in a West Texas elementary school district. I investigated only the two variables of oral reading fluency and reading proficiency for a single school year, 2008-2009. Each variable was operationalized by a single assessment: the DIBELS ORF for oral reading fluency and the state-mandated Grade 3 Reading TAKS for reading proficiency.

Significance of the Study

This study was significant in several ways. With regard to the literature, the study filled a gap that existed regarding students’ oral reading fluency and reading proficiency.
Researchers in seven states (Baker et al., 2008; Barger, 2003; Roehrig et al., 2008; Shapiro et al., 2008; Vander Meer et al., 2005; Wilson, 2005; Wood, 2006) used DIBELS ORF rates to determine if there was a significant relationship between oral reading fluency rates and reading proficiency as measured by their states’ mandated reading assessment. Although each of these studies found that oral reading fluency rates correlated with reading proficiency, additional studies were needed for other states because the curriculum standards that define reading proficiency differ from state to state (Roehrig et al.). This study filled a gap in the literature on the assessment of struggling readers by determining if such a relationship exists with Grade 3 students in Texas.

In addition, with regard to professional application of the study, the results can help teachers in Texas. NCLB (2002) mandates that school districts must make adequate yearly progress in students’ reading scores. In situations where the school or district has not met adequate yearly progress for 2 consecutive years, state education agencies are required to impose corrective actions, such as removing the principal, replacing the staff, receiving advice from outside experts, or requiring the implementation of an entirely new curriculum. The study results can help Texas educators identify struggling readers before the administration of the Grade 3 Reading TAKS. Educators can then provide interventions designed to improve basic reading skills, such as decoding, fluency, vocabulary, and comprehension skills. Such interventions may improve struggling readers’ chances of scoring in the proficient range on the state-mandated assessment (Simmons et al., 2008).
In turn, the school districts may increase the likelihood of meeting adequate yearly progress.

Study results may also contribute to positive social change. Elementary educators may more easily identify struggling readers so that interventions can take place before students take the Grade 3 Reading TAKS. Such interventions would target basic literacy skills and include greater oral reading fluency and decoding strategies to improve struggling readers’ reading proficiency (Jenkins et al., 2007). Diagnosing struggling readers and implementing skill-building interventions help reduce the risk of students failing the state tests and increase their chances of academic success (Ehri, 2005; Shapiro et al., 2008). Struggling readers who are identified and remediated before the end of Grade 3 are more likely to improve their reading skills, thus helping to close the academic gap with more proficient peers (Simmons et al., 2008). When struggling readers experience success in reading, they are more likely to continue academic achievement through the grades and graduate from high school (Houge, Peyton, Geier, & Petrie, 2007; Rumberger & Palardy, 2005). Graduation from high school with proficient reading skills increases an individual’s opportunities for employment and contribution to society (Katsiyannis et al., 2007).

Transition Statement

It is important for students to learn to read proficiently by the end of Grade 3 (Ehri, 2005; NCLB, 2002). Nationwide, most students learn to read before the fourth grade (U.S. Department of Education, 2007); however, between 15% (TEA, 2009c) and 33% (U.S. Department of Education, 2007) of students do not read proficiently.
Researchers in Arizona (Wilson, 2005), Colorado (Wood, 2006), Florida (Roehrig et al., 2008), North Carolina (Barger, 2003), Ohio (Vander Meer et al., 2005), Oregon (Baker et al., 2008), and Pennsylvania (Shapiro et al., 2008) have found that oral reading fluency rates correlated with reading proficiency as assessed on their states’ mandated reading assessment. The two variables have been studied only in these seven states. This nonexperimental quantitative study, based on the automaticity theory (LaBerge & Samuels, 1974) and using Pearson correlation, filled a gap in the literature to determine with a sample of 155 Grade 3 students in Texas if a statistically significant relationship (p < .05) existed between their oral reading fluency rates (middle-of-year DIBELS ORF) and their reading proficiency (end-of-year TAKS). Study findings may result in positive social change by providing Texas educators with a valuable tool for identifying struggling readers in Grade 3 (Simmons et al., 2008). Once struggling readers are identified, educators can provide scientifically based reading interventions to strengthen phonemic awareness, phonics, fluency, vocabulary, and comprehension skills (Chard et al., 2008). The identification and assistance of struggling readers can help them improve academically and increase their success later in life (Katsiyannis et al., 2007; Shaw & Berg, 2009).

In section 2, I review pertinent literature on the automaticity theory, oral reading fluency, reading proficiency, and related concepts. In section 3, I describe the study methodology in greater detail. Aspects include the sample, instruments, protection of participants’ rights, data collection, and data analysis. In section 4, I report the study findings.
In section 5, I analyze the findings and discuss data from the study in light of the previous studies that compared DIBELS ORF rates with student performance on state-mandated assessments. Additionally, I make recommendations in section 5 regarding future actions and ideas for future studies.
Section 2: Literature Review

Introduction

In this section I review literature pertinent to this study in terms of the theory and studies relevant to the variables. The section is organized as follows: (a) automaticity theory; (b) oral reading fluency, including definitions, the role of oral reading fluency in learning to read, and fluency instruction; (c) reading proficiency, including reading proficiency and automaticity theory, definitions of reading proficiency, vocabulary and Matthew effects, and comprehension; and (d) the relationship between oral reading fluency and reading proficiency in state-mandated assessments, including studies of Grade 3 only, of Reading First schools, and of Grade 3 and other grades, as well as limitations of these studies. The section concludes with a summary.

To investigate the literature for this study, I searched electronic databases, such as Academic Search Premier, EBSCO, Dissertations and Theses, and ERIC, using the keywords oral reading fluency/ORF, phonemic awareness, phonics, reading comprehension, reading proficiency, and vocabulary. I also used the bibliographies of articles to search for additional literature, and I reviewed peer-reviewed articles. In addition, I read several recently published books on the subject of oral reading fluency.

Automaticity Theory

LaBerge and Samuels (1974) used the automaticity theory to explain the relationship between fluency and reading proficiency. They acknowledged that reading is a complex skill that takes years to develop. Fluent readers are able to process the necessary information in less than a second.
However, some readers never reach a level at which they fluently read a text, even though they are able to adequately communicate orally. Reading is a complex skill with many subskills, in which readers must recognize letters, spelling patterns, and words as well as attach meaning to words and context.

According to this theory, attention is a limited faculty, and when students are beginning to learn to read, their attention is focused on factors other than meanings, such as shapes of letters (LaBerge & Samuels, 1974). In prekindergarten and kindergarten, students must initially focus on the shapes within a letter, the length of lines, and the direction letters are facing. As beginning readers, students have great difficulty distinguishing between letters. In addition, at first, students focus their attention on recognizing and naming letters and do not have enough attention to focus on the sounds.

In addition to letter naming, LaBerge and Samuels (1974) found that a certain amount of attention devoted to other subskills is also necessary for the development of fluent reading. Once beginning readers automatically recognize letters, they must learn to associate sounds with the letters. Then they can focus their attention on reading parts of words and entire words.

To test their theory of automaticity, LaBerge and Samuels (1974) designed an experiment with eight college students. They measured the time the students took to identify patterns of familiar and unfamiliar letters. The students were able to accurately identify patterns of familiar letters more quickly than those of unfamiliar letters. Next, the students received training on the unfamiliar letters.
At the end of a 20-day period, they were able to recognize the unfamiliar letters more accurately, although still not as quickly as the familiar letters. LaBerge and Samuels concluded that some degree of attention was still needed for the students to make the association for unfamiliar letters.

From these results, LaBerge and Samuels (1974) expressed concerns regarding their observations of the teaching of reading. They had found that teachers often directly teach letter names until students are able to correctly identify the letters. Then teachers move to direct instruction in other aspects of reading, such as letter sounds and blending. However, students may still not be able to fluently identify letters. Because their attention is focused on this task, they may not have adequate attention to devote to learning sounds and how to blend them together to make words. Thus, LaBerge and Samuels recommended that teachers continue teaching and testing letter naming until students reach a level of automaticity. The researchers maintained that only when students can automatically recognize letters can they focus a significant amount of attention on the sounds the letters make.

In 1974, when LaBerge and Samuels first proposed the automaticity theory, they focused on automaticity at the word level. Their study showed how readers who had not reached a level of automatically decoding words would have trouble comprehending. At that time, LaBerge and Samuels did not conceptualize that automaticity in other areas was also important. However, as researchers continued to develop the automaticity theory, they recognized that metacognition also played an important role in comprehension and that aspects of metacognition could be automaticized (Samuels, Ediger, Willcutt, & Palumbo, 2005).
Metacognition is the reader’s awareness of what is being read and whether comprehension is taking place. Readers who use metacognition think about whether they are attending to and understanding what a passage says. Factors such as readers’ motivation to read, their attitudes and beliefs about their reading ability, their interest in the topic, and the amount of attention they devote to reading can affect how much they comprehend. Whether readers are distracted by other factors, such as the environment, noises, or other thoughts, can also affect comprehension.

Metacognition also involves the insight readers bring to a text derived from background knowledge. When readers read something with which they are highly familiar, gaining new insights and adding to their cognitive repertoire of knowledge seems easy (Samuels et al., 2005). However, when readers know very little about the topic, they may have to work harder to comprehend the passage. Good readers learn to recognize when cognitive aspects interfere with their ability to comprehend and to make adjustments. Samuels et al. recommended that teachers instruct readers to be aware of such factors by modeling thinking strategies and measuring students’ use of them through rubrics until students automatically use metacognitive strategies when reading.

According to automaticity theory, students who have low fluency rates struggle with reading proficiency (Goffreda, Diperna, & Pedersen, 2009). In order to read proficiently, students must be able to recognize words in context and use a variety of strategies to comprehend a passage. Samuels (2006) summarized the automaticity theory by identifying four components as elements of the relationship between oral reading fluency and reading proficiency: decoding, comprehension, metacognition, and attention.
First, a reader must decode the words in a text (LaBerge & Samuels, 1974). Beginning or struggling readers may devote so much attention to decoding words that comprehension is difficult, if not impossible (Samuels, 2006). However, proficient readers are able to decode words and quickly attach meaning to them (Shaywitz, 2003). Skilled readers can read a word and attach meaning to it within 150 milliseconds, less than the time it takes the heart to beat once. However, beginning and struggling readers process reading differently. Whereas skilled readers see whole words, beginning and struggling readers, such as those with dyslexia, may only see a few letters or sounds at a time (Samuels, 2006).

In 1974, LaBerge and Samuels could only presume that oral reading fluency would affect comprehension. By 2006, Samuels had identified comprehension as another component of the reading process. Fluent readers spend less attention on decoding words and therefore can devote more attention to comprehending. Proficient readers combine the information in a text with their background knowledge and critically analyze the text. Samuels believed that readers must automatically decode before they can devote sufficient attention to comprehension.

Samuels’ (2006) next component of the reading process is metacognition, or one’s awareness of one’s own thought processes. Proficient readers use metacognition when they do not understand a text and make adaptations so they can comprehend it. Walczyk and Griffith-Ross (2007) found that readers can use metacognition to recognize when they do not comprehend and then adapt by using reading strategies such as slowing down, reading out loud, rereading, or sounding out difficult words.
Some readers use metacognition to improve their use of reading strategies until the strategies become automatic. For example, readers can learn the skill of making inferences and use this skill repeatedly until they automatically infer meaning while reading a passage. Proficient readers use metacognition as they read and analyze text (Samuels, 2006).

Samuels’ (2006) final component of the reading process is attention. This component is an outcome of the previous three components: decoding, comprehension, and metacognition. Beginning readers use excessive attention to decode words and have insufficient attention remaining for comprehending or thinking about reading strategies they could use. As readers become more proficient, they decode words automatically and devote more attention to comprehension and metacognition. According to the automaticity theory, readers are not proficient until they can apply their attention to decode, comprehend, and use metacognition at the same time (Samuels, 2006).

Oral Reading Fluency

Originally, LaBerge and Samuels (1974) used the automaticity theory to explain why low reading fluency rates affect reading proficiency. The degree of expression can also indicate students’ understanding of the passage (Samuels, 2006). Most researchers studying oral reading fluency and reading proficiency agree that oral reading fluency and reading proficiency are related (Daane et al., 2005; Deeney, 2010; Miller & Schwanenflugel, 2006). Researchers disagree, however, on how fluency is defined and on the validity of the assessments used to measure it.
To measure oral reading fluency, researchers have developed assessments such as AIMSweb (Edformation, 2004), the DIBELS (Good & Kaminski, 2002a), the Reading Fluency Monitor (Read Naturally, 2002), and the Texas Primary Reading Inventory (University of Houston, 1999). In particular, Reading First schools have widely used the DIBELS with more than 1,800,000 children (Allington, 2009; Baker et al., 2008; Glenn, 2007; Manzo, 2005).

Some researchers have claimed that political reasons may have motivated the widespread use of the DIBELS (Allington, 2009; Goodman, 2006; Manzo, 2005; Pearson, 2006; Pressley, Hilden, & Shankland, 2005; Riedel, 2007). One of the developers of the DIBELS, Good of the University of Oregon, served on national committees that developed Reading First. Critics (e.g., Glenn, 2007) have alleged that Good personally benefitted when applications for Reading First grants were denied and schools then felt pressured by federal officials and consultants to include the DIBELS in their grant applications (Manzo, 2005). However, despite such claims, researchers recognize the DIBELS as a respected way of measuring oral reading fluency (Baker et al., 2008; Riedel, 2007).

Definition of Oral Reading Fluency

Many researchers have focused on the definition of fluency used by the developers of the DIBELS, although definitions vary (Hudson et al., 2005; Samuels, 2006). A fluent reader can read text with speed, accuracy, and proper expression (NICHHD, 2000). Worthy and Broaddus (2002) defined fluency as “not only rate, accuracy, and automaticity, but also of phrasing, smoothness, and expressiveness” (p. 334). Effective readers do more than just read the words; they understand what they read (Marcell, 2007). Hudson et al. included accuracy, rate, and prosody in their definition of fluency.
Samuels (2007) and others (Allington, 2009; Miller & Schwanenflugel, 2008; Rasinski, 2006) contended that educators should also consider prosody, or the expression with which students read a passage. Fuchs et al. (2001) found that prosody was difficult to measure, so they chose to focus on rate and accuracy. The developers of the DIBELS defined fluency as rate and accuracy (Good & Kaminski, 2002a).

Various researchers have asserted that the definition of fluency should include comprehension (Kuhn, 2005; Marcell, 2007; Pikulski & Chard, 2005; Samuels, 2007). Pikulski and Chard (2005) defined fluency as follows:

Reading fluency refers to efficient, effective word-recognition skills that permit a reader to construct the meaning of text. Fluency is manifested in accurate, rapid, expressive oral reading and is applied during, and makes possible, silent reading comprehension. (p. 510)

Samuels (2006) not only agreed that the definition of fluency should include comprehension but also stated that measures of fluency should assess reading and comprehension at the same time. Samuels noted that beginning readers focus first on decoding, and once they are able to decode automatically, they focus on comprehension. Samuels emphasized that the automaticity theory, which he and LaBerge developed (LaBerge & Samuels, 1974), requires students to decode words automatically so that they can comprehend. He pointed out that educators who assess fluency first and then assess comprehension later or with a different assessment miss the point of automaticity.

Deeney (2010) included endurance in her definition of fluency. She agreed that the definition of fluency consists of four components: accuracy, rate or speed, prosody, and comprehension.
However, in her work with students she saw the need also to consider endurance. In her opinion, 1-minute fluency probes provide useful information for identifying which students are struggling. However, Deeney pointed out that these probes did not provide adequate information for determining why students struggle and what can be done to address their academic needs. She agreed with Pikulski and Chard (2005), who called for a deeper view of fluency. Deeney believed that such a deeper view includes rate, accuracy, prosody, comprehension, and endurance.

The developers of the DIBELS (Kame’enui & Simmons, 2001) and others (Francis et al., 2005; Harn, Stoolmiller, & Chard, 2008; Hasbrouck & Tindal, 2006; Jenkins et al., 2007; Katzir et al., 2006) agreed with the definition of Roehrig et al. (2008) of oral reading fluency as “accuracy and rate in connected text, or correct words per minute” (p. 345). Reading is a complex skill (Fuchs et al., 2001) that includes various components. Kame’enui and Simmons discussed the simplicity and complexity of oral reading fluency. They cited Watson (1968), the codiscoverer of the structure of DNA, who stated, “The idea is so simple it had to be true” (as cited in Kame’enui & Simmons, p. 203). Like DNA, fluency is a complex skill with many components, including phonemic awareness, the alphabetic principle, word reading, expression, and comprehension. Fluency is also easily recognized. For example, a person listening to struggling readers decoding words sound by sound can easily recognize that they are not reading fluently.

Kame’enui and Simmons (2001) emphasized that even though educators have debated the exact definition of oral reading fluency for decades, the definition is not the major point. Rather, as LaBerge and Samuels (1974) recognized when they developed the automaticity theory, reading proficiency is a complex skill with many different aspects.
Although LaBerge and Samuels focused only on rate and accuracy and the subskills of letter and word recognition in 1974, they recognized that other aspects are involved. When educators use only rate and accuracy to define fluency, they are measuring only one subskill of fluency. Different assessments can be used to measure other subskills of reading proficiency. Skills such as phonemic awareness, phonics, and word reading are necessary components of fluency that enable students to read passages expressively and with understanding. However, by defining fluency as accuracy and rate, researchers gain a measurable, workable definition to analyze the progress of beginning readers toward the goal of adequately comprehending what they read.

The National Reading Panel (NICHHD, 2000) identified fluency as one of the main components of early reading instruction. Hasbrouck and Tindal (2006) analyzed thousands of students’ oral reading fluency rates in order to develop norms for Grades 1 through 8. Educators can listen to a student read for as little as 1 minute and compare the student’s fluency rate with those of thousands of other students at the same grade level at the beginning, middle, or end of the school year. At the conclusion of their research, Hasbrouck and Tindal recognized the complexity of reading and recommended that educators consider fluency in the context of the other components necessary for mastery.

In the present study, I defined oral reading fluency as the accuracy and rate at which students read a grade-level text (Francis et al., 2008; Harn et al., 2008; Hasbrouck & Tindal, 2006; Jenkins et al., 2007; Katzir et al., 2006).
This definition provided a specific way to measure oral reading fluency so that I could determine if a relationship existed between oral reading fluency and reading proficiency. Although adopting the limited definition of accuracy and rate (Roehrig et al., 2008), I also acknowledged that prosody plays an important role in reading and fluency. However, because prosody is difficult to measure (Fuchs et al., 2001; Schwanenflugel et al., 2006), I chose to use the more measurable definition of oral reading fluency. I also acknowledged that oral reading fluency is more than reading quickly. Proficient readers are also able to comprehend what they read, a premise reflected in the research question of whether a relationship exists between oral reading fluency and reading proficiency, as measured by Grade 3 students’ DIBELS ORF rates and the Grade 3 Reading TAKS.

The Role of Oral Reading Fluency in the Context of Learning to Read

As readers begin to read, fluency has a significant role (Harn et al., 2008). Fluency for educators may be compared to temperature readings for physicians (Hasbrouck & Tindal, 2006). A physician considers a patient’s temperature, and if it is above normal, the physician looks for possible causes and then determines an appropriate treatment. In a similar way, educators examine oral reading fluency. If a student’s oral reading fluency rate is below average, educators look for possible causes and then determine appropriate interventions (Hasbrouck & Tindal).

Ehri (2005) studied the process by which students learn to read words.
She identified five phases: (a) the prealphabetic phase, (b) the partial-alphabetic phase, (c) the full-alphabetic phase, (d) the consolidated-alphabetic phase, and (e) the automatic-alphabetic phase. These phases describe the process that beginning readers engage in as they develop the skill of reading.

Phonemic awareness is one of the first skills beginning readers must learn. Ehri (2005) discussed phonemic awareness in her first two phases: the prealphabetic and partial-alphabetic phases. Beginning readers must learn to recognize that the words they use in their spoken vocabulary are constructed of individual sounds (Ehri & McCormick, 1998). Katzir et al. (2006) demonstrated the role of phonemic awareness in fluency. They concluded that readers need phonemic awareness the most when decoding words they have not seen before. After readers have developed a certain level of fluency, the brain focuses less on phonemic awareness and more on other components of fluency, such as the speed at which readers are able to recognize letter patterns or word units. Researchers have also found phonemic awareness in kindergarten to be a predictor of oral reading fluency in later years (Katzir et al.).

As students learn phonemes, they begin to learn phonics, learning to associate the phonemes with letters or graphemes (Ehri, 2005; Ehri & McCormick, 1998). When beginning readers focus on phonemic awareness, they deal only with the sounds of language. When they begin to understand that letters represent sounds, they use phonics. In Ehri’s second phase, the partial-alphabetic phase, children use the first and last letters when identifying words. Children then learn to master the ability to sound out words. Approximately halfway through first grade, most readers enter Ehri’s third phase, the full-alphabetic phase. During the first two phases, readers laboriously sound out words.
As they move into the full-alphabetic phase, they begin to recognize that certain letters work together to make certain sounds. Recognizing more of these letter patterns, students are able to read faster. Over time, students learn to recognize more letter patterns and words, allowing them to read more words correctly each minute (Ehri & McCormick).

Researchers have established a relationship between knowledge of phonetic elements and fluency (Chard et al., 2008; Eldredge, 2005; Harn et al., 2008). Chard et al. documented that students who had demonstrated mastery of the alphabetic principle by the end of Grade 1 were more fluent readers than their struggling peers at the end of Grades 2 and 3. The readers who struggled with the alphabetic principle in Grade 1 did not progress at the same rate as their peers who had mastered the alphabetic principle. Chard et al. emphasized that it is critical for teachers to ensure that students have a good grasp of phonetic knowledge by the end of Grade 1 because phonetic knowledge serves as a predictor of how fluently students will read in later years. Mastery at the full-alphabetic phase is essential before moving to the next two stages, the consolidated-alphabetic and the automatic-alphabetic phases.

Usually during Grade 2, students progress to Ehri’s (Ehri & McCormick, 1998) fourth phase, the consolidated-alphabetic phase. Readers begin to recognize more combinations of letters within words (Harn et al., 2008). They begin to read words syllable by syllable as well as to recognize prefixes and suffixes in words. This ability to identify groups of letters helps to facilitate fluent reading as readers recognize more words by sight (Ehri, 2005). However, readers may not completely develop the concepts of the full-alphabetic phase until Grade 8 (Ehri & McCormick).
In fact, some students learn to read and progress well until approximately Grade 4, when they begin to read words with four or more syllables. As Ehri pointed out, educators may need to provide further instruction to help these students see the syllabic units in larger words.

Fluency is the final phase, the automatic-alphabetic phase, in Ehri’s (Ehri & McCormick, 1998) developmental sequence to early reading proficiency. During this phase, students are able to read most words quickly and efficiently. When they encounter a word they do not know, they use one or more of several methods they have developed to ascertain meaning.

Speece and Ritchey (2005) established that oral reading fluency predicts reading comprehension. In their study with students in Grade 1 and Grade 2, a significant gap was found between the oral reading fluency rates of students at risk for reading difficulty and their peers who were not at risk. The researchers concluded that growth in oral reading fluency in Grade 1 accounted for the most unique variance in students’ Grade 2 gains in reading mastery and state assessment achievement.

Fluency Instruction

Oral reading fluency has been correlated with overall reading competence (Fuchs et al., 2001), and studies have confirmed several strategies to improve reading fluency. When students repeatedly read a passage, their fluency increases (Begeny, Daly, & Valleley, 2006; Hiebert, 2005; Martens et al., 2007; Therrien, Gormley, & Kubina, 2006; Therrien & Hughes, 2008).
The repeated reading strategies can be implemented in a variety of ways, including readers’ theater, in which students repeatedly read a script as they prepare to perform the reading for an audience (Corcoran & Davis, 2005; Rasinski, 2006). Although studies have found repeated readings effective, Kuhn (2005) found that wide reading is just as effective, and Morgan and Sideridis (2006) found motivational strategies more effective than repeated readings. In addition, other strategies have improved fluency. Nes Ferrera (2005) found that pairing struggling readers with more experienced readers helped the struggling readers to read more fluently. Rasinski and Stevenson (2005) found that parents were an effective resource when they worked daily at home on reading with their at-risk children.

Reading Proficiency

Reading proficiency involves more than the mechanical reading of a passage. Based on their review of the literature, the National Reading Panel (NICHHD, 2000) captured the essence of reading by identifying its five most important components: phonemic awareness, phonics, fluency, vocabulary, and reading comprehension.

Reading Proficiency and Automaticity Theory

Several researchers (Baker et al., 2008; Kuhn, 2005; Morgan & Sideridis, 2006; Therrien & Hughes, 2008) have confirmed a relationship between reading proficiency and automaticity. Baker et al. found that the DIBELS ORF rates of 4,696 Grade 3 students in Oregon significantly predicted their performance on the state-mandated assessment. Students who read fluently also comprehended well enough to attain scores of proficient on the state assessment.

Kuhn (2005) also confirmed the automaticity theory in a study of 24 Grade 2 students in a southeastern United States city who were assigned to four groups.
The first group participated in the repeated reading condition, in which students read the same story repeatedly over a 3-day period using strategies such as modeling, repetition, choral reading, and pair reading. Over the 6-week period, this procedure was followed with six books. The second group participated in the nonrepetitive reading condition, in which students read each book one time. The students read the same six stories as the ones used in the repeated reading condition, with an additional 12, so that a new story was available at each session. The third group participated in the listening-only condition, in which the researcher expressively read the books aloud to the students. Over the 6-week period, these students listened to the same 18 stories that the students in the nonrepetitive group read. The fourth group was the control group, in which students received no interventions outside of the normal classroom instruction (Kuhn, 2005).

Kuhn (2005) found that the repeated reading and nonrepetitive reading interventions were more effective in helping students decode words than listening only or receiving no intervention. Thus, Kuhn’s work confirmed the automaticity theory. The students in the repeated reading and nonrepetitive reading groups automaticized their knowledge of words and sounds better than the students in the listening-only and control groups. Students in the first two groups demonstrated greater gains in their ability to read both real and nonsense words and in their oral reading fluency rates.

Therrien and Hughes (2008) studied the effects of repeated reading and question generation on students’ reading fluency and comprehension.
During a 2-week period, students who read at Grade 2 or 3 instructional levels were randomly divided into two groups. One group received the repeated reading intervention: students were asked to read a passage repeatedly until they reached the desired fluency level. On average, students reached the fluency goal after 2.42 readings. The other group received the question-generation intervention: students read the passage once and received questions to cue them to better comprehend the narrative passage, such as the following: (a) Who is the main character?, (b) Where and when did the story take place?, and (c) What did the main character do? In both groups, after the intervention was completed, tutors asked both factual and inferential questions.

Results documented that repeated reading improves fluency: students in the repeated reading group read 22.5 more correct words per minute than students in the question-generation group, who read the passage only one time. Additionally, the students in the repeated reading group correctly answered more factual questions than the students in the question-generation group. There was no significant difference between the two groups when they answered inferential questions (Therrien & Hughes, 2008).

Therrien and Hughes (2008) concluded that repeated reading improved both fluency and comprehension. However, they recommended that additional research be conducted to determine the effects of text difficulty. Oral reading fluency rates are important to consider when students are reading passages in which they cannot read a significant percentage of the words. For example, in this study the researchers considered instructional level to be one at which students correctly read 85% to 95% of the words.
Therefore, the students in the question-generation group may not have been able to understand 5% to 15% of the words in the text. When students do not know 5% to 15% of the words in a passage, their comprehension can be affected. The situation can be further compounded when students read at their frustration level, at which they correctly read less than 85% of the words in the passage.

Therrien and Hughes (2008) recommended that other research studies focus on how comprehension is affected by readers’ levels of difficulty. The researchers also recommended that studies with longer intervention times be conducted to indicate whether more than 2 weeks of intervention would enable the students in the question-generation group to use cued questions to answer the inferential questions. This study demonstrated that repeated reading does improve both fluency and comprehension.

In contrast to these and other studies that found repeated reading to be an effective intervention (Begeny et al., 2006; Hiebert, 2005; Hudson et al., 2005; Rasinski, 2006; Therrien et al., 2006; Therrien & Kubina, 2007), Morgan and Sideridis (2006) found that repeated reading was not as effective as other strategies. They conducted a meta-analysis of 30 studies that used single-subject research designs to determine “the effectiveness of different types of interventions on fluency for students with or at risk for learning disabilities” (p. 200). The researchers categorized the interventions by the following strategies: (a) keywords and previewing, (b) listening and repeated readings, (c) goal setting plus performance feedback, (d) contingent reinforcement, (e) goal setting plus feedback and reinforcement, (f) word recognition, and (g) tutoring.
According to Morgan and Sideridis’ (2006) findings, the most effective interventions were reinforcement, goal setting plus feedback, and goal setting plus feedback and reinforcement. When the researchers analyzed the students’ improvements over time, the goal-setting interventions showed significant growth, and the listening and repeated readings interventions did not. Although Morgan and Sideridis’ findings did not concur with other researchers’ findings regarding repeated readings, their results nevertheless substantiate the automaticity theory. They found that when students were motivated to set goals and were positively reinforced, they were able to improve their automaticity, as demonstrated by increases in their oral reading fluency rates.

Definition of Reading Proficiency

After the authorization of NCLB (2002), the U.S. Department of Education formed the NARAP (2006) to define reading and reading proficiency. Because NCLB mandated that all students be proficient in reading by 2013-2014, states and test developers needed a definition of reading proficiency to create appropriate measurement assessments. This task was not a simple one. The definition had to capture the essence of reading and encompass the developmental attributes of reading across grade levels (NARAP). Each of the 50 states defines reading proficiency at each grade level in terms of its curriculum expectations, as measured by the state’s reading assessment.

NARAP’s (2006) definition also had to include students with disabilities, who may access texts differently from nondisabled students. For example, blind students access texts through braille. Hearing-impaired students may not decode the words of a text in the same manner as nonhearing-impaired students.
Furthermore, students with learning disabilities such as dyslexia may require accommodations, including extra time, in order to read a passage proficiently. Thus, the national definition of reading proficiency had to allow for differences in states’ definitions, curriculum differences, students’ ages and grades, and students’ disabilities.

After NARAP (2006) convened a panel of experts and discussed working definitions with focus groups, NARAP formulated the following definition of reading proficiency:

Reading is the process of deriving meaning from text. For the majority of readers, this process involves decoding written text. Some individuals require adaptations such as braille or auditorization to support the decoding process. Understanding text is determined by the purposes for reading, the context, the nature of the text, and the readers’ strategies and knowledge. (Cline et al., 2006, para. 8)

States were able to use this definition as a guide to examine their curriculum expectations at each grade level and to design assessments to measure reading proficiency. Under NCLB (2002), all 50 states assess reading proficiency at Grades 3, 5, 8, and exit level. Consequently, 200 definitions of reading proficiency could exist (four grades times 50 states). Many states also assess reading proficiency in additional grades. In Texas, for example, assessments are given in eight grades, Grades 3 through 10 (TEA, 2009d).

Definitions vary among states.
TEA, for example, requires proficient readers in Grade 3 to have a basic understanding of reading, apply knowledge of literary elements, use a variety of strategies to analyze text, and apply critical-thinking skills to analyze the passage (TEA, 2004). Florida’s state assessment that measures reading proficiency contains four areas of assessment: (a) word phrases in context; (b) main idea, plot, and purpose; (c) comparison and cause/effect; and (d) reference and research (Florida Department of Education, 2005). In Colorado, the definition of reading proficiency includes requiring students to “locate, select, and make use of relevant information from a variety of media, reference, and technological sources” and to “read and recognize literature as a record of human experience” (Colorado Department of Education, 2010, para. 7). Even though NARAP (2006) provided states with a general definition of reading proficiency, as these examples show, each state assesses reading proficiency with its own definition, based on its curriculum standards.

As the definition of reading proficiency in Texas shows, reading proficiency is a complex skill. Proficient readers, as noted earlier, must attach meaning to the words they read to understand what the author is communicating (NARAP, 2006). Proficient readers not only read the words on a page but also use strategies to read between the lines and beyond the lines. Thus, a review of literature relating to vocabulary and comprehension is warranted.

Vocabulary and Matthew Effects

Vocabulary is one of the National Reading Panel’s five essential components for effective reading programs (NICHHD, 2000). Vocabulary is composed of the words an individual knows (Rosenthal & Ehri, 2008). People may identify words by the way they sound, the way they are used in context, or the way they are spelled. New words are introduced through words that are either heard or read.
Thus, readers can read more proficiently when they decode unknown words and attempt to determine their meaning (NICHHD, 2000). Ouellette (2006) found that readers are more likely to decode an unknown word if it exists in their spoken vocabulary. When readers know 90% to 95% of the words in a text, they are able to figure out the meaning of the remaining unknown words (Hirsch, 2003).

Struggling readers have difficulty decoding words. Because reading is difficult for them, they become frustrated (Ehri & McCormick, 1998). They must work hard to read and often find numerous ways to avoid reading. Consequently, they do not want to read widely, and their vocabulary does not increase. Conversely, proficient readers have a larger vocabulary than struggling readers because they have read more. Proficient readers are able to use their vocabulary to decode and determine the meaning of even more unknown words. The more they read, the more they know. Stanovich (1998) referred to this gap between proficient readers and their struggling peers as Matthew effects, a reference to the Biblical passage, Matthew 25:29, that refers to the rich becoming richer and the poor becoming poorer (p. 10). For struggling readers, the gap widens as they progress through the grades (Morgan et al., 2008; Stanovich, 1998).

Several studies support Matthew effects. Katz, Stone, Carlisle, Corey, and Zeng (2008) studied the difference in growth of reading skills between Grade 1 proficient readers and struggling readers. The researchers reported that not only did the struggling readers have lower oral reading fluency rates than the proficient readers, but they also improved at a slower rate.
A study with students in kindergarten to Grade 3 in Oregon and Texas documented the presence of Matthew effects for students in all four grades (Chard et al., 2008). Rosenthal and Ehri (2008) studied the acquisition of new vocabulary with students in Grades 2 and 5. For students at both grade levels, the researchers documented the gap between struggling readers and proficient readers and its effect on students’ learning of new words.

Consequently, struggling readers’ vocabulary is limited when compared with that of their proficient peers (Rosenthal & Ehri, 2008). Struggling readers’ limited vocabulary hinders them from learning as many new words as their proficient fellow students. Struggling readers who have a limited vocabulary have difficulty understanding texts that they read, decoding and learning new words, and developing skills that allow them to become proficient readers.

Comprehension

The final component for effective reading programs identified by the National Reading Panel is comprehension (NICHHD, 2000). Comprehension and oral reading fluency rates are related, and researchers continue to examine the relationship from various perspectives. These include questions such as the following: Are oral reading fluency rates and comprehension related only in the beginning stages of reading? Do all readers with low reading fluency rates struggle with comprehension? Are there readers with high oral reading fluency rates who are unable to comprehend? The following researchers have addressed these questions.

Fuchs et al. (2001) were among the first to establish oral reading fluency as an accurate and reliable measure that served as a predictor of general reading comprehension.
Other researchers (Baker et al., 2008; Chard et al., 2008; Katz et al., 2008; Simmons et al., 2008) have since correlated oral reading fluency rates and comprehension at different grade levels and on a variety of assessments. As beginning readers progress through the various phases of learning to read (Ehri, 2005), teachers design instructional strategies to meet their needs and help them improve their reading skills. By the middle of first grade, a connection is established between how many words a minute students read and how much they comprehend (Hosp & Fuchs, 2005). For students in Grades 1 through 4, researchers (Hosp & Fuchs, 2005; Simmons et al., 2008) have found that oral reading fluency correlated with reading performance on the Woodcock Reading Mastery Test-Revised (Woodcock, 1973). For these same grades, other researchers have studied the relationship between oral reading fluency and comprehension (Daane et al., 2005). Baker et al. (2008) and Chard et al. (2008) found a relationship between oral reading fluency rates and performance on the SAT-10. The results of Pressley et al. (2005) confirmed a relationship between oral reading fluency and reading performance on the TerraNova. Speece and Ritchey (2005) also demonstrated the relationship between oral reading fluency and performance on the Comprehensive Test of Phonological Processing (CTOPP) and the Test of Word Reading Efficiency (TOWRE).

Researchers have also studied the relationship between oral reading fluency and comprehension in older students. With students above Grade 4, Therrien and Hughes (2008) as well as Therrien and Kubina (2007) found that oral reading fluency and comprehension were significantly related. In work with middle and junior high school students, Fuchs et al. (2001) established a correlation between oral reading fluency and reading proficiency as measured by the Stanford Achievement Test.
However, for students at various grade levels, additional research should be conducted to examine whether a statistically significant relationship exists between oral reading fluency and reading proficiency as defined by specific state reading assessments, such as the TAKS. The current study filled this gap for Grade 3 students in Texas.

Older struggling readers with low oral reading fluency rates can learn to use certain learning strategies to improve their comprehension (Walczyk & Griffith-Ross, 2007). Walczyk and Griffith-Ross worked with students in Grades 3, 5, and 7 to determine if struggling readers at these grade levels could develop compensatory strategies to help them read text. The researchers found that struggling readers sometimes slowed down in order to facilitate their comprehension. For example, to compensate when decoding unknown words, struggling readers would sound the words out or simply skip them. When the students realized they were not comprehending, they might pause, look back, or reread the text.

In addition, factors other than familiarity with or understanding of words can affect comprehension. These factors include the degree of readers’ motivation or feelings of time pressure. Walczyk and Griffith-Ross (2007) found that students who had low motivation read slowly, but those who were interested in the text could comprehend by using compensation strategies. Stage of maturation also seemed to play a role in the effect of time pressure on comprehension. Students in Grade 3 comprehended less when under time constraints than students in Grade 7. However, students in Grade 5 performed at generally the same level with and without time constraints.
Walczyk and Griffith-Ross (2007) conjectured that the readers’ engagement made the difference. The students in Grade 3 seemed to be engaged in the text as they tried to decode the words, and time pressure negatively affected their comprehension. In contrast, by Grade 7 many of the students had become less engaged with routine reading assignments, but their engagement increased when they were timed. The students in Grade 5 appeared to be in transition between the two modes of reading text. Thus, Walczyk and Griffith-Ross demonstrated that students with low fluency rates can comprehend text by using metacognitive strategies to compensate, such as sounding out words, and by engaging with greater motivation when experiencing time pressure.

Researchers indicate that oral reading fluency rates predict comprehension (Chard et al., 2008; Daane et al., 2005) not because readers can read a passage quickly but because fluent readers are able to devote mental energy to comprehending rather than to struggling with decoding (Fuchs et al., 2001; Samuels, 2006). Kuhn et al. (2005) indicated that when the readers in their study focused on how fast they read the passage, their comprehension was not as effective as that of readers who focused on the passage itself. Samuels (2007) reiterated that when readers focus too intensely on how fast they are reading, their ability to comprehend suffers.

Fluency is only one component of reading (NICHHD, 2000). When Hasbrouck and Tindal (2006) developed the oral reading fluency norms, they observed that once readers reached the 50th percentile, teachers should not encourage them to read faster. The researchers recommended that teachers provide intervention only for those students who read 10 or more words per minute below the 50th percentile. This advice was based on the principle that the goal of fluency intervention is for students to read fluently enough to focus on comprehension, not to race to read the most words in a minute.
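As a simple illustration of this screening heuristic, the sketch below flags students whose rate falls 10 or more words correct per minute below a 50th-percentile norm. The function name and the norm value used in the example are hypothetical placeholders, not the published Hasbrouck and Tindal (2006) figures.

    # Illustrative sketch only: the benchmark value below is a hypothetical
    # placeholder, not a published Hasbrouck and Tindal (2006) norm.
    def needs_fluency_intervention(orf_rate: int, percentile_50_norm: int) -> bool:
        """Flag students reading 10 or more words correct per minute below
        the 50th-percentile norm for their grade and time of year."""
        return orf_rate <= percentile_50_norm - 10

    hypothetical_midyear_norm = 90  # placeholder words correct per minute
    print(needs_fluency_intervention(75, hypothetical_midyear_norm))  # True
    print(needs_fluency_intervention(88, hypothetical_midyear_norm))  # False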
Some readers read fluently but have trouble comprehending (Harrison, 2008; Samuels, 2006). If readers are not actively engaged in the reading process, they can mechanically read a passage but not understand what the author is saying (Schnick & Knickelbine, 2000). Readers who are intrinsically motivated to read can understand what the author is saying because they are curious and want to read more. In contrast, readers who are not motivated to read are often not engaged and fail to make the necessary connections for comprehension. Marcell (2007) reported on a student who was able to read a passage fluently at 120 words correct per minute but could not retell what he had read. This student needed further instruction in comprehension strategies in order to become a proficient reader. Samuels (2006) pointed out that students whose second language is English may successfully decode passages although they have very little comprehension. His work with students in St. Paul, Minnesota, revealed that although about 20% could read fluently, they could not comprehend what they read. Furthermore, some high-comprehending readers have low fluency rates (Samuels, 2006).

The goal of reading is to comprehend or understand text (Applegate et al., 2009; NICHHD, 2000), and the studies in this section indicate a relationship between students’ oral reading fluency rates and comprehension at various stages of reading and grade levels.
    43 because fluency demandslittle conscious attention and enables them to focus on comprehension (Samuels, 2006). Relationship Between Oral Reading Fluency and Reading Proficiency in State- Mandated Assessments Since LaBerge and Samuels’ (1974) development of the automaticity theory, many studies (Daane et al., 2005; Kuhn, 2005; Riedel, 2007) have confirmed a relationship between how fluently students read and how proficiently they comprehend text at various grade levels. However, fewer studies have considered oral reading fluency rates with the DIBELS ORF and reading proficiency as measured by state-mandated assessments. Many school districts use the DIBELS ORF assessment to identify students at risk for failing (Cusumano, 2007; Fletcher, Francis, Morris, & Lyon, 2005; Gersten & Dimino, 2006; Jenkins et al., 2007; Katz et al., 2008; Stecker, Lembke, & Foegen, 2008; Wood, Hill, Meyer, & Flowers, 2005). Studies have been conducted for only seven states for positive correlations between students’ oral reading fluency rates, measured by the DIBELS ORF, and reading proficiency, using their states’ mandated reading assessments. These states are as follows: Arizona (Wilson, 2005), Colorado (Shaw & Shaw, 2002; Wood, 2006), Florida (Buck & Torgesen, 2003; Roehrig et al., 2008), North Carolina (Barger, 2003), Ohio (Vander Meer et al., 2005), Oregon (Baker et al., 2008), and Pennsylvania (Shapiro et al., 2008).
    44 Studies of Grade3 Only Researchers have conducted two studies with Grade 3 students only. In Colorado, Shaw and Shaw (2002) conducted research using 52 Grade 3 students from a Colorado elementary school to determine if the DIBELS ORF benchmarks were a predictor of success on the Colorado State Assessment Program. The researchers found the DIBELS ORF to be strongly correlated, .80, the highest correlation of the studies reviewed here. For Florida Grade 3 students, Buck and Torgesen (2003) compared the DIBELS ORF rates of 1,102 students in 13 elementary schools to their performance on the Florida Comprehension Assessment Test. The researchers found a significant correlation, .70. They thus concluded that for this sample the DIBELS ORF rate was a predictor of success on the Florida Comprehension Assessment Test. Studies of Reading First Schools For studies in three states, the researchers (Baker et al., 2008; Roehrig et al., 2008; Wilson, 2005) used the large databases from Reading First schools to determine if DIBELS ORF correlated with state-mandated assessments. To identify and monitor the progress of struggling readers, over 90% of the Reading First schools used DIBELS oral reading fluency (Glenn, 2007; U.S. Department of Education, 2010). With an Arizona Reading First school, Wilson conducted a study with 241 Grade 3 students. He found a moderately large correlation (.74) between the DIBELS ORF and the Arizona Instrument to Measure Standards. Roehrig et al. conducted a similar study with Florida students, in which the DIBELS ORF rates of 35,207 Grade 3 students were compared to their performance on the Florida Comprehensive Assessment. Correlations were also
    45 moderately large, rangingfrom .70 to .71. In Oregon, Baker et al. used 34 randomly selected Oregon Reading First schools located in 16 different school districts. The researchers compared the DIBELS ORF rates of 4,696 Grade 3 students with their performance on the Oregon State Reading Assessment. The correlations ranged from 0.58 to 0.68, somewhat lower than the Arizona and Florida studies. Pressley et al. (2005) suggested that studies on the DIBELS should be conducted by researchers not associated with Reading First. In response, Barger (2003) compared the DIBELS ORF rates of 38 students to the North Carolina End of Grade 3 Test. Barger also found a moderately large correlation (.73), similar to those found by Wilson (2005) and Roehrig et al. (2008). These researchers documented the positive relationship between the DIBELS ORF and four different state-mandated reading assessments for Grade 3 students. Studies of Grade 3 and Other Grades However, Fuchs et al. (2001) suggested that the correlation between oral reading fluency and comprehension may be stronger in Grade 3 than in other grades, for which assessments measure higher levels of comprehension. Three groups of researchers (Shapiro et al., 2008; Vander Meer et al., 2005; Wood, 2006) studied this issue by using students in Grades 3, 4, and 5. For Colorado schools, Wood (2006) used 82 Grade 3 students, 101 Grade 4 students, and 98 Grade 5 students to determine if the DIBELS ORF consistently predicted performance on the Colorado Student Assessment Program across grade levels. Wood found significant correlations at all three levels: for Grade 3 at .70,
    46 Grade 4 at.67, and Grade 5 at .75. These values were similar to those found by Wilson (2005), Roehrig et al. (2008), and Barger (2003) for Reading First schools. For two grade levels with students in Ohio, Vander Meer et al. (2005) correlated the end-of-year Grade 3 DIBELS ORF with performance on the reading portion of the Ohio Proficiency Test. A total of 318 Grade 4 students from three elementary schools comprised the sample. Vander Meer et al. (2005) found a significant correlation, .65, between the end-of-year Grade 3 ORF scores and the Grade 4 Ohio Proficiency Test scores. Because this study considered the same students across two successive grades, the researchers concluded that oral reading fluency was an accurate predictor of performance on the reading portion of the Ohio Proficiency Test. With students in Pennsylvania across grade levels, Shapiro et al. (2008) compared the DIBELS ORF rates of 401 Grade 3 students, 394 Grade 4 students, and 205 Grade 5 students with their performance on the Pennsylvania System of School Assessment. The researchers found significant correlations for all three grades: Grade 3 at.67, Grade 4 at .64, and Grade 5 at.73. Again, these results were similar to those of the previous studies. The findings of Wood (2006) and Shapiro et al. support the use of the DIBELS ORF as a consistent predictor of performance on state-mandated reading assessments not only for Grade 3 but also across grade levels. Thus, for seven states of the 50 that require state-mandated assessments of reading proficiency, studies have found significant positive relationship between the DIBELS ORF and the state measures. However, because definitions of reading proficiency differ across states and grade levels, the need existed for researchers to
    47 determine if DIBELSORF correlates significantly with state-mandated assessments in other states and at other grade levels (Roehrig et al., 2008). The present study focused on Grade 3 students in Texas. No similar study has been conducted to determine if a significant relationship existed for Grade 3 students between DIBELS ORF rates (independent variable) and reading proficiency (dependent variable), as measured by the scores on the Grade 3 Reading TAKS. Limitations of State Studies In the studies reviewed above, the researchers all reported significant relationships between the DIBELS ORF in relation to Grade 3 students and reading proficiency as assessed on state-mandated assessments. However, the researchers also recognized limitations of their studies. One limitation was sample composition and size. For example, the Colorado sample (Wood, 2006) consisted of primarily White students. The North Carolina sample (Barger, 2003) had only 38 students. On the other hand, the Oregon sample (Baker et al., 2008) and one of the Florida samples (Roehrig et al., 2008) had large statewide samples from students in Reading First schools. One of the criteria for being a Reading First school was a high population of students from low socioeconomic backgrounds, and thus these samples were not representative of the general Grade 3 population. In all these studies, researchers recommended future research that would include wider cross-sections of students as well as non-Reading First schools for greater generalizability of results to other school settings. Another limitation of these studies was recognition of other variables that might affect the relationship between oral reading fluency and reading proficiency. In the
    48 Oregon study, Bakeret al. (2008) singled out diversity as a factor. The Reading First sample schools had large poverty rates and low reading achievement, and Baker et al. recommended additional research that focused on specific ethnic backgrounds as well as subpopulations, such as special education students. In Arizona, Wilson (2005) recommended future studies after Arizona implemented the 2005 Arizona’s Instrument to Measure Standards Dual Purpose Assessment (AIMS DPA) measurement scale and performance levels. Wilson posited that the relationships between scores would be different with the new standards. When Wilson published his results, it was not clear whether the expected performance levels would be higher or lower than the ones used in his study. For the Pennsylvania study, Shapiro et al. (2008) used data from the 4Sight assessments in addition to DIBELS. They used the 4Sight assessments to provide further information on students’ reading comprehension to predict reading proficiency. The 4Sight assessments are short tests similar in format and other aspects to state assessments and are used throughout the school year to produce overall scores that may predict students’ scores on state assessments. Many states use the 4Sight assessments, including California, Pennsylvania, Tennessee, and Texas (Success for All Foundation, 2010). In the Pennsylvania study (Shapiro et al., 2008), the 4Sight benchmark assessments were administered to students as a group rather than individually. Whole group administration enabled researchers to assess many students in approximately 1 hour. However, some reading skills, such as oral reading fluency, are best tested when
    49 administered individually soeducators can monitor the rate and accuracy with which a student reads. Shapiro et al. (2008) recommended future studies in other states to determine if their findings could be replicated. Additional studies over time and in various locations would help generalize the findings to larger populations. Shapiro et al. also noted that the schools in their samples represented varying demographic characteristics but were similar in their levels of poverty. The researchers recommended additional studies in other schools with different poverty levels to eliminate poverty as a factor and for greater generalization. As a result of these limitations and recommendations, researchers indicate the need for additional studies on the DIBELS ORF and state-mandated assessments, in Grade 3 and other grades. Roehrig et al. (2008) pointed out this need especially in other states. I responded to this need with this study to determine if a statistically significant relationship existed between the DIBELS ORF and the TAKS for Grade 3 students in Texas. Summary In section 2, literature regarding the automaticity theory, oral reading fluency, and reading proficiency was reviewed. LaBerge and Samuels (1974) developed the automaticity theory to explain why readers who struggle with low oral reading fluency rates also struggle with comprehension. La Berge and Samuels and other researchers that followed (Baker et al., 2008; Kuhn, 2005; Morgan & Sideridis, 2006) demonstrated that readers have a limited amount of attention to devote to reading. When too much attention
    50 is focused ondecoding words, readers have insufficient attention to allocate to other aspects of reading, such as comprehending. Next, I reviewed oral reading fluency, beginning with various definitions of oral reading fluency (Allington, 2009; Deeney, 2010; Hudson et al., 2005; Kame’enui & Simmons, 2001; Marcell, 2007; Roehrig et al., 2008; Worthy & Broaddus, 2002). Researchers have pointed out the complexity of oral reading fluency and demonstrated norms (Hasbrouck & Tindal, 2006; Kame’enui & Simmons, 2001). The role of oral reading fluency was then reviewed in the context of learning to read (Ehri, 2005; Ehri & McCormick, 1998). Finally, I reviewed literature relating to reading proficiency. In NCLB (2002), legislators required the development of a definition of reading proficiency that is flexible enough to be measured by state reading assessments in all 50 states (NARAP, 2006). Vocabulary (Ouellette, 2006) and comprehension (Samuels, 2006) are important aspects of reading proficiency that generally cannot take place until students reach a certain level of oral reading fluency. Once readers are able to fluently decode text, they can devote their attention to other aspects of reading, such as vocabulary and comprehension. Researchers in Arizona (Wilson, 2005), Colorado (Shaw & Shaw, 2002; Wood, 2006), Florida (Buck & Torgesen, 2003; Roehrig et al., 2008), North Carolina (Barger, 2003), Ohio (Vander Meer et al., 2005), Oregon (Baker et al., 2008), and Pennsylvania (Shapiro et al., 2008) have established a relationship between oral reading fluency rates and reading proficiency as measured on their state-mandated reading assessment.
    51 However, the specificdefinition of reading proficiency varies from state to state because the state-mandated reading assessments are based on the reading objectives defined in the state curricula (NARAP, 2006). It is therefore necessary for researchers in other states to determine if a relationship exists between oral reading fluency rates and their state-mandated assessment (Roehrig et al., 2008). This study investigated whether a statistically significant relationship (p < .05) existed between oral reading fluency rates as measured by the middle-of-Grade 3 DIBELS benchmark rates and the reading proficiency measured by the scale score of the Grade 3 Reading TAKS. In section 3, I focus on the methodology of this study. In section 4, I report the findings of the study, and in section 5, I discuss implications of the findings in comparison with previous studies, application to social change, and recommendations for action and further research.
Section 3: Research Design

Introduction

The purpose of this study was to determine whether a statistically significant relationship (p < .05) existed between Grade 3 students' oral reading fluency rates and their reading proficiency. In this section, I explain the research design of this study, followed by a description of the sample and population. Then, I discuss the reliability and validity of the two instruments used in this study and finally the procedures for data collection and analysis.

Research Design

I designed this quantitative study to determine if a statistically significant relationship (p < .05) existed between oral reading fluency and reading proficiency. Specifically, I used the middle-of-year DIBELS ORF rates of 155 students in a Texas school district as the independent variable to measure oral reading fluency rates, and the scale scores on the Grade 3 Reading TAKS as the dependent variable to measure reading proficiency. I did not conduct experimental research to determine, for example, if one group performed better than another group with regard to either oral reading fluency rates or reading proficiency, because the study purpose was to determine if a relationship existed between oral reading fluency and reading proficiency. Neither a qualitative study nor a mixed-methods study was appropriate because I focused on quantitative statistical data to fulfill the study purpose. These data were used to determine whether a statistically significant relationship (p < .05) existed between Grade 3 students' oral reading fluency and their reading proficiency on a state-mandated
    53 assessment. This purposewas in contrast to that of a qualitative study, which might ask students about their attitudes, feelings, or self-efficacy during performance on the examination. I considered several nonexperimental research methods, including point-biserial correlation, Spearman correlation, chi-square test for independence, and Pearson correlation. The point-biserial correlation is a type of formula researchers use to determine relationships between two variables when one variable can represent only one of two components (Gravetter & Wallnau, 2005). For example, in a study in which a student can either pass or fail an examination, researchers assign each of the options of the pass-fail dichotomous variable a number. A researcher might assign 1 if the student passes the examination and 0 if the student fails it. However, I did not use a point-biserial correlation because neither of the variables was dichotomous. Researchers also use Spearman correlations to determine relationships between two variables in which the variables are rank ordered (Gravetter & Wallnau, 2005). For example, this study could have used the students’ middle-of-year DIBELS ORF rates to rank the students from the fastest to the slowest readers. I could have also ranked students by the scale scores received on the Grade 3 Reading TAKS. I then would have used the Spearman correlation to determine the relationship between the students’ performance on both assessments. However, I chose not to use a Spearman correlation because students’ same scores could skew the data and render inaccurate results. Specifically, all the students who reached a scale score of exactly 2100 on the Grade 3 Reading TAKS would have
    54 received the sameranking. Because there are 36 questions on the Grade 3 Reading TAKS, I could have used 36 ranks. However, the oral reading fluency rates could then yield a greater number of ranks because the students could read anywhere between 0 to 150 or more words correct per minute. The Spearman correlation was additionally inappropriate because the results would yield a different number of ranks for each variable. I also considered using a chi-square test for goodness of fit that uses frequencies to determine if the frequencies in the sample fit the frequencies in the population (Gravetter & Wallnau, 2005). For example, in this study a chi-square could have been used to determine the percentage of Grade 3 students in each of three categories, that is, at-risk, some-risk, and low-risk readers, who passed and failed the TAKS. I would then calculate and compare the percentage of frequencies in both the sample and population. However, I did not include the categorization of students in the study purpose. I chose a Pearson correlation as the most appropriate nonexperimental statistical method for this study. Pearson correlations “measure and describe a relationship between two variables” (Gravetter & Wallnau, 2005, p. 412). In this study, I described and measured the relationship between the middle-of-year Grade 3 students’ DIBELS ORF rates and their scale scores on the Grade 3 Reading TAKS. If I had used the point-biserial method, results would have simply indicated whether students passed or failed the assessment. However, using the Pearson correlation, I obtained additional information that described the relationship between the two variables. For example, instead of indicating solely whether the students passed or failed, I could determine the range of
scores with greater precision, such as whether the students passed with a minimum scale score of 2100 on the Grade 3 Reading TAKS or whether they passed with a high score of 2400 or more. Although I could have obtained useful information from a chi-square analysis, the Pearson correlation provided the most accurate data analysis. A chi-square test is a nonparametric test, and the Pearson correlation is a parametric test. Researchers prefer parametric tests because they require statistical data on individual participants (Gravetter & Wallnau, 2005). However, some researchers use nonparametric tests in studies when detailed data on individual participants are not available. Because detailed data on individual participants were available for the present study, I chose the more reliable parametric test, the Pearson correlation.

Setting and Sample

Setting

This study took place in a West Texas town with a population of 6,821. According to the community's economic development corporation, the average annual income of the 2,946 households at the time of the study was $37,000. For reasons of confidentiality, the source for this information was not referenced in this study. Approximately 38% of the households had school-age children, and 160 students in the district were in Grade 3. All of these students attended the same elementary school.

Characteristics of the Sample

In Table 1, I show the characteristics of the sample and compare them to the population of Grade 3 students in Texas who took the Grade 3 Reading TAKS in March
2009. Of the students in Grade 3 in the district, 52% were male and 48% were female. In terms of ethnicity, 54% of the students were Hispanic, 38% were White, 6% were Black, and 1% was Native American. The district listed 56% of the students as economically disadvantaged, with 10% in special education.

Table 1

Characteristics of Grade 3 Students in Texas Overall and the Research Site

Characteristic                 Texas Overall (%)    Local District (%)
Male                           50                   52
Female                         50                   48
White                          37                   38
Hispanic                       44                   54
Black                          15                   6
Native American                <1                   1
Asian                          4                    0
Economically disadvantaged     56                   56
Special education              5                    10
Total participants             316,319              159

Note. From Texas Education Agency (TEA, 2009c, 2009d).

As shown in Table 1, the characteristics of the district population were similar to those of the state overall. Consequently, the sample of Grade 3 students from this school district was representative of the population of Grade 3 Texas students overall. In 2009,
    57 all 316,319 Texasstudents took the first administration of the Grade 3 Reading TAKS (TEA, 2009c). In the district at the research site, all 155 students took the Grade 3 Reading TAKS (TEA, 2009d). Sampling Method The sampling method for this study was a nonprobability convenience sample of Grade 3 students. Since all Grade 3 students attended the same elementary school, all these students comprised the study sample. Although random selection would have more closely ensured that participants represented the population (Creswell, 2009), a convenience sample, such as that for the present study, used an already formed sample. However, because the total number of students was 155 (Table 1), I chose the larger convenience sample rather than a randomly selected smaller sample to increase representativeness. Sample Size The sample was comprised of 155 students. I considered a random sample size of 50 but recognized concerns of sample variance in inferring the results of a sample to an entire population. When researchers use a sample of 20 or 50 to make inferences to a population of thousands, the sample variance is generally higher than with a sample of 100 or more (Gravetter & Wallnau, 2005). In addition, I conducted an a priori power analysis to determine a statistically acceptable sample size. For one-tailed (unidirectional) bivariate correlation analysis, the power indicated was .80, a power acceptable for research studies. The effect size indicated for correlation was medium, .30. The population effect size is the measure of
    58 the strength (effect)of the alternative hypothesis. The alpha level was set at .05, widely acceptable in educational research (Creswell, 2009). This value indicates a 5% probability of committing a Type I error, in which a researcher would reject the null hypothesis when it was actually true (Gravetter & Wallnau, 2005). I used the G*Power statistical program (Faul, Erdfelder, Lang & Buchner, 2007), and results showed a minimum required sample size of 64. The power analysis provided additional support for the nonprobability convenience sample of 155 students. Eligibility Criteria for Study Participants I set three criteria for students to participate in this study. First, students must have attended Grade 3 in the district during the 2008-2009 school year. Second, teachers would have screened these students with the middle-of-year DIBELS benchmark in January 2009. Third, students must have taken the Grade 3 TAKS Reading assessment in March 2009. Instrumentation and Materials I used two instruments in this study to determine if a statistically significant relationship (p < .05) existed between Grade 3 students’ oral reading fluency and reading proficiency. The first instrument was the DIBELS ORF, the independent variable, which teachers administered to students in the middle of the school year. The second instrument was the Grade 3 Reading TAKS, the dependent variable, which students took at the end of the school year.
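To illustrate the a priori power analysis described under Sample Size above, the following Python sketch approximates the minimum sample size for a one-tailed Pearson correlation with a medium effect size (.30), an alpha level of .05, and a power of .80. The study itself used the G*Power program (Faul et al., 2007), which reported a minimum sample size of 64; the Fisher z approximation in this sketch is not G*Power's exact method and yields a slightly larger estimate. The function name and default values are illustrative only.

# Approximate a priori power analysis for a one-tailed Pearson correlation.
# This sketch uses the Fisher z approximation rather than G*Power's exact
# computation, so its estimate (about 68) differs slightly from the reported 64.
import math
from statistics import NormalDist

def required_sample_size(r=0.30, alpha=0.05, power=0.80):
    """Approximate minimum n to detect a population correlation of r
    with a one-tailed test at the given alpha and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)     # one-tailed critical z
    z_beta = NormalDist().inv_cdf(power)          # z corresponding to the desired power
    fisher_z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z of the hypothesized correlation
    return math.ceil(((z_alpha + z_beta) / fisher_z) ** 2 + 3)

print(required_sample_size())  # approximately 68 with this approximation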
    59 DIBELS ORF Researchers haveidentified oral reading fluency as a predictor of reading proficiency (Fuchs et al., 2001; Hasbrouck & Tindal, 2006; Pikulski & Chard, 2005). Educators use oral reading fluency benchmarks to measure the number of correct words a student correctly reads in 1 minute in three passages. The original study for DIBELS was conducted in Lane County, Oregon, with a sample of 673 students in kindergarten through Grade 3 (DIBELS, 2002). For Grade 3, Good and Kaminski (2002a) used the Spache Readability Formula and determined the readability of the passages at 2.9, 3.0, and 3.1 grade levels, respectively. The developers selected these passages for Grade 3 so that the first one, “The Field Trip,” represented the easiest passages (2.9). The second one, “Keiko the Killer Whale,” represented passages of medium difficulty (3.0). The third one, “Getting Email,” represented passages of greatest difficulty (3.1). The developers of the DIBELS (University of Oregon Center in Teaching and Learning, 2008) established categories that educators can use to determine if students are at low risk, some risk, or at risk for poor language and reading outcomes. Students who read less than 67 correct words per minute on the middle-of-year ORF benchmark in Grade 3 are at risk for poor language and reading outcomes; students who read between 67 and 91 correct words per minute are at some risk; and students who read 92 or more words per minute are at low risk for poor language and reading outcomes. From their sample, the University of Oregon Center on Teaching and Learning determined that for number of correct words a student correctly reads in 1 minute, some students in the low-
    60 risk category aremore likely to pass reading outcomes, and conversely, students in the at- risk category are more likely to fail reading outcomes than pass. Barger (2003) found these categories to be accurate in correlating the end-of-year DIBELS ORF of 38 students to the North Carolina End-of-Grade Test. In his study, 100% of the students in the low-risk category passed the assessment, and 67% of the students in the at-risk category failed the assessment. Baker et al. (2008) also confirmed the findings of the University of Oregon Center on Teaching and Learning (2008) in a study of 9,602 Oregon students in Grades 1, 2, and 3. As discussed in section 2, other researchers found similar results (Buck & Torgesen, 2003; Shaw & Shaw, 2002; Vander Meer et al., 2005; Wilson, 2005). In addition to the number of correct words a student correctly reads in 1 minute, educators use another component of the DIBELS ORF to assess students on retelling what they read (Good & Kaminski, 2002b). The purpose of the retell fluency assessment is to verify that students understand what they read, as opposed to simply word calling. The developers of the DIBELS, Good and Kaminski, established that students should use at least 25% of the number of words read when they retell a passage. Students who read 100 correct words per minute should use at least 25 words when they retell a passage. Furthermore, Good and Kaminski specified that students who are progressing appropriately toward their reading goal must read at their determined goal and use at least 25% of the words read correctly in 1 minute when retelling a passage. Reliability and validity of the DIBELS ORF. The developers of the DIBELS ORF probes established their reliability and validity. Researchers use reliability to
    61 indicate whether anassessment consistently provides a score that researchers can trust (Creswell, 2009). When the assessment committee analyzed DIBELS (DIBELS, 2002), they found test-retest coefficients between .91 and .96 when the DIBELS ORF was compared to other comparable assessments (DIBELS). The 1-minute fluency probes provided reliable results. However, the developers recommended administering three to five 1-minute probes to improve reliability. Researchers use validity to indicate whether an assessment measures what it is intended to measure (Creswell, 2009). They use concurrent validity to establish whether the scores from one assessment compare to the scores from similar assessments (Creswell, 2008). Hasbrouck and Tindal (1992) developed national oral reading fluency norms by comparing the oral reading fluency rates of students throughout the country. The DIBELS Assessment Committee (2002) indicated that the Hasbrouck and Tindal’s (1992) research corresponded with the DIBELS’ benchmark goals. In 2006, Hasbrouck and Tindal updated the national norms of ORF and expanded them to include Grades 1 through 8. Their study included the DIBELS ORF probes along with the Texas Primary Reading Inventory (University of Houston, 1999); AIMSweb (Edformation, 2004); and the Reading Fluency Monitor (Read Naturally, 2002). Hasbrouck and Tindal found similar results to the developers of DIBELS (Good & Kaminski, 2002b). Examining the oral reading fluency rates of 17,383 students in the middle-of-year Grade 3, Hasbrouck and Tindal found that students in the bottom 25th percentile read less than 63 words correctly in a minute. The top half of the Grade 3 students in the middle of the year read more than 92 words correctly per minute.
    62 Jenkins et al.(2007) reviewed studies that used ORF screenings during the previous 10 years and concluded that the DIBELS ORF screenings were comparable to other fluency assessments. This conclusion applied especially to students scoring in the lowest category, at risk, and those scoring in the highest category, low risk. Jenkins et al. found that the DIBELS cut scores tended to overestimate students in the middle category, some risk. For comparison of results, they referred to Buck and Torgesen’s (2003) study conducted in Florida Reading First schools, in which students in the middle group were equally likely to perform satisfactorily or unsatisfactory on other assessments. A study conducted in Oregon and Texas (Harn et al., 2008) confirmed that the results of the DIBELS ORF rates provided comparable results. Harn et al. based their study on Ehri’s (2005) phase theory, explained in section 2. In accordance with the model, students in the prealphabetic phase progress from painstakingly sounding out every letter in a word to recognizing units of sound within words. Eventually, they progress to the more advanced consolidated alphabetic phase, in which they can fluently read passages. Jenkins et al. (2007) confirmed Ehri’s (2005) phase theory. The researchers measured entering first-grade students’ knowledge of letter sounds using DIBEL’s Nonsense Word Fluency assessment and compared it with their oral reading fluency rate at the end of the Grade 1 using the DIBELS’ oral reading fluency assessment. DIBELS ORF rates produced reliable results, confirming that first-grade students at the beginning of the year in the prealphabetic phase who were able to recognize letter sounds progressed to the consolidated alphabetic phase and were able to fluently read connected
    63 text by theend of Grade 1. Thus, these researchers, as well as others, have established the concurrent validity of DIBELS ORF (Begeny et al., 2006; Chard et al., 2008; Francis et al., 2005; Katz et al., 2008; Pressley et al., 2005; Riedel, 2007). Studies have also correlated the DIBELS ORF to reading proficiency on state- mandated assessments in Arizona (Wilson, 2005), Colorado (Shaw & Shaw, 2002; Wood, 2006), Florida (Buck & Torgesen, 2003; Roehrig et al., 2008), North Carolina (Barger, 2003), Ohio (Vander Meer et al., 2005), Oregon (Baker et al., 2008), and Pennsylvania (Shapiro et al., 2008). In addition, Pressley et al. (2005) found a correlational relationship between DIBELS ORF and the reading portion of the Terra Nova assessment to a greater degree than the Qualitative Reading Inventory (Leslie & Caldwell, 1995) or teachers’ predictions. Researchers (Baker et al., 2008; Chard et al., 2008) have also correlated the DIBELS ORF probes to the Stanford Achievement Test (Harcourt Brace Educational Measurement, 2002) and the Iowa Test of Basic Skills (Katz et al., 2008). The results of these studies further indicate the validity of the DIBELS ORF benchmarks. However, other researchers have questioned the validity of the DIBELS ORF. (Allington, 2009; Goodman, 2006; Pearson, 2006; Pressley et al., 2005; Samuels, 2006). Goodman criticized DIBELS ORF for not predicting reading proficiency. Pressley et al. and Riedel (2007) claimed that the DIBELS ORF mispredicts reading comprehension on other assessments. Concerned about bias, Pressley et al. called for researchers other than the developers of DIBELS or those associated with Reading First to perform research regarding the relationship of DIBELS ORF and reading proficiency.
    64 In a studywith 17,409 students, Roehrig et al. (2008) confirmed a significant relationship between DIBELS ORF and reading proficiency as measured by the Florida Comprehensive Assessment Test. However, the researchers recommended that the developers of DIBELS ORF change the cut points. Roehrig et al. determined that students reading less than 45 words correct per minute were at risk for unsatisfactory performance on other measures of language and reading outcomes. The category of some risk would consist of students reading between 46 and 75 words correct per minute. Those reading more than 76 words correct per minute would be in the low risk category. When Roehrig et al. evaluated the data using the cut scores, they found that 86% of the students in the low-risk category, 62% of students in the some-risk category, and 20% of the students in the at-risk category met grade level expectations at the end of the year. Currently, the developers of the DIBELS categorize students who read 77 or more words correct per minute at the beginning of Grade 3 as at low risk for not meeting grade level expectations in other measures of language and reading outcomes. The developers categorized students who read less than 53 words correct per minute as at high risk and students who read between 53 and 76 words correct per minute as at some risk (University of Oregon Center on Teaching and Learning, 2008). Based on these cut scores, the University of Oregon Center on Teaching and Learning found that 90% of the students in their low risk category, 34% of those in the some risk, and 3% of those in the at-risk category met the goals expected of them at the end of Grade 3. Roehrig et al. (2008) recommended “improved efficiency of classifications” (p. 353), and their educators screened students four times a year. The cut scores developed
    65 by the Universityof Oregon Center on Teaching and Learning (2008) used in the current study are based on studies that benchmarked students three times a year. Consequently, the Roehrig et al. middle-of-year benchmark goals are different from those discussed above in section 3. However, the beginning of year benchmark goals from the Roehrig et al. study and the DIBELS benchmark goals are comparable. Similar to Roehrig et al. (2008), Jenkins et al. (2007), after conducting a meta- analysis of universal screening studies, recommended a reexamination of the cut points. They pointed out that although many studies have found the DIBELS ORF valid, a change in cut points might improve the assessment. More precise cut points should allow educators to more accurately identify students who are at risk for not meeting grade level language arts and reading outcomes. Such improved information will allow educators to provide interventions only for the students who need it the most (Jenkins et al., Roehrig et al.). Administration of the DIBELS ORF. In the administration of the Grade 3 middle-of-year DIBELS ORF, each student read three passages (Good & Kaminski, 2002b). Trained teachers individually administered the middle-of-year DIBELS ORF benchmark, timing students as they read three passages: (a) "The Field Trip," (b) "Keiko the Killer Whale," and (c) "Getting Email" (Good & Kaminski). The passages were arranged in order from the easiest to the most difficult. The student read each passage for 1 minute while the teacher recorded substitutions and omissions as errors. In addition, if the student hesitated longer than 3 seconds, the teacher read the word and recorded it as
an error. Conversely, the teacher did not consider as errors words that the student self-corrected within 3 seconds. At the end of each passage, the teacher recorded the total number of words the student read correctly in 1 minute as the oral reading fluency rate. This number was determined by subtracting the number of errors from the total number of words the student read. Then the teacher asked the student to retell what was read and recorded the number of words the student used as the retell fluency. After the student read all three passages, the teacher examined the oral reading fluency rates of the three passages and recorded the median rate as the middle-of-year oral reading fluency rate.

Scoring of the DIBELS ORF. The developers of DIBELS (Good & Kaminski, 2002b) categorized oral reading fluency rates as at risk, some risk, or low risk. On the middle-of-year Grade 3 DIBELS ORF, students who read fewer than 67 words correct in 1 minute were considered at risk for poor language and reading outcomes. The developers recommended that teachers provide these students with intensive intervention for 60 minutes daily to target basic reading skills. Students who read between 67 and 91 correct words in 1 minute were considered at some risk for poor language and reading outcomes. The developers recommended that teachers provide these students with strategic intervention for 30 minutes daily to address their reading skills. Students who read 92 or more words correctly in 1 minute were in the highest category and considered at low risk for difficulties with language and reading outcomes (Good & Kaminski). After administration and scoring of the DIBELS by the teachers, as described above, student information was stored on a secured website.
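The scoring and categorization logic described above can be summarized in a short Python sketch. The passage counts, student values, and function names below are hypothetical and serve only to illustrate the procedure: words correct per minute equal words read minus errors, the benchmark rate is the middle (median) score of the three passages, the middle-of-year Grade 3 cut points of 67 and 92 words correct per minute determine the risk category, and the 25% retell guideline noted earlier is checked as well.

# Illustrative sketch of the middle-of-year Grade 3 DIBELS ORF scoring logic
# described above. Passage data and function names are hypothetical.
from statistics import median

def words_correct_per_minute(words_read, errors):
    """Words read correctly in 1 minute = total words read minus errors."""
    return words_read - errors

def risk_category(orf_rate):
    """Middle-of-year Grade 3 cut points: below 67 = at risk,
    67-91 = some risk, 92 or more = low risk."""
    if orf_rate < 67:
        return "at risk"
    if orf_rate < 92:
        return "some risk"
    return "low risk"

def retell_meets_guideline(orf_rate, retell_words):
    """Retell should use at least 25% of the words read correctly in 1 minute."""
    return retell_words >= 0.25 * orf_rate

# Hypothetical student: three 1-minute passages (words read, errors) and one retell count.
passages = [(95, 4), (102, 6), (88, 3)]
rates = [words_correct_per_minute(words, errors) for words, errors in passages]
benchmark_rate = median(rates)  # the middle score of the three passages
print(benchmark_rate, risk_category(benchmark_rate),
      retell_meets_guideline(benchmark_rate, retell_words=30))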
    67 The developers ofDIBELS (Good & Kaminski, 2002b) did not establish benchmark goals for the retell fluency portion. However, they explained that students should use at least 25% of the words read correctly in 1 minute when they retell what they read. For example, students who read 100 words correctly in 1 minute should use at least 25 words when they retell the passage they read. Although benchmark goals have not been established, the developers recommended that students who use less than 25% of the words they read to retell a passage are not considered developed readers (Good & Kaminski). TAKS The purpose of the TAKS is to determine which students are reading proficiently on their grade level. The TAKS was developed by teams of educational specialists with consultation from many educators (TEA & Pearson, 2008). Advisory committees comprised of teachers of each subject at each grade level worked with test developers and TEA staff to review and assess the TAKS at several points in the development process. The advisory committees were comprised of educators from throughout Texas and represented diverse ethnicities (TEA & Pearson). The first advisory committee convened to identify which student expectations in the state curriculum TAKS should be tested (TEA & Pearson, 2008). After testing, the specialists and TEA staff compiled an assessment of the outcome. Subsequently, a second advisory committee met to examine whether the wording of the passages, questions, and answer choices measured the assigned student expectations. Based on this committee’s
    68 report, test writersmade the recommended changes, and field testing followed this process. The team at TEA and Pearson then analyzed the field testing results, and another advisory committee met and evaluated the passages, questions, and answer choices to identify any biases (TEA & Pearson, 2008). This committee also recommended elimination of specific questions that appeared to reflect biases. In addition to these processes, members of TEA and Pearson frequently consulted with national testing experts during development of the TAKS. These detailed procedures resulted in an assessment instrument that reflected the state curriculum and measured specific student expectations. Reliability and validity of the TAKS. Researchers have established both the reliability and validity of the TAKS. The TEA and Pearson (2008) established reliability and verified the internal consistency of the TAKS with the Kuder-Richardson Formula 20. With perfect reliability given the value of 1.0 (Gravetter & Wallnau, 2005), reliabilities for dichotomously scored TAKS assessments ranged from .87 to .90, respectively. The TEA and Pearson (2008) also established concurrent validity of the TAKS. Researchers use concurrent validity to establish how an assessment compares with other similar assessments (Creswell, 2009). The TEA and Pearson conducted a grade correlation study to determine how scores on the TAKS compared with performance in respective grades or courses. The grade correlation study included all TAKS assessments
from Grade 3 to the Exit Level. TEA and Pearson reported that the outcome indicated high concurrent validity:

Results indicated that a high percentage of students who pass the TAKS tests also pass their related courses. Small percentages of students passed the TAKS tests but did not pass their related courses, passed their related courses but did not pass the TAKS tests, or failed to pass the TAKS test or their related courses. (p. 166)

Administration of the Grade 3 Reading TAKS. The Grade 3 Reading TAKS was a criterion-referenced assessment that measured four objectives:

1. The student will demonstrate a basic understanding of culturally diverse written texts.
2. The student will apply knowledge of literary elements to understand culturally diverse written texts.
3. The student will use a variety of strategies to analyze culturally diverse written texts.
4. The student will apply critical-thinking skills to analyze culturally diverse written texts. (TEA, 2004, p. 4)

To measure these objectives, students completed testing booklets containing three passages of between 500 and 700 words. The passages were accompanied by a total of 36 multiple-choice questions, with the number of questions for each passage varying between 11 and 13. The TAKS was an untimed assessment, and students took as much time as they needed during one school day to answer the questions. Students answered the questions directly in the test booklet.
School administrators trained Grade 3 teachers to ensure the fidelity of testing administration. The teachers administered the first Grade 3 Reading TAKS in March. Students who failed this round (scored under 2100; see scoring explanation below) were offered intensive intervention, developed by student success teams comprised of the students' parents, teachers, and administrators. Then the students were offered additional opportunities in April and June to retake the Grade 3 Reading TAKS. Students unable to score at least 2100 after three attempts were retained in Grade 3 or provided intensive intervention determined by the student support teams in Grade 4. After administration of the TAKS by teachers, as described above, school administrators collected students' test booklets and kept them in a secure location. The administrators then sent the booklets to the TEA staff in the headquarters in Austin, Texas (TEA, 2009f). Staff members scored the booklets and returned the results to the district. Educators who had access to the TAKS information signed oaths to maintain confidentiality; those who break this confidentiality are subject to consequences that can include loss of certification.

Scoring of the TAKS. The possible range of scores for the TAKS was from 1381 to 2630 (TEA, 2009b). Students were required to answer 24 questions correctly to obtain the minimum expectancy scale score of 2100, indicating they were reading proficiently at Grade 3 level. Students who answered 34 questions correctly received the commended score of 2400, indicating they had demonstrated high academic achievement and possessed a thorough understanding of the reading curriculum (TEA, 2006).
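The raw-score thresholds described above can be illustrated with a brief sketch. The actual raw-to-scale-score conversion is set by the TEA and is not reproduced here; only the two reported thresholds (24 of 36 items correct for the minimum expectancy scale score of 2100 and 34 correct for the commended score of 2400) are modeled, and the function name is hypothetical.

# Illustrative sketch of the Grade 3 Reading TAKS raw-score thresholds described
# above. Only the two reported cut points are modeled; the full conversion table
# belongs to the TEA and is not reproduced here.
TOTAL_ITEMS = 36
MET_STANDARD_RAW = 24   # corresponds to the minimum expectancy scale score of 2100
COMMENDED_RAW = 34      # corresponds to the commended scale score of 2400

def taks_reading_status(items_correct):
    if not 0 <= items_correct <= TOTAL_ITEMS:
        raise ValueError("items_correct must be between 0 and 36")
    if items_correct >= COMMENDED_RAW:
        return "commended performance (scale score of at least 2400)"
    if items_correct >= MET_STANDARD_RAW:
        return "met standard (scale score of at least 2100)"
    return "did not meet standard (scale score below 2100)"

print(taks_reading_status(26))  # met standard (scale score of at least 2100)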
    71 Data Collection andAnalysis The data were collected and analyzed to test the null and alternative hypotheses for this study. They were the following: H0: There is no statistically significant relationship between students’ oral reading fluency rates, as measured by the students’ middle-of-year DIBELS ORF rates, and their reading proficiency, as measured by their scale scores on the Grade 3 Reading TAKS. H1: There is a statistically significant relationship between students’ oral reading fluency rates, as measured by the students’ middle-of-year DIBELS ORF rates, and their reading proficiency, as measured by their scale scores on the Grade 3 Reading TAKS. To test the null hypothesis, after appropriate permissions were granted, I collected archival data from Grade 3 students’ DIBELS ORF and TAKS. The school district at the research site screened all students in Grade 3 three times a year with DIBELS ORF benchmarks, and I used the middle-of-year DIBELS ORF for the 2008-2009 school year. All Grade 3 students in Texas were required to take the Reading TAKS during the spring semester, and the researcher used this TAKS for the 2008-2009 school year. Data Collection First I sought permission to conduct the study from Walden University’s Institutional Review Board (IRB), and permission was granted (IRB Approval Number 06-06-10-0334942). I requested and was granted permission from the superintendent of the school district to use the Grade 3 students’ DIBELS ORF and TAKS scores for the 2008-2009 school year (Appendix A).
    72 For data collectionof the DIBELS ORF scores, the students’ scores were stored on a secure district website. Only individuals who signed an agreement of confidentiality could obtain access to the site. The district involved in this study purchased and agreed to the terms of confidentiality, allowing me access during the 2008-2009 school year. When I became the district dyslexia teacher in 2009, I obtained access to the DIBEL’s ORF rates for all the students in the district. The district superintendent granted me permission to use these data for the study (Appendices A and B). For data collection of the TAKS scores, the TEA sent reports to the district listing individual test scores. After return of the students’ TAKS scores by the TEA to the district, school administrators kept students’ individual information confidential, accessible only to the students, their parents, and teachers. I had access to these data to carry out my duties as the district dyslexia teacher. In addition, the district superintendent granted me approval to use the data. I used the TAKS scores in the aggregate for the 2008-2009 school year. With access to the school records, I used the Grade 3 students’ middle-of-year DIBELS ORF rates and the end-of-year TAKS scale scores. To preserve students’ anonymity, I assigned code numbers to all students’ scores instead of using their names. Then I entered the DIBELS benchmark rates and TAKS scale scores into the chosen software program.
Data Analysis

I employed SPSS, version 17.0 (SPSS, 2009), for data analysis. SPSS is a widely used statistical software package in educational research (Creswell, 2009). Following the SPSS instructions, I entered the data into the computer. I performed a Pearson correlation to answer the research question. Pearson correlations "measure and describe a relationship between two variables" (Gravetter & Wallnau, 2005, p. 412). I entered into the SPSS program the DIBELS ORF scores for each student as the independent variable and the TAKS scores for each student as the dependent variable. The level of significance was .05, a level widely used in educational research (Creswell, 2009). The resulting statistic indicated whether a significant relationship existed for Grade 3 students between their DIBELS ORF scores and TAKS scores for the 2008-2009 school year. With regard to the direction of the relationship, a correlation coefficient (Pearson r) of 0 would indicate no linear relationship between two variables. A positive correlation of 1.0 would indicate the strongest possible positive relationship, in which the scores of the two variables increase together. A negative correlation of -1.0 would indicate the strongest possible negative relationship, in which the scores of one variable increase as the scores of the other variable decrease (Creswell, 2008; Gravetter & Wallnau, 2005). With regard to the strength of the relationship, whether negative or positive as represented by Pearson's r, values of generally .20 to .35 indicate a slight relationship, values of .35 to .65 indicate a somewhat strong relationship, values of .66 to .85 indicate a very good relationship, and values of .86 and above (to the perfect 1.0) indicate an excellent relationship (Creswell, 2008).
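As an illustration of the analysis described above, the following sketch computes a one-tailed Pearson correlation in Python with SciPy, in place of SPSS, and labels its strength using the ranges from Creswell (2008). The paired scores below are hypothetical values, not the study data.

# Illustrative sketch of the Pearson correlation analysis described above, using
# SciPy in place of SPSS. The score values below are hypothetical, not the study data.
from scipy.stats import pearsonr

# Hypothetical paired scores: middle-of-year DIBELS ORF rates (independent
# variable) and Grade 3 Reading TAKS scale scores (dependent variable).
dibels_orf = [45, 62, 70, 88, 95, 104, 118, 126, 133, 140]
taks_scale = [1980, 2050, 2110, 2180, 2230, 2300, 2360, 2410, 2450, 2500]

r, p_two_tailed = pearsonr(dibels_orf, taks_scale)
p_one_tailed = p_two_tailed / 2 if r > 0 else 1 - p_two_tailed / 2

def strength_label(r_value):
    """Strength ranges reported above (Creswell, 2008)."""
    r_abs = abs(r_value)
    if r_abs >= 0.86:
        return "excellent"
    if r_abs >= 0.66:
        return "very good"
    if r_abs >= 0.35:
        return "somewhat strong"
    if r_abs >= 0.20:
        return "slight"
    return "negligible"

print(f"r = {r:.2f}, one-tailed p = {p_one_tailed:.4f}, strength: {strength_label(r)}")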
    74 Researcher’s Role During the2008-2009 school year, I was a teacher of Grade 2 at the research site of this study. In this position, I volunteered to tutor four dyslexic Grade 3 students, the only ones identified as such in the third grade. In March 2009, I administered the Grade 3 Reading TAKS to these students. As of the 2009-2010 school year, I became the district dyslexia teacher. In this capacity, I previewed all the assessment data from the students on the campus and recommended which students needed additional reading instruction. I teach some of these students in small groups, addressing their dyslexic tendencies. In addition, when principals in the district request dyslexic assessments, I individually administer a battery of assessments and report the information to a committee that decides whether the student is dyslexic. I provide direct, explicit, systematic, sequential instruction in all aspects of reading to most of the dyslexic students in the district. Because I was a second-grade teacher during the 2008-2009 school year, to remain neutral and removed from the testing administration, I did not administer any of the DIBELS benchmarks. However, as described above, I worked with four identified dyslexic students in an afterschool tutorial program and administered the Reading TAKS to them. Because of my involvement in the administration of their TAKS, I did not include the data from these four students in this study. Protection of Participants’ Rights I protected the rights of participants in several ways, first seeking and obtaining approval for the study from Walden University’s IRB. The district granted me permission to use the archival data of Grade 3 students’ DIBELS ORF and TAKS scores from the
2008-2009 school year (Appendices A and B). In addition, the Dynamic Measurement Group granted me permission to use DIBELS in the study (Appendix C). The TEA granted me copyright permission to use the 2009 Grade 3 Reading TAKS in this study with the understanding that the TEA and Pearson Corporation are the owners of the TAKS and with the understanding that individual student data were to be kept confidential (Appendix D). I did not include the children's names or other information that could identify the children in any reports of the study, and individual children's rates and scores were not reported. All oral reading fluency rates and scale scores were assigned code numbers and were reported in group form only. Letters of consent or assent were not needed for the study participants because the data were archival. The DIBELS ORF and TAKS are routine assessments all students are required to take.

Summary

The purpose of this nonexperimental, quantitative study was to determine whether the middle-of-year DIBELS ORF rates correlated with the scale scores of the Grade 3 Reading TAKS for students in a West Texas school district during the 2008-2009 school year. I used a quantitative correlational design to determine whether a statistically significant relationship (p < .05) existed between the independent variable, the DIBELS ORF rates, and the dependent variable, the scale scores of the Grade 3 Reading TAKS. A nonprobability convenience sample of 155 Grade 3 students participated in the study, and their demographic composition was representative of Grade 3 students in Texas overall.
In this retrospective study, archival records were used for two instruments, the DIBELS ORF and the TAKS for Grade 3 students. Both instruments have been reported with good reliability and validity in many studies. Data collection took place with the permission of Walden University's IRB and the district superintendent. The superintendent provided a letter of cooperation and data use agreement for my use of archival data (Appendices A and B), and I accessed archival records of Grade 3 students' DIBELS ORF rates and TAKS scores for the 2008-2009 school year. In addition, the Dynamic Measurement Group granted me permission to use DIBELS (Appendix C). The TEA granted me copyright permission to use the 2009 Grade 3 Reading TAKS with the understanding that the TEA and Pearson Corporation were the owners of the TAKS and that individual student information would be kept confidential (Appendix D). Data analysis took place with the SPSS software package, and I used Pearson correlation to test the null hypothesis that there is no statistically significant relationship between Grade 3 students' middle-of-year DIBELS ORF rates and TAKS scores, with the level of significance set at .05. Although I taught second grade at the research site school, I have not taught third grade, and I am presently the district dyslexia teacher. My professional role did not compromise data collection or analysis. Protection of participants was assured, and potential harm to participants was minimal. Data were coded by numbers only, and students' names did not appear. Further, results were presented solely in group form. I report the results of the study in section 4 and discuss the findings in section 5.
    77 Section 4: Resultsof the Study Introduction The purpose of this quantitative, retrospective study was to determine if a statistically significant relationship (p < .05) existed between students’ oral reading fluency and reading proficiency. Pearson correlation analysis was used to examine 155 Grade 3 students’ oral reading fluency rates and reading proficiency scores in a West Texas school district. The data were analyzed to investigate whether a significant relationship existed between the students’ middle-of-year Grade 3 DIBELS ORF rates and their scale scores on the Grade 3 Reading TAKS during the 2008-2009 school year. Research Question and Hypothesis The following research question guided this study: Is there a statistically significant relationship between Grade 3 students’ oral reading fluency rates and their reading proficiency? From this question, the following null and alternative hypotheses were formulated: H0: There is no statistically significant relationship between students’ oral reading fluency rates, as measured by the students’ middle-of-year DIBELS ORF rates, and their reading proficiency, as measured by their scale scores on the Grade 3 Reading TAKS for the 2008-2009 school year. H1: There is a statistically significant relationship between students’ oral reading fluency rates, as measured by the students’ middle-of-year DIBELS ORF rates, and their reading proficiency, as measured by their scale scores on the Grade 3 Reading TAKS for the 2008-2009 school year.
Research Tools

The independent variable, oral reading fluency, was measured by the middle-of-year 2009 Grade 3 DIBELS ORF benchmark rates. The dependent variable, reading proficiency, was measured by the scale score on the 2009 Grade 3 Reading TAKS. In this section, I describe how these research tools were used in this study. The Grade 3 teachers in the small West Texas school district were trained in the administration of DIBELS. During January 2009, they listened to each student read three passages: (a) "The Field Trip," (b) "Keiko the Killer Whale," and (c) "Getting Email" (Good & Kaminski, 2002b). As the student read a passage for exactly 1 minute, the teacher recorded the number of words read. The teacher marked substitutions and omissions as errors and subtracted the number of errors made from the total number of words read to ascertain the words read correctly in 1 minute. After the student read the passage, the teacher asked the student to retell what was read (Good & Kaminski). The teacher used the same procedure for all three passages and recorded the median oral reading fluency score as the student's middle-of-year DIBELS ORF benchmark rate. These rates were stored on a secure website to which only authorized personnel had access. As district dyslexia teacher, I was authorized to view the rates (Appendix A). The Grade 3 Reading TAKS was administered according to the guidelines set by the TEA in compliance with the NCLB (2002) mandates for determining adequate yearly progress. The teachers were trained to administer the Grade 3 Reading TAKS before March 2009. The students were given 1 school day to read three passages and answer 36
  • 93.
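The scoring procedure described above can be illustrated with a brief sketch. The following Python fragment is not part of the study; the passage totals and error counts are hypothetical and serve only to show how words read correctly per minute and the median benchmark rate are obtained.

    from statistics import median

    def words_correct_per_minute(words_read: int, errors: int) -> int:
        # Words read correctly during the 1-minute timed reading of one passage.
        return words_read - errors

    def orf_benchmark(passage_scores):
        # The benchmark rate is the median of the three passage scores.
        return median(passage_scores)

    # Hypothetical student: totals and errors are illustrative values only.
    scores = [
        words_correct_per_minute(95, 4),   # "The Field Trip"
        words_correct_per_minute(88, 2),   # "Keiko the Killer Whale"
        words_correct_per_minute(102, 7),  # "Getting Email"
    ]
    print(orf_benchmark(scores))  # 91 wcpm for this hypothetical student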
Upon completion, the booklets were securely transported to Austin for scoring by the TEA, and the scale scores were electronically sent back to the district. The district stored the scale scores on a secure website accessible only by authorized personnel. As the district dyslexia teacher, I had authorized access to the scale scores (Appendix A). The district superintendent signed a letter of cooperation and a data use agreement (Appendices A and B). These gave me permission to use the middle-of-year DIBELS benchmark rates and the Grade 3 Reading TAKS scale scores from the 2008-2009 school year. In addition, Dynamic Measurement Group granted me permission to use the DIBELS (Appendix C). The TEA granted me copyright permission with the understanding that the TEA and Pearson Corporation are the owners of the TAKS and that no individual student information would be used in the study (Appendix D).

Data were collected with the following procedure. On an Excel spreadsheet, I matched the DIBELS ORF benchmark rates and the Grade 3 Reading TAKS scale scores for each individual student. Although 160 students were enrolled in the district's Grade 3 during the 2008-2009 school year, the data for four students were not included because I had administered the TAKS to them. Additionally, one student was excluded because no score was available for the Grade 3 Reading TAKS. Thus, the oral reading fluency rates and scale scores of the 155 remaining students were used in the study. The DIBELS ORF benchmark rates and Grade 3 Reading TAKS scale scores were imported into SPSS 17.0. To verify the accuracy of data entry, I compared the sum of the benchmark rates on the Excel spreadsheet to the sum of the benchmark rates entered into SPSS 17.0, and I followed the same procedure with the Grade 3 Reading TAKS scale scores.
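A minimal sketch of this data-preparation step is shown below, assuming hypothetical file and column names (the district's actual records are confidential). It mirrors the matching of each student's two scores and the sum comparison used to verify data entry.

    import pandas as pd

    # Hypothetical files: one row per student in each file.
    orf = pd.read_csv("dibels_orf_moy_2009.csv")   # columns: student_id, orf_wcpm
    taks = pd.read_csv("taks_reading_2009.csv")    # columns: student_id, taks_scale

    # Match the two measures per student and drop students lacking either score.
    matched = orf.merge(taks, on="student_id", how="inner").dropna()
    print(len(matched))  # the study retained 155 of the 160 enrolled students

    def entry_sums_match(spreadsheet, imported, columns):
        # Analogue of comparing Excel column sums with the sums of the data
        # entered into the statistics package.
        return all(spreadsheet[c].sum() == imported[c].sum() for c in columns)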
As a further check, and to obtain the means and standard deviations for both variables, I also computed the Pearson correlation formula in an Excel spreadsheet and obtained the same results.

Data Analysis

A Pearson correlation analysis was conducted to determine if a statistically significant relationship (p < .05) existed between the middle-of-Grade 3 DIBELS ORF rates and the scale scores from the Grade 3 Reading TAKS for the sample of 155 students. The bivariate Pearson correlation analysis was conducted with the two variables, with a one-tailed test of significance. Table 2 shows the results.

Table 2

Correlation of DIBELS ORF Rates and TAKS Scores for Grade 3 Students

Measure          1        2        Mean    Standard deviation
1. DIBELS ORF    ---      .655*    86.9    33
2. TAKS          .655*    ---      2268    187

*p < .01.

As Table 2 shows, for the sample of 155 students, the mean for the DIBELS ORF was 86.9 (SD = 33), and the mean for the TAKS was 2268 (SD = 187). A statistically significant correlation was found, r = .66 (p < .01), between the students' DIBELS ORF rates and TAKS scale scores for the 2008-2009 school year.
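A minimal sketch of this analysis follows, using SciPy in place of SPSS 17.0. The arrays below are illustrative stand-ins, not the study data; with the actual 155 matched score pairs, the same calls would reproduce the means, standard deviations, and one-tailed Pearson correlation reported in Table 2.

    import numpy as np
    from scipy import stats

    # Illustrative score pairs standing in for the 155 matched records.
    orf = np.array([23, 55, 67, 88, 92, 104, 131, 150], dtype=float)               # DIBELS ORF (wcpm)
    taks = np.array([1980, 2090, 2150, 2230, 2310, 2400, 2480, 2560], dtype=float)  # TAKS scale scores

    print(orf.mean(), orf.std(ddof=1))     # study values: 86.9 and 33
    print(taks.mean(), taks.std(ddof=1))   # study values: 2268 and 187

    r, p_two_tailed = stats.pearsonr(orf, taks)
    # One-tailed p value for the directional hypothesis of a positive correlation.
    p_one_tailed = p_two_tailed / 2 if r > 0 else 1 - p_two_tailed / 2
    print(round(r, 3), p_one_tailed)       # study values: r = .655, p < .01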
This correlation was strong enough to reject the null hypothesis and was in the very good range (.66-.85; Creswell, 2008). The correlation indicated a linear relationship between the two variables: Students who had a high DIBELS ORF rate tended to have a high Grade 3 Reading TAKS scale score, and vice versa (Gravetter & Wallnau, 2005). Thus, Null Hypothesis 1 was rejected. A statistically significant relationship was found between students' oral reading fluency rates, as measured by the students' middle-of-year DIBELS ORF rates, and their reading proficiency, as measured by their scale scores on the Grade 3 Reading TAKS for the 2008-2009 school year.

Although the stated level of significance was p < .05, the obtained p < .01 is a stronger level of significance and also results in rejection of the null hypothesis. For example, in a study with 100 participants, an r greater than about .16 would be needed to reject the null hypothesis at an alpha level of .05 for a one-tailed Pearson correlation, whereas a larger r, greater than about .23, would be needed to reject the null hypothesis at an alpha level of .01 (Gravetter & Wallnau, 2005).

The study findings confirmed LaBerge and Samuels' (1974) proposal regarding fluency in the automaticity theory. LaBerge and Samuels posited that when readers are able to read fluently, they have more attention to focus on other areas, such as reading proficiency. The present results indicate a relationship between oral reading fluency, as measured by the DIBELS ORF rates, and reading proficiency, as measured by the scale score on the Grade 3 Reading TAKS. Thus, the results support H1: There was a statistically significant relationship between students' oral reading fluency rates, as measured by middle-of-year DIBELS ORF rates, and their reading proficiency, as measured by scale scores on the Grade 3 Reading TAKS for the 2008-2009 school year.
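The critical values cited in the example above can be derived from the t distribution. The sketch below is a standard textbook calculation rather than part of the study; it shows why a larger correlation is required at the .01 alpha level than at the .05 level.

    from math import sqrt
    from scipy import stats

    def critical_r(n: int, alpha: float) -> float:
        # Smallest r that reaches significance for a one-tailed Pearson
        # correlation with n - 2 degrees of freedom.
        df = n - 2
        t_crit = stats.t.ppf(1 - alpha, df)
        return t_crit / sqrt(t_crit ** 2 + df)

    print(round(critical_r(100, 0.05), 3))  # about .165: reject H0 only if r exceeds this
    print(round(critical_r(100, 0.01), 3))  # about .232: a larger r is needed at alpha = .01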
In addition, I investigated how the students in each of the DIBELS ORF categories performed on the Grade 3 Reading TAKS, although this aspect was not part of the study purpose. This investigation is similar to Buck and Torgesen's (2003) classification of information in their correlational study comparing DIBELS ORF rates to performance on the Florida Comprehensive Assessment Test (FCAT), which showed how the students in each of the DIBELS ORF categories performed on the FCAT. To put the present results in context, the ranges for the sample are reported for both the DIBELS ORF and the TAKS, in relation to the cut points and performance categories of each instrument. Table 3 displays these values.

Table 3

Ranges and Categorization Cut Points of DIBELS ORF and Grade 3 TAKS

DIBELS ORF
Possible range    Sample range    At risk     Some risk    Low risk
0 to 92+          3-169           below 67    67-91        92+

TAKS
Possible range    Sample range    Not proficient    Proficient    Commended
1381-2630         1835-2630       below 2100        2100+         2400+

Note. All DIBELS ORF scores are expressed in words read correctly per minute.
As Table 3 shows, for the present sample, the DIBELS ORF scores ranged from 3 words read correctly per minute (wcpm) to 169 wcpm. As mentioned in section 3, students whose oral reading fluency rates are less than 67 wcpm are considered at risk for poor learning outcomes on other reading and language arts assessments. Students whose oral reading fluency rates are between 67 and 91 wcpm are considered at some risk, and students whose oral reading fluency rates are 92 or more wcpm are considered at the lowest risk for poor outcomes on other reading and language arts assessments. Table 3 also shows that the range of Grade 3 Reading TAKS scale scores in this sample of 155 students was 1835-2630. As stated in section 3, students are expected to have a scale score of 2100 or more to be considered proficient, and students with scale scores of 2400 or greater receive a rating of commended.

With this summary of the ranges and cut-score categorizations of the DIBELS ORF and TAKS, the performance of the study sample within each category can be better understood. Frequencies and percentages were calculated for each category. Table 4 displays the Grade 3 DIBELS ORF and TAKS performance of the study sample. As Table 4 shows, of the 155 students, 44 were considered at risk. About half of the at-risk students (52%) failed the Grade 3 Reading TAKS with scale scores less than 2100, and about half (48%) scored proficient on the assessment. Five (11%) of the at-risk students scored in the commended range.
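The cut points summarized above and in Table 3 amount to two simple classification rules. The sketch below restates them; it is illustrative only and treats the three TAKS labels as mutually exclusive bands, whereas in Table 4 commended scores are also counted as proficient.

    def dibels_orf_category(wcpm: int) -> str:
        # Middle-of-Grade 3 DIBELS ORF risk categories.
        if wcpm < 67:
            return "at risk"
        if wcpm < 92:
            return "some risk"
        return "low risk"

    def taks_category(scale_score: int) -> str:
        # Grade 3 Reading TAKS performance bands.
        if scale_score < 2100:
            return "not proficient"
        if scale_score < 2400:
            return "proficient"
        return "commended"

    print(dibels_orf_category(59), taks_category(2150))  # hypothetical student: at risk, proficient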
Table 4

Comparison of Students' Performance by Categories for DIBELS ORF and Grade 3 Reading TAKS

                                             TAKS performance categories
Number of students    DIBELS ORF category    Not proficient    Proficient    Commended
44                    At risk                23 (52%)          21 (48%)      5 (11%)
47                    Some risk              9 (19%)           38 (79%)      14 (30%)
64                    Low risk               0 (0%)            64 (100%)     41 (64%)
155                   Total                  32 (21%)          123 (79%)     60 (39%)

Note. Percentages are of the number of students in each row; commended students are also included in the proficient count.

As Table 4 also shows, the 47 students in the some-risk category performed much better on the Grade 3 Reading TAKS than the 44 students in the at-risk category. Only 19% of the some-risk students did not score proficient on the Grade 3 Reading TAKS, and 79% scored proficient. About three times as many of the some-risk students (30%) scored in the commended range as in the at-risk category (11%). In the low-risk category, all (100%) of the 64 students passed the Grade 3 Reading TAKS, and over half (64%) scored in the commended range. The results shown in Table 4 confirmed the values used by the developers of the DIBELS ORF to define their categories (University of Oregon Center in Teaching and Learning, 2008). Information such as that in Table 4 could additionally help educators and reading specialists identify which students could be at risk of failing the Grade 3 Reading TAKS before it is administered.
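A cross-tabulation such as Table 4 can be produced with a few lines of code once each student carries a risk category and pass/commended indicators. The records below are invented for illustration; the study data set contained 155 students.

    import pandas as pd

    df = pd.DataFrame({
        "risk": ["at risk", "at risk", "some risk", "low risk", "low risk"],
        "taks": [2050, 2410, 2180, 2300, 2460],
    })
    df["proficient"] = df["taks"] >= 2100   # includes commended scores
    df["commended"] = df["taks"] >= 2400

    counts = df.groupby("risk")[["proficient", "commended"]].sum()
    percentages = df.groupby("risk")[["proficient", "commended"]].mean().mul(100).round(0)
    print(counts)
    print(percentages)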
This information could especially aid educators in benchmarking, setting oral reading fluency goals for students, and monitoring their progress (Baker et al., 2008; Buck & Torgesen, 2003; Roehrig et al., 2008). The results of the current study also confirmed Hasbrouck and Tindal's (2006) oral reading fluency norms. Hasbrouck and Tindal found that Grade 3 students at the 50th percentile at the middle of the year were reading 92 wcpm, and they recommended that students reading more than 10 wcpm below the 50th percentile (that is, below 82 wcpm) be considered for an intervention program. In the current study, all but 2 of the 86 students (98%) who read 82 or more wcpm scored proficient on the Grade 3 Reading TAKS, and 100% of the students who read 84 or more wcpm scored proficient. These findings seem to support Hasbrouck and Tindal's research.

Although the study results indicated a strong positive correlation between students' DIBELS ORF rates and Grade 3 Reading TAKS scores, the findings demonstrate a relationship between oral reading fluency rates and reading proficiency; they do not prove a causal relationship (Gravetter & Wallnau, 2005). In this study, the significant linear relationship indicated that as oral reading fluency rates increased, Grade 3 Reading TAKS scale scores tended to increase, with the converse also true. The results do not, however, prove that high oral reading fluency rates cause high scores on the Grade 3 Reading TAKS. Other variables could be contributing factors, such as the comprehension strategies students may use to answer questions on the Grade 3 Reading TAKS, and such strategies could affect their performance.
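Hasbrouck and Tindal's screening rule discussed above reduces to a one-line check. The sketch below is illustrative only; the 92-wcpm norm is the published middle-of-Grade 3 value, and the 10-wcpm margin follows their recommendation.

    GRADE3_MIDYEAR_NORM_WCPM = 92  # 50th percentile, middle of Grade 3 (Hasbrouck & Tindal, 2006)

    def needs_intervention(wcpm: int, norm: int = GRADE3_MIDYEAR_NORM_WCPM) -> bool:
        # Flag students reading more than 10 wcpm below the norm (below 82 wcpm).
        return wcpm < norm - 10

    print(needs_intervention(78), needs_intervention(85))  # True, False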
This caution is supported by the study results, which do not show that good oral reading fluency always results in proficient reading. For example, 5 of the 44 at-risk students not only scored proficient on the Grade 3 Reading TAKS but scored in the commended range. Similarly, Buck and Torgesen (2003) reported that 42 of the 220 students in their high-risk category scored adequate on the FCAT. Another factor that could affect reading proficiency is students' knowledge of the English language. As noted earlier, Samuels (2006) documented cases in which Hmong students whose second language was English could fluently read English but could not read proficiently.

The use of compensatory strategies could also explain why some students with low fluency rates score proficient on reading assessments. In the current study, 21 (48%) of the at-risk students scored proficient on the Grade 3 Reading TAKS; in fact, 5 (11%) scored commended. In the some-risk category, 38 (79%) scored in the proficient range and 14 (30%) scored in the commended range. These scores do not indicate that students with low fluency rates cannot comprehend text but rather that they find it more difficult to read proficiently. Walczyk and Griffith-Ross (2007) reported that struggling readers in Grades 3, 5, and 7 can compensate by applying strategies such as pausing, slowing down, reading aloud, and looking back in the text. The Grade 3 students in their study read at a constant rate and were more affected by constraints, such as time pressure, than the Grade 7 students. Walczyk and Griffith-Ross conjectured that the Grade 3 students read at a slower rate and were more likely to take the time to sound out words they did not know or to use context clues to read than the Grade 7 students.
By using these skills, the struggling readers in Grade 3 were able to comprehend. Walczyk and Griffith-Ross's (2007) conclusions may explain why 48% of the at-risk students and 79% of the some-risk students in the present study were able to score proficient despite their low fluency rates. Although the use of compensation strategies may account for struggling readers' ability to score in the proficient range, it should be noted that the faster students read, the higher their chances of scoring proficient (at-risk students, 48%; some-risk students, 79%; low-risk students, 100%). In addition, the faster students read, the higher their chances of scoring commended (at-risk students, 11%; some-risk students, 30%; low-risk students, 64%).

Summary

The purpose of this study was to determine if a statistically significant relationship existed between oral reading fluency and reading proficiency for Grade 3 students in a West Texas school district. I used the following research question to guide the study: Is there a statistically significant relationship between Grade 3 students' oral reading fluency rates and their reading proficiency? The resulting null hypothesis was as follows: There is no statistically significant relationship between students' oral reading fluency rates, as measured by the students' middle-of-year DIBELS ORF rates, and their reading proficiency, as measured by their scale scores on the Grade 3 Reading TAKS for the 2008-2009 school year. To test the hypothesis, I matched the students' DIBELS ORF rates with their Grade 3 Reading TAKS scale scores on an Excel spreadsheet and then imported them into SPSS 17.0 (SPSS, 2009), after which a Pearson correlation analysis was performed.
Results indicated a strong, statistically significant correlation (r = .66, p < .01) between oral reading fluency and reading proficiency. Thus, the null hypothesis was rejected. The study results confirmed LaBerge and Samuels' (1974) proposal in the automaticity theory: Students who read fluently can devote their attention to other reading skills, such as comprehension. This research indicated a linear relationship between oral reading fluency and reading proficiency; as students' oral reading fluency rates increased, reading proficiency also tended to increase. Although a positive correlation was found, this result does not prove that oral reading fluency rates cause reading proficiency. Other factors, such as comprehension strategies and knowledge of English as a second language, may affect reading proficiency. Nevertheless, the strong positive correlation found between Grade 3 students' DIBELS ORF scores and Reading TAKS performance is important for interpretation of the findings in terms of practical applications and implications for social change. In section 5, I discuss these issues, as well as recommendations for action and further study.
Section 5: Discussion, Conclusions, and Recommendations

Overview

The purpose of this study was to determine if a statistically significant (p < .05) relationship existed between oral reading fluency and reading proficiency. The middle-of-Grade 3 DIBELS ORF was used to measure oral reading fluency, and the Grade 3 Reading TAKS scale scores were used to measure reading proficiency, for 155 students in a West Texas school district. In this section, the findings are reviewed, and the correlations found in this study are compared with those of nine similar previous studies. The implications for social change are discussed, and recommendations are made both for future actions and for further studies.

Interpretation of Findings

Based on the results of the study, I concluded that for Grade 3 students, oral reading fluency rates are significantly and positively related to performance on state-mandated assessments. These findings, specific to students in Texas, supported those of previous studies correlating state-mandated assessments with the DIBELS ORF in other states (Baker et al., 2008; Barger, 2003; Roehrig et al., 2008; Shapiro et al., 2008; Vander Meer et al., 2005; Wilson, 2005; Wood, 2006). For ease of comparison, Table 5 shows the other studies by state assessment, grade level, sample size, and the correlation found. Some researchers studied more than one grade level.
Table 5

Studies Correlating State-Mandated Assessments to the DIBELS ORF

Authors                   State assessment                             Grade level    Number in sample    Correlation found
Baker et al., 2008        Oregon State Reading Assessment              3              4,696               .58 to .68
Buck & Torgesen, 2003     Florida Comprehensive Assessment Test        3              1,102               .70
Roehrig et al., 2008      Florida Comprehensive Assessment Test        3              5,207               .70-.71 (moderately strong)
Shapiro et al., 2008      Pennsylvania System of School Assessments    3              401                 .67
Shapiro et al., 2008      Pennsylvania System of School Assessments    4              394                 .64
Shapiro et al., 2008      Pennsylvania System of School Assessments    5              205                 .73 (moderately strong)
Shaw & Shaw, 2002         Colorado State Assessment Program            3              52                  .80 (strong)

(continued)
Authors                   State assessment                             Grade level    Number in sample    Correlation found
Wilson, 2005              Arizona Instrument to Measure Standards      3              241                 .74 (moderately strong)
Wood, 2006                Colorado Student Assessment Program          3              82                  .70 (moderately strong)
Wood, 2006                Colorado Student Assessment Program          4              101                 .67
Wood, 2006                Colorado Student Assessment Program          5              98                  .75 (moderately strong)

In relation to these studies, the current study compared the DIBELS ORF and the TAKS for 155 Grade 3 students, and the Pearson correlation analysis showed a statistically significant correlation of .66 (p < .01). These results confirmed the findings of the nine other studies, as Table 5 illustrates, all of which reported significant correlations between the DIBELS ORF and state-mandated assessments. The lowest correlations ranged from .58 to .68 (Baker et al., 2008, .58 to .68; Shapiro et al., 2008, Grade 3, .67, Grade 4, .64; Vander Meer et al., 2005, .65; Wood, 2006, Grade 4, .67).
Some of the researchers found higher correlations. Moderately strong correlations, .70-.75, were found by Wood (2006; Grade 3, .70; Grade 5, .75), Barger (2003, .73), and Shapiro et al. (2008, .73). The strongest correlation, .80, was found by Shaw and Shaw (2002). In the context of these studies, the present results confirmed a significant relationship between the Texas state-mandated assessment and the DIBELS ORF. However, the present correlation, .66, was lower than those reported in the studies with moderate or strong correlations (Barger, 2003; Shapiro et al., 2008; Shaw & Shaw, 2002; Wilson, 2005; Wood, 2006). Several reasons may be cited for this difference.

There were 155 students in the current sample, and several of the studies with higher correlations had smaller samples. The smallest sample, 38 students, was that of Barger (2003), with a moderately strong correlation of .73. Shaw and Shaw (2002), with a sample of 52 students, had the strongest correlation of all the studies (.80). Two of the grade levels in Wood (2006) also had smaller sample sizes: the Grade 3 sample had 82 students and a moderately strong correlation of .70, and the Grade 5 sample had 98 students and a moderately strong correlation of .75. Wood's Grade 4 sample, with a sample size (101) closer to the one in this study, also had a similar significant correlation (.67). The other previous studies with correlations closest to the one found in the present study had larger sample sizes. In the Vander Meer et al. (2005) study, the sample was 318 students, with a .65 correlation. Likewise, in the Shapiro et al. (2008) study, the Grade 3 sample was 401 students, and a similar significant correlation was found, .67. One of the Reading First studies, that of Baker et al. (2008), had a large sample (4,696) and found significant correlations ranging from .58 to .68.
Demographics could also have contributed to varying correlation values. The student sample in the Wood (2006) study was mostly White, and Wood recommended replication with samples that represented other demographic characteristics. The studies conducted with Reading First schools (Baker et al., 2008; Buck & Torgesen, 2003; Roehrig et al., 2008; Wilson, 2005) had large numbers of students from low socioeconomic backgrounds, and Baker et al. recommended other studies with samples of varying socioeconomic backgrounds to rule out poverty as a factor. The present sample of district Grade 3 students was approximately half Hispanic, one third White, and half economically disadvantaged. However, these characteristics reflected those of all Grade 3 students in Texas (Table 1).

Another factor that may explain differences in study results is the rigor of the assessment used to measure reading proficiency and the degree to which it tests higher-level thinking skills. Although all state-mandated assessments are designed to assess reading proficiency as required by the NCLB (2002) legislation, they differ from state to state and from grade to grade, and state curricula determine how reading proficiency is defined and assessed. For example, Wilson (2005) studied the relationship between DIBELS ORF and performance on the Arizona Instrument to Measure Standards. With his sample of 241 students, he reported a moderately strong correlation (.74). However, in his discussion of limitations, Wilson recommended that the study be repeated after Arizona began using its next reading assessment, designed to assess higher-level reading comprehension skills.
In the study by Baker et al. (2008), when the SAT-10 was used to measure reading proficiency, the researchers found a stronger correlation (.63 to .80) in their Grade 2 sample than in their Grade 3 sample (.58 to .68), in which the Oregon State Reading Assessment was used to measure reading proficiency. The authors speculated that the rigor of the two reading proficiency assessments might have affected the correlational differences. Fuchs et al. (2001) recommended additional studies beyond Grade 3 that measured higher-level thinking. In their exploration of state-mandated reading assessments in several states, Applegate et al. (2009) observed that the TAKS contained questions requiring more higher-level thinking than many of the assessments in other states.

Another factor influencing results may be whether the assessment is timed. In Texas, students are allowed as long as needed within 1 school day to answer the questions on the Grade 3 Reading TAKS. However, in the North Carolina study (Barger, 2003), in which a moderately strong correlation (.73) was found, students had 115 minutes to answer 56 questions. Walczyk and Griffith-Ross (2007) found that Grade 3 struggling readers were negatively affected by time constraints because they were struggling to decode words. In the same study, however, time constraints positively affected Grade 7 struggling readers, who seemed to be more engaged when the reading was timed, in contrast to untimed routine reading assignments. Time constraints should therefore be considered in comparisons of studies and in future studies of oral reading fluency rates and reading proficiency.
All of these correlational studies used the DIBELS ORF to measure oral reading fluency and compared the DIBELS ORF to a state-mandated assessment, but not all of the studies used the same DIBELS ORF benchmark. DIBELS ORF benchmarks are administered three or four times a year. In the current study, I found a correlation of .66 between the middle-of-year DIBELS ORF benchmark administered in January 2009 and the Grade 3 Reading TAKS administered in March 2009. Barger (2003) found a moderately strong correlation (.73) between the North Carolina end-of-Grade 3 assessment and the DIBELS ORF benchmark administered 1 week prior to the state-mandated assessment. Vander Meer et al. (2005) found a .65 correlation when they compared the end-of-Grade 3 DIBELS ORF to performance on the Ohio Proficiency Test administered in Grade 4. Baker et al. (2008) took a more comprehensive approach by determining the correlational relationship between each of the DIBELS ORF benchmarks over a 2-year period and the Oregon State Reading Assessment at the end of Grade 3 (beginning of Grade 2, .58; middle of Grade 2, .63; end of Grade 2, .63; beginning of Grade 3, .65; middle of Grade 3, .68; end of Grade 3, .67). Thus, the varying time periods between the administration of the DIBELS ORF and the administration of state-mandated assessments could influence the different strengths of the correlations.

In summary, several factors could explain the varying correlational values between this study and the other nine studies conducted in seven other states. The sample sizes varied from 38 (Barger, 2003) to 5,207 (Roehrig et al., 2008). The demographic representation differed widely.
The Wood (2006) sample consisted mostly of White students, whereas the studies conducted in Reading First schools (Baker et al., 2008; Buck & Torgesen, 2003; Roehrig et al., 2008; Wilson, 2005) had wide socioeconomic representation. Another factor is the rigor of the state-mandated assessments: Although all are designed to follow the stringent standards of NCLB (2002), the assessments differ in how reading proficiency is defined and assessed. A salient factor is whether the assessment is timed, because differences in timing conditions can affect correlational relationships. Additionally, the time between the administration of the DIBELS ORF benchmark and the administration of the state-mandated assessment can account for varying correlational levels.

Based on the present study findings, several practical applications for administrators, teachers, parents, and students can be suggested. Based on longitudinal studies, such as those by Baker et al. (2008) and Vander Meer et al. (2005), administrators can use oral reading fluency rates to predict which students are at risk for failing a state-mandated assessment in years prior to the year of assessment administration. According to Baker et al., oral reading fluency rates at the beginning of Grade 2 showed a significant correlation (.63) to the state-mandated assessment administered almost 2 years later. Vander Meer et al. showed similar results (.65) between the end-of-Grade 3 DIBELS ORF and the state-mandated assessment administered in Grade 4. Wood (2006) demonstrated that a significant correlational relationship between oral reading fluency and reading proficiency continues through Grade 5 (Grade 3, .70; Grade 4, .67; Grade 5, .75). As Wood recommended, administrators could use such information to identify, in an early and timely manner, students at risk for failing state-mandated assessments and implement intervention programs to address the needs of at-risk readers before they fail.
Teachers can also benefit from the study results showing that oral reading fluency is significantly related to reading proficiency. Hasbrouck and Tindal (2006) developed oral reading fluency norms that identify national percentiles for the beginning, middle, and end of Grades 1 to 8. For example, the 50th percentile for Grade 3 students at the middle of the year is 92 wcpm, and according to Hasbrouck and Tindal's guidelines, Grade 3 students who are reading less than 82 wcpm should be receiving interventions. In the current study, 84 of the 86 students (98%) who read 82 or more wcpm on the middle-of-year DIBELS ORF scored proficient on the Grade 3 Reading TAKS. Of the remaining 69 students, 43% failed the Grade 3 Reading TAKS. Teachers could use this information: Students reading within 10 wcpm of the 50th percentile of Hasbrouck and Tindal's (2006) oral reading fluency norms have a very good chance of passing the state-mandated reading assessment.

Hasbrouck and Tindal's oral reading fluency norms were similar to the cut points established by DIBELS for the at-risk, some-risk, and low-risk categories (University of Oregon Center in Teaching and Learning, 2008). Researchers in several of the other state studies (Barger, 2003; Baker et al., 2008; Buck & Torgesen, 2003; Roehrig et al., 2008; Shaw & Shaw, 2002) confirmed the reliability of the DIBELS ORF categories in accurately identifying at-risk readers. Thus, regardless of grade level, teachers can use the DIBELS ORF benchmarks and progress monitoring probes to set goals to ensure that as many children as possible score within 10 wcpm of the 50th percentile of Hasbrouck and Tindal's (2006) oral reading fluency norms or in the low-risk category of DIBELS (University of Oregon Center in Teaching and Learning, 2008).
Parents can benefit from the present study results as well. Once parents realize the importance of their children's ability to read fluently in the current and subsequent grades, they can be trained to work with their children at home. For example, teachers can show parents how to administer a 1-minute oral reading fluency probe during parent-teacher conferences or through written instructions, and schools can hold meetings for training purposes. Rasinski and Stevenson (2005) demonstrated that positive changes in students' oral reading fluency resulted from parents working with their children daily at home.

Students can also be encouraged by the present study results. Morgan and Sideridis (2006) found that when students set goals, they were motivated to improve their oral reading fluency scores. Once students understand the importance of oral reading fluency and its relationship to their success on state-mandated assessments, they can set their own goals to read more fluently. Students can be made aware of a variety of strategies, such as repeatedly reading text (Begeny et al., 2006; Hiebert, 2005; Hudson et al., 2005; Rasinski, 2006; Therrien et al., 2006; Therrien & Kubina, 2007), reading a wide variety of texts (Kuhn, 2006), reading with a partner (Nes Ferrera, 2005), and participating in reader's theatre (Young & Rasinski, 2009).
Implications for Social Change

With the implementation of the NCLB (2002) legislation, schools have been charged with demonstrating adequate yearly progress, and each year a specified percentage of students must pass the state-mandated assessment. For example, for schools and districts to meet the NCLB requirements for academic acceptability in 2009, certain percentages of students in each content area had to score proficient on the state assessments: 70% in reading/English language arts, writing, and social studies; 55% in mathematics; and 50% in science. Four ratings were possible: Academically Unacceptable, Academically Acceptable, Recognized, and Exemplary. For schools and districts to receive a designation of Recognized, 75% of students in all subjects had to score proficient; to be considered Exemplary, 90% of students in all subjects had to score proficient (TEA, 2009f).

To meet such goals, elementary educators need to be able to identify struggling readers before they take assessments such as the Grade 3 Reading TAKS so that interventions to improve their academic skills can take place. For elementary educators in Texas, the findings of the present study provide useful information with which to identify struggling readers. Once struggling readers are identified, interventions targeting basic literacy skills, improvement of oral reading fluency, and teaching of decoding strategies can be provided (Jenkins et al., 2007). This identification of struggling readers is important for several reasons. In kindergarten through Grade 3, students learn to read (Ehri, 2005). As they progress through the later years of elementary school, they make the transition to reading to learn in other academic areas (Musti-Rao et al., 2009).
When struggling readers are identified and given remediation, the possibilities of their passing state-mandated reading assessments in Grade 3 and above improve, as does their success in other academic areas (Ehri, 2005; Shapiro et al., 2008). If struggling readers are able to improve their reading skills by the end of Grade 3, they are more likely to close the academic gap with their peers who are more proficient in reading (Simmons et al., 2008). Conversely, if the needs of struggling readers are not met by the end of Grade 3, they are likely to fall farther and farther behind their more proficient peers (Morgan et al., 2008). With reading mastery, Grade 3 students are more likely to improve academically as they progress through the grades. They are also more likely to graduate from high school (Houge et al., 2007; Rumberger & Palardy, 2005). When students graduate from high school with proficient reading skills, they are more likely to obtain employment and become contributing members of society.

Recommendations for Action

In this study, my results supported those of nine other studies (Baker et al., 2008; Barger, 2003; Buck & Torgesen, 2003; Roehrig et al., 2008; Shaw & Shaw, 2002; Shapiro et al., 2008; Vander Meer et al., 2005; Wilson, 2005; Wood, 2006), confirming a correlational relationship between oral reading fluency and reading proficiency. Thus, the first recommended action is for administrators and teachers to recognize the importance of oral reading fluency to the development of reading proficiency. Second, administrators should cultivate learning communities (Senge, 2000) of district administrators, teachers, parents, and students in which at-risk readers are identified early, provided interventions that address their needs, and then monitored to determine whether they are progressing academically.
Third, the importance of the correlation between oral reading fluency and reading proficiency should be communicated not only to educators but also to students and parents. By graphing oral reading fluency rates, students can quickly and easily see whether they are improving. When these graphs include a goal line, students can see how fast they need to be reading by the end of the year. Once students have a visual representation of where they need to be, they can also participate in setting their oral reading fluency goals. Parents can likewise be trained to listen to their child read for exactly 1 minute and then count the number of words the child read correctly. Parental training can be provided through written instructions, parent-teacher conferences, and group training sessions. Once the methods are established, oral reading fluency probes can be administered daily or weekly. Rasinski and Stevenson (2005) demonstrated that parents can be trained to be effective partners in programs designed to improve oral reading fluency, and Morgan and Sideridis (2006) demonstrated that students were motivated to improve their oral reading fluency skills when they set goals to improve them.

Fourth, teachers in Grades 1 to 3 can be encouraged by administrators and in professional development workshops to track oral reading fluency rates for their students. Regardless of the grade level, elementary teachers can use oral reading fluency norms to determine the developmental level of their students, monitor students' progress, set goals to motivate growth, and include instructional activities that build oral reading fluency. With regard to monitoring progress and setting goals, teachers can graph the oral reading fluency of their students with charts and graphic organizers and set goals for students to reach, with the students contributing their own goals.
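A fluency graph of the kind recommended here could be produced with any charting tool; the sketch below uses matplotlib with invented weekly probe values and an assumed goal line, simply to show how a student's progress can be displayed against a target.

    import matplotlib.pyplot as plt

    weeks = list(range(1, 9))
    wcpm = [58, 61, 60, 66, 70, 73, 77, 80]    # hypothetical weekly probe results
    goal = [55 + 5 * week for week in weeks]   # hypothetical goal line toward an end-of-period target

    plt.plot(weeks, wcpm, marker="o", label="Words read correctly per minute")
    plt.plot(weeks, goal, linestyle="--", label="Goal line")
    plt.xlabel("Week")
    plt.ylabel("wcpm")
    plt.title("Oral reading fluency progress (illustrative data)")
    plt.legend()
    plt.show()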
Morgan and Sideridis (2006) found that setting goals improves oral reading fluency. Further, during reading periods, teachers can pair struggling readers with their more proficient peers so the struggling students benefit from the modeling of the proficient students (Nes Ferrera, 2005). With regard to activities, several studies (Begeny et al., 2006; Hiebert, 2005; Martens et al., 2007; Therrien et al., 2006; Therrien & Hughes, 2008) demonstrated that repeated reading of passages can improve oral reading fluency rates. Teachers can motivate students to read a text repeatedly by organizing activities such as reader's theatre, in which students practice reading scripts in order to perform them before an audience (Corcoran & Davis, 2005; Rasinski, 2006).

Researchers have also shown that wide reading improves oral reading fluency (Kuhn, 2005). Based on such findings, teachers could organize instructional programs that track the number of books students read and reward them for reading certain numbers of books. Students could also be introduced to software that keeps track of the books they have read and lets them update their records themselves. When students are reading books at their instructional levels, they are more likely to be actively engaged and to understand what authors are saying (Schnick & Knickelbine, 2000). Lexiles are one tool teachers can use to determine either a reader's ability or the difficulty of a text (MetaMetrics, 2010). Texts are analyzed and assigned Lexile numbers, with lower numbers (from -200L) indicating lower reading ability and higher numbers (to +1700L) indicating higher ability (MetaMetrics, 2010). Educators can assign students Lexile levels based on their performance on assessments such as the TAKS. For example, students with a scale score of 2100 on the 2009 Grade 3 Reading TAKS had a Lexile level of 380L (TEA, 2009b), and the Lexile range for such a student would be from 280L to 480L.
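The range calculation implied by this example is simple to state. The sketch below assumes, as in the 380L example above, a range spanning 100L below to 100L above the student's measured level; the conversion from TAKS scale scores to Lexile levels itself comes from TEA conversion tables and is not reproduced here.

    def lexile_range(lexile_level: int, spread: int = 100) -> tuple:
        # Range spanning `spread` Lexile points below and above the measured level.
        return lexile_level - spread, lexile_level + spread

    print(lexile_range(380))  # (280, 480), matching the 280L-480L example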
Once students' Lexile levels and ranges are determined, students can be matched with books they are likely to be able to read. However, Lexile levels are not fixed, and many factors can influence readability formulas. For example, interest can affect Lexile levels: When students are interested in a subject, they may be able to read text above their Lexile range (Schnick & Knickelbine, 2000). Therefore, a student with a strong interest in airplanes may be able to read a book about airplanes with a Lexile level above his or her Lexile range. When assigning reading in subjects such as social studies and science, teachers can also make books available for students within their Lexile range. Students should then feel confident that they will be able to understand the text and will feel more comfortable and motivated in reading. Readers who are intrinsically motivated to read can understand what the author is saying because they are curious and want to read more. In contrast, readers who are not motivated to read are often not engaged and fail to make the necessary connections for comprehension (Schnick & Knickelbine, 2000).

In addition, teacher effectiveness has been identified as a crucial variable in improved student test scores (Lane et al., 2009; Luster, 2010). Lane et al. found that students whose teachers had greater knowledge of the definition of oral reading fluency and the skills required to test it demonstrated greater gains in oral reading fluency than students whose teachers had less knowledge and skill.
Professional development workshops can educate and train teachers on the roles of oral reading fluency and reading proficiency. After training in how to administer oral reading fluency assessments, teachers can be introduced to instructional activities that have been shown to improve both oral reading fluency and reading proficiency and to enhance students' skills and motivation, such as repeated reading (Therrien & Hughes, 2008), reader's theatre (Young & Rasinski, 2009), goal setting (Morgan & Sideridis, 2006), and wide reading (Kuhn, 2005).

Finally, greater parental involvement is recommended. Although many parents are involved daily with their children's learning, parents often need support to become involved (Persampieri, Gortmaker, Daly, Sheridan, & McCurdy, 2006; Senge, 2000). School partnerships with parents have been shown to have strong positive effects on children's educational experiences. Persampieri et al. documented dramatic growth in the oral reading fluency rates of two struggling readers whose parents were trained to work with them for 10 to 15 minutes 3 times a week for 3 weeks. In a training session with each individual parent, one of the researchers described the intervention, modeled it, and observed the parent implementing the routine. Parents were then given a calendar with stickers to track intervention dates and reward the child. In addition, each child was assessed 3 times at school each week. One student's initial reading of a passage on his reading level went from 43 wcpm to 61 wcpm 3 weeks later; the other student's went from 36 wcpm to 60 wcpm. Rasinski and Stevenson (2005) studied two groups of 30 first graders. In one group, the parents worked with the children on reading assignments an average of 10 minutes daily for 11 weeks.
In the other group, parents did not assist. The results showed significantly greater gains in oral reading fluency rates for the experimental group than for the control group. Parents can be trained to help their children through parent-teacher conferences and through workshops specifically for parents on home activities that encourage their children to read. Training activities may include small-group demonstrations, written instructions, and modeling (Persampieri et al., 2006). To verify that parents are correctly implementing the method and to encourage them, teachers can follow up with procedural checklists, audiotapes, videotapes, and phone calls (Persampieri et al.).

Recommendations for Further Study

Based on the results of this study, I present several recommendations for further studies with quantitative, qualitative, and mixed methods. Some of these studies could replicate the current research to test generalizability. Other researchers could extend the present results for greater understanding of the relationship between students' ratings on the DIBELS ORF and state-mandated assessments as diagnostic tools to improve and accelerate students' reading proficiency.

Quantitative Studies

In this study and others (Baker et al., 2008; Barger, 2003; Roehrig et al., 2008; Shapiro et al., 2008; Vander Meer et al., 2005; Wilson, 2005; Wood, 2006), a correlational relationship was established between oral reading fluency and reading proficiency as measured by state-mandated reading assessments. However, because the definition of reading proficiency differs from state to state and grade to grade, it is important that additional studies be conducted to determine if a statistically significant relationship exists between oral reading fluency and specific state-mandated reading assessments (Roehrig et al.).
Thus, I suggest quantitative studies replicating the present study in other states and at grade levels other than Grade 3. Further, beginning in the 2011-2012 school year, the TEA will replace the TAKS with the State of Texas Assessments of Academic Readiness (STAAR; TEA, 2009f). An additional study is therefore suggested to determine if a statistically significant relationship exists between DIBELS ORF benchmark rates and the Grade 3 STAAR assessment. Results could be compared with those of the present study, not only for additional understanding but also to determine which assessment is more strongly related to the DIBELS ORF.

Other quantitative studies could explore the correlational relationship of the DIBELS ORF to other types of reading proficiency assessments that may measure various comprehension skills. Specifically, within the district in which this study was conducted, a study could explore whether a statistically significant relationship exists between DIBELS ORF benchmark rates and reading comprehension measured with other assessments, such as istation, which assesses students' reading ability through Internet-delivered tests (istation, 2010). Additionally, a study could be conducted to determine whether a stronger correlational relationship exists between the DIBELS ORF and the state-mandated assessment or between the istation reading comprehension score and the state-mandated reading assessment.
Each spring, the district involved in this study also administers the Comprehensive Test of Basic Skills (CTBS; Kids IQ Test Center, 2010) to students in Grades 1 and 2. A quantitative study could be conducted to determine if a statistically significant relationship exists between DIBELS ORF rates and the comprehensive reading score on the CTBS. A study could also compare the correlational relationships of the Grade 2 CTBS comprehensive reading scores and the Grade 3 DIBELS ORF benchmark rates with the Grade 3 state-mandated reading assessment to determine which has the stronger relationship.

Students are motivated to read books when using programs such as the Accelerated Reader (AR) program (Renaissance Learning, 2010). With AR, students receive points after they read a book and take a test on the computer. Such assessments frequently contain many explicit questions for which students can find the specific answers in the book; few implicit questions are asked that require students to infer or conclude, reading between the lines or beyond the lines. Students who take the Grade 3 Reading TAKS, in contrast, must demonstrate that they can use the higher-order comprehension skills of inferring and concluding. Following from these observations, a study could be conducted on students' use of higher-order and lower-order comprehension skills. A correlational analysis could investigate whether the DIBELS ORF and a reading proficiency assessment demonstrate a stronger statistically significant relationship when the assessment requires only lower-order comprehension skills. Such a study would help teachers determine if instruction in higher-order comprehension skills needs to be included in the instructional program.
Students could be more strongly motivated to learn if they recognized that reading and answering explicit questions could help them on the state-mandated reading assessment. Furthermore, they could be encouraged to learn additional strategies to increase their chances of answering higher-order comprehension questions on the state-mandated reading assessment. In addition, struggling readers could benefit from developing metacognitive strategies to improve their scores on both explicit and implicit questions.

Finally, as Baker et al. (2008) suggested, researchers could use quantitative studies to focus on students of specific ethnic backgrounds and subpopulations, such as economically disadvantaged and special education students. In the current study, 56% of the sample was economically disadvantaged and 54% was Hispanic. Although this demographic representation was similar to that of all Grade 3 students in Texas, a study could be conducted in which subpopulations, such as economically disadvantaged, Hispanic, or special education students, were larger than 64, the minimum sample size determined by the G*Power statistical program for a study of this nature (Faul et al., 2007). Results could help educators identify factors, such as poverty and ethnicity, that might affect test scores. Once such factors were identified, intervention programs could be specifically designed and implemented for students in these subpopulations.

Qualitative Studies

I also recommend qualitative follow-up studies to the present research. Using qualitative studies, researchers could explore factors such as participants' experiences, attitudes, perceptions, and understanding of a phenomenon (Creswell, 2009).
In the present study, I showed a statistically significant relationship between oral reading fluency and reading proficiency for Grade 3 students. A natural next step could be for researchers to explore teachers' attitudes toward oral reading fluency and reading proficiency. Teachers could be asked whether and why they felt that oral reading fluency and reading proficiency were related, whether and why they felt that time spent improving oral reading fluency was effective, and whether and why they felt that periods spent teaching other reading strategies were more effective. Researchers could also investigate teachers' perceptions of effective teaching strategies that positively affect oral reading fluency or reading proficiency, and teachers' experiences with severely struggling readers in relation to such strategies could be documented as well.

Attitudes and practices of parents could also be studied. Studies (Persampieri et al., 2006; Rasinski & Stevenson, 2005) have found that when parents were trained to work with their children, the students' oral reading fluency increased. Researchers could use qualitative studies to explore the attitudes and strategies of parents in assisting their children with reading, as well as the children's attitudes and experiences in working with their parents to improve oral reading fluency and/or reading proficiency.

Mixed-Method Studies

Mixed-method studies are effective because they combine quantitative and qualitative designs. Use of the combination may provide a more comprehensive understanding of the research problem and answers to the research questions than either single method (Creswell, 2009). A mixed-method study could be conducted to determine how the administration of each of the assessments affects students' scores.
In this study and the other state studies (Baker et al., 2008; Barger, 2003; Roehrig et al., 2008; Shapiro et al., 2008; Vander Meer et al., 2005; Wilson, 2005; Wood, 2006), no one-to-one correspondence was found between oral reading fluency and the state-mandated reading assessment. That is, none of the studies found that all students with a specific rate on the DIBELS ORF benchmark also achieved the same specific score on the reading assessment. Although a perfect one-to-one correspondence would be rare, studies could investigate the factors that may account for the less-than-perfect results. For example, do White students have a higher chance of passing the state-mandated assessment than Hispanic students? In situations in which more White students pass than Hispanic students, are there indications of bias? Researchers could conduct a mixed-method study to determine whether there are statistical differences in demographic characteristics and passing rates (quantitative) and to explore the views of teachers and administrators as to the reasons for the results (qualitative).

Researchers could also conduct a mixed-method study to explore the effect of testing environments on student performance in reading assessments. The DIBELS ORF benchmarks are individually administered (Good & Kaminski, 2002b). Various factors could affect the results of a DIBELS ORF administration, such as the presence of other students if the assessment is conducted in a corner of the classroom rather than in a quiet room with only the teacher and the student. In the first case, a student could be distracted by the other students in the room and might not read as fluently as he or she would in a quiet room alone with the teacher.
The TAKS, in contrast, is most frequently administered in a group setting (TEA, 2009e). Schools and teachers generally make every possible effort to maintain test security, but factors such as the behavior of other students in the classroom and eye contact with the teacher could have an effect on a student's performance. Using a mixed-method study, researchers could quantitatively measure variables, such as student scores from both individual administration and group administration, and then qualitatively explore participants' and facilitators' opinions of the effects of each condition on the students.

Further, researchers could explore the attitudes of the teacher toward the student's reading level and toward administration of the test. Although test directions specifically guide the teacher's verbal expressions, students are often aware of teachers' body language. The teacher's body language and assumptions about the student's difficulties in reading may affect the student's oral reading fluency (Childs & Fung, 2009; Singer & Goldin-Meadow, 2005). Researchers could conduct mixed-method studies to determine how such factors might influence the results of the assessments. Quantitative explorations could include surveys of teachers' attitudes, as well as correlational analyses between the two tests. In addition, researchers could ask open-ended questions during individual interviews with participants on factors such as the tests themselves, distractions during administration, and perceptions about students' reading levels.
Conclusion

The purpose of this study was to determine if a statistically significant relationship existed between oral reading fluency and reading proficiency. The middle-of-year DIBELS oral reading fluency rates of 155 Grade 3 students in a West Texas school district were used to measure oral reading fluency, and their Grade 3 Reading TAKS scale scores were used to measure reading proficiency. In this archival study, I used data from the 2008-2009 school year to conduct a Pearson correlation analysis, which demonstrated a statistically significant positive correlation of .66 (p < .01).

My results were comparable to those of nine other studies conducted in seven other states (Baker et al., 2008; Barger, 2003; Buck & Torgesen, 2003; Roehrig et al., 2008; Shaw & Shaw, 2002; Shapiro et al., 2008; Vander Meer et al., 2005; Wilson, 2005; Wood, 2006). All of these studies found a significant correlation between oral reading fluency and reading proficiency (range .58 to .80). Shaw and Shaw showed a strong correlation, .80, and five other research teams (Barger; Roehrig et al.; Shapiro et al.; Vander Meer et al.; Wilson) reported moderately strong correlations. The differences in correlation strength might be attributed to factors such as sample size, the rigor of the assessment used to measure reading proficiency, and the demographic composition of the sample.

This study is the first to investigate the relationship between students' oral reading fluency and reading proficiency in Texas. The study fills a gap in the literature by investigating whether a statistically significant relationship existed between oral reading fluency and reading proficiency as defined by the state-mandated assessment in Texas (TAKS). Positive social change can occur when educators in Texas use the results of this study as additional information and tools for helping struggling readers gain greater fluency and proficiency.
Because of the positive correlation between the DIBELS ORF and the Grade 3 Reading TAKS, educators can use oral reading fluency as a means of identifying struggling readers as early as Grade 1. Educators can then provide scientifically based interventions designed to improve students' basic literacy skills before they take the high-stakes Grade 3 Reading TAKS. If the needs of struggling readers are addressed in the early grades, they have a better chance of learning to read (Jenkins et al., 2007) and of succeeding academically on assessments such as the TAKS and in later course work throughout school (Ehri, 2005; Shapiro et al., 2008). Struggling readers whose needs have been addressed also improve their chances of graduating from high school (Houge et al., 2007; Rumberger & Palardy, 2005) and of obtaining employment after they graduate (Katsiyannis et al., 2007).

I made several recommendations for action in the field to improve oral reading fluency rates and to help students become more proficient readers. Teachers, especially in Grades 1 to 3, can use oral reading fluency rates to monitor their students' progress. Teachers can also employ teaching strategies such as repeated reading (Martens et al., 2007), reader's theatre (Corcoran & Davis, 2005), motivational strategies (Morgan & Sideridis, 2006), and paired reading (Nes Ferrera, 2005). Teachers can also enlist parental support through school-designed training programs for parents (Rasinski & Stevenson, 2005). Further, teachers can encourage students' wide reading (Kuhn, 2005) and help them monitor their progress. Lexile levels (Schnick & Knickelbine, 2000) can be used to determine students' instructional reading ranges, and teachers can then help students select reading material on their level, increasing students' motivation to read.
    114 determine students’ instructionalreading ranges, and teachers can then help students select reading material on their level, increasing students’ motivation to read. I recommended several possibilities for further study with quantitative, qualitative, and mixed-method research. These recommendations included additional quantitative studies replicating the present research in other states and grade levels, as well as quantitative studies on the relationship between the DIBELS ORF and assessments other than the TAKS, such as the new STAAR assessment to be implemented in Texas in 2012 (TEA, 2009f). I suggested researchers use qualitative studies to explore the attitudes of students, teachers, and parents regarding oral reading fluency and reading proficiency. I suggested further that researchers use mixed-method studies to explore the differences in the performance of various subgroups and the reasons and thoughts of teachers and administrators regarding those differences. Researchers could also conduct mixed-method studies to explore the possible impact of testing environments on results. In the current study, I confirmed a statistically significant relationship between oral reading fluency rates and reading proficiency. Although researchers in other states (Baker et al., 2008; Barger, 2003; Buck & Torgesen, 2003; Roehrig et al., 2008; Shaw & Shaw, 2002; Shapiro et al., 2008; Vander Meer et al., 2005; Wilson, 2005; Wood, 2006) confirmed similar relationships, I was the first to document the relationship between oral reading fluency rates and reading proficiency for Grade 3 students on the state-mandated assessment in Texas. Administrators and teachers in districts and schools can use the study findings to identify struggling readers in the early grades and provide immediate
and appropriate interventions to address these students' reading needs. Through identification and remediation, students will not only improve their scores on state-mandated assessments but also learn to read more proficiently and achieve greater academic success in the elementary grades and beyond.
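As an illustration of the kind of analysis reported here, the following is a minimal sketch, not the study's actual code or data: it computes a Pearson correlation between two hypothetical lists of scores standing in for middle-of-year DIBELS ORF rates (words correct per minute) and Grade 3 Reading TAKS scale scores. The variable names and the numbers are invented for illustration only.

```python
# Illustrative sketch only: hypothetical ORF rates and TAKS scale scores,
# not data from this study. Requires scipy.
from scipy import stats

# Hypothetical middle-of-year DIBELS ORF rates (words correct per minute)
orf_wcpm = [42, 55, 61, 70, 78, 85, 92, 101, 110, 124]

# Hypothetical Grade 3 Reading TAKS scale scores for the same students
taks_scale = [1890, 1950, 2000, 2080, 2110, 2150, 2200, 2230, 2290, 2350]

# Pearson product-moment correlation and its two-tailed p value
r, p = stats.pearsonr(orf_wcpm, taks_scale)
print(f"r = {r:.3f}, p = {p:.4f}")
```

With real data, a p value below the chosen alpha level (commonly .05) would indicate a statistically significant linear relationship between the two measures.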
References

Allington, R. L. (2009). What really matters in fluency: Research-based practices across the curriculum. Boston, MA: Pearson.
Applegate, A. J., Applegate, M. D., McGeehan, C. M., Pinto, C. M., & Kong, A. (2009). The assessment of thoughtful literacy in NAEP: Why the states aren’t measuring up. Reading Teacher, 62(5), 372-381. doi:10.1598/RT.62.5.1
Baker, S. K., Smolkowski, K., Katz, R., Fien, H., Seeley, J. R., Kame’enui, E. J., & Beck, C. T. (2008). Reading fluency as a predictor of reading proficiency in low-performing, high-poverty schools. School Psychology Review, 37(1), 18-37.
Barger, J. (2003). Comparing the DIBELS oral reading fluency indicator and the North Carolina end of grade reading assessment (Technical Report). Asheville, NC: North Carolina Teacher Academy.
Begeny, J., Daly III, E., & Valleley, R. (2006). Improving oral reading fluency through response opportunities: A comparison of phrase drill error correction with repeated readings. Journal of Behavioral Education, 15(4), 229-235. doi:10.1007/s10864-006-9028-4
Buck, J., & Torgesen, J. (2003). The relationship between performance on a measure of oral reading fluency and performance on the Florida Comprehensive Assessment Test, I. Tallahassee, FL: Florida Center for Reading Research.
Chard, D. J., Stoolmiller, M., Harn, B. A., Wanzek, J., Vaughn, S., Linan-Thompson, S., et al. (2008). Predicting reading success in a multilevel schoolwide reading model. Journal of Learning Disabilities, 41(2), 174-188. doi:10.1177/0022219407313588
Childs, R. A., & Fung, L. (2009). “The first year, they cried”: How teachers address test stress. Canadian Journal of Educational Administration and Policy, 96, 1-14.
Cline, F., Johnstone, C., & King, T. (2006). Focus group reactions to three definitions of reading (as originally developed in support of NARAP goal 1). Minneapolis, MN: National Accessible Reading Assessment Projects.
Colorado Department of Education. (2009). Colorado model content standards for reading and writing. Retrieved August 18, 2010, from http://www.cde.state.co.us/cdeassess/documents/OSA/standards/read.htm
Corcoran, C. A., & Davis, A. D. (2005). A study of the effects of readers’ theatre on second and third grade special education students’ fluency growth. Reading Improvement, 42(2), 105-111.
Creswell, J. W. (2008). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (3rd ed.). Upper Saddle River, NJ: Pearson Merrill Prentice-Hall.
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Los Angeles, CA: Sage.
Cusumano, D. L. (2007). Is it working? An overview of curriculum based measurement and its uses for assessing instructional, intervention, or program effectiveness. Behavior Analysis Today, 8(1), 24-34.
Daane, M. C., Campbell, J. R., Grigg, W. S., Goodman, M. J., & Oranje, A. (2005). Fourth-grade students reading aloud: NAEP 2002 special study of oral reading (NCES Report No. 2006-469). Washington, DC: U.S. Department of Education.
Deeney, T. A. (2010). One-minute fluency measures: Mixed messages in assessment and instruction. Reading Teacher, 63(6), 440-450. doi:10.1598/RT.63.6.1
Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Assessment Committee. (2002). Analysis of reading assessment measures. Retrieved August 20, 2010, from https://dibels.uoregon.edu/techreports
Edformation. (2004). AIMSweb progress monitoring and assessment system. Retrieved August 18, 2010, from http://www.edinformation.com
Ehri, L. C. (2005). Learning to read words: Theory, findings, and issues. Scientific Studies of Reading, 9(2), 167-188.
Ehri, L. C., & McCormick, S. (1998). Phases of word learning: Implications for instruction with delayed and disabled readers. Reading and Writing Quarterly, 14(2), 135-164.
Eldredge, J. (2005). Foundations of fluency: An exploration. Reading Psychology, 26(2), 161-181. doi:10.1080/02702710590930519
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175-191.
Fletcher, J. M., Francis, D. J., Morris, R. D., & Lyon, G. R. (2005). Evidence-based assessment of learning disabilities in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34(3), 506-522.
Florida Department of Education. (2005). FCAT handbook: A resource for educators. Tallahassee, FL: Author.
Francis, D. J., Fletcher, J. M., Stuebing, K. K., Lyon, G. R., Shaywitz, B. A., & Shaywitz, S. E. (2005). Psychometric approaches to the identification of LD: IQ and achievement scores are not sufficient. Journal of Learning Disabilities, 38(2), 98-108.
Fuchs, L., Fuchs, D., Hosp, M., & Jenkins, J. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5(3), 239-256.
Gersten, R., & Dimino, J. A. (2006). RTI (response to intervention): Rethinking special education for students with reading difficulties (yet again). Reading Research Quarterly, 41(1), 99-108.
Glenn, D. (2007, February 2). Reading for profit. Chronicle of Higher Education, 53(22), p. A8. Retrieved August 18, 2010, from http://chronicle.com/weekly/v53/i22/22a00801.htm
Goffreda, C. T., DiPerna, J. C., & Pedersen, J. A. (2009). Preventive screening for early readers: Predictive validity of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS). Psychology in the Schools, 46(6), 539-552. doi:10.1002/pits.20396
Good, R. H., & Kaminski, R. A. (2002a). DIBELS oral reading fluency passages for first through third grades (Technical Report No. 10). Eugene, OR: University of Oregon.
Good, R. H., & Kaminski, R. A. (Eds.). (2002b). Dynamic indicators of basic early literacy skills (6th ed.). Eugene, OR: Institute for the Development of Educational Achievement. Retrieved August 20, 2010, from http://dibels.uoregon.edu/
Goodman, K. S. (2006). The truth about DIBELS: What it is, what it does. Portsmouth, NH: Heinemann.
Gravetter, F. J., & Wallnau, L. B. (2005). Essentials of statistics for the behavioral sciences (5th ed.). Belmont, CA: Wadsworth/Thomson Learning.
Harcourt Brace Educational Measurement. (2002). Stanford Achievement Test (10th ed.). San Antonio, TX: Psychological Corporation.
Harn, B. A., Stoolmiller, M., & Chard, D. J. (2008). Measuring the dimensions of alphabetic principle on the reading development of first graders. Journal of Learning Disabilities, 41(2), 143-157. doi:10.1177/0022219407313585
Harrison, C. (2008). The brown bear and the American eagle of reading research: Steven A. Stahl and Michael Pressley remembered. Reading Research Quarterly, 43(2), 188-198. doi:10.1598/RRQ.43.2.5
Hasbrouck, J., & Tindal, G. A. (1992). Curriculum-based oral reading fluency norms for students in grades 2-5. Teaching Exceptional Children, 24(3), 41-44.
Hasbrouck, J., & Tindal, G. A. (2006). Oral reading fluency norms: A valuable assessment tool for reading teachers. Reading Teacher, 59(7), 636-644.
Hiebert, E. (2005). The effects of text difficulty on second graders' fluency development. Reading Psychology, 26(2), 183-209. doi:10.1080/02702710590930528
Hirsch, E. D. (2003). Reading comprehension requires knowledge—of words and the world. American Educator, 27(1), 10-13, 16-22, 28-29, 48.
Hosp, M. K., & Fuchs, L. S. (2005). Using CBM as an indicator of decoding, word reading, and comprehension: Do the relations change with grade? School Psychology Review, 34(1), 9-26.
Houge, T. T., Peyton, D., Geier, C., & Petrie, B. (2007). Adolescent literacy tutoring: Face-to-face and via webcam technology. Reading Psychology, 28(3), 283-300.
Hudson, R. F., Lane, H. B., & Pullen, P. C. (2005). Reading fluency assessment and instruction: What, why, and how? Reading Teacher, 58(8), 702-714. doi:10.1598/RT.58.8.1
istation. (2010). ISIP: istation's Indicators of Progress. Retrieved August 18, 2010, from http://www2.istation.com/products/isip.asp
Jenkins, J. R., Hudson, R. F., & Johnson, E. S. (2007). Screening for at-risk readers in a response to intervention framework. School Psychology Review, 36(4), 582-600.
Jimerson, S. R., Anderson, G. E., & Whipple, A. D. (2002). Winning the battle and losing the war: Examining the relation between grade retention and dropping out of high school. Psychology in the Schools, 39(4), 441-457. doi:10.1002/pits.10046
Kame’enui, E. J., & Simmons, D. C. (2001). Introduction to this special issue: The DNA of reading fluency. Scientific Studies of Reading, 5(3), 203-210.
Katsiyannis, A., Zhang, D., Ryan, J. B., & Jones, J. (2007). High-stakes testing and students with disabilities: Challenges and promises. Journal of Disability Policy Studies, 18(3), 160-167.
Katz, L. A., Stone, C. A., Carlisle, J. F., Corey, D. L., & Zeng, J. (2008). Initial progress of children identified with disabilities in Michigan’s Reading First schools. Exceptional Children, 74(2), 235-256.
Katzir, T., Kim, Y., Wolf, M., O’Brien, B., Kennedy, B., Lovett, M., et al. (2006). Reading fluency: The whole is more than the parts. Annals of Dyslexia, 56(1), 51-82.
Kids IQ Test Center. (2010). Comprehensive Test of Basic Skills. Retrieved August 20, 2010, from http://www.kids-iq-tests.com/CTBS.html
Kuhn, M. (2005). A comparative study of small group fluency instruction. Reading Psychology, 26(2), 127-146. doi:10.1080/02702710590930492
LaBerge, D., & Samuels, S. J. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293-323.
Lane, H. B., Hudson, R. F., Leite, W. L., Kosanovich, M. L., Strout, M. T., Fenty, N. S., & Wright, T. L. (2009). Teacher knowledge about reading fluency and indicators of students’ fluency growth in Reading First schools. Reading and Writing Quarterly, 25, 57-86. doi:10.1080/10573560802491232
Leslie, L., & Caldwell, J. (1995). Qualitative reading inventory (2nd ed.). Reading, MA: Addison-Wesley Longman.
Luster, J. (2010). Why states should require a teaching performance assessment and a subject matter assessment for a preliminary teaching credential. Research in Higher Education Journal, 8, 1-16.
Manzo, K. (2005). National clout of DIBELS test draws scrutiny. Education Week, 25(5), 1-12. Retrieved August 18, 2010, from http://www.edweek.org/ew/articles/2005/09/28/05dibels.h25.html
Marcell, B. (2007). Traffic light reading: Fostering the independent usage of comprehension strategies with informational text. Reading Teacher, 60(8), 778-781. doi:10.1598/RT.60.8.8
Martens, B., Eckert, T., Begeny, J., Lewandowski, L., DiGennaro, F., Montarello, S., et al. (2007). Effects of a fluency-building program on the reading performance of low-achieving second and third grade students. Journal of Behavioral Education, 16(1), 38-53. doi:10.1007/s10864-006-9022-x
MetaMetrics. (2010). What is a Lexile measure? Retrieved August 18, 2010, from http://www.lexile.com/about-lexile/lexile-overview/
Miller, J., & Schwanenflugel, P. (2006). Prosody of syntactically complex sentences in the oral reading of young children. Journal of Educational Psychology, 98(4), 839-853. doi:10.1037/0022-0663.98.4.839
Morgan, P. L., Farkas, G., & Hibel, J. (2008). Matthew effects for whom? Learning Disability Quarterly, 31, 187-198.
Morgan, P. L., & Sideridis, G. D. (2006). Contrasting the effectiveness of fluency interventions for students at risk for learning disabilities: A multilevel random coefficient modeling meta-analysis. Learning Disabilities Research and Practice, 21(4), 191-210.
Musti-Rao, S., Hawkins, R., & Barkley, E. (2009). Effects of repeated reading on the oral reading fluency of urban fourth-grade students: Implications for practice. Preventing School Failure, 54(1), 12-23.
National Accessible Reading Assessment Projects. (NARAP). (2006). Defining reading proficiency for accessible large-scale assessments: Some guiding principles and issues. Minneapolis, MN: Author.
National Institute of Child Health and Human Development. (NICHHD). (2000). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Publication No. 00-4769). Washington, DC: Author.
Nes Ferrara, S. (2005). Reading fluency and self-efficacy: A case study. International Journal of Disability, Development and Education, 52(3), 215-231.
No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).
Ouellette, G. P. (2006). What’s meaning got to do with it: The role of vocabulary in word reading and reading comprehension. Journal of Educational Psychology, 98(3), 554-566.
Pearson, P. D. (2006). Foreword. In K. S. Goodman (Ed.), The truth about DIBELS: What it is, what it does (pp. v-xix). Portsmouth, NH: Heinemann.
Persampieri, M. P., Gortmaker, V., Daly, E. J., Sheridan, S. M., & McCurdy, M. (2006). Promoting parent use of empirically supported reading interventions: Two experimental investigations of child outcomes. Behavioral Interventions, 21, 31-57. doi:10.1002/bin.210
Pikulski, J., & Chard, D. (2005). Fluency: Bridge between decoding and reading comprehension. Reading Teacher, 58, 510-519. doi:10.1598/RT.58.6.2
Pressley, M., Gaskins, I. W., & Fingeret, L. (2006). Instruction and development of reading fluency in struggling readers. In S. J. Samuels & A. E. Farstrup (Eds.), What research has to say about fluency instruction (pp. 24-46). Newark, DE: International Reading Association.
Pressley, M., Hilden, K., & Shankland, R. (2005). An evaluation of end-grade-3 Dynamic Indicators of Basic Early Literacy Skills (DIBELS): Speed reading without comprehension, predicting little. East Lansing, MI: Literacy Achievement Research Center.
Rasinski, T. V. (2006). Reading fluency instruction: Moving beyond accuracy, automaticity, and prosody. Reading Teacher, 59(7), 704-706.
Rasinski, T. V., & Stevenson, B. (2005). The effects of Fast Start reading: A fluency-based home involvement reading program, on the reading achievement of beginning readers. Reading Psychology, 26(2), 109-125. doi:10.1080/02702710590930483
Read Naturally. (2002). Reading fluency monitor. Retrieved August 18, 2010, from http://www.readnaturally.com
Renaissance Learning. (2010). Accelerated Reader. Retrieved August 18, 2010, from http://www.renlearn.com/ar/
Riedel, B. (2007). The relation between DIBELS, reading comprehension, and vocabulary in urban first-grade students. Reading Research Quarterly, 42(4), 546-562.
Roehrig, A., Petscher, Y., Nettles, S., Hudson, R., & Torgesen, J. (2008). Accuracy of the DIBELS oral reading fluency measure for predicting third grade reading comprehension outcomes. Journal of School Psychology, 46(3), 343-366. doi:10.1016/j.jsp.2007.06.006
Rosenthal, J., & Ehri, L. C. (2008). The mnemonic value of orthography for vocabulary learning. Journal of Educational Psychology, 100(1), 175-191.
Rumberger, R. W., & Palardy, G. J. (2005). Test scores, dropout rates, and transfer rates as alternative indicators of high school performance. American Educational Research Journal, 42(1), 3-42.
Samuels, S. J. (2006). Toward a model of reading fluency. In S. J. Samuels & A. E. Farstrup (Eds.), What research has to say about fluency instruction (pp. 24-46). Newark, DE: International Reading Association.
Samuels, S. (2007). The DIBELS tests: Is speed of barking at print what we mean by reading fluency? Reading Research Quarterly, 42(4), 563-566.
Samuels, S. J., Ediger, K. M., Willcutt, J. R., & Palumbo, T. J. (2005). In S. E. Israel, C. C. Block, K. L. Bauserman, & K. Kinnucan-Welsch (Eds.), Metacognition in literacy learning: Theory, assessment, instruction, and professional development (pp. 41-59). Mahwah, NJ: Lawrence Erlbaum Associates.
Schnick, T., & Knickelbine, M. (2000). The Lexile framework: An introduction for educators. Durham, NC: MetaMetrics.
Schwanenflugel, P., Meisinger, E., Wisenbaker, J., Kuhn, M., Strauss, G., & Morris, R. (2006). Becoming a fluent and automatic reader in the early elementary school years. Reading Research Quarterly, 41(4), 496-522.
Senge, P. (2000). Schools that learn: A fifth discipline fieldbook for educators, parents, and everyone who cares about education. New York, NY: Doubleday.
Shapiro, E. S., Solari, E., & Petscher, Y. (2008). Use of a measure of reading comprehension to enhance prediction on the state high stakes assessment. Learning and Individual Differences, 18(3), 316-328.
Shaw, D. M., & Berg, M. A. (2009). Jail participants actively study words. Journal of Correctional Education, 60(2), 100-119.
Shaw, R., & Shaw, D. (2002). DIBELS oral reading fluency-based indicators of third grade reading skills for Colorado State Assessment Program (Technical Report). Eugene, OR: University of Oregon.
Shaywitz, S. (2003). Overcoming dyslexia: A new and complete science-based program for reading problems at any level. New York, NY: Vintage Books.
Simmons, D. C., Coyne, M. D., Kwok, O., McDonagh, S., Harn, B. A., & Kame’enui, E. J. (2008). Indexing response to intervention: A longitudinal study of reading risk from kindergarten through grade 3. Journal of Learning Disabilities, 41(2), 158-173. doi:10.1177/0022219407313587
Singer, M. A., & Goldin-Meadow, S. (2005). Children learn when their teacher's gestures and speech differ. Psychological Science, 16(2), 85-89. doi:10.1111/j.0956-7976.2005.00786.x
Speece, D., & Ritchey, K. (2005). A longitudinal study of the development of oral reading fluency in young children at risk for reading failure. Journal of Learning Disabilities, 38(5), 387-399.
Stanovich, K. E. (1998). Progress in understanding reading: Scientific foundations and new frontiers. New York, NY: The Guilford Press.
Statistical Package for the Social Sciences. (SPSS). (2009). Predictive analysis software 17.0. Chicago, IL: Author.
Stecker, P. M., Lembke, E. S., & Foegen, A. (2008). Using progress-monitoring data to improve instructional decision making. Preventing School Failure, 52(2), 48-58.
Success for All Foundation. (2010). About SFAF. Retrieved August 18, 2010, from http://www.successforall.com/
Texas Education Agency. (TEA). (2004). Grade 3 reading Texas Assessment of Knowledge and Skills information booklet. Austin, TX: Author.
Texas Education Agency. (TEA). (2006). TAKS performance level descriptors. Retrieved August 18, 2010, from http://www.tea.state.tx.us/index3.aspx?id=3222&menu_id=793
Texas Education Agency. (TEA). (2008). Grade placement committee manual: For grade advancement requirements of the student success initiative. Retrieved March 18, 2010, from http://ritter.tea.state.tx.us/student.assessment/resources/ssi/GPCManual.pdf
Texas Education Agency. (TEA). (2009a). TAKS campus aggregate results page. Retrieved August 18, 2010, from http://www.tea.state.tx.us/index3.aspx?id=3216&menu_id=793
Texas Education Agency. (TEA). (2009b). Texas Assessment of Knowledge and Skills raw score conversion table reading—March 2009 administration grade 3. Retrieved August 18, 2010, from http://ritter.tea.state.tx.us/student.assessment/scoring/convtables/2009/taks/mar09_g03_read.pdf
Texas Education Agency. (TEA). (2009c). Texas Assessment of Knowledge and Skills statewide summary report. Retrieved August 18, 2010, from http://ritter.tea.state.tx.us/student.assessment/reporting/results/summary/2009/taks_mar09_g03.pdf
Texas Education Agency. (TEA). (2009d). Texas Assessment of Knowledge and Skills—Summary report, group performance: District. Austin, TX: Author.
Texas Education Agency. (TEA). (2009e). Texas Student Assessment Program, Accommodations manual, 2009-2010. Retrieved August 18, 2010, from http://ritter.tea.state.tx.us/student.assessment/resources/accommodations/AccommManual_2009_10.pdf
Texas Education Agency. (TEA). (2009f). 2009 accountability manual. Retrieved August 18, 2010, from http://ritter.tea.state.tx.us/perfreport/account/2009/manual/index.html
Texas Education Agency (TEA) & Pearson. (2008). Technical digest for the academic year 2007-2008. Retrieved August 18, 2010, from http://www.tea.state.tx.us/index3.aspx?id=4326&menu_id=793
Therrien, W., Gormley, S., & Kubina, R. (2006). Boosting fluency and comprehension to improve reading achievement. Teaching Exceptional Children, 38(3), 22-26.
Therrien, W., & Hughes, C. (2008). Comparison of repeated reading and question generation on students’ reading fluency and comprehension. Learning Disabilities: A Contemporary Journal, 6(1), 1-16.
Therrien, W., & Kubina, R. (2007). The importance of context in repeated reading. Reading Improvement, 44(4), 179-188.
Torgesen, J. K., & Hudson, R. F. (2006). Reading fluency: Critical issues for struggling readers. In S. J. Samuels & A. E. Farstrup (Eds.), What research has to say about fluency instruction (pp. 130-158). Newark, DE: International Reading Association.
U.S. Department of Education. (2007). The nation’s report card in reading 2007: National assessment of educational progress at grades 4 and 8. Institute of Education Sciences, National Center for Education Statistics (NCES Publication No. 2007-496). Retrieved August 20, 2010, from http://nationsreportcard.gov/reading_2007/
U.S. Department of Education. (2010). Reading First. Retrieved August 18, 2010, from http://www2.ed.gov/programs/readingfirst/index.html
University of Houston. (1999). Technical report: Texas Primary Reading Inventory. Retrieved August 18, 2010, from http://www.tpri.org/Researcher%5FInformation
University of Oregon Center on Teaching and Learning. (2008). DIBELS data system: DIBELS benchmark goals three assessment periods per year. Eugene, OR: University of Oregon.
Vander Meer, C. D., Lentz, F. E., & Stollar, S. (2005). The relationship between oral reading fluency and Ohio proficiency testing in reading (Technical Report). Eugene, OR: University of Oregon.
Walczyk, J., & Griffith-Ross, D. (2007). How important is reading skill fluency for comprehension? Reading Teacher, 60(6), 560-569. doi:10.1598/RT.60.6.6
Watson, J. D. (1968). The double helix: A personal account of the discovery of the structure of DNA. New York, NY: Atheneum.
Wilson, J. (2005). The relationship of Dynamic Indicators of Basic Early Literacy Skills (DIBELS) oral reading fluency to performance on Arizona Instrument to Measure Standards (AIMS). Tempe, AZ: Tempe School District No. 3.
Wood, D. E. (2006). Modeling the relationship between oral reading fluency and performance on a statewide reading test. Educational Assessment, 11(2), 85-104.
Wood, F., Hill, D., Meyer, M., & Flowers, D. (2005). Predictive assessment of reading. Annals of Dyslexia, 55(2), 193-216.
Woodcock, R. M. (1973). Woodcock Reading Mastery Test-Revised. Circle Pines, MN: American Guidance Corp.
Worthy, J., & Broaddus, K. (2002). Fluency beyond the primary grades: From group performance to silent, independent reading. Reading Teacher, 55(4), 334-343.
Young, C., & Rasinski, T. (2009). Implementing reader’s theatre as an approach to classroom fluency. Reading Teacher, 63(1), 4-13. doi:10.1598/RT.63.1.1
Appendix B: Data Use Agreement
TEXAS EDUCATION AGENCY - COPYRIGHT LICENSE AND PERMISSION FORM

Applicant Information (to be completed by applicant):
Name: Kathy Jones
Title: Doctoral Student
Company: Walden University
Address: 9060 W. University
City: Odessa
State/Province: TX
Zip: 79764
Country: USA
Phone: 432.230.0130
Email: jonesks1956@yahoo.com

Details of Request (to be completed by applicant):

1. Title/Author of Works (the “Works”) for Which License is Sought (for example, Texas Assessment of Knowledge and Skills (TAKS), G/T Teacher Toolkit II, Videos from series “Accommodations and Modifications in CTE Classroom Instruction”): Use of TAKS as a resource for completing doctoral study at Walden University

2. Indicate How Works will be Used: Business/Commercial / Non-Profit/Government / Personal Use

3. Briefly State Purpose for Which License is Sought:
5-10-2010
I am a doctoral student at Walden University. My research will determine if a statistically significant relationship exists between oral reading fluency (as measured by the middle-of-year Grade 3 DIBELS benchmark) and reading proficiency (as measured by the scale score on the February 2009 Grade 3 Reading TAKS). I would like to obtain permission from TEA to use TAKS in my study. As you requested in our phone conversation, I am attaching a copy of my proposal with all the references to TAKS highlighted in green. The specific websites and resources that I referenced are listed below:
Texas Education Agency. (2004). Grade 3 reading Texas Assessment of Knowledge and Skills information booklet. Austin, TX: Author.
Texas Education Agency. (TEA). (2006). TAKS performance level descriptors. Retrieved from http://www.tea.state.tx.us/index3.aspx?id=3222&menu_id=793
Texas Education Agency. (TEA). (2009a). TAKS campus aggregate results page. Retrieved from http://ritter.tea.state.tx.us/cgi/sas/broker
Texas Education Agency. (TEA). (2009b). Texas Assessment of Knowledge and Skills raw score conversion table reading—March 2009 administration grade 3. Retrieved from http://ritter.tea.state.tx.us/student.assessment/scoring/convtables/2009/taks/mar09_g03_read.pdf
Texas Education Agency. (2009c). Texas Assessment of Knowledge and Skills statewide summary report. Retrieved from http://ritter.tea.state.tx.us/student.assessment/reporting/results/summary/2009/taks_mar09_g03.pdf
Texas Education Agency. (TEA). (2009d). Texas Assessment of Knowledge and Skills—Summary report, group performance: District. Austin, TX: Author.
Texas Education Agency. (TEA). (2009e). Texas Student Assessment Program, Accommodations manual, 2009-2010. Retrieved from http://ritter.tea.state.tx.us/student.assessment/resources/accommodations/AccommManual_2009_10.pdf
Texas Education Agency (TEA) & Pearson. (2008). Technical digest for the academic year 2007-2008. Retrieved from http://ritter.tea.state.tx.us/student.assessment/resources/techdigest/2008/table_of_contents.pdf

My committee has approved my proposal. Several people have indicated that many doctoral students complete all the requirements of their dissertation within six months of this point. Consequently, my current goal for completion is December 2010. On the phone you indicated that you would probably extend permission to use TAKS in my study until March 2011. Thank you for your time! Kathy Jones

4. Identify the specific URL(s) or website(s) where the “Works” can be located:
http://www.tea.state.tx.us/index3.aspx?id=3222&menu_id=793
http://ritter.tea.state.tx.us/cgi/sas/broker

The Texas Education Agency is dedicated to improving educational performance. It is the owner of various proprietary materials such as the Works listed above. Any use of the Works is subject to the attached Terms of Use and the provisions listed below. Any use shall include the following notice: Copyright © 2010. Texas Education Agency. All Rights Reserved.
Terms of Copyright License Authorized, if any (to be completed by Texas Education Agency)

A. Authorized Use (to be completed by Texas Education Agency): Authorized to use, reproduce, display, publish, and distribute TEA materials in conjunction with Ms. Jones' doctoral archival study at Walden University.
B. Additional Restrictions: Use by applicant of TEA copyrighted material is limited to graduate study only. You may not charge a fee for your study, nor market or sell your study containing TEA copyrighted materials without a license agreement from TEA.
C. Fee for Use of Works: None
D. Term (check one): From date of issue and ending on March 31, 2011
E. Licensed Territory: Worldwide / Country/Province: U.S. & its possessions and territories / State(s) (please specify): Texas
F. Payment Schedule: None
G. Reporting: None

COPYRIGHT LICENSE AFFIRMATION OR DENIAL (To be completed by the Texas Education Agency)
I, the undersigned, on behalf of the Texas Education Agency, grant a license for the person or entity identified above to use the Works on a non-exclusive, nontransferable, non-assignable basis pursuant to the above and the Terms of Use set forth below.
I am unable to grant a copyright license for use of the specified Works.

By Texas Education Agency
Printed Name: Robert N. Jocius
Title: Manager, Office of Intellectual Property
Date: 6/23/2010

By Applicant
Printed Name: Kathy Jones

PLEASE RETURN THIS FORM TO:
Robert N. Jocius
Manager, Office of Intellectual Property, Room 5-125C
Texas Education Agency
1701 N. Congress Avenue
Austin, TX 78701
Copyrights@tea.state.tx.us
Ph.: (512) 463-9270

TERMS OF USE

1. Definitions. “Agreement” means the above Copyright License and Permission Form and these Terms of Use. “Authorized Use” means the purpose for which the Works are to be used and the approved use of the Works granted by TEA. “Intellectual property rights” means the worldwide intangible legal rights or interests evidenced by or embodied in: (a) any idea, design, concept, method, process, technique, apparatus, invention, discovery, or improvement, including any patents, trade secrets, and know-how; (b) any work of authorship, including any copyrights, moral rights or neighboring rights; (c) any trademark, service mark, trade dress, trade name, or other indicia of source or origin; (d) domain name registrations; and (e) any other similar rights. The intellectual property rights of a party include all worldwide intangible legal rights or interests that the party may have acquired by assignment or license with the right to grant sublicenses. “Licensee” means the applicant specified above, if applicant’s Copyright License and Permission Form is approved by TEA. “Licensed Territory” means the specific Territory (district, area or geographic location) in which Licensee is located and for which the license, if any, is granted by TEA. “TEA” means the Texas Education Agency. “Works” means the works of authorship, written materials or other tangible items specifically set forth above.

2. Grant of License. For the consideration set forth above, TEA grants to Licensee, and Licensee accepts from TEA, a revocable, non-exclusive, non-transferable, non-assignable license to utilize the Works on or in connection with the Authorized Use for educational purposes in the Licensed Territory for the Term specified above.
3. Term and Termination.
(a) The license granted herein will be effective from the date the Agreement is signed by TEA, and shall continue in effect for the Term identified above, unless sooner terminated in TEA’s sole discretion.
(b) If Licensee breaches any of its obligations under the terms of this Agreement, TEA may terminate this Agreement effective immediately, without prejudice to any other rights or remedies TEA may have, upon giving written notice of termination to Licensee.
(c) If Licensee attempts to assign, sublicense or subcontract any of its rights under this Agreement, without the prior written permission of TEA, TEA may terminate this Agreement effective immediately, without prejudice to any other rights or remedies TEA may have, upon giving written notice of termination to Licensee. Notwithstanding any of the foregoing, TEA has the right to terminate this Agreement, with or without cause, upon giving thirty (30) days written notice of its intent to terminate the Agreement.
(d) To the extent permitted by law, Licensee agrees to protect and treat all information provided by TEA to Licensee relating to the Works or this Agreement as confidential, proprietary and trade secret information. Licensee will not use, disclose, or cause to be used or disclosed, such confidential and trade secret information, or otherwise impair or destroy the value of such confidential and trade secret information. Licensee agrees not to disclose or cause to be disclosed the terms of this Agreement, except as required by law.

4. Compensation.
(a) Licensee will furnish to TEA a full, complete and accurate report showing all gross and net revenue received regarding the Works according to the reporting requirements found in Section G. of this license.
(b) Licensee will keep accurate books of accounts and records covering all transactions relating to this Agreement and the Works for all years during which this Agreement is in effect, and will keep such books for a period of five (5) years after the expiration or termination of this Agreement. TEA and the State of Texas auditor, or their representatives, will have the right to examine such books of account and records and other documents and material in Licensee’s possession or under its control insofar as they relate to the Agreement or the Works.
5. Indemnification. For local educational agencies (LEAs), regional education service centers (ESCs), and institutions of higher education (IHEs): Licensee, to the extent permitted by law, shall hold TEA harmless from and shall indemnify TEA against any and all claims, demands, and causes of action of whatever kind or nature asserted by any third party and occurring or in any way incident to, arising from, related to, or in connection with, any acts of Licensee in performance of the Agreement or in connection with Licensee’s use of the Works. For all other grantees, subgrantees, contractors, and subcontractors, including nonprofit organizations and for-profit businesses: Licensee shall hold TEA harmless from and shall indemnify TEA against any and all claims, demands, and causes of action of whatever kind or nature asserted by any third party and occurring or in any way incident to, arising from, related to, or in connection with, any acts of Licensee in performance of the Agreement or in connection with Licensee’s use of the Works.

6. Intellectual Property Rights.
(a) As between TEA and Licensee, TEA retains all right, title and interest in and to the Works, and any derivative works thereof, and any intellectual property rights associated therewith, including goodwill, and regardless of whether registered or not. Licensee’s use of the Works, or the intellectual property associated therewith, and the goodwill therein, inures to the benefit of TEA. Licensee has no rights in or to the Works or the intellectual property associated therewith, other than the right of use as expressly granted herein.
(b) To the extent that Licensee adds any additional materials to the Works, or creates any derivative works to the Works, Licensee agrees that the additional materials or derivative works are, upon creation, works made for hire and the sole property of TEA. If the additional materials or derivative works are, under applicable law, not considered works made for hire, Licensee hereby assigns to TEA all worldwide ownership of all rights, including all intellectual property rights, in and to the additional materials or derivative works to the Works, without the necessity of any further consideration, and TEA can obtain and hold in its own name all such rights. Licensee agrees to maintain written agreements with all officers, directors, employees, agents, representatives and subcontractors engaged by Licensee regarding this Agreement, granting Licensee rights sufficient to support the performance and grant of rights to TEA by Licensee. Copies of such agreements shall be provided to TEA promptly upon request.
(c) TEA, in its sole discretion, may procure registration of the intellectual property rights in and to the Works, or any derivative works thereof. Licensee will not seek to register or secure any intellectual property rights in and to the Works, or any derivative works thereof. Licensee shall notify TEA in writing of any infringements or imitations by others of the Works to which it becomes aware. TEA may then, in its sole discretion, commence or prosecute in its own name any claims or suits for infringement of the Works or the intellectual property rights associated therewith, or any derivative works thereto.
(d) Licensee shall not, during the Term of this Agreement, or at any time thereafter, dispute or contest, directly or indirectly, the right, title or interest of TEA in or to the Works, the intellectual property rights therein, the goodwill reflected thereby, or as to the validity of this Agreement or the license terms therein. The provisions of this section 6 shall survive the expiration or termination of the Agreement.
(e) Licensee will legibly and prominently display the following copyright notice in connection with all Authorized Use of the Works: Copyright © 2010. Texas Education Agency. All Rights Reserved.

7. Quality Standards.
(a) Licensee acknowledges that if the Authorized Use of the Works by Licensee were of inferior quality in design, material or workmanship, the substantial goodwill which TEA has built up and now possesses in the Works would be impaired. Accordingly, Licensee will ensure that its use of the Works will meet or exceed any and all relevant industry standards, and are of such style, appearance and quality as will be reasonable, adequate and suited to their exploitation and to protect and enhance such goodwill.
(b) To ensure that appropriate standards of style, appearance, and quality are maintained, Licensee will provide samples to TEA of Licensee’s proposed use of the Works prior to distribution to the intended recipient, for TEA’s approval.

8. Student Information. Licensee understands that any unauthorized disclosure of confidential student information is illegal as provided in the federal Family Educational Rights and Privacy Act of 1974 (FERPA), 20 USC, Section 1232g, and implementing federal regulations found in 34 CFR, Part 99.
9. Miscellaneous.
(a) All notices and statements provided for herein will be in writing and together with all payments provided for herein will be mailed to the addresses set forth above or such other address as may be designated in writing by TEA or Licensee from time to time.
(b) This Agreement does not constitute and will not be construed as constituting an agency, partnership, joint venture, master and servant relationship, employer-employee relationship, or any other similar relationship between TEA and Licensee, and no representation to the contrary shall be held binding on TEA.
(c) This Agreement will be construed in accordance with the laws of the State of Texas (except to the extent that federal patent, copyright, or trademark laws apply, in which case federal law shall govern), without reference to its choice of law principles, and entirely independent of the forum in which construction, interpretation or enforcement of the Agreement or any part of it may occur.
(d) The parties shall use the dispute resolution process provided for in Chapter 2260 of the Texas Government Code to attempt to resolve any claim for a breach of this Agreement. All references in this subparagraph to “subchapters” are to the subchapters referenced in Tex. Govt. C. Chapter 2260. Any claim for a breach of this Agreement that the parties cannot resolve in the ordinary course of business shall be submitted to the negotiation process provided in subchapter B. To initiate the process, Licensee shall submit written notice to the Texas Commissioner of Education. Such notice shall specify that the provisions of subchapter B are being invoked. The contested case process provided in subchapter C is Licensee’s sole and exclusive process for seeking a remedy for an alleged breach of this Agreement if the parties are unable to resolve their disputes in accordance with the negotiation process provided in subchapter B. Compliance with the contested case process provided in subchapter C is a condition precedent to seeking consent to sue from the Texas Legislature under Chapter 107 of the Tex. Civ. Prac. & Rem. C. In the event that consent to sue is granted by the Texas Legislature, then venue for any action or claim brought against TEA regarding this Agreement shall be in the state and/or federal courts located in Austin, Travis County, Texas, and the parties expressly submit themselves to the personal jurisdiction of the state and/or federal courts located in Austin, Travis County, Texas.
(e) If, but only to the extent that, any provision of this Agreement is declared or found to be illegal, unenforceable or void, then TEA and Licensee shall be relieved of all obligations arising under such provision, it being the intent and agreement of the parties that this Agreement shall be deemed amended by modifying such provision to the minimum extent necessary to make it legal and enforceable while
preserving its intent. It is the specific intent and request of TEA and Licensee that the court, arbitrator or other adjudicative body called upon to interpret or enforce this Agreement modify such provision to the minimum extent necessary so as to render it enforceable. If such amendment is not possible, another provision that is legal and enforceable and achieves the same objectives shall be substituted therefor. If the remainder of this Agreement is not affected by such declaration or finding and is capable of substantial performance by the parties, then the remainder shall be enforced to the extent permitted by law.
(f) This Agreement contains the entire understanding of the parties with respect to the subject matter hereof, and supersedes in all respects all prior oral or written agreements or understandings between any of them pertaining to the transactions contemplated by this Agreement. There are no representations, warranties, promises, covenants or undertakings other than those hereinabove contained.
(g) No waiver or modification of any of the terms of this Agreement will be valid unless in writing and signed by both parties. No waiver by any party of a breach hereof or a default herein will be deemed a waiver by such party of a subsequent breach or default of like or similar nature.
Curriculum Vitae

Kathy Jones
jonesks1956@yahoo.com

EDUCATION

2010 Doctor of Education in Administration, Walden University
• Dissertation Topic: “The Relationship Between Oral Reading Fluency and Reading Proficiency”
  o Conducted research to determine if there was a relationship between the middle-of-year oral reading fluency rates of Grade 3 students and performance on the state-mandated assessment for reading proficiency
• 4.0 GPA
• Served on the Walden Advisory Committee
• Completed residencies in Dallas, TX, and Lansdowne, VA

2001 Master of Education, Reading Specialist, Abilene Christian University, Abilene, Texas
• 4.0 GPA
• Participant in the education service center program for training professionals in other fields to become educators
• Passed all teacher examinations the first time they were attempted, including the prestigious Master Reading Teacher examination

1997 Bachelor of Applied Studies Degree, Abilene Christian University, Abilene, Texas
• 4.0 GPA
• Member of W-Club, an honor society for Christian women at ACU
• Member of Kappa Delta Pi
• Secretary for the Bachelor of Applied Studies (a degree completion program for older adult students)
  o Motivated adult learners to achieve their educational goals
  o Helped prospective adult students realize how their life experiences could be used to further their education
• Secretary for the McNair Scholars Program
  o Received inspiration from the story of Ronald McNair, an African American who came from a segregated school, attended MIT, obtained a doctorate in physics, became the second African American astronaut, and died on the Space Shuttle Challenger
  o Encouraged first-generation, low-income, undergraduate college students to pursue graduate programs

1995 Third Year Teaching Certificate, University of Micronesia/Federated States of Micronesia, Kolonia, Pohnpei, FM
• 4.0 GPA
• Elected as the Class Secretary
• Student Teacher, Kolonia Elementary School
• Conducted research in which 100 native children were assessed to determine which English sounds the students struggled to learn
• Attended the local university in the country in which my family served as missionaries
• Only American student in classes
• Obtained the highest degree available in education at that time in that location

TEACHING CERTIFICATES
• Standard
  o Master Reading Teacher (Grades EC-12), 9/12/2003-2/28/2015
  o Classroom Teacher English as a Second Language (Grades EC-12), 9/12/2003-2/28/2015
  o Reading Specialist (Grades PK-12), 9/12/2003-2/28/2015
• Provisional
  o Elementary Reading (Grades 1-8), 2/27/1998-Life
  o Elementary Self-Contained (Grades 1-8), 2/27/1998-Life
  o Generic Special Education (Grades PK-12), 5/22/1998-Life

EMPLOYMENT EXPERIENCE

District Dyslexia Teacher, Monahans-Wickett-Pyote Independent School District, June 2009 to present
• Devised process for identifying and progress monitoring dyslexic students
• Worked collaboratively with teachers, principals, and district administrators to meet the needs of dyslexic students
• Provided dyslexia therapy for dyslexic students in Grades 1 to 6
• Trained other teachers and aides to teach dyslexic students
• Analyzed data from students’ standardized assessments to group students for intervention
• Conducted 504 meetings where 504 students were identified and plans were made for accommodations to address their educational needs

Teacher, Tatom Elementary, Monahans-Wickett-Pyote Independent School District, January 2008 to June 2009
• Taught second graders
• Volunteered to work with dyslexic students
  o Taught six dyslexic students after school; all of these students who took the state-mandated reading assessment passed the first time the exam was administered
  o Made recommendations to the district for assessments to use when determining if students are dyslexic
  o Trained counselors to administer assessments
  o Administered dyslexia assessments
• Taught summer school for students who had failed the state-mandated reading assessment twice
  o Disaggregated data to determine students’ strengths and weaknesses
  o Targeted instruction to meet students’ needs
  o 75% of these students passed on their third attempt at the state-mandated reading assessment

504/Dyslexia Coordinator, Ector County Independent School District, August 2006 to December 2007
• Supervised six teachers who assessed students referred for dyslexia
• Evaluated district process for identifying dyslexics
• Improved record-keeping process for maintaining district records
• Collaborated with principals and other administrators to serve the needs of 504 students

Reading Coordinator, Ector County Independent School District, January 2005 to August 2006
• Wrote scope and sequence for reading, Grades K to 8
• Wrote district benchmarks for reading, Grades K to 8
• Trained teachers to administer the state-mandated early reading inventory
• Organized staff development days to provide a wide range of professional development activities to enhance instruction in reading
• Evaluated effectiveness of district reading programs
• Conducted professional development workshops in reading and dyslexia
Teacher, Bowie Junior High School, Ector County Independent School District, August 2000 to December 2004
• Taught reading, Grades 7 and 8
• Assessed and provided services for ESL students
• At principal’s request, mentored first-year teachers
• Received grants from local education foundation designed to encourage innovative teaching projects
  o Hidden in Plain View, $3,000.00
  o Walking a Mile in Your Moccasins, $2,970.01
  o Be Proud of Your Heritage, $860.00
  o Because We Can Change the World, $3,790.25
  o Wordplay at Work, $999.74
  o Empowering Parents and Teens to be Practically Perfect, $4,874.90
  o Reaching Out, $4,745.26
• Secretary and President-elect of local chapter of Texas Classroom Teachers Association
  o Met with superintendent each month to discuss teachers’ concerns
  o Attended conferences in the state capitol to keep informed on political issues relating to teachers
  o Did not serve as president because the district hired me as an administrator (TCTA membership does not include administrators)
• Featured in several education spotlights on local television stations and in newspapers
• Invited to join Professional Development Cadre, in which outstanding teachers were trained to present professional development workshops

Special Education Teacher, Franklin Middle School, Abilene Independent School District, December 1997 to May 2000
• Taught a self-contained unit of mentally challenged students
• Chosen as Teacher of the Month by student organization
• Attended two-year dyslexia training program
• Taught mentally challenged students how to read

Teacher, Adult Education, Abilene Independent School District, September 1996 to May 1997
• Taught adult students whose reading level was less than Grade 3
• Attended workshops for teachers of adult students
Missionary, Church of Christ, Kolonia, Pohnpei, Federated States of Micronesia, May 1982 to June 1995
• Served as missionary in a third-world developing country
• Adopted into a local clan
• Used the Bible to teach English to over 1,000 individuals
• Home-schooled my children over a 10-year period
  o Taught and inspired one daughter, who was designated as gifted and talented and later became a lawyer
  o Taught and identified one son as dyslexic, who later graduated with a master’s degree in international development
    - Noticed he was not learning to read proficiently
    - Read numerous books to gather information
    - Implemented teaching strategies that resulted in his academic growth from a 2nd-grade reading and writing level to a 7th-grade reading and writing level during his 6th-grade year
  o Taught one son with ADHD who is currently enrolled in a medical physics Ph.D. program
• Organized a network of home school families
  o Taught other home-schooled students who were struggling with reading
  o Taught a Social Studies unit on Australia for all the home-schooled children and culminated the unit with an afternoon spent with the Australian ambassador
• Worked collaboratively with native leaders and expatriates to open a library
  o Joined other interested individuals during the first meeting to discuss the idea
  o Recruited and organized volunteers to work in the library
  o Served on the Friends of the Library Board
• Conducted a summer school
  o Wrote curriculum for the summer school
  o Trained my husband and children to teach in summer school
  o Invited 40 children who could not read to attend summer school
  o Taught a boy whom local teachers had declared uneducable how to write his name
  o Taught a 17-year-old boy to read for the first time
  o Taught one boy who claimed that he had learned more in summer school than in the previous 9 months of school
• Accepted principals’ requests to train their teachers how to teach reading
COMMUNITY SERVICE

Visiting Committee, Brown Library, Abilene Christian University, 2004 to 2006
• Accepted appointment by the Director of Library Services to serve on committee
• Served with other experts to
  o Review the library’s operations
  o Report to the provost of the university on the library’s status

School Board Member, Odessa Christian Schools, 2007 to present

REFERENCES

Dr. Thomas Schnick
Chairman of Doctoral Committee
Walden University
608-242-4629
Thomas.Schnick@waldenu.edu

Dr. Barbara Bailey
Member of Doctoral Committee
404-272-2222
baileyba@mindspring.com

Susan Myers
Adjunct Professor
Grand Canyon University
(928) 333-4190
(928) 245-3101
susanmyers@frontiernet.net

Doug Doege
Supervisor
Tatom Elementary School
Monahans-Wickett-Pyote Independent School District
432-943-2769
ddoege@mwpisd.esc18.net