This research was funded by grants from the National Institute of Mental Health (5R01MH083717) and the Institute of Education Sciences (R324A080195). We thank the School District of Philadelphia and its teachers and families for their collaboration and support. Additionally, Dr. Stahmer is an investigator with the Implementation Research Institute at the George Warren Brown School of Social Work, Washington University, St. Louis, through an award from the National Institute of Mental Health (R25MH080916).

Correspondence to: Aubyn C. Stahmer, Child and Adolescent Services Research Center & Autism Discovery Institute, Rady Children's Hospital, San Diego, 3020 Children's Way, MC5033, San Diego, CA 92123. E-mail: [email protected]

Mesibov, 2003). Training public educators to provide evidence-based practices to children with autism is a central issue facing the field (Simpson, de Boer-Ott, & Smith-Myles, 2003).

One major challenge to implementing evidence-based practices for children with autism in community settings is the complexity of these practices. Strategies based on the principles of applied behavior analysis have the strongest evidence to support their use (National Standards Project, 2009). These practices vary greatly in structure and
difficulty. Some strategies, such as
discrete trial teaching (DTT; Leaf & McEachin, 1999; Lovaas,
1987), are highly structured and
occur in one-on-one settings, whereas others are naturalistic,
can be conducted individually or
during daily activities, and tend to be more complex to
implement (e.g., incidental teaching; Fenske,
Krantz, & McClannahan, 2001; or pivotal response training
[PRT]; Koegel et al., 1989). There are also
classroom-wide strategies and structures based on applied
behavior analysis, such as teaching within
functional routines (FR; Brown, Evans, Weed, & Owen, 1987;
Cooper, Heron, & Heward, 1987;
Marcus, Schopler, & Lord, 2000; McClannahan & Krantz,
1999). Although all of these evidence-
based practices share the common foundational principles of
applied behavior analysis, each is made
up of different techniques. These and other intervention
techniques are often packaged together as
“comprehensive interventions” (Odom, Boyd, Hall, & Hume,
2010) or used in combination in the
field to facilitate learning and expand the conditions under
which new student behaviors occur (Hess,
Morrier, Heflin, & Ivey, 2008; Stahmer, 2007).
Teachers can learn these evidence-based strategies within the
context of a research study (e.g.,
Suhrheinrich, 2011); however, studies report a highly variable
number of hours of training needed
to master the intervention strategy. For example, the amount of
time required to train classroom
educators in DTT in published studies ranges from 3 hours
(Sarokoff & Sturmey, 2004) at its most
brief, to recommendations of 26 to 60 hours of supervised
experience (Koegel, Russo, & Rincover,
1977; Smith, Buch, & Gamby, 2000; Smith, Parker, Taubman, &
Lovaas, 1992). Teachers have been
trained to fidelity in PRT in 8 to 20 hours (Suhrheinrich, 2011).
To achieve concurrent mastery of
several different intervention techniques and to incorporate the
development of appropriate student
goals, some researchers have suggested that teachers may need a
year or more of full-time, supervised
practicum training (Smith, Donahoe, & Davis, 2000).
There are several reasons why teachers may not implement
evidence-based practices the way
they were designed. First, teachers typically receive limited
instruction in specific interventions. For
example, instruction often comprises attendance at a didactic
workshop and receipt of a manual.
Teachers are then expected to implement evidence-based
practices without the ongoing coaching
and feedback that are critical for intervention mastery (Bush,
1984; Cornett & Knight, 2009). Second,
most evidence-based practices were not designed for school
settings and therefore may be difficult
to implement appropriately in the classroom (Stahmer,
Suhrheinrich, Reed, Bolduc, & Schreibman,
2011). Perhaps as a result, teachers often report that they
combine or modify evidence-based practices
to meet the specific needs of their classroom and students
(Stahmer, Collings, & Palinkas, 2005).
Finally, school administrators sometimes mandate the use of
programs that may not align with
teachers’ classroom environment, beliefs, or pedagogy
(Dingfelder & Mandell, 2011).
A major indication of the quality of the implementation of any
evidence-based practice is
treatment fidelity, also known as implementation fidelity
(Gersten et al., 2005; Horner et al., 2005;
Noell, Duhon, Gatti, & Connell, 2002; Noell et al., 2005;
Proctor et al., 2011; Schoenwald et al.,
2011). Implementation fidelity is the degree to which a
treatment is implemented as prescribed, or
the level of adherence to the specific procedures of the
intervention (e.g., Gresham, 1989; Rabin,
Brownson, Haire-Joshu, Kreuter, & Weaver, 2008; Schoenwald
et al., 2011). There are several types
of implementation fidelity. Procedural fidelity (Odom et al.,
2010; also called program adherence;
Schoenwald et al., 2011) is the degree to which the provider
uses procedures required to execute the
treatment as intended. Other types of fidelity include treatment
differentiation (the extent to which
treatments differ from one another), therapist competence (the
level of skill and judgment used
in executing the treatment; Schoenwald et al., 2011), and
dosage (Odom et al., 2010). Although,
ideally, all types of fidelity would be examined to determine the
fit of an intervention in a school
program (Harn, Parisi, & Stoolmiller, 2013), procedural fidelity
provides one important avenue for
Psychology in the Schools DOI: 10.1002/pits
Training Teachers in Autism Practices 183
measuring the extent to which an intervention resembles an
evidence-based practice or elements of
evidence-based practice (Garland, Bickman, & Chorpita, 2010).
Procedural implementation fidelity is likely a
mediating variable affecting student
outcomes, with higher fidelity resulting in better outcomes
(Durlak & DuPre, 2008; Gresham,
MacMillan, Beebe-Frankenberger, & Bocian, 2000; Stahmer &
Gist, 2001); however, it is not often
measured. In behavioral services research, three separate
reviews of reported implementation fidelity
data have been published. In the Journal of Applied Behavior
Analysis, fidelity data were reported
in only 16% to 30% of published articles (Gresham, Gansle, &
Noell, 1993; McIntyre, Gresham,
DiGennaro, & Reed, 2007; Peterson, Homer, & Wonderlich,
1982). Three separate reviews indicated
that only 13% to 32% of autism intervention studies included
fidelity measures (Odom & Wolery,
2003; Wheeler, Baggett, Fox, & Blevins, 2006; Wolery &
Garfinkle, 2002). A recent review of
special education journals found that fewer than half (47%) of
intervention articles reported any type
of fidelity scores (Swanson, Wanzek, Haring, Ciullo, &
McCulley, 2011). Indeed, limited reporting
of implementation adherence is evident across a diverse body of
fields (Gresham, 2009). The lack of
reporting (and therefore, the presumable lack of actual
measurement of implementation) limits the
conclusions that can be drawn regarding the association between
student outcomes and the specific
treatment provided. Therefore, examination of implementation
fidelity, although complicated, is
important to advance the understanding of how evidence-based
interventions are being implemented
in school settings.
Our research team recently completed a large-scale randomized trial of a comprehensive program for students with autism in partnership with a large, urban
gram for students with autism in partnership with a large, urban
public school district. Procedural
implementation fidelity of the overall program (which includes
three evidence-based practices) was
highly variable, ranging from 12% to 92% (Mandell et al.,
2013). The three strategies included in
this program, DTT, PRT, and FR (see description in the Method
section), share an underlying theoretical base, but rely on different specific techniques. The
purpose of this study was to examine the
extent to which public school teachers implemented evidence-
based interventions for students with
autism in the way these practices were designed. Examining
implementation fidelity of each strategy
individually may provide insight into whether specific
interventions are more easily implemented in
the classroom environment. In particular, we examined whether
special education classroom teachers and staff: 1) mastered specific strategies that form the
backbone of applied behavioral analysis
programs for autism; 2) used the strategies in their classroom;
and 3) maintained their procedural
fidelity to these strategies over time.
METHOD
Participants
Participants were classroom teachers and staff in an urban
school district’s kindergarten-
through-second-grade autism support classrooms (each in a
different school) participating in a
larger trial of autism services. Of the 67 total autism support
classrooms in the district at the time
of the study, teachers and staff from 57 (85%) of the schools
participated. Each classroom included
one participating teacher and 0 to 2 classroom assistants (M =
1). Throughout the district, staff were
required to participate in intervention training as part of
professional development, but were not
required to consent to participate in the study. Data from the
current study are reported only for the
57 teachers and staff who consented to participate.
Teachers received intensive training in Strategies for Teaching Based on Autism Research
(STAR) during their first year of participation in the project.
During the second year, continuing teachers received in-classroom coaching every other week. From the original 57, 38 teachers (67%) participated in the second year of the study. See Table 1 for teacher demographics. A complete description of adult and student participants can be found elsewhere (Mandell et al., 2013).

Table 1
Teacher Demographic Characteristics

                    Total Years          Years Teaching Children   Education Level
N     % Female      Teaching, M (range)  with ASD, M (range)       (% Bachelor's/% Master's)
57    97.3          10.8 (1–38)          6.8 (1–33)                30/70
Intervention
Strategies for Teaching Based on Autism Research. The goal of
the Strategies for Teaching
Based on Autism Research (STAR) program is to develop
children’s skills in a highly structured
environment and then generalize those skills to more
naturalistic settings. The program includes a
curriculum in which each skill is matched to a specific
instructional strategy. The STAR program
includes three evidence-based strategies: DTT, PRT, and FR.
DTT relies on highly structured, teacher-directed, one-on-one
interactions between the teacher
and student. In these interactions, the teacher initiates a specific
stimulus to evoke the child’s
response, generally a discrete skill, which is an element of a
larger behavioral repertoire (Krug
et al., 1979; Krug, Rosenblum, Almond, & Arick, 1981; Lovaas,
1981, 1987; Smith, 2001). DTT is
used in STAR for teaching pre-academic and receptive language
skills, where the desired behavior
takes a very specific form, such as learning to identify colors,
sequencing events from a story into
a first-next-then-last structure or counting with one-to-one
correspondence. The consequence of the
desired behavior is an external reinforcer, such as a token or a
preferred edible (Lovaas, 2003; Lovaas
& Buch, 1997).
PRT can occur in both one-on-one interactions and small-group
interactions with the teacher.
It is considered student directed because it occurs in the regular
classroom environment, where the
teaching area is pre-arranged to include highly preferred
activities or toys that the student will be
motivated to acquire. In PRT, students initiate the teaching
episode by indicating interest in an item or
activity or selecting among available teaching materials.
Materials are varied frequently to enhance
student motivation and generalization of skills, making PRT
appropriate for targeting expressive
and spontaneous language (Koegel, O’Dell, & Koegel, 1987;
Koegel et al., 1989; Laski, Charlop,
& Schreibman, 1988; Pierce & Schreibman, 1997; Schreibman
& Koegel, 1996). After the student
expresses interest in an activity or item, he or she is required to
perform a specific behavior related
to the item. The consequence of the desired behavior is getting
access to the activity or item. For
example, students’ attempts to label and request items are
reinforced by the delivery of the item,
which may then provide the opportunity to focus on other skills,
such as joint attention, imitation,
play skills, and generalization of other skills learned in the DTT
format.
FR are the least structured of the STAR instructional strategies.
FR strategies are routines that
occur throughout the day and include school arrival and
dismissal, mealtime, toileting, transitions
between classroom activities, and recreational activities. Each
routine is broken into discrete steps
called a task analysis and then chained together using behavior
analytic procedures such as stimulus
prompts (visual and verbal) and reinforcement of each step in
the routine (Brown et al., 1987; Cooper
et al., 1987; Marcus et al., 2000; McClannahan & Krantz, 1999).
For example, a routine to change
activities may include cuing the transition (verbal prompt),
checking a schedule (visual prompt),
pulling a picture card from the schedule to indicate the next
activity, taking the card to the location of
the new activity, putting the card into a pocket utilizing a
match-to-sample technique, and beginning
the new activity, followed by a token for routine completion.
The advantage of this strategy is that
each transition component is taught within the context of
performing the routine, so that the child
learns to respond to natural cues and reinforcers. FR strategies
are conducted in both individual and
group formats, depending on the skills being taught (e.g.,
toileting versus appropriate participation
in snack time).
Training
STAR training occurred in accordance with the STAR
developers’ training protocols. The
research team contracted with the program developers to
provide training directly to the teachers.
Training included workshops, help with classroom setup, and
observation and coaching throughout
the first academic year of STAR implementation (described in
detail in the following sections).
Six local coaches also were trained by the STAR developers to
provide ongoing consultation to
classroom staff during the second year of STAR
implementation. The training protocol for STAR is
manualized and publicly available. Additional information
about the STAR program can be found
at www.starautismsupport.com. Training provided to classroom
teachers and staff included the
following components:
Workshops. The STAR program developers provided a series of
trainings on the use of the
STAR program. The training began in September and consisted
of 28 hours of intensive workshops
that covered the STAR program, including the use of the
curriculum assessment, classroom setup,
and training in DTT, PRT, and FR. Workshops included didactic
teaching, video examples, role-
playing, and a visit to each classroom to help with classroom
setup. STAR workshops took place
outside the school day (i.e., during professional development
days, at night, and on the weekends).
Observation and coaching. During the first year, program
developers observed classroom staff
during regular school hours and provided feedback on use of
STAR strategies with students. Trainers
provided 5 days of observation and coaching immediately
following training, 3 days of follow-up
coaching throughout the academic year, and ongoing advising
and coaching by e-mail and phone.
On average, classrooms received 26.5 (range, 1.5–36) hours of
coaching over 5.7 (range, 3–7) visits
in the first year. During the second year, local coaches trained
by the STAR developers provided
coaching in the STAR strategies. Coaching was provided
September through May on a monthly
basis. On average, classrooms received 36.1 (range, 0–59) hours
of coaching over 10 (range, 0–10)
visits in the second year.
Data Collection Procedures
Data on adherence to the instructional strategies used in STAR
were collected throughout the
academic year via video recording of teaching interactions with
students for coding of implementation fidelity in each of the three STAR intervention methods.
Classroom staff members were filmed for 30 minutes every
month in Years 1 and 2. Research
assistants trained in filming methods recorded the intervention
during a specified date each month.
Visits were timed to coincide with regularly scheduled use of
each of the intervention methods.
The 30-minute film was composed of 10 minutes of DTT, 10
minutes of PRT, and 10 minutes of
FR to provide a sample of the use of each intervention.
Recording included any consented staff
member providing the intervention. The staff member filmed by
the research staff varied depending
on which staff member (i.e., teacher or paraprofessional) was
conducting the intervention that day.
The primary classroom teacher conducted the intervention in
86% of the videos collected, and
paraprofessional staff conducted the intervention in the
remaining 14% of videos. There were no
statistically significant differences in the proportion of videos
collected by intervention provider
(teacher vs. paraprofessional) for any strategy or time period (p
> .05).
Implementation Fidelity Measures
Coding procedures. The primary method for assessing fidelity
of STAR strategies was through
video recordings of teachers and aides interacting with students.
Coding relied on different criteria
based on specific coding definitions created for each
instructional component, as well as general
teaching strategies (see following sections). Coding schemes for
each method were developed by
the first author and were reviewed by the STAR program
developers.
Trained research assistants blinded to the study hypotheses
coded all video recordings. For each
intervention method, the core research team established correct
codes for a subset of videos through
consensus coding (keys). Each research assistant coder then
learned one coding system (i.e., DTT,
PRT, or FR) and was required to achieve 80% reliability across
two keys before beginning to code
any classroom sessions independently. One third of all tapes
were double coded to ensure ongoing
reliability of data coding throughout the duration of the project.
The core research team also re-coded
two tapes for each research assistant every other month,
providing a measure of criterion validity. If
there was less than 80% agreement between the reliability coder
and the research assistant, additional
training and coaching were provided until criterion was
achieved and previous videos were re-coded.
Coding involved direct computer entry while viewing videos
using “The Observer Video-
Pro” software (Noldus Information Technology, Inc., 2008), a
computerized system for collection,
analysis, and management of direct observation data. For each
instructional strategy, the coder
observed the 10-minute segment and subsequently rated the
adults’ use of each component of the
strategy on a 1 to 5 Likert scale, with 1 indicating Adult does
not implement throughout segment
and 5 indicating Adult implements consistently throughout the
segment. These Likert ratings were
found to have high concordance with more detailed trial-by-trial
coding of each strategy component
(88% agreement) used in previous research (Stahmer, 2010). A
score of 4 or 5 on a component
was considered passing and correlated with 80% correct use of
strategies in the more detailed
coding scheme. Following are the individual components
included in each strategy. Complete coding
definitions are available from the first author.
Discrete trial teaching. For DTT, coders examined the use of the
following components:
gaining the student’s attention, choosing appropriate target
skills, using clear and appropriate cues,
using accurate prompting strategies, providing clear and correct
consequences, using appropriate
inter-trial intervals, and utilizing error correction procedures
effectively (error correction evaluated
against procedures described in Arick, Loos, Falco, & Krug,
2004). The criterion for passing
implementation fidelity was defined as the correct use of 80%
of components (score of 4 or 5) during
the observation.
Pivotal response training. For PRT, coders examined the use of
the following components:
gaining the student’s attention, providing clear and
developmentally appropriate cues related to the
activity, providing the student a choice of stimuli/activities,
interspersing a mixture of maintenance
(previously acquired) and acquisition (not yet mastered) tasks,
taking turns to model appropriate behavior, providing contingent consequences, rewarding goal-
directed attempts, and using reinforcers
directly related to the teaching activity. The criterion for
passing implementation fidelity was defined
as the correct use of 80% of components (score of 4 or 5) during
the observation.
Functional routines. For FR, coders examined adherence to each
step of the FR used in classrooms during group and individual routines. The use of the
following components was coded: using
error correction procedures appropriately, adhering to FR lesson
plan, and supporting transitions
between activities. The criterion for passing implementation
fidelity was defined as correct use of
80% of components (score of 4 or 5) during the observation.
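The scoring rule shared by all three strategies (a component passes with a Likert rating of 4 or 5, and a session meets fidelity when at least 80% of components pass) can be sketched as follows. This is a minimal illustration, and the component names are stand-ins taken from the text, not the study's actual coding definitions (which are available from the first author):

```python
# Sketch of the pass/fail fidelity scoring described above.
# Each component is rated 1-5; a rating of 4 or 5 counts as correct use,
# and a session "meets fidelity" when >= 80% of components are correct.

def component_passes(rating):
    """A Likert rating of 4 or 5 counts as passing for that component."""
    return rating >= 4

def fidelity_score(ratings):
    """Percentage of strategy components implemented correctly."""
    passed = sum(component_passes(r) for r in ratings.values())
    return 100.0 * passed / len(ratings)

def meets_fidelity(ratings, criterion=80.0):
    return fidelity_score(ratings) >= criterion

# Hypothetical DTT ratings using component names from the text:
dtt_ratings = {
    "gaining attention": 5,
    "appropriate targets": 4,
    "clear cues": 4,
    "accurate prompting": 3,
    "clear consequences": 5,
    "inter-trial intervals": 4,
    "error correction": 2,
}
print(fidelity_score(dtt_ratings))  # 5 of 7 components pass -> ~71.4
print(meets_fidelity(dtt_ratings))  # False (below the 80% criterion)
```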
Reliability of Data Recording
Inter-rater reliability, as measured by percent agreement within
1 Likert point, was calculated
for coding of each instructional strategy and each month of
videos by having a second coder, blinded
to the initial codes, score one third of the videos per strategy
for each month. The average overall
percent agreement for each strategy was 86% for DTT (range,
60%–100%); 90% for PRT (range,
75%–100%); and 90% for FR (range, 67%–100%). A primary
coder was assigned to each strategy,
and those codes were used in the analyses.
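The agreement statistic above, percent agreement within 1 Likert point between a primary and a reliability coder, can be sketched as follows (the ratings shown are illustrative, not study data):

```python
# Sketch of the inter-rater reliability calculation described above: two
# coders' Likert ratings (1-5) count as an agreement when they differ by
# at most 1 point; reliability is the percentage of such agreements.

def percent_agreement_within_one(coder_a, coder_b):
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same set of components")
    agreements = sum(abs(a - b) <= 1 for a, b in zip(coder_a, coder_b))
    return 100.0 * agreements / len(coder_a)

primary = [5, 4, 2, 3, 5]      # primary coder's component ratings
reliability = [4, 4, 4, 3, 5]  # second, blinded coder's ratings
print(percent_agreement_within_one(primary, reliability))  # 4/5 agree -> 80.0
```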
Data Reduction and Analyses
Data were examined across four periods. Time 1 included the
first measurement for available
classrooms in Year 1, which was conducted in October,
November, or December of 2008. Filming
occurred after the initial training workshops. Coaching was
ongoing throughout the year. If classrooms were filmed in more than one of those months, both the
average and the best performance
were analyzed. All classroom staff participated in their initial
training prior to the Time 1 measurement. Time 2 was defined as the performance from the last three
measurements of the school year
(February, March, or April 2009) for Year 1. The same
procedures were used for Year 2 (Times 3 and
4). Time 3 included the first observation in Year 2 (October,
November, or December 2009). Time
4 included the performance during the last 3 months of
observations (February, March, or April,
2010). Both average and best performance from each period
were utilized to provide an estimate of
the staff’s capacity to implement the strategy in the classroom
environment (best) and variability in
competency of use (average).
Data from Year 1 and Year 2 were analyzed. One-way within-
subject (or repeated measures)
analyses of variance (ANOVAs) were conducted for each
intervention strategy to examine change in
implementation fidelity scores over time. Post-hoc
comparisons were made using paired sample
t tests between time periods when ANOVA results indicated
statistically significant differences.
In addition, we examined differences in fidelity of
implementation across intervention strategies
using a one-way ANOVA with paired sample t tests to follow up
on significant results. Type I error
probability was maintained at .05 (two-tailed) for all analyses
using a Bonferroni correction.
Pearson correlations were conducted to examine the relationship
between fidelity of implementation of each intervention strategy and teaching experience,
experience working with children with
autism spectrum disorder (ASD), level of education, and number
of hours of coaching received.
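The post-hoc procedure described above, paired-sample t tests between time periods with the Type I error rate held at .05 via a Bonferroni correction across all pairwise comparisons, can be sketched as follows. The classroom scores are invented for illustration, and p-values (which require a t distribution, e.g., from `scipy.stats.ttest_rel`) are omitted to keep the sketch dependency-free:

```python
# Minimal sketch: paired t statistics between time periods, with a
# Bonferroni-corrected per-test alpha across all pairwise comparisons.
from itertools import combinations
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """t statistic for a paired-sample t test (df = n - 1)."""
    diffs = [a - b for a, b in zip(x, y)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical fidelity scores for the same four classrooms at each period.
periods = {
    "Time 1": [60.0, 55.0, 70.0, 65.0],
    "Time 2": [59.0, 52.0, 69.0, 62.0],
    "Time 3": [72.0, 68.0, 80.0, 77.0],
    "Time 4": [78.0, 74.0, 85.0, 80.0],
}
comparisons = list(combinations(periods, 2))
alpha = 0.05 / len(comparisons)  # Bonferroni: 6 comparisons -> alpha ~ .0083
for p1, p2 in comparisons:
    print(p1, "vs", p2, "t =", round(paired_t(periods[p1], periods[p2]), 2))
```

A fuller treatment would first run the repeated-measures ANOVA itself (e.g., `statsmodels.stats.anova.AnovaRM`) and only test pairs when the omnibus result is significant, as the text describes.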
RESULTS
Use of the Strategies
Because teachers who did not allow filming in their classrooms
cited staffing difficulties or
lack of preparation as the reason, they were considered not to be
implementing DTT, PRT, or FR in
their classrooms on a regular basis. At Time 1, two teachers
(4%) explicitly indicated that they did not use DTT at any time, and 13 teachers (23%) indicated that they did not use PRT at any time. The
percentage of classrooms filmed using the strategy is displayed
in Figure 1. In Year 1, classrooms
were filmed most often conducting DTT at both Time 1 (70% of
classrooms) and Time 2 (96%).
Only 23% of classrooms were filmed conducting PRT at Time 1,
and 68% were filmed at Time 2.
FR was filmed in 67% of classrooms at Time 1 and 81% at Time
2. In Year 2, filming was much
more consistent across strategies. DTT and PRT were both
filmed in 92% of classrooms at Time 3
and 97% of classrooms at Time 4. For FR, 89% of classrooms were filmed at Time 3 and 97% at Time 4.

FIGURE 1. The percentage of classrooms using the strategy during each time period.
Overall Competence in the Instructional Strategies
Discrete trial teaching. The percentage of DTT components on
which teachers met fidelity
(i.e., a score of 4 or 5 during the observation) was used as the
dependent variable for these analyses.
Mean results are displayed in Table 2. No statistically
significant changes were found in average or best DTT fidelity over time. In general, classrooms had a relatively high average and best
DTT fidelity during all time periods. The range of scores for
individual performance was variable at
both time periods, as evidenced by the large standard
deviations.
The percentage of classrooms in which teachers met DTT
fidelity (i.e., correct implementation
of 80% of the DTT strategies during the observation) was
examined. Fifty-six percent of classrooms
met fidelity based on the average of all observations at Time 1, 47% at Time 2, 46%
at Time 3, and 59% at Time 4. When considering only the best
example, 65% of classrooms met
fidelity at Time 1, and this increased to 81% by Time 4 (see
Figure 2).
Pivotal response training. The dependent variable for these
analyses was the percentage of
PRT components on which teachers met fidelity (i.e., a score of
4 or 5 during the observation).
Mean results are displayed in Table 2. No statistically
significant changes were found in average
PRT fidelity over time. There was a statistically significant
increase in best scores over time,
F(3, 108) = 2.85, p = .04. In pairwise comparisons, only the
difference in best scores between Time
1 and Time 4 was statistically significant, t(9) = –2.45, p = .04.
The range of scores for individual
performance was variable at both time periods, as evidenced by
the large standard deviations.
The percentage of classrooms in which teachers met PRT
fidelity was examined (i.e., correct
implementation of at least 80% of PRT components during the
observation). For average performance, only 15% of classrooms met fidelity at Time 1, 31% at
Time 2, 11% at Time 3, and 19% at
Time 4. When examining best performance at each time period,
23% of classrooms met fidelity at
Time 1, 41% at Time 2, 17% at Time 3, and 30% at Time 4 (see
Figure 2).
Table 2
Mean Fidelity of Implementation by Time and Intervention Strategy for Average and Best Fidelitya

         Discrete Trial   Pivotal Response   Functional      Overall
         Teaching         Training           Routines        Fidelity
Time     M (SD)           M (SD)             M (SD)          M (SD)

Average Fidelity across All Assessments During Time Period (%)
Time 1   78.54 (24.33)    53.41 (24.09)      56.43 (16.42)   65.14 (16.47)
Time 2   73.94 (21.16)    58.43 (26.66)      69.77 (19.05)   68.45 (15.39)
Time 3   71.04 (27.79)    68.39 (20.25)      75.56 (24.17)   71.66 (20.01)
Time 4   80.46 (17.55)    60.19 (21.39)      78.51 (19.80)   73.58 (12.98)

Best Fidelity for Each Time Period (%)
Time 1   81.64 (24.93)    54.64 (25.60)      63.53 (20.38)   69.86 (18.00)
Time 2   84.53 (19.77)    65.22 (23.38)      79.96 (21.33)   77.72 (16.28)
Time 3   79.21 (26.94)    73.78 (21.21)      81.59 (23.78)   81.33 (11.19)
Time 4   90.74 (13.00)    74.16 (21.96)      91.45 (16.50)   85.70 (11.19)

aFidelity of implementation is defined as the percentage of strategy components implemented correctly.
Functional routines. The percentage of FR components on which teachers met fidelity was used
teachers met fidelity was used
as the dependent variable for these analyses. Mean results are
displayed in Table 2. Statistically
significant changes over time were found in average FR fidelity, F(3, 154) = 9.11, p = .00, and best FR fidelity, F(3, 155) = 12.13, p = .00. The range of scores for individual performance was variable at both time periods, as evidenced by the large standard deviations. Statistically significant increases were seen between Time 1 and each of the other time periods, both for average fidelity (Time 2: t = –3.71, p < .00; Time 3: t = –3.70, p = .00; Time 4: t = –6.14, p = .00), and best fidelity (Time 2: t = –3.83, p < .00; Time 3: t = –3.28, p = .00; Time 4: t = –6.93, p = .00).
The percentage of classrooms in which teachers met FR fidelity was examined (i.e., correct implementation of 80% of FR strategies during the observation).
For average performance, 11% of
classrooms met fidelity at Time 1, 34% at Time 2, 62% at Time
3, and 49% at Time 4. For best
performance, 16% met fidelity at Time 1, and 78% met fidelity
by Time 4 (see Figure 2).
Overall fidelity. Overall fidelity across the STAR program was
examined by averaging the
percentage of components implemented correctly in each
strategy (DTT, PRT, and FR; Table 2).
No significant changes over time were seen in the average
overall fidelity. However, significant
increases in best overall fidelity were indicated, F(3, 178) =
8.14, p = .00. Post-hoc analyses
indicated that best fidelity at Time 1 was significantly lower
than at any of the other time periods
(Time 2: t = –2.72, p < .01; Time 3: t = –4.14, p = .00; Time 4: t
= –5.03, p = .00). The range
of scores for individual performance was variable at both time
periods, as evidenced by the large
standard deviations.
FIGURE 2. Percentage of classrooms meeting 80% implementation fidelity during each time period. FI = fidelity implementation.
The percentage of classrooms meeting overall fidelity at each
time period (i.e., correctly
implementing at least 80% of components in all three
interventions) was examined. For average
performance, 17% of classrooms met fidelity at Time 1, 22% at
Time 2, and 42% at both Time 3
and Time 4. For best performance, 31% met fidelity at Time 1,
and 71% met fidelity by Time 4
(Figure 2).
Comparison of Intervention Fidelity across Intervention
Strategies
Mean fidelity of implementation was compared across the three
intervention strategies for
average and best fidelity. Significant differences in average, F(109, 326) = 13.06, p ≤ .001, and best overall fidelity, F(110, 327) = 3.26, p ≤ .001, were indicated (means are presented in Table 2). Analyses indicated that DTT average and best fidelity were significantly greater than were PRT average and best fidelity at Time 2 (average: t = 4.03, p ≤ .001; best: t = 5.14, p ≤ .001) and Time 4 (average: t = –5.46, p ≤ .001; best: t = –4.31, p ≤ .001). FR average and best fidelity were also significantly greater than were PRT average and best fidelity (average: t = 5.46, p ≤ .001; best: t = 4.31, p ≤ .001) at Time 4.
Associations between Intervention Fidelity and Experience,
Education, or Coaching
Pearson correlations indicated that there was no statistically significant association between years of teaching experience, or years teaching children with autism, and overall fidelity or fidelity on any
specific intervention strategy at any time point. The number of
hours of coaching received was not
associated with overall fidelity.
DISCUSSION
These results from one of the first field trials of evidence-based
practices for students with
autism in public schools suggest that classrooms vary greatly in
their implementation of evidence-
based practices. In general, the data suggest that the complexity
and structure of the intervention
strategy may affect intervention use and procedural fidelity;
more structured methods were more
likely to be implemented with higher fidelity than were less
structured strategies. Procedural fidelity
continued to increase through the second year of training,
suggesting the importance of continued
practice for extended periods. It is important to note that the
number of hours of coaching was not
associated with final fidelity, suggesting that in vivo support
may be important, but it is not sufficient
to improve practice in the field.
Classrooms implemented DTT more often in Year 1 and with
greater fidelity across both years
than PRT or FR. The curriculum materials and steps for
implementing DTT are clearly specified,
highly structured, and relatively easy to follow. Components
are, in general, scripted, straightforward,
and with the exception of determining appropriate prompting
levels, leave little room for clinical
judgment.
In contrast, PRT is a more naturalistic strategy, and several of
the components require clinical
judgment on the part of the adult. Teachers had, in general,
significantly greater difficulty imple-
menting PRT with fidelity than either DTT or FR. During Year
1, many teachers did not implement
PRT at all. By Year 2, although they were implementing the
strategy, few were doing so with high
fidelity. Both average and best fidelity scores across teachers
are lower for PRT than either DTT or
FR. Teachers may require additional time to develop and
integrate these intervention strategies into
the school day. It is possible that teachers have difficulty with
specific components of PRT that are
not well suited to the classroom environment. Recent data
indicate that teachers may consistently
leave out some components of PRT, which would reduce overall
implementation fidelity of the
comprehensive model (Suhrheinrich et al., 2013). How these
adaptations affect the effectiveness of
this intervention is not yet known.
FR strategies use many of the procedures of PRT in a group
format, but have a specified set of
goals and procedures. By the end of Year 2, procedural fidelity
was greater for FR than PRT. This may
indicate that the structure of the FR, including specific steps
and goals, may assist with appropriate
implementation of the naturalistic strategies. It may also be
helpful that the STAR program uses
FR strategies for activities that occur every day (e.g., snack
time, toileting), providing consistent
opportunities to implement the strategy independent of the
classroom’s schedule or structure.
Relatively high variability across classrooms and over time
within classrooms was evident
for both use of strategies (as measured by percentage of
classrooms filmed) and implementation
fidelity. It could be that classroom staff used the strategies with
a different child each time they
were filmed. Some students may present with behavior
challenges that make the use of a particular
intervention difficult. Variability in daily staffing, school
activities, and student needs may affect the
use of intervention strategies on any given day. It is also
possible that staff characteristics, such as
motivation to implement the intervention, experience,
education, and training may affect how well
they can use certain methods. Maintenance of all strategies may
be difficult, as suggested by the
decrease in fidelity at Time 3 (after summer break).
Limitations
There are several limitations to this study. First, implementation
fidelity was examined during
brief time periods each month. These data may provide only
limited insight into whether strategies
were well integrated into the daily classroom routine or used
consistently over time or with a
majority of students in the classroom. Second, the way fidelity
was rated was relatively general and
may not have captured important aspects of the implementation
that could affect student progress.
Understanding the active ingredients of effective intervention
and how to accurately measure those
strategies is an area of growth for the field. Third, adults in the
classroom knew they were being
observed, and this may have altered their use of the strategies.
Both the second and third limitations
would lead to an overestimate of fidelity. Still, fidelity was
relatively low across the three strategies.
Strategies may have only been implemented on observation days
or may have been implemented
differently (better or worse fidelity) during the observations.
Fourth, the use of filming as a proxy for
use in the classroom has not been validated. In addition, for
some observations, paraprofessionals
rather than classroom teachers implemented the strategies. A
closer examination of differences by
profession may be warranted.
CONCLUSIONS
Results of this study indicate that teachers and staff in public
school special education class-
rooms can learn to implement structured strategies that are the
foundation of many autism interven-
tion programs; however, they require a great deal of training,
coaching, and time to reach and maintain
implementation fidelity. A recent study indicates that ongoing
classroom coaching can result in the
use of important classroom practices, such as ongoing progress
monitoring (Pellecchia et al., 2010).
Even with ongoing support, however, not all staff will
implement interventions with high fidelity.
Highly structured strategies appear to be easier to learn; consistent practice and coaching may be required for teachers to use more naturalistic strategies with high fidelity. Naturalistic
strategies may require additional training or adaptation for
classroom environments. Some recent
preliminary data indicate that teachers may be better able to
implement a classroom-adapted version
of PRT (Stahmer, Suhrheinrich, Reed, & Schreibman, 2012). Providers who achieve mastery of
Providers who achieve mastery of
intervention strategies are likely to lose those skills or the
motivation to use those skills over breaks
from teaching; thus, ongoing consultation well past the initial
didactic training is likely needed to
maintain mastery. The same training and consultation strategy
was used for all three practices, but
with highly different results. These differential results may be
related to the intervention itself or to
the fit of the training and consultation model to the specific
intervention, teacher, and context.
Future Research
High-quality implementation of evidence-based practices for
children with autism in schools
is essential for ensuring the best outcomes for this growing
population of children. However,
research in this area is just beginning to address the complexity
of serving children with ASD
using comprehensive and complex methods. The development of
low-cost, accurate measures of implementation fidelity is important for helping to ensure
that teachers are accurately using
evidence-based interventions. In addition, future research
should address the development of training
methods for naturalistic strategies that address the complexities
of using these strategies in classroom
settings. Integrating these strategies throughout the school day
and for academic tasks can be
challenging; yet, they are considered a very effective practice
for children with autism. Often,
paraprofessional staff spend a great deal of time working with children in the classroom. Research specifically examining the training needs and implementation fidelity of paraprofessional staff, compared with those of teachers and other professionals, is needed. In addition, there are
multiple interventions for ASD
that are “branded” by various research groups. Often, the
specific techniques or strategies overlap
significantly. Research examining the key ingredients necessary
for effective classroom intervention
is sorely needed. This has the potential to simplify and clarify
intervention for use by teachers and
other community providers.
REFERENCES
Arick, J. R., Loos, L., Falco, R., & Krug, D. A. (2004). The
STAR program: Strategies for teaching based on autism
research.
Austin, TX: Pro-Ed.
Brown, F., Evans, I., Weed, K., & Owen, V. (1987). Delineating
functional competencies: A component model. Journal of
the Association for Persons with Severe Handicaps, 12, 117–
124.
Bush, R. N. (1984). Effective staff development. San Francisco,
CA: Far West Laboratory for Educational Research and
Development.
Cooper, J. O., Heron, T. E., & Heward, W. L. (1987). Applied behavior analysis. New York, NY: Macmillan.
Cornett, J., & Knight, J. (2009). Research on coaching. In J.
Knight (Ed.), Coaching: Approaches and perspectives (pp.
192–216). Thousand Oaks, CA: Corwin Press.
Dingfelder, H. E., Mandell, D. S., & Marcus, S. C. (2011, May).
Classroom climate program fidelity & outcomes for students
with autism. Paper presented at the 10th annual International
Meeting for Autism Research. San Diego, CA.
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A
review of research on the influence of implementation on
program outcomes and the factors affecting implementation.
American Journal of Community Psychology, 41, 327–350.
Fenske, E., Krantz, P. J., & McClannahan, L. E. (2001).
Incidental teaching: A non-discrete trial teaching procedure. In
C.
Maurice, G. Green, & R. Foxx (Eds.), Making a difference:
Behavioral intervention for autism (pp. 75–82). Austin, TX:
Pro-Ed.
Garland, A. F., Bickman, L., & Chorpita, B. F. (2010). Change
what? Identifying quality improvements targets by investigating
usual mental health care. Administration and Policy in Mental
Health and Mental Health Services Research, 37, 15–26.
Gersten, R., Fuchs, L., Compton, D., Coyne, M., Greenwood,
C., & Innocenti, M. S. (2005). Quality indicators for group
experimental and quasi-experimental research in special
education. Exceptional Children, 71, 149–164.
Gresham, F. M. (1989). Assessment of treatment integrity in
school consultation and prereferral intervention. School Psy-
chology Review, 18, 37–50.
Gresham, F. M. (2009). Evolution of the treatment integrity
concept: Current status and future directions. School
Psychology
Review, 38, 533–540.
Gresham, F. M., Gansle, K. A., & Noell, G. H. (1993).
Treatment integrity in applied behavior analysis with children.
Journal
of Applied Behavior Analysis, 26, 257–263.
Gresham, F. M., MacMillan, D. L., Beebe-Frankenberger, M. E., & Bocian, K. M. (2000). Treatment integrity in learning
disabilities intervention research: Do we really know how
treatments are implemented? Learning Disabilities Research
and Practice, 15, 198–205.
Harn, B., Parisi, D., & Stoolmiller, M. (2013). Balancing
fidelity with flexibility and fit: What do we really know about
fidelity of implementation in schools? Exceptional Children, 79,
181–193.
Hess, K. L., Morrier, M. J., Heflin, L. J., & Ivey, M. L. (2008).
Autism treatment survey: Services received by children with
autism spectrum disorders in public school classrooms. Journal
of Autism and Developmental Disorders, 38, 961–971.
Horner, R. H., Carr, E. G., Halle, J., McGee, G. G., Odom, S.
L., & Wolery, M. (2005). The use of single-subject research to
identify evidence-based practice in special education.
Exceptional Children, 71, 165–179.
Jennett, H. K., Harris, S. L., & Mesibov, G. B. (2003).
Commitment to philosophy, teacher efficacy, and burnout
among
teachers of children with autism. Journal of Autism and
Developmental Disorders, 33, 583–593.
Koegel, R. L., O’Dell, M. C., & Koegel, L. K. (1987). A natural
language teaching paradigm for nonverbal autistic children.
Journal of Autism & Developmental Disorders, 17, 187–200.
Koegel, R. L., Russo, D. C., & Rincover, A. (1977). Assessing
and training teachers in the generalized use of behavior
modification with autistic children. Journal of Applied Behavior
Analysis, 10, 197–205.
Koegel, R. L., Schreibman, L., Good, A., Cerniglia, L., Murphy,
C., & Koegel, L. K. (Eds.). (1989). How to teach pivotal
behaviors to children with autism: A training manual. Santa Barbara: University of California.
Krug, D. A., Arick, J., Almond, P., Rosenblum, J., Scanlon, C.,
& Border, M. (1979). Evaluation of a program of systematic
instructional procedures for pre-verbal autistic children.
Improving Human Performance, 8, 29–41.
Krug, D. A., Rosenblum, J. F., Almond, P. J., & Arick, J. R.
(1981). Autistic and severely handicapped in the classroom:
Assessment, behavior management, and communication
training. Portland, OR: ASIEP Education.
Laski, K. E., Charlop, M. H., & Schreibman, L. (1988).
Training parents to use the natural language paradigm to
increase
their autistic children’s speech. Journal of Applied Behavior
Analysis, 21, 391–400.
Leaf, R. B., & McEachin, J. J. (1999). A work in progress:
Behavior management strategies and a curriculum for intensive
behavioral treatment of autism. New York, NY: DRL Books.
Lovaas, O. I. (1981). Teaching developmentally disabled
children: Theme book. Austin, TX: PRO-ED.
Lovaas, O. I. (1987). Behavioral treatment and normal
educational and intellectual functioning of young autistic
children.
Journal of Consulting and Clinical Psychology, 55, 3–9.
Lovaas, O. I. (2003). Teaching individuals with developmental
delays: Basic intervention techniques. Austin, TX: Pro-Ed.
Lovaas, O. I., & Buch, G. (1997). Intensive behavioral
intervention with young children with autism. In N. Singh (Ed.),
Prevention and treatment of severe behavior problems: Models
and methods in developmental disabilities (pp. 61–86).
Pacific Grove, CA: Brooks/Cole Publishing.
Mandell, D. S., Stahmer, A. C., Shin, S., Xie, M., Reisinger, E.,
& Marcus, S. C. (2013). The role of treatment fidelity on
outcomes during a randomized field trial of an autism
intervention. Autism, 17, 281–295.
Marcus, L., Schopler, E., & Lord, C. (2000). TEACCH services
for preschool children. In J. S. Handleman & S. L. Harris
(Eds.), Preschool education programs for children with autism
(pp. 215–232). Austin, TX: Pro-ED.
McClannahan, L. E., & Krantz, P. J. (1999). Activity schedules
for children with autism: Teaching independent behavior.
Bethesda, MD: Woodbine House.
McIntyre, L. L., Gresham, F. M., DiGennaro, F. D., & Reed, D.
D. (2007). Treatment integrity of school-based interventions
with children in the Journal of Applied Behavior Analysis
1991–2005. Journal of Applied Behavior Analysis, 40, 659–672.
National Standards Project. (2009). National Standards report.
Randolph, MA: National Autism Center.
Noell, G. H., Duhon, G. J., Gatti, S. L., & Connell, J. E. (2002).
Consultation, follow-up and implementation of behavior
management interventions in general education. School
Psychology Review, 31, 217–234.
Noell, G. H., Witt, J. C., Slider, N. J., Connell, J. E., Williams,
K. L., Resetar, J. L., & Koenig, J. L. (2005). Teacher
implementation following consultation in child behavior
therapy: A comparison of three follow-up strategies. School
Psychology Review, 37, 87–106.
Noldus Information Technology, Inc. (2008). The Observer XT
8.0 [computer software]. Wageningen, The Netherlands.
Odom, S. L., Boyd, B. A., Hall, L. J., & Hume, K. (2010).
Evaluation of comprehensive treatment models for individuals
with autism spectrum disorders. Journal of Autism and
Developmental Disorders, 40, 425–436.
Odom, S. L., & Wolery, M. (2003). A unified theory of practice
in early intervention/early childhood special education:
Evidence-based practices. Journal of Special Education, 37,
164–173.
Pellecchia, M., Connell, J. E., Eisenhart, D., Kane, M.,
Schoener, C., Turkel, K., & Mandell, D. S. (2010). Group
performance
feedback: Consultation to increase classroom team data
collection. Journal of School Psychology, 49, 411–431.
Peterson, L., Homer, A. L., & Wonderlich, S. A. (1982). The
integrity of independent variables in behavior analysis. Journal
of Applied Behavior Analysis, 15, 477–492.
Pierce, K., & Schreibman, L. (1997). Multiple peer use of
pivotal response training to increase social behaviors of
classmates
with autism: Results from trained and untrained peers. Journal
of Applied Behavior Analysis, 30, 157–160.
Proctor, E., Silmere, H., Raghavan, R., Hovmand, P., Aarons, G.,
Bunger, A., & Hensley, M. (2011). Outcomes for imple-
mentation research: Conceptual distinctions, measurement
challenges, and research agenda. Administration and Policy
in Mental Health and Mental Health Services Research, 38, 65–
76.
Rabin, B. A., Brownson, R. C., Haire-Joshu, D., Kreuter, M.
W., & Weaver, N. L. (2008). A glossary for dissemination and
implementation research in health. Journal of Public Health
Management and Practice, 14, 117–123.
Sarokoff, R. A., & Sturmey, P. (2004). The effects of behavioral
skills training on staff implementation of discrete-trial
teaching. Journal of Applied Behavior Analysis, 37, 535–538.
Schoenwald, S. K., Garland, A. F., Chapman, J. E., Frazier, S.
L., Sheidow, A. J., & Southam-Gerow, M. A. (2011). Toward
the effective and efficient measurement of implementation
fidelity. Administration and Policy in Mental Health and
Mental Health Services Research, 38, 32–43.
Schreibman, L., & Koegel, R. L. (1996). Fostering self-
management: Parent-delivered pivotal response training for
children
with autistic disorder. In E. D. Hibbs & P. S. Jensen (Eds.),
Psychosocial treatments for child and adolescent disorders:
Empirically based strategies for clinical practice (pp. 525–552).
Washington, DC: American Psychological Association.
Scull, J., & Winkler, A. M. (2011). Shifting trends in special
education. Washington, DC: Thomas B. Fordham Institute.
Simpson, R. L., de Boer-Ott, S. R., & Smith-Myles, B. (2003).
Inclusion of learners with autism spectrum disorders in general
education settings. Topics in Language Disorders, 23, 116–133.
Sindelar, P. T., Brownell, M. T., & Billingsley, B. (2010).
Special education teacher education research: Current status and
future directions. The Journal of the Teacher Education
Division of the Council for Exceptional Children, 33, 8–24.
Smith, T. (2001). Discrete trial training in the treatment of
autism. Focus on Autism and Other Developmental Disabilities,
16, 86–92.
Smith, T., Buch, G. A., & Gamby, T. E. (2000). Parent-directed,
intensive early intervention for children with pervasive
developmental disorder. Research in Developmental
Disabilities, 21, 297–309.
Smith, T., Donahoe, P. A., & Davis, B. J. (2000). The UCLA
young autism project. Austin, TX: Pro-Ed.
Smith, T., Parker, T., Taubman, M., & Lovaas, O. I. (1992).
Transfer of staff training from workshops to group homes: A
failure to generalize across settings. Research in Developmental
Disabilities, 13, 57–71.
Stahmer, A. (2007). The basic structure of community early
intervention programs for children with autism: Provider
descriptions. Journal of Autism and Developmental Disorders,
37, 1344–1354.
Stahmer, A. (2010). Examining methods of fidelity of
implementation measurement. Unpublished raw data.
Stahmer, A., Collings, N. M., & Palinkas, L. A. (2005). Early
intervention practices for children with autism: Descriptions
from community providers. Focus on Autism & Other
Developmental Disabilities, 20, 66–79.
Stahmer, A., & Gist, K. (2001). The effects of an accelerated
parent education program on technique mastery and child
outcome. Journal of Positive Behavior Interventions, 3, 75–82.
Stahmer, A., & Ingersoll, B. (2004). Inclusive programming for
toddlers with autism spectrum disorders: Outcomes from the
Children’s Toddler School. Journal of Positive Behavior
Interventions, 6, 67–82.
Stahmer, A., Suhrheinrich, J., Reed, S., Bolduc, C., &
Schreibman, L. (2011). Classroom pivotal response teaching: A
guide
to effective implementation. New York, NY: Guilford Press.
Stahmer, A. C., Suhrheinrich, J., Reed, S., & Schreibman, L.
(2012). What works for you? Using teacher feedback to inform
adaptations of pivotal response training for classroom use.
Autism Research and Treatment, 2012, 1–11.
Suhrheinrich, J. (2011). Training teachers to use pivotal response training with children with autism: Coaching as a critical component. Teacher Education and Special Education, 34, 339–
component. Teacher Education and Special Education, 34, 339–
349.
Suhrheinrich, J., Stahmer, A., Reed, S., Schreibman, L.,
Reisinger, E., & Mandell, D. (2013). Implementation
challenges
in translating pivotal response training into community settings.
Journal of Autism and Developmental Disorders, 43,
2970–2976.
Swanson, E., Wanzek, J., Haring, C., Ciullo, S., & McCulley, L.
(2011). Intervention fidelity in special and general education
research journals. The Journal of Special Education, 47, 13–33.
Wheeler, J. J., Baggett, B. A., Fox, J., & Blevins, L. (2006).
Treatment integrity: A review of intervention studies conducted
with children with autism. Focus on Autism and Other
Developmental Disabilities, 21, 1–10.
Wolery, M., & Garfinkle, A. N. (2002). Measures in
intervention research with young children who have autism.
Journal of
Autism and Developmental Disorders, 32, 463–478.
Medical Case Study with Minitab
Background: You work for a government agency and your
management asked you to take a look at data from prescription
drugs administered at hospitals in your geography.
She asked you to analyze the data with some common tools and
build a DMAIC model for how you would work with the
hospitals to improve results, since their performance is below
the average. She would like a simple model for you to present
to her that you will propose to representatives from the
hospitals. The hospital representatives will have to be brought
on board and understand the issues and their role in the study.
Use the DMAIC model from the course material to create a
model of the effort to be completed by the hospitals.
Define:
1. What would you say about the DMAIC model to the hospital
staff on your team?
2. Write a problem statement for the work you are considering.
3. Develop a team charter so that each of the representatives understands what is expected of them, and brainstorm improvements to it.
4. What are the key deliverables of the define step that you
expect of the team?
Measure:
1. What activities would you propose that the team work on?
2. What measures would you propose to the team to pursue?
3. What data collection would you propose?
4. What are the key steps to get to items 1-3 above?
Analyze: Prepare data to show the team the extent of the problem:
1. Create a Pareto chart of the errors from the Error Type chart below.
   a. What would you suggest the team focus upon?
   b. What would you tell the team about the data they need to collect and what will be done with it?
2. Another example of measures is the administration of Drug A, which needs to be administered every 30 minutes. The requirement is that the drug be administered no more than 3 minutes early or 3 minutes late (i.e., between 27 and 33 minutes). Make a histogram of the data below (Time between administration of drug chart). What does it say about the process?
3. Run a normality test. Is the distribution normal?
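The assignment calls for Minitab, but the Analyze-step arithmetic can be cross-checked in plain Python (standard library only). The error counts come from the Error Type table later in the assignment; the "Other" category is omitted because its count is missing from the source, and the administration times below are hypothetical illustration data, not the assignment's dataset:

```python
# Sketch of the Analyze-step calculations (Minitab is the assignment
# tool; this is only a cross-check). Error counts are from the Error
# Type table; "Other" is omitted because its count is missing from the
# source. The administration times are hypothetical illustration data.
errors = {
    "Omission": 8461, "Improper dose/quantity": 7124,
    "Unauthorized/wrong drug": 5463, "Prescribing error": 2923,
    "Wrong Time": 2300, "Extra Dose": 2256, "Wrong patient": 1786,
    "Mislabeling": 636, "Wrong dosage form": 586,
    "Wrong administration": 335, "Drug prepared incorrectly": 311,
    "Wrong route": 252,
}

# Pareto analysis: sort categories by count (descending) and accumulate
# the percentage of the total, so the team can see where to focus.
total = sum(errors.values())
cumulative, pareto = 0, []
for name, count in sorted(errors.items(), key=lambda kv: -kv[1]):
    cumulative += count
    pareto.append((name, count, round(100 * cumulative / total, 1)))

for name, count, cum_pct in pareto[:4]:
    print(f"{name:28s} {count:6d} {cum_pct:5.1f}%")

# Capability check for Drug A: the spec is 27-33 minutes between doses.
times = [29.5, 31.2, 27.8, 33.4, 30.1, 26.5, 32.2, 30.8, 28.9, 34.0]
within_spec = sum(27 <= t <= 33 for t in times)
print(f"{within_spec}/{len(times)} administrations within the 27-33 minute window")
```

On the real data, the histogram and a formal normality test (e.g., Minitab's Graphical Summary with Anderson-Darling) would follow; the cumulative column above already shows that the top four error types account for roughly three quarters of all recorded errors, which is where a Pareto chart would point the team.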
Improve:
1. You don’t have a process flow or any information on how
hospitals administer drugs or their improvement plans if any.
What would you tell the participants about what is expected in
this phase of the program?
Control:
1. What are the key steps for control?
2. Develop a sample response plan that you would use to show
the team what is expected to be done.
3. What are the key deliverables for this step?
Test data in Excel format:

Error Type (Type of High Alert Medication Error):
- Omission: 8461
- Improper dose/quantity: 7124
- Unauthorized/wrong drug: 5463
- Prescribing error: 2923
- Wrong Time: 2300
- Extra Dose: 2256
- Wrong patient: 1786
- Mislabeling: 636
- Wrong dosage form: 586
- Wrong administration: 335
- Drug prepared incorrectly: 311
- Wrong route: 252
- Other
The observation that “interests vested in the system-as-is suddenly appear and typically deter attempts to change the system” (Fixsen et al., 2013, p. 218) has been made by ecologically oriented observers of human behavior since the time of Marx (Bronfenbrenner, 1979; Lewin, 1951; Marx, 1888/1984).
One implication of this view, of course, is that the problem
of (non)implementation of EBP may be most usefully
viewed not simply as a “deficit” in the knowledge, skills, or
ideological commitments of practitioners but as a product
of the set of social, organizational, and material conditions
that operate in a given human service setting. In this article,
we draw on interviews conducted with special education
practitioners to investigate how these kinds of contextual
factors (and others) may affect the ways in which practitio-
ners interpret and respond to contemporary press for imple-
mentation of EBP.
We are by no means the first to recognize the importance
of seeking practitioner perspectives in understanding the
challenges of implementing EBP in special education. For
example, Landrum, Cook, Tankersley, and Fitzgerald (2002)
surveyed 127 teachers (60 special educators, 67 general edu-
cators) to assess their views about the value of four sources
of information about practice: university coursework,
1University of Washington, Seattle, USA
2Northern Illinois University, DeKalb, USA
3Central Michigan University, Mount Pleasant, USA
4American Institutes for Research, Washington, DC, USA
Corresponding Author:
Roxanne F. Hudson, Area of Special Education, University of
Washington, P.O. Box 353600, Seattle, WA 99195, USA.
E-mail: [email protected]
A Socio-Cultural Analysis of Practitioner
Perspectives on Implementation of
Evidence-Based Practice in Special
Education
Roxanne F. Hudson, PhD1, Carol A. Davis, EdD1, Grace Blum,
MEd1,
Rosanne Greenway, MEd1, Jacob Hackett, MEd1, James
Kidwell, MEd1,
Lisa Liberty, PhD1,2, Megan McCollow, PhD1,3, Yelena Patish,
MEd1,
Jennifer Pierce, PhD1,4, Maggie Schulze, MEd1, Maya M.
Smith, PhD1,
and Charles A. Peck, PhD1
Abstract
Despite the central role “evidence-based practice” (EBP) plays
in special education agendas for both research and policy,
it is widely recognized that achieving implementation of EBPs
remains an elusive goal. In an effort to better understand this
problem, we interviewed special education practitioners in four
school districts, inquiring about the role evidence and EBP
played in their work. Our data suggest that practitioners’
responses to policies that press for increased use of EBP are
mediated by a variety of factors, including their interpretations
of the EBP construct itself, as well as the organizational
conditions of their work, and their access to relevant knowledge
and related tools to support implementation. We
interpret these findings in terms of their implications for
understanding the problem of implementation through a more
contextual and ecological lens than has been reflected in much
of the literature to date.
Keywords
evidence-based practices, implementation, special education
practitioners
28 The Journal of Special Education 50(1)
research journals, teaching colleagues, and in-service/pro-
fessional development workshops. Their data indicated that
research journals and university courses (presumably
sources of relatively reliable information about EBP) were
viewed as less useful, less trustworthy, and less accessible
than either information from colleagues or information
received via professional development. Similarly, Boardman,
Argüelles, Vaughn, Hughes, and Klingner (2005) reported
that teachers often expressed the belief that the extant
research was not relevant to the populations they served in
their classrooms, and reported relying on colleagues for rec-
ommendations about practice.
In a more recent study, Jones (2009) investigated the
views of 10 novice special educators regarding EBP. Based
on interview, classroom observation, and rating scale data,
Jones suggested that the novice teachers she studied fell
into three broad groups. “Definitive supporters” expressed
clear and positive views about the importance of research in
decisions about classroom practice. “Cautious consumers”
felt research could be useful, but often did not reflect char-
acteristics and needs of their individual students. A third
group, “The Critics,” expressed skepticism about the value
of research for decisions about classroom practice.
Taken together, these studies (and others) provide a
rather robust picture of the tensions between research and
practice in special education. While significant variation
exists among special education practitioners in their views
about the value and relevance of research to their work in
the classroom, many express more confidence in the knowl-
edge and expertise of local colleagues than in information
they might receive from university coursework and/or
researchers. This result is consistent with research from
other fields and suggests that much remains to be learned
about the conditions under which practitioners utilize
knowledge from research in decisions about practice
(Aarons & Palinkas, 2007; Glasgow, Lichtenstein, &
Marcus, 2003).
In our review of the special education research on this
topic, we noted that most researchers have framed their
analysis of practitioner perspectives related to implementa-
tion of EBP in essentially individualistic and personological
terms—placing teachers (and, in some cases, administra-
tors) in the center of their analysis of the implementation
process. For example, as noted earlier, Jones (2009) parsed
individual teachers into groups such as “the Critics” and
“the Supporters.” Also focusing on individual practitioners,
Landrum et al. (2002) argued,
Only when we have confidence that teachers learn about
empirically sound practice in both their initial preparation and
ongoing professional development, and that their skills reflect
this training, can we predict that students with disabilities will
be afforded the most appropriate learning opportunities
available. (p. 48)
We do not entirely disagree with these conclusions, and
others like them that underscore the importance of personological variables (e.g., practitioner knowledge, prior training, attitudes) affecting implementation of EBP. But we
would also argue that in foregrounding characteristics of
individual practitioners as a focus of analysis, these studies
reflect a set of implicit assumptions about the nature of
practice and how it is constructed that narrows our view of
the problems of implementation, and the range of actions to
be considered in engaging those problems. In the present
study, we follow recent recommendations (Harn, Parisi, &
Stoolmiller, 2013; Klingner, Boardman, & McMaster, 2013;
Peck & McDonald, 2014) in undertaking a more holistic
and contextual approach to understanding how practitioner
perspectives on EBP are shaped by the conditions in which
they work.
Theoretical Framing
In conceptualizing “a more contextual” approach to under-
standing practitioner interpretation and implementation of
EBP, we drew on some of the precepts of sociocultural the-
ory as a general framework for investigating ways in which
social and material conditions shape workplace learning
and practice (Billett, 2003; Engeström, 2001; Scribner,
1997; Vygotsky, 1978). Our choice of a sociocultural per-
spective was based on several of its key precepts that we
believed would be useful in understanding practitioner per-
spectives on implementation of EBP. First, sociocultural
theory foregrounds analysis of relationships between indi-
vidual and collective dimensions of social practice—in this
case, the analysis of the transactions that take place between
individual practitioners and the organizations in which they
work (Engeström, 2001). Second, this view assumes that
human thought processes (including, of course, one’s views
about EBP) are shaped by the demands of the practical
activities in which people are regularly engaged. A third
assumption of this stream of sociocultural theory is that par-
ticipation in social practice is affected by the affordances
and constraints of the conceptual and material tools avail-
able (e.g., the characteristics and representations of EBP
available in local school districts and other professional
resources; Falmagne, 1995; Leontev, 1975/1978; Scribner,
1997). Overall, the sociocultural perspective suggests the
value of undertaking a more focused analysis of the social
and organizational conditions in which decisions about
practice are made than has been reflected in much of the
extant research on the problem of implementation. We used
the following research questions to guide our inquiry:
Research Question 1: How do special education practi-
tioners interpret the meaning of EBP in the context of
decisions they make about curriculum and instruction?
Hudson et al. 29
Research Question 2: What contextual factors are asso-
ciated with practitioner interpretations of the role EBP
can and should play in their decisions about instruction?
Method
We used a qualitative methodology (Merriam, 2009) to
investigate the perspectives—that is, the values, beliefs,
and attitudes—held by special education practitioners with
regard to their views about EBP, and the role research
played in their decisions about curriculum and instruction.
We elected this methodological approach because of the
hypothesis-generating, rather than hypothesis-testing, pur-
poses of the study (Glaser & Strauss, 1967).
Participants
A total of 27 special education practitioners participated in
our study. We contacted directors of special education via
email and invited participation from four school districts in
the Seattle/Puget Sound area. Demographics for these dis-
tricts are presented in Table 1.
Teacher participants were nominated by special educa-
tion directors, who were asked to identify individuals they
believed would be interested in being interviewed for the
study. In each district, we requested nominations of teachers
working in three types of programs or settings: resource
rooms serving students with a wide range of disability
labels placed primarily in general education classrooms,
self-contained classes serving students with emotional/
behavioral disabilities (EBD), and self-contained class-
rooms serving students with low-incidence developmental
disabilities. Table 2 reports the number, working context,
and experience level of study participants in each of the dis-
tricts in which we collected data.
Data Collection and Analysis
Interviews. The primary data source for our study consisted
of face-to-face interviews we conducted individually with
the 27 special educators who agreed to participate in the
study. We used semistructured interview protocols for each
of the four types of practitioners we interviewed: special
education directors, resource room teachers, EBD teachers,
and teachers of students with low-incidence developmental
disabilities. While the protocols for administrators and
teachers varied in some ways, both were structured to pro-
ceed from general, context-descriptive questions such as
“Tell me about the work you do,” to more focused questions
about daily practice (“Tell me about a typical day in your
classroom”). We asked each informant to define the term
EBP and tell us what it meant to them in terms of their deci-
sions about curriculum and instruction. Interview protocols
also included a series of questions about district policies
related to EBP in both general and special education, and
how these affected the decisions our informants made in the
classroom. Interviews generally lasted between 45 min and an
hour. Interviews were audio-recorded and subse-
quently transcribed verbatim for analysis. Transcripts were
entered into a web-based platform for qualitative and
mixed-method data analysis (http://www.dedoose.com).
Data analysis. We used the standard procedures for induc-
tive data analysis described by Charmaz (2002), Strauss and
Corbin (1997), and others. Thus, we began our analysis by
having each of the 11 members of our research team read
through the interview transcripts, identifying text segments
of potential relevance to our research questions. Each of
these segments was tagged using low inference descriptors,
such as “classroom assessment” or “progress monitoring.”
Members of the research team then met to discuss examples
of the text segments they had tagged, identifying and defin-
ing codes emerging from individual analysis to be formal-
ized and used collectively. The remainder of the interviews
were then coded, followed by an additional round of team
meetings in which examples of each code were discussed,
with some codes combined, others modified or deleted
based on their perceived value relative to our research ques-
tions. A set of interpretive categories were developed
through this process which were used to aggregate coded
data segments and which became the basis for further anal-
ysis. These categories were then used as a basis for develop-
ing a series of data displays (Miles & Huberman, 1994)
organized by district and by each type of participant (i.e.,
resource room teachers, special education directors, etc.).
Team members met to discuss the implications of these
analyses and to develop a set of analytic memos which inte-
grated the categorical data into larger and more interpretive
case summaries. These summaries were used to develop the
set of cross-case findings described below.
Results
Our findings suggest that personal characteristics (particu-
larly values and beliefs about EBP), the features of organi-
zations (particularly practitioner positionality within these
organizations), and access to relevant tools all affected the
Table 1. School District Characteristics.

District   Enrollment   Special education   Students eligible for free or
                        enrollment (%)      reduced-price meals (%)
A          18,123        9.70               22.10
B          20,659       13.60               35.10
C          17,973       13.60               66.90
D           8,920       12.40               26.00
30 The Journal of Special Education 50(1)
ways practitioners interpreted the relevance of EBP to
decisions they made about practice. We interpreted these as
dimensions of practical activity that were inseparable and
mutually constitutive (Billett, 2006). As depicted in
Figure 1, our data suggest these factors operate in a highly
interdependent manner. We use this conceptual model to
understand both the points of the triangle and the interac-
tions that take place between points as represented by the
lines of the triangle.
In the following sections, we present findings related both
to the points of the triangle and to the intersections of
elements. First, we use excerpts from our interviews to
illustrate how the practitioners we interviewed interpreted
the idea of EBP, the organizational contexts they worked in,
and the tools and resources available to them. Second, we
present findings that illuminate the connections and interac-
tions between them.
People: Practitioner Definitions of EBP
We asked each of our informants how they defined EBP in
the context of their work in special education. Most responses
to this question reflected the notion that
EBP meant that “someone” had researched a specific pro-
gram or practice and found it to be effective:
There’s obviously been research and studies so what I picture
in my mind is that they have a curriculum and they conduct a
study where they have kids who participate in the study and
then they probably have some pre- and posttest to see if they’ve
made gains.
I’d say evidence-based would be like, that it’s been tried in lots
of different settings, across you know lots of different
populations and there’s been demonstrated success using that
curriculum or whatever the thing is you’re talking about, you
know, the social skills sheet or something. So it’s used with lots
of people and over different settings.
We noticed that our participants typically defined EBP in
ways that emphasized its external origins, and its ostensive
function as a “prescription” for their practice, rather than as
a resource for their own decision making (Cook & Odom,
2013). In some cases, this interpretation was also congruent
with the stance taken by district administrators:
We have adults that want to continue to do what they’ve done
in the past. And it is not research-based nor if you look from a
data perspective has it been particularly effective and that’s not
going to happen and we say, “This is the research, this is what
you’re going to do.” (Special Education Director, District A)
This strong ideological commitment to use of EBP in the
classroom was shared by some teachers:
I believe that by using research-based instruction, and teaching
with fidelity, then you’re more likely to have an outcome that
is specific to the research, as long as we use the curriculum as
it’s designed. Um, I think it’s vital, I think it’s vital that we are
not pulling things out of a hat, that we are using. (Resource
Room Teacher, District A)
More often, however, we found that practitioner views
about research in general, and the value of EBP in decision
making about classroom practice in particular, were more
ambivalent. Perhaps the most widely shared concern about
EBP expressed by our informants had to do with the ten-
sions they perceived between the “general case” and the
specifics of local context, including the special needs of the
children served in the schools and classrooms in which they
worked (Cook, Tankersley, Cook, & Landrum, 2008).
While the value of research and the relevance of EBP were
often acknowledged in abstract terms, both teachers and
administrators were quick to identify what they perceived
to be limitations in the relevance of research for local deci-
sion making and special populations:
. . .well what makes me question it—I’m always curious about
what the norm population is because it is talking about typically
developing kids and research-based practices that are used for
those types of kids. It’s very different for my kids. So when I’m
looking at an evidenced-based practice I want to be clear on
what evidence [is about] Gen Ed versus the Special Ed
population. (Self-Contained Classroom Teacher, District B)
Table 2. Participant Characteristics.

           Number of participants per district
           Special education   EBD       Resource room   Self-contained           Median years
District   director            teacher   teacher         teacher          Total   in position
A          1                   1         2               2                6        7
B          1                   1         2               3                7       10
C          1                   2         2               2                7        6
D          1                   2         2               2                7        6

Note. EBD = emotional/behavioral disabilities.
For many teachers, ambivalence about EBP included
particular tension about who makes decisions about the rel-
evance of evidence to their classroom practice. These teach-
ers often made reference to local perspectives as “forgotten”
or “overlooked” in decisions about practice:
. . . evidence-based is very important because you do need to
look at what you’re doing but there is just the day-to-day
knowledge that is overlooked in the evidence-based piece.
(Self-Contained Classroom Teacher, District B)
Most of the teachers and many administrators we inter-
viewed appeared to locate the authority for making evi-
dence-based decisions about curriculum and instruction
with the district central office or with “general education.”
For example, one director of special education reported,
“for our resource room students . . . we always start with the
Gen Ed and then if we have curriculum, if a part of that cur-
riculum has a supported intervention component to it we
start with that.” Many teachers similarly viewed the locus
of decisions about curriculum and instruction as external to
their classrooms. As a Resource Room Teacher in District D
put it,
They tell us what to teach and when to teach it. I mean, we
have a calendar and a pacing guide. We can’t, we really don’t
make the decisions too much. I would hope . . . that it’s
supported and making sure that students learn but I don’t
really know.
In some cases, these teachers expressed confidence that
the judgments of district curriculum decision makers were
grounded in appropriate evidence:
I kind of just trust that the district is providing me with
evidence-based stuff. So I’m trusting that the curriculum that
they’ve chosen and that my colleagues have done test work on
is really what they say it is. (EBD Teacher, District D)
However, in other cases, teachers expressed more skepti-
cal views about the trustworthiness of the data district offi-
cials used to make decisions about curriculum:
. . . over the years we’ve had so many evidence-based, research
based and so many changes, that . . . if you just want my honest
[opinion] . . . I know that there’s data behind it, but if it’s
evidence based or research based, why are we always changing?
(EBD Teacher, District B)
Figure 1. Relationships between people, organizations, and
tools. Adapted from McDiarmid & Peck (2012).
To summarize, similar to earlier studies (Boardman
et al., 2005; Jones, 2009; Landrum et al., 2002), we found
that the personal characteristics of practitioners—that is,
their experiences, values, beliefs, and attitudes—functioned
as a powerful filter through which they interpreted the
meaning of EBP and evaluated the relevance of this con-
struct for their decision making about curriculum and
instruction. Practitioner definitions of EBP often reflected
the assumption that the locus of authority regarding EBP
lies outside the classroom, and the ostensive function of
EBP was to provide prescriptions for classroom practice.
In the following sections, we report findings related to
our second research question, describing ways in which
contextual features such as organization and access to tools
and resources may influence the way practitioners interpret
the value and relevance of EBP in their daily work.
Organizational Contexts of EBP
Our data suggest that our interviewees’ views about the
value and relevance of evidence in decision making about
practice were often part of a larger process of coping with
the organizational conditions of their work. Several specific
issues were salient in the interviews we conducted. One of
these, of course, had to do with district policies about evi-
dence-based decision making (Honig, 2006). In some dis-
tricts, special education administrators described strong
district commitments related to the use of research evidence
in decision making:
In this district, it’s [EBP] becoming really big. You don’t ever
hear them talk about any initiative without looking at the
research and forming some sort of committee to look at what
practices are out there and what does the research tell us about
it. And then identifying what are the things we’re after and how
well does this research say they support those specific things
we want to see happen. I would say that work has started, and
that is the lens that comes from Day One of anything we do.
(Special Education Administrator, District C)
However, strong district commitments to evidence-based
curriculum decisions in general education were sometimes
viewed as raising dilemmas for special education teachers.
A teacher of students with emotional and behavioral prob-
lems in District B described the problem this way:
I try to make our classroom setting as much like a general Ed
classroom as I can, because the goal is to give them strategies to
work with behaviors so that they can function in Gen Ed
classrooms. Which is a challenge, because they were in Gen Ed
classrooms before they came here, so something wasn’t
working.
Some special education teachers described being caught
between curriculum decisions made in general education
and practices they saw as more beneficial for their students
with disabilities:
. . . if (general education teachers) change their curriculum then
I need to follow it so my kids can be a part of it. Especially with
my kids being more part of the classroom. So you know 4 years
ago I was not doing the Math Expressions, and now I am doing
the Math Expressions and it’s hard because I’m trying to follow
the Gen Ed curriculum and there’s times where the one lesson
ends up being a 2- or 3-day lesson with some added worksheets
because they just aren’t getting the skill and, being a spiraling
curriculum, it goes pretty fast sometimes too. (Self-Contained
Teacher, District B)
While the tensions between curriculum models in gen-
eral education and special education (with each claiming its
own evidentiary warrants) were problematic for many of
the resource room teachers we interviewed, these dilemmas
were less salient to teachers of children in self-contained
classrooms, including those serving students with EBD and
those serving students with low-incidence disabilities.
Instead, the challenges that these teachers described had to
do with isolation and disconnection from colleagues serv-
ing students like theirs. A Self-Contained Classroom
Teacher, District B, said, “Well, this job is very isolating.
I’m the only one that does it in this building . . . so uh I’m
kind of alone in the decisions I make.” Another Self-
Contained Classroom Teacher, District D, said,
Sometimes it makes me feel like it’s less than professional . . .
I don’t know, I just sometimes wish that, I feel like there’s not
always a lot of oversight as far as what am I doing. Is there a
reason behind what I’m doing? And did it work? I wish there
was more.
In cases where teachers felt isolated and disconnected
from district colleagues, they often reported relying on
other self-contained classroom teachers for support and
consultation, rather than resources in their district or from
the research literature:
When you’re in an EBD class you can get pretty isolated . . .
but the other beauty of being in an EBD room is you have
other adults in the room that you can talk to, or they can have
other ideas, or you can call other teachers. (EBD Teacher,
District B)
Resources and Tools Related to EBP
District commitments to use of EBPs in the classroom were
in many cases accompanied by allocation of both material
and conceptual resources. The resources and supports most
often cited by both administrators and teachers in this con-
nection were focused on curriculum materials:
Susan, who is our Special Ed curriculum developer, and
Louisa, who’s our curriculum facilitator . . . they recognize that
we’ve shifted practice to a really research-based practice . . .
We never really did this [before] we just bought books and
stuff. And I said, “Well, I don’t operate that way. We’re going
to shift practice and I’m going to help you” . . . and she has
actually been very, very successful at reaching out and
capturing the attention of folks, authors that publish . . . and
some other materials and some other research and then digging
deep. (Special Education Director, District A)
I think there’s something really powerful about having that
scope and sequence and that repetition that gradually builds on
itself. Yeah so instead of me trying to create it as I go, having
that research-based program, and of course I’m going to see if
it’s not working, then I’m flexible to change it, but I’m going to
have a good base to at least start with. (Resource Room
Teacher, District B)
While district curriculum resources (which were often
assumed to be evidence-based) were important tools for
many resource room teachers we interviewed, both teachers
and administrators expressed frustration about what they
viewed as a paucity of evidence-based curriculum and
instructional resources for students in self-contained pro-
grams, particularly those serving students with low-inci-
dence disabilities:
There’s not a lot of curriculum out there for the self-contained
classrooms. So we do have some, some for our more mild self-
contained programs, specific reading, writing, math
curriculums. But we also made our own library we call it “The
Structured Autism Library” . . . it’s a library of materials we
have online that we’ve developed ’cause you can’t really pull
anything off the shelves for those kinds of kids. (Special
Education Director, District D)
. . . in self-contained settings in Ocean View, I think that I am
expected to . . . sort of use Gen Ed assessments, but in
Kindergarten they already test out of those, they already fail so
miserably at those. I am then the expert because there’s no
assessments that assess these kids’ growth so I make my own
assessments. (Self-Contained Classroom Teacher, District D)
In the context of these perceptions about the lack of relevant
evidence-based resources, we found that only a few teach-
ers undertook individual efforts to locate and use available
research. Those who did often encountered considerable
difficulty in locating relevant research resources:
If I’m implementing something I’m not so good at then I’ll go
do some reading on it. The Autism modules online are helpful.
I’d say mostly every time I try and type something in for a
problem I’m having without knowing if there is any research
on it, I can never find it, hardly ever. (Self-Contained
Classroom Teacher, District C)
More often, teachers adopted a “progress monitoring”
approach to evidence and decision making: “We take what
we learn about the kids and we change how we instruct
them, so that’s kind of what we do every day . . . based on
each kid and their individual performance” (Self-Contained
Classroom Teacher, District B).
After our examination of each separate element of the
system, we analyzed the interactions between elements to
understand the transactional, contextual nature of practitio-
ners’ understanding of EBPs.
A Holistic View: Relationships Between People,
Tools, and Organizations
Our data suggest that the positions practitioners occupied
within their schools and districts had considerable influence
on their access to useful resources and tools related to EBP.
The nature of the tools available to them, in turn, affected
practitioner experiences with and beliefs about the value
and relevance of EBPs in the context of decisions they were
making. Our findings are summarized below in terms of
some specific hypotheses regarding the ways in which these
dimensions of practice are related to one another.
Hypothesis 1: Practitioner beliefs about the value and
relevance of EBP are shaped by the affordances and con-
straints of the tools and resources they use for decision
making.
Similar to other researchers (e.g., Boardman et al., 2005;
Landrum et al., 2002), we found some skepticism about the
practical relevance of research for local decision making
across all of the practitioner groups we interviewed.
However, we also noted that this view was most prevalent
among self-contained classroom teachers and particularly
among teachers of students with low-incidence disabilities.
In many cases, both teachers and administrators working
with students with low-incidence disabilities expressed the
belief that research on “kids like ours” did not exist. For
example, a Self-Contained Classroom Teacher in District B
explained her stance about research and practice this way:
Well it’s just that there’s not a lot out there for me and maybe
it’s hard to find. I feel like through my multiple programs, I’ve
looked at a lot and there’s just not a lot out there. And I know
why—this is a small fraction of the population.
In some cases, teachers in our study described themselves
as having access to curriculum tools they considered to be
evidence-based for some populations, but inappropriate for
the specific students they served.
Hypothesis 2: Practitioner access to relevant tools and
resources is affected by the position they occupy within
the school and the district.
The self-contained classroom teachers we interviewed often
described themselves as being extremely isolated, and as
having relatively little access to tools and resources (research
studies, evidence-based curriculum, and relevant evidence-
based professional development) they viewed as useful for
the students they taught. We found that teachers who worked
in positions that were relatively close to general education
often had access to more tools and resources (evaluation
instruments, curriculum guides, professional development)
related to EBP, but were also more likely to encounter
tensions between mandates from general education and
practices they viewed as appropriate for students with special
education needs. The effects of organizational position were
also mediated by district policies and practices around col-
laboration. For example, one district had strategically devel-
oped specific organizational policies and practices to support
collaboration among their self-contained classroom teachers.
One of the teachers in this district commented on the value of
these practices as a support for implementation of EBPs:
I definitely believe in the PLC (professional learning
community) model and just sharing ideas and having, taking on
new strategies, and monitoring our own use of them. I think
that is kind of the future. I think you’re going to get more
buy-in, than a sit and get PD session on evidence-based
practice. I think its been proven actually, that you just sit and
get it and then you don’t have to use it, no one checks in with
you to make sure you are using it. So making groups of people
accountable (to each other) makes a ton of sense to me. (Self-
Contained Classroom Teacher, District B)
More often, however, district policies, resources, and sup-
ports related to EBP were reported to be focused on curricu-
lum and instructional programs for students with
high-incidence disabilities.
Teachers in self-contained classrooms often found these
tools and resources, and the idea of EBP itself, to be of little
direct value to their daily work.
Hypothesis 3: How practitioners define EBP affects how
they interpret the value and relevance of tools and other
organizational resources available to them related to EBP.
As we discuss further below, most of the teachers and
administrators we interviewed defined EBP primarily in
terms of prescriptions for practice that were made by exter-
nal authorities. Many of these informants were also those
who expressed ambivalence, and often outright skepticism,
about the value of EBPs for their work. In contrast to this
pattern, a few of the practitioners we talked with appeared
to have a more nuanced way of defining EBP. In these cases,
EBP was defined less as a prescription for practice than as a
resource for decisions they would make in the classroom:
I think it would be wonderful to be informed of that research,
and the best teacher would have all that information, and be
able to look at the kid, and provide them an opportunity with a
program that is research-based and validated and everything,
and look at how the child is responding to the program, give it
a little bit of time, make sure you’re delivering it with
authenticity and the way it’s supposed to be delivered, you
know, give it 3–4 weeks, and if it’s not working you need to
find something else. (Resource Room Teacher, District B)
Discussion
Over the last decade, the notion of EBP has become one of
the most influential policy constructs in the field of special
education. In this study, we sought to improve our under-
standing of the ways practitioners define the idea of EBP and
interpret its relevance in the contexts of their daily practice.
In addition, we hoped to extend previous research by learn-
ing more about some of the contextual factors that might
influence practitioner views about EBP. To investigate these
two issues, we conducted interviews with special education
professionals in four local school districts, including those
occupying positions as directors of special education,
resource room teachers, self-contained classroom teachers
of students with emotional and behavioral disabilities, and
teachers of students with low-incidence developmental dis-
abilities. Our analysis of these data was guided by some gen-
eral precepts drawn from sociocultural theory (Chaiklin &
Lave, 1993; Vygotsky, 1978), particularly the idea that social
practice can be understood as a process in which individuals
are continually negotiating ways of participating in collec-
tive activity (Nicolini, Gherardi, & Yanow, 2003).
We found that the practitioners we interviewed often
defined EBP in ways that externalized the locus of authority
for what constituted relevant evidence for practice as the
results of “studies someone had done.” Ironically, this view