SINGLE SUBJECT
EXPERIMENTAL DESIGN
Mohsen Sarhady MSc. OT & Soraya Gharebaghy MSc. OT
© 2015 msarhady1980@gmail.com
Outline
Introduction to SSED
Foundations for SSED
Common types of SSED
Data analysis in SSED
Evidence-based practice and SSED
Evaluating quality of SSED
Ethical issues in SSED
Think of these research problems:
 If there were two hearing aids, which one would be more suitable for a 5-year-old child with cerebral palsy who is unable to lip-read?
 Is repeated storybook reading with adult scaffolding effective in increasing spontaneous speech in a 3-year-old boy with autism?
 Which is more effective in enhancing the ability to stand up from a chair in a child with dyskinetic CP: a weighted vest or strengthening exercises?
INTRODUCTION
What is Single Subject Experimental Design
What is SSED?
 A research methodology in which the subject serves as
his/her own control, rather than using another
individual/group
 Single-subject (sometimes referred to as single-case or
single-system) designs offer an alternative to group designs
 The focus is on an N=1, a single subject, in which the “1” can
be an individual or a group of individuals
The unit of clinical interest: the individual
Example:
Gait training results for 30 subjects:
Improved: 10
Unchanged: 10
Declined: 10
Group-level conclusion: the treatment had no effect, even though one third of the subjects improved
Assumptions of the SSED
Contrast between Case Study and SSED
 The case study is a subjective description of an individual's
behavior
 Providing impetus for further study or for the generation of theoretical
hypotheses
 Case study lacks variable controls and systematic data
collection
 It cannot document causal relationships between intervention and
changes in behavior
 Single subject research demands
 careful control of variables,
 clearly delineated and reliable data collection,
 and the introduction and manipulation of only one intervention
at a time
Contrast between Group Designs and SSED
 Group designs do not naturally conform to practice
 Particularly when the practice involves interventions with
individuals
 Analysis of group designs typically refers to:
 “group’s average change score” or
 “the number of subjects altering their status.”
 In a group design we miss each individual’s experience with the intervention
 Individual participants within the group may not respond to the
particular type of treatment offered
FOUNDATIONS
For Single Subject Experimental Design
Components of SSED
 The underlying principle of a single subject design:
 If an intervention with a client or a group of individuals is
effective, it should be possible to see a change in status
from the period prior to intervention to the period during
and after the intervention.
 This type of design minimally has three components:
 (a) Repeated measurement,
 (b) Baseline phase, and
 (c) Treatment phase
Repeated Measurement
 Single-subject designs require the repeated measurement
of a dependent variable (target problem)
 Prior to starting and during the intervention, we must be
able to measure the subject’s status on the target problem
at regular time intervals, whether the intervals are hours,
days, weeks, or months
Baseline Phase
 The period in which the intervention to be evaluated is not
offered to the subject
 Abbreviated by the letter “A”
 During the baseline phase, repeated measurements of the
dependent variable are taken
 These measures reflect the status of the client(s) on the
dependent variable prior to the implementation of the
intervention
 Baseline measurements provide two aspects of control
analogous to a control group in a group design:
 Its scores serve as the control condition against which treatment-phase scores are compared
 Its repeated measures help address threats to internal validity
Treatment Phase
 The time period during which the intervention is implemented
 Signified by the letter “B”
 During the treatment phase, repeated measurements of the
same dependent variable using the same measures are
obtained
 Patterns and magnitude of the data points are
compared to the data points in the baseline phase to
determine whether a change has occurred
 It is recommended that the length of the treatment phase be
as long as the baseline phase
Measuring Dependent Variable
 Measures of behaviors, status, or functioning are often
characterized in four ways:
 Frequency refers to counting the number of times an event occurs
 Duration refers to the length of time an event or some symptom
lasts and usually is measured for each occurrence of the event or
symptom
 Interval refers to the length of time between events
 Magnitude refers to the intensity of a particular event, behavior, or state
 The measures from each phase are almost always summarized on a graph
 The y-axis represents the scores of the dependent variable
 The x-axis represents a unit of time, such as an hour, a day, a week, or a month
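As a rough illustration of these four measure types, here is a minimal Python sketch (not from the slides; the event-log format and field names are hypothetical) that summarizes one observation session into frequency, duration, interval, and magnitude:

```python
# Hypothetical event log for one session: each tuple is
# (start_minute, end_minute, intensity_rating_1_to_5) for one occurrence
# of the target behavior.
from statistics import mean

events = [(2.0, 2.5, 3), (10.0, 11.5, 4), (25.0, 25.2, 2)]

frequency = len(events)                                # number of occurrences
durations = [end - start for start, end, _ in events]  # minutes per occurrence
intervals = [events[i + 1][0] - events[i][1]           # time between occurrences
             for i in range(len(events) - 1)]
magnitude = mean(rating for _, _, rating in events)    # average intensity

print(f"Frequency: {frequency}")
print(f"Mean duration: {mean(durations):.2f} min")
print(f"Mean interval: {mean(intervals):.2f} min")
print(f"Mean magnitude: {magnitude:.2f} / 5")
```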
COMMON TYPES
Of Single Subject Experimental Design
Common types of SSEDs
 Basic Design (A-B)
 Withdrawal Designs (A-B-A)
 Multiple Treatment Designs
 Multiple Baseline Designs
“A”: no-treatment phases
“B”: treatment phases
Basic Design (A-B)
 An A-B design represents a baseline phase followed by a treatment
phase
 No causal statements can be made
 An A-B design provides evidence only of an association between the intervention and the change
Withdrawal Designs
 There are two withdrawal designs:
 the A-B-A design
 the A-B-A-B design
 Withdrawal:
 Intervention is concluded (A-B-A design) or
is stopped for some period of time before it is begun again (A-B-A-B design)
 The premise:
 If the intervention is effective, the target problem should be
improved only during the course of intervention
 The target scores should worsen when the intervention is
removed
A-B-A Design:
 Builds on the A-B design by adding a post-treatment follow-up phase
 This design answers the question left unanswered by the A-B design:
 Does the effect of the intervention persist beyond the period in which
treatment is provided?
 It may also be possible to learn how long the effect of the intervention
persists
 The follow-up period should include multiple measures until a follow-
up pattern emerges
 The A-B-A design provides additional support for the effectiveness of an intervention
A-B-A-B Design:
 The A-B-A-B design builds in a second intervention phase
 The intervention is identical to the intervention in the first B
phase
Multiple Treatment Designs
 The nature of the intervention changes over time, and each
change represents a new phase of the design
 One type of change that might occur is the intensity of the
intervention: (A-B1-B2-B3)
 Another type of changing-intensity design adds tasks to be accomplished (e.g., B1 may involve walking safely within the house, B2 may add methods for using a checkbook, and B3 may add a cooking component)
 In another type, the intervention itself changes over time (A-B-C-D)
Limitations:
 Only adjacent phases can be compared, so the effect for nonadjacent phases cannot be determined
 There may be a carryover effect from the previous interventions
Multiple Baseline Designs
 A single transition from baseline to treatment (A-B) is instituted at different times across multiple clients, behaviors, or settings.
 This staggered or unequal baseline period is what gives the design its name.
 Internal validity is strengthened by the multiple replications of the intervention delivered across clients, behaviors, or settings.
 Each transition from baseline to intervention is an opportunity to observe the effects of treatment.
 Starting the transitions at different times helps rule out alternative explanations for behavior change.
 Concurrent measurement provides better control of threats to internal validity.
Multiple baseline design across behaviors
There is one client, and the same intervention is applied to different but related problems or behaviors
 Multiple baseline design across subjects
 Each subject receives the same intervention, introduced sequentially, to address the same target problem
 Multiple baseline design across settings
 The design tests the effect of an intervention applied to one client, dealing with one behavior, but introduced sequentially as the client moves to different settings
Multiple-baseline designs
Strengths:
 internal validity
 no reversal or withdrawal of the intervention is required
 useful when behaviors are not likely to be reversible
Weakness: they require more data-collection time
Eliminating Alternative
Hypotheses
 By systematically delivering the
treatment and continuously
measuring the relevant target
behavior, change in the behavior
can be monitored and conclusions
drawn about the determinants of this
change.
 Ability to eliminate alternative
explanations for behavior change
Internal validity
 How confident we can be that changes in the dependent variable are due to the introduction of the independent variable and not to some other factor.
Threats to internal validity
 Confounding variables
 Maturation effects
 History effects
 Statistical regression toward the mean
Threats to internal validity are
controlled primarily through:
 Replication: each replication allows for a comparison between the subject's behavior during baseline and during treatment:
a) Phase change
b) Intersubject replication
 Repeated measurement
External validity
 Whether the findings are applicable to subjects and/or settings beyond the research.
Data Analysis in Single-Case Research
 Isolate causal relationships between
independent and dependent variables
 By systematically delivering the
treatment (independent variable) and
continuously measuring the relevant
target behavior (dependent variable),
changes in behavior can be monitored.
 Single-subject researchers rely on visual analysis of graphed data
 Are there changes in the data patterns?
 If changes do exist, do they correspond
with the experimental manipulations?
Data Graphs
 Graphing the data facilitates monitoring and
evaluating the impact of the intervention
 Data for each variable for each participant or system are graphed: the dependent variable on the y-axis and time (e.g., an hour, a day, a week, or a month) on the x-axis.
 When graphing data for one variable for more than one participant, the scale for each graph should be the same to facilitate comparisons across graphs
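As a rough illustration of these graphing conventions, here is a minimal sketch assuming matplotlib is available and using made-up scores; it puts the dependent variable on the y-axis, sessions on the x-axis, and a dashed vertical line marking the A-to-B phase change:

```python
import matplotlib.pyplot as plt

baseline = [4, 5, 4, 6, 5]        # phase A scores (made up)
treatment = [7, 8, 8, 9, 10]      # phase B scores (made up)
sessions = range(1, len(baseline) + len(treatment) + 1)

plt.plot(sessions, baseline + treatment, marker="o", color="black")
plt.axvline(x=len(baseline) + 0.5, linestyle="--", color="gray")  # phase change
plt.text(2, 11, "A (baseline)")
plt.text(7, 11, "B (treatment)")
plt.xlabel("Session")
plt.ylabel("Target behavior score")
plt.ylim(0, 12)
plt.show()
```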
Visual Analysis
 Differences in level
 Changes in trend or slope: the direction of the trend and the rate of increase or decrease
 Change in variability
Differences in level
A) A simple method to describe the level is to inspect the actual data points
Differences in level
B) Using the mean (the average of the observations in the phase) or the median (the value at which 50% of the scores in the phase are higher and 50% are lower).
Changes in level are typically examined when the observations fall along relatively stable lines.
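The following minimal sketch (made-up data, Python's statistics module) illustrates comparing level between phases with the mean and with the median; the outlier in the treatment phase shows why the median can be the safer summary when a few values pull the mean:

```python
from statistics import mean, median

baseline = [4, 5, 4, 6, 5]       # phase A (made up)
treatment = [7, 8, 8, 9, 20]     # phase B; 20 is an outlier for illustration

print("Mean level change:  ", mean(treatment) - mean(baseline))
print("Median level change:", median(treatment) - median(baseline))
# When one or two extreme values pull the mean, the median gives a more
# representative picture of the typical level in each phase.
```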
Changes in Trend and Slope
 Compare trends in the baseline and intervention stages.
 Trend is the direction in the pattern of the data points and can be increasing, decreasing, cyclical, or curvilinear.
 Slope is the rate of increase or decrease.
 Also consider the magnitude and rapidity of behavior transitions.
Two methods for estimating trend: Nugent's method and split-middle lines
Nugent Method
Split-middle lines
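Since the slides present split-middle lines only as a figure, here is a minimal sketch of one common way to compute a split-middle trend line for a single phase; the handling of an odd number of points varies across descriptions of the method, so treat this as an assumption rather than the definitive procedure:

```python
from statistics import median

def split_middle_line(scores):
    """Return (slope, intercept) of a split-middle line over session numbers."""
    sessions = list(range(1, len(scores) + 1))
    half = len(scores) // 2
    # Median session and median score of the first and second halves
    # (for an odd number of points this drops the middle point; conventions differ).
    x1, y1 = median(sessions[:half]), median(scores[:half])
    x2, y2 = median(sessions[-half:]), median(scores[-half:])
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return slope, intercept

baseline = [4, 5, 4, 6, 5, 7]                 # made-up phase A data
slope, intercept = split_middle_line(baseline)
print(f"Trend: y = {slope:.2f} * session + {intercept:.2f}")
```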
Variability
 The stability or variability of the data points
 One way to summarize variability is to draw range lines
Interpreting Visual Patterns
 patterns of level and trend
A stable baseline line (or a close approximation of one):
A: the intervention has only made the problem worse
B: the intervention has had no effect
C: there has been an improvement
Trend changes
F: no effect
G: no change in the direction of the trend, but the rate of deterioration has slowed
H: the intervention improved the situation only to the extent that it is not getting worse
I: improvement in the subject's status
Practice graphs (three examples): No change in …? Change in …?
The PND statistic
 Percentage of Nonoverlapping Data
 The percentage of treatment-phase data points that do not overlap with the most extreme baseline data point
 To reduce a maladaptive behavior: the most extreme baseline data point is the one with the lowest numerical value
 To increase an adaptive behavior: the most extreme baseline data point is the one with the highest numerical value
PND ≥ 90%: very effective treatment
PND 70–90%: effective treatment
PND 50–70%: questionable effectiveness
PND < 50%: ineffective treatment
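A minimal sketch of the PND calculation for a behavior one wants to increase, using made-up data; the mirror-image rule for behaviors one wants to reduce is noted in a comment:

```python
def pnd_increase(baseline, treatment):
    """Percentage of treatment points above the highest baseline point."""
    ceiling = max(baseline)
    nonoverlapping = sum(1 for score in treatment if score > ceiling)
    return 100.0 * nonoverlapping / len(treatment)

# For a behavior to be reduced, compare against min(baseline) instead and
# count treatment points that fall below it.
baseline = [4, 5, 4, 6, 5]
treatment = [7, 8, 6, 9, 10]
pnd = pnd_increase(baseline, treatment)
print(f"PND = {pnd:.0f}%")   # 80% -> effective treatment per the thresholds above
```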
Conservative dual-criterion (CDC) method
 A mean line is calculated from the baseline data
 A split-middle (trend) line is calculated from the baseline data
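The slides name only the two baseline-based criterion lines. The sketch below additionally shifts both lines by 0.25 baseline standard deviations and applies a binomially derived cutoff; those details are taken from the published CDC procedure rather than from these slides, so treat this as an assumption-laden illustration:

```python
from math import comb
from statistics import mean, median, stdev

def split_middle_line(scores):
    """(slope, intercept) of a split-middle trend line over session numbers."""
    sessions = list(range(1, len(scores) + 1))
    half = len(scores) // 2
    x1, y1 = median(sessions[:half]), median(scores[:half])
    x2, y2 = median(sessions[-half:]), median(scores[-half:])
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

def cdc_check(baseline, treatment, expect_increase=True):
    direction = 1 if expect_increase else -1
    shift = 0.25 * stdev(baseline) * direction          # assumed 0.25-SD shift
    level = mean(baseline) + shift                      # shifted mean criterion
    slope, intercept = split_middle_line(baseline)
    n_treat = len(treatment)
    beyond = 0
    for session, score in enumerate(treatment, start=len(baseline) + 1):
        trend = slope * session + intercept + shift     # shifted trend criterion
        if direction * (score - level) > 0 and direction * (score - trend) > 0:
            beyond += 1                                 # beyond BOTH criterion lines
    # Smallest count unlikely by chance (binomial, p = .5, alpha = .05);
    # if no count qualifies (very short phases), the check cannot pass.
    cutoff = next((k for k in range(n_treat + 1)
                   if sum(comb(n_treat, j) for j in range(k, n_treat + 1))
                   / 2 ** n_treat < 0.05),
                  n_treat + 1)
    return beyond, cutoff, beyond >= cutoff

baseline = [4, 5, 4, 6, 5, 4]    # made-up data
treatment = [7, 8, 8, 9, 10]
print(cdc_check(baseline, treatment))   # (5, 5, True): consistent with an effect
```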
Questions?
EVIDENCE-BASED PRACTICE
And Single Subject Experimental Design
Evidence-based medicine (that is, EBP specific to the field
of medicine) is “the conscientious, explicit, and judicious
use of current best evidence in making decisions about the
care of individual patients . . . [by] integrating individual
clinical expertise with the best available external clinical
evidence from systematic research”
EVALUATING QUALITY
Of Single Subject Experimental Design
Questions for evaluating quality of SSED
DESCRIPTION OF PARTICIPANTS AND SETTINGS
1. Was/were the participant(s) sufficiently well
described to allow comparison with other studies or
with the reader’s own patient population?
Questions for evaluating quality of SSED
INDEPENDENT VARIABLE
2. Were the independent variables operationally
defined to allow replication?
3. Were intervention conditions operationally defined
to allow replication?
Questions for evaluating quality of SSED
DEPENDENT VARIABLE
4. Were the dependent variables operationally defined as
dependent measures?
5. Was interrater or intra-rater reliability of the dependent
measures assessed before and during each phase of the study?
6. Was the outcome assessor unaware of the phase of the
study (intervention vs control) in which the participant was
involved?
7. Was stability of the data demonstrated in baseline, namely
lack of variability or a trend opposite to the direction one
would expect after application of the intervention?
Questions for evaluating quality of SSED
DESIGN
8. Was the type of SSED clearly and correctly stated, for
example A–B, multiple baseline across subjects?
9. Were there an adequate number of data points in each
phase (minimum of five) for each participant?
10. Were the effects of the intervention replicated across
three or more subjects?
Questions for evaluating quality of SSED
ANALYSIS
11. Did the authors conduct and report appropriate visual
analysis, for example, level, trend, and variability?
12. Did the graphs used for visual analysis follow standard
conventions, for example x- and y-axes labeled clearly and
logically, phases clearly labeled (A, B, etc.) and delineated
with vertical lines, data paths separated between phases,
consistency of scales?
13. Did the authors report tests of statistical analysis, for
example celeration line approach, two-standard deviation
band method, C-statistic, or other?
14. Were all criteria met for the statistical analyses used?
External validity of SSED
• Three sequential replication strategies can enhance external validity:
Direct replication:
repeating the same procedures, by the same researchers,
including the same treatment, in the same setting, and in
the same situation, with different clients who have similar
characteristics
Systematic replication:
repeating the experiment in different settings, with different providers, and with other related behaviors
Clinical replication:
combining different interventions in the same setting and
with clients who have the same types of problems
ETHICAL ISSUES
In Single Subject Experimental Design
Like any form of research, single-subject designs require informed consent
Participants must understand that the onset of the intervention is likely to be delayed until either a baseline pattern emerges or some assigned time period elapses
The risks associated with prematurely ending treatment in withdrawal designs may be hard to predict
THANK YOU!

Editor's Notes

  • #46 When one or two values cause the mean to shift, using the median is the more appropriate method.
  • #47 In this graph the pattern and direction of the slope have changed, but the mean has remained constant, so assessing the level is not an appropriate method.
  • #48 When there is a trend in the baseline, you might ask whether the intervention altered the direction of the trend. When the direction does not change, you may be interested in whether the rate of increase or decrease in the trend has changed. Does it alter the slope of the line?
  • #51 Widely divergent scores in the baseline make the assessment of the intervention more difficult, as do widely different scores in the intervention phase. There are some conditions and concerns for which the lack of stability is the problem, and so creating stability may represent a positive change. One way to summarize variability with a visual analysis is to draw range lines.