Enhancing Psychotherapy Process With Common Factors Feedback: A Randomized, Clinical Trial
Andrew S. McClintock, Matthew R. Perlman, Shannon M. McCarrick, Timothy Anderson, and Lina Himawan
Ohio University
In this study, we developed and tested a common factors feedback (CFF) system. The CFF system was designed to provide ongoing feedback to clients and therapists about client ratings of three common factors: (a) outcome expectations, (b) empathy, and (c) the therapeutic alliance. We evaluated the CFF system using randomized, clinical trial (RCT) methodology. Participants: Clients were 79 undergraduates who reported mild, moderate, or severe depressive symptoms at screening and pretreatment assessments. These clients were randomized to either: (a) treatment as usual (TAU) or (b) treatment as usual plus the CFF system (TAU + CFF). Both conditions entailed 5 weekly sessions of evidence-based therapy delivered by doctoral students in clinical psychology. Clients completed measures of common factors (i.e., outcome expectations, empathy, therapeutic alliance) and outcome at each session. Clients and therapists in TAU + CFF received feedback on client ratings of common factors at the beginning of Sessions 2 through 5. When surveyed, clients and therapists indicated that they were satisfied with the CFF system and found it useful. Multilevel modeling revealed that TAU + CFF clients reported larger gains in perceived empathy and alliance over the course of treatment compared with TAU clients. No between-groups effects were found for outcome expectations or treatment outcome. These results imply that our CFF system was well received and has the potential to improve therapy process for clients with depressive symptoms.
Public Significance Statement
In this study, we developed a system that provides ongoing
feedback to clients and therapists about
what is transpiring in therapy. Results suggest that the feedback
system may help to improve the
process of treatment for clients with depressive symptoms.
Keywords: common factors, feedback, empathy, alliance,
randomized clinical trial
A growing body of research attests to the utility and
effectiveness
of outcome feedback (Connolly Gibbons et al., 2015; De Jong et
al.,
2014; Shimokawa, Lambert, & Smart, 2010). In outcome
feedback
systems, client progress is monitored and reviewed by therapists
(and,
in some cases, by clients as well) to guide ongoing treatment
(Lam-
bert, 2007). Specifically, these systems collect
distress/symptomatol-
ogy data from clients on a routine basis, and then compare these
data
with norms or expected treatment responses (see Lambert, 2007;
Lutz
et al., 2006). When a client is off-track (i.e., is projected to
have a
relatively poor treatment response), the therapist is alerted and
is
then typically provided with strategies for improving quality of
care (Lambert et al., 2004; Miller, Duncan, Sorrell, & Brown,
2005).
Although outcome feedback has demonstrated efficacy (e.g.,
Shimokawa et al., 2010), there is undoubtedly room for
improve-
ment. Effects for outcome feedback systems are often only
small
or medium in size and, in some samples, are nonsignificant
(Con-
nolly Gibbons et al., 2015; De Jong et al., 2014; Shimokawa et
al.,
2010). In a recent study, Connolly Gibbons et al. (2015) found
that
64% of clients who received treatment with outcome feedback
did
not achieve clinically significant change. Clearly, modifications
to
these systems are warranted.
One novel approach is to utilize process-based feedback. Pro-
cess feedback may be advantageous for several reasons. First,
there is evidence from educational psychology (e.g.,
Zimmerman
& Kitsantas, 1997) that the development of a skill (e.g., consis-
tently hitting a bull’s-eye on a dartboard) is enhanced through a
focus on process (e.g., the mechanics of dart-throwing). From
this,
it stands to reason that the development of psychological well-
being may be enhanced by focusing on the therapeutic processes
that foster well-being. Second, certain treatment modalities
(e.g.,
humanistic and psychodynamic therapies) do not target
symptoms
per se and thus may be more compatible with a process
feedback
system than an outcome/symptom-based feedback system.
Third,
whereas therapists may view outcome feedback as evaluative
and
threatening (Boswell, Kraus, Miller, & Lambert, 2015), therapists
therapists
This article was published Online First January 23, 2017.
Andrew S. McClintock, Matthew R. Perlman, Shannon M.
McCarrick,
Timothy Anderson, and Lina Himawan, Department of
Psychology, Ohio
University.
The ideas and data reported in this article have not been
previously
disseminated.
Correspondence concerning this article should be addressed to Andrew S. McClintock, 264 Porter Hall, Athens, OH 45701. E-mail: [email protected]
Journal of Counseling Psychology, 2017, Vol. 64, No. 3, 247–260
© 2017 American Psychological Association, 0022-0167/17/$12.00
http://dx.doi.org/10.1037/cou0000188
may be more receptive to feedback about what is transpiring in
therapy. Thus, process feedback has the potential to be more
widely implemented. Fourth, a process feedback system could
yield information that is actionable and immediately useful. For
example, disagreement about treatment tasks could be readily
addressed by exploring discrepancies between the implemented
techniques and the client’s perceptions about which techniques
should be implemented.
An exemplary system that integrates process and outcome feed-
back is the Partners for Change Outcome Management System
(PCOMS; Miller et al., 2005; Duncan, 2012). PCOMS monitors
the therapeutic alliance (i.e., agreement on therapeutic goals
and
tasks in the context of a positive affective bond; Bordin, 1979)
at
every session, enabling therapists to identify and repair alliance
ruptures on an ongoing basis. Although the effectiveness of
PCOMS is well documented (e.g., Duncan, 2012), it is unclear
whether PCOMS’ effectiveness is because of outcome feedback,
process feedback, or both. Indeed, no research to date has exam-
ined the efficacy of process feedback in and of itself.
To build a process feedback system that could be widely im-
plemented, it seems prudent to track processes that are common
across treatment approaches (i.e., “common factors”). Common
factors account for the lion’s share of outcome variance (~50%), more so than theory-specific techniques (~15%) and extratherapeutic factors (~25%) (Cuijpers et al., 2012; Lambert, 2013). In a
landmark text, Wampold and Imel (2015) highlighted three spe-
cific common factors that drive change in psychotherapy: (a)
client’s outcome expectations, (b) a genuinely empathic connec-
tion between client and therapist, and (c) the therapeutic
alliance.
Outcome expectations, empathy, and the alliance are discussed
in
the following sections to highlight their suitability for inclusion
in
a process feedback system.
Outcome Expectations
Outcome expectations are anticipatory beliefs about a treat-
ment’s personal efficacy (Constantino, Ametrano, & Greenberg,
2012). A recent meta-analysis (Constantino, Glass, Arnkoff,
Ame-
trano, & Smith, 2011) that included 8,016 clients across 46
inde-
pendent samples revealed that client outcome expectations ac-
counted for a significant, albeit modest, percentage (1.4%) of
outcome variance. It is worth noting that this association was
derived predominantly from studies that assessed outcome
expec-
tations before or very early in treatment. An alternative
approach
is to conceptualize outcome expectations as a dynamic process,
wherein the client’s expectations are influenced by the
developing
client-therapist relationship, the credibility of the treatment
ratio-
nale, the effectiveness of early treatment procedures, and so
forth.
That is, according to this approach, outcome expectations may
evolve over the course of therapy and thus should be measured
beyond the first few sessions. Underscoring the utility of
monitor-
ing outcome expectations over the course of treatment, Newman
and Fisher (2010) found that a midtreatment assessment of
expec-
tancy/credibility accounted for nearly 40% of the variance in
therapeutic change.
Empathy
Empathy is a complex, interactional process involving three
temporal stages: (a) the therapist’s attunement to the client’s
experience, (b) the therapist’s communication about the client’s
experience, and (c) the client’s receipt of the empathic
communi-
cation (Barrett-Lennard, 1981; MacFarlane, Anderson, & Mc-
Clintock, 2015). A focus on the third stage is particularly
impor-
tant because client’s perceptions of therapist empathy may have
the largest effect on outcome (Elliott, Bohart, Watson, & Green-
berg, 2011); a meta-analysis of 38 studies (Elliott et al., 2011)
showed that client-perceived empathy accounted for over 10%
of
outcome variance.
Alliance
A related construct is the therapeutic alliance, which refers to
the collaborative, working relationship between client and
thera-
pist. Bordin (1979) conceptualized the alliance as involving
three
components: goals, tasks, and bond. The goals component is the
level of agreement between client and therapist on the
objectives
of treatment (e.g., anxiety reduction). The tasks component is
the
level of client–therapist agreement on the techniques (e.g.,
cogni-
tive restructuring, dream interpretation) used to attain treatment
goals. Finally, the bond is the degree of emotional connection
(e.g.,
care, liking, trust) between client and therapist. In a meta-
analysis
of 112 studies, Horvath, Del Re, Flückiger, and Symonds (2011)
found that client-rated alliance accounted for about 8% of
outcome
variance.
Current Research
In contrast to outcome feedback systems, we developed a sys-
tem that focuses exclusively on psychotherapy process. We se-
lected outcome expectations, empathy, and the alliance for
routine
monitoring because these processes: (a) are common across
treat-
ment approaches, (b) are emphasized in Wampold and Imel’s
(2015) widely influential model of therapeutic change, and (c)
are
among the strongest predictors of treatment success.
We anticipated that the provision of common factors feedback
(CFF) would help therapists to identify poor process. Indeed,
therapists do not always share their client’s perceptions of
thera-
peutic process, as evidenced by relatively weak correlations be-
tween therapist-rated process and client-rated/observer-rated
pro-
cess (Cecero, Fenton, Frankforter, Nich, & Carroll, 2001;
Greenberg, Watson, Elliott, & Bohart, 2001). Not only did we
want to assist therapists in identifying poor process, but we also
wanted to help therapists to intervene in ways that would
improve
that process. Therefore, we created a manual detailing evidence-
based strategies for enhancing outcome expectations (e.g., Con-
stantino et al., 2012; Swift & Derthick, 2013), empathy (e.g.,
Bohart & Greenberg, 1997; Bruce, Shapiro, Constantino, &
Manber, 2010; Dowell & Berman, 2013), and the alliance (e.g.,
Hill & O’Brien, 1999; Safran & Muran, 2000; Safran & Muran,
2006), and through prestudy training and ongoing supervision,
encouraged study therapists to employ these strategies when
com-
mon factor ratings were suboptimal.
The effects of the CFF system were tested using randomized,
clinical trial (RCT) methodology. Given the exploratory nature
of
this research, we enrolled clinical analogues who reported at
least
a mild level of depressive symptoms on two separate occasions.
These participants were randomly assigned to either treatment
as
usual (TAU) or TAU plus the CFF system (TAU + CFF). The
CFF system monitored client ratings of outcome expectations,
empathy, and the alliance and provided feedback on this
informa-
tion to clients and therapists in order to facilitate an open
discus-
sion about the therapeutic process. Our CFF system fed back
information to both clients and therapists because there is some
evidence that the provision of feedback to the client-therapist
dyad
is more effective than the provision of feedback to the therapist
alone (De Jong et al., 2014) and because the provision of
feedback
to the client might increase the client’s sense of agency in treat-
ment (see De Jong et al., 2014; Flückiger et al., 2012; Zuroff et
al.,
2007).
We hypothesized that clients in TAU + CFF would report
greater increases in outcome expectations, empathy, and the
alli-
ance over the course of therapy, compared with clients in TAU.
Because common factors are purportedly therapeutic (see
Wampold & Imel, 2015) and thus improvements in the common
factors should lead to better outcomes, we further hypothesized
that clients in TAU + CFF would report greater decreases in
depressive symptoms and greater increase in psychological
well-
being over the course of therapy, compared with clients in TAU.
Method
Participants
Clients. Seventy-nine undergraduates at a Midwestern
university
met inclusion criteria and were randomized to a treatment
condition
(see Procedure). These participants were either in their
freshman
(59.5%), sophomore (25.3%), junior (5.1%), or senior year
(10.1%) of
college, with a mean age of 19.3 years (SD = 3.0). Most
(82.3%)
identified as female. About 81.0% identified as
White/Caucasian,
5.1% as Black or African American, 3.8% as American Indian
or
Alaska Native, 3.8% as Multiracial, 2.5% as Hispanic or Latino/
Latina, 2.5% as Asian or Asian American, and 1.3% as Middle
Eastern. About 13.9% were currently receiving psychological or
phar-
macological treatment at the pretreatment assessment.
Participants
reported a mean BDI-II score at pretreatment (23.68; SD = 8.21) that
fell in the moderate depression range (31 participants reported
mild
depression, 27 reported moderate depression, and 21 reported
severe
depression; see Beck et al., 1996).
Therapists. Client participants received treatment from one of
six doctoral students in a clinical psychology training program.
All
therapist participants had completed graduate-level assessment
and
treatment courses and were involved in practicum/traineeship
as-
sociated with the training program. Therapists had acquired a
mean
of 313.17 face-to-face clinical hours (SD = 261.31) by the start
of
the study. Three therapists were male, and three were female.
Therapists had a mean age of 26.00 years (SD = 2.19), and all
identified as White/Caucasian. With regard to theoretical
orienta-
tion, three therapists identified as cognitive– behavioral, two
iden-
tified as integrative/eclectic, and one identified as humanistic.
Measures
Outcome expectations. The Outcome Expectations Question-
naire (OEQ; Constantino, McClintock, McCarrick, Anderson, &
Himawan, 2016) is a recently developed, 10-item measure of
client
outcome expectations. Each item reflects a facet of treatment
outcome about which clients may form expectations (example
item: “My self-esteem”). Items are rated on a 7-point Likert
scale
ranging from (0) “I expect no improvement,” to (6) “I expect
very
substantial improvements.” Exploratory and confirmatory factor
analyses (Constantino et al., 2016) of the OEQ items supported
a
two-factor solution, with one factor pertaining to the specific
problems that bring the client to treatment (example item: “My
distress about the problems that brought me to treatment”), and
the
second factor pertaining to more global issues (example item:
“My
sense of purpose”). These two factors have been found to be
strongly correlated (rs ranged from 0.60 to 0.71; Constantino et
al.,
2016). We used total OEQ scores (sum of all items) in the
current
study. Akin to the original research (Constantino et al., 2016),
the
OEQ demonstrated good internal consistency in the present
study
(Cronbach’s alpha = .93 at pretreatment).
Empathy. The Barrett-Lennard Relationship Inventory-
Empathy Scale (BLRI-E; Barrett-Lennard, 2015) is the most
widely used client rated measure of empathy (Elliott et al.,
2011).
The 16 BLRI-E items (example item: “My counselor usually
senses or realizes what I am feeling”) are rated on a 6-point
Likert
scale ranging from (−3) “No, I strongly feel that it is not true”
to
(3) “Yes, I strongly feel that it is true.” A total BLRI-E score is
derived by taking the mean of all items (after reverse-scoring
eight
items). Past research has established the internal consistency,
test–retest reliability, convergent/divergent validity, and
predictive
validity of the BLRI-E (see Barrett-Lennard, 2015). The BLRI-
E
exhibited acceptable internal consistency in the current study
(Cronbach’s alpha = .73 after Session 1).
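For illustration, scoring of the BLRI-E as just described (the mean of 16 items rated from −3 to +3, with eight items reverse-scored) can be sketched in Python as follows; the reverse-keyed item positions shown are placeholders rather than the instrument's actual keying.

```python
# Illustrative BLRI-E scoring sketch; the reverse-keyed positions below are
# placeholders, since the actual keyed items are defined by the instrument.
from typing import Sequence

REVERSE_KEYED = {1, 3, 5, 7, 9, 11, 13, 15}  # hypothetical 0-based positions

def score_blri_e(item_ratings: Sequence[int]) -> float:
    """Mean of the 16 items after reverse-scoring the keyed items.

    On a symmetric -3 to +3 scale, reverse-scoring an item is simple negation.
    """
    if len(item_ratings) != 16:
        raise ValueError("The BLRI-E has 16 items")
    keyed = [-r if i in REVERSE_KEYED else r for i, r in enumerate(item_ratings)]
    return sum(keyed) / len(keyed)
```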
Therapeutic alliance. The Working Alliance Inventory-Short
Form Revised (WAI-SR; Hatcher & Gillaspy, 2006) is a widely
used 12-item measure of the therapeutic alliance. Each item (ex-
ample item: “I feel that the things I do in therapy will help me
to
accomplish the changes that I want.”) is rated on a 5-point
Likert
scale ranging from 1 (seldom) to 5 (always) and loads onto one
of
three factors: goals (i.e., agreement on the goals of therapy),
tasks
(i.e., agreement on the tasks of therapy), and bond (i.e., the
emotional connection between client and therapist). The
measure
has demonstrated excellent reliability, a clean factor structure,
convergent validity, and predictive validity (Hatcher &
Gillaspy,
2006; McClintock, Anderson, & Petrarca, 2015). The total score
(used for analyses) is calculated by summing all items. The
inter-
nal consistency of the WAI-SR was high in the present sample
(Cronbach’s alpha = .88 after Session 1).
Depression. The Beck Depression Inventory-II (BDI-II;
Beck, Steer, & Brown, 1996) is the most widely used measure
of
depressive symptoms. The measure features 21 items
representing
depressive symptoms. Respondents rate the presence of each
symptom on a 4-point Likert scale. An example item is
“Sadness”
with response options (0) “I do not feel sad,” (1) “I feel sad
much
of the time,” (2) “I am sad all of the time,” (3) “I am so sad or
unhappy that I can’t stand it.” BDI-II total scores (sum of all
items) can be categorized in the following ranges: minimal (0 –
13),
mild (14 –19), moderate (20 –28), and severe (29 – 63). The
BDI-II
has sound psychometric properties in both clinical and
nonclinical
samples (Beck et al., 1996). The BDI-II demonstrated good
inter-
nal consistency in the current study (Cronbach’s alpha = .84 at
pretreatment).
Psychological well-being. The Schwartz Outcome Scale-10
(SOS-10; Blais et al., 1999) is a 10-item self-report measure of
psychological well-being. The SOS-10 was developed using
clas-
sical test theory and Rasch item analysis and has been employed
extensively to assess the effectiveness of mental health
treatments.
Each item features a 7-point Likert scale ranging from 0 (never)
to
6 (nearly all of the time). Sample items include “I have
confidence
in my ability to sustain important relationships” and “I am
gener-
ally satisfied with my psychological health.” The SOS-10 is
scored
by summing the 10 items (higher scores indicate better well-
being). The measure has demonstrated good internal
consistency,
test–retest reliability, and convergent/discriminant validity
(Hag-
gerty, Blake, Naraine, Siefert, & Blais, 2010; Young, Waehler,
Laux, McDaniel, & Hilsenroth, 2003). The SOS-10 exhibited
high
internal consistency in the present research (Cronbach’s alpha = .84 at pretreatment).
Client satisfaction survey. We developed a brief client satis-
faction survey to evaluate client perceptions about the CFF
system.
Because we were concerned that clients might find the
completion of
measures burdensome, we asked clients to rate the degree to
which
they enjoyed the completion of measures at the end of each
session
using a 7-point Likert scale (1 = not at all, 7 = very much). In
addition, clients were asked about the degree to which the
feedback
reports helped improve treatment on a 7-point Likert scale (1 = not at all, 7 = very much). Clients completed the satisfaction survey
at the
end of their treatment.
Therapist satisfaction survey. We also developed a brief
therapist satisfaction survey to evaluate the degree to which
ther-
apists were satisfied with the CFF system and found it useful.
Clinicians were asked to rate their satisfaction on a 5-point
Likert
scale (1 = dissatisfied, 5 = completely satisfied) and the utility of the CFF system on a 5-point Likert scale (1 = not useful, 5 = very
useful). Therapists completed the satisfaction survey at the end
of
the research project.
Procedures
This study was conducted in the psychology department of a
large Midwestern university during the 2015–2016 academic
year.
Institutional review board approval was obtained, and all ethical
standards were followed; no adverse events were reported
during
the study. To be consistent with research on outcome feedback
(e.g., see Connolly Gibbons et al., 2015), we aimed to recruit a
sample of 75–100 participants. See Figure 1 for a procedure
flowchart.
We recruited undergraduates with depressive symptoms via the
psychology department’s Web based screening system. Specifi-
cally, we administered the BDI-II in the screening system (n = 1862) and recruited only those who scored in the mild range or higher (i.e., ≥14; see Beck et al., 1996). These students
reporting
mild, moderate, or severe depressive symptoms (n = 463) were
given a vague description of the study (titled “A Study of
Psycho-
therapy”) and were offered time slots; participation in the RCT
was on a first-come, first-served basis.
The RCT was conducted in a psychotherapy laboratory on the
university’s campus. Students arrived to the laboratory
individually
and all provided informed consent (n = 95). The OEQ, BDI-II, and
and
SOS-10 were then administered for the pretreatment assessment.
At
this pretreatment assessment, 13 participants did not score in
the mild
Figure 1. Procedure flow chart. Enrollment: assessed for eligibility via the screening assessment (n = 1862); 463 scored 14 or higher on the BDI-II and were eligible for the study; assessed for eligibility via the pretreatment assessment (n = 95); excluded (n = 16: 13 scored below 14 on the BDI-II, 3 exhibited active suicidality, mania, or psychosis); randomized (N = 79). Allocation: allocated to TAU (n = 44; 40 completed at least two sessions, 32 completed all five sessions); allocated to TAU + CFF (n = 35; 29 completed at least two sessions, 24 completed all five sessions). Analysis: analyzed (TAU n = 44, TAU + CFF n = 35); excluded from analysis (n = 0 in each condition).
range or higher on the BDI-II (i.e., score <14). These 13
participants
were immediately deemed ineligible (and no additional data
were
collected) to maintain the integrity of a symptomatic sample.
There-
fore, to be eligible for the RCT, participants had to score in the
mild
range or higher on the BDI-II at both the screening assessment
and
pretreatment assessment (time between these assessments
ranged
from 3 days to 8 weeks). Participants were also excluded for
exhib-
iting or reporting active suicidality, mania, or psychosis (n =
3). All
excluded participants were referred to local mental health
providers.
The remaining 79 participants were randomly assigned to either
TAU
(n = 44) or TAU + CFF (n = 35); the first author determined
condition assignment using a table of random numbers.
Therapists were crossed with treatment condition (to balance
therapist skill across conditions; see Heppner, Wampold, Owen,
Thompson, & Wang, 2016). Therapists were assigned clients
based on mutual availability, but they did not know which
condi-
tion a client was assigned to until after the pretreatment assess-
ment. Therapists’ beliefs about the effectiveness of each
treatment
condition were assessed after they were trained to use the CFF
system; on a 5-point Likert scale (1 = ineffective, 5 = highly effective), therapists reported a mean rating of 3.83 (SD = 0.75) for TAU and a mean rating of 4.50 (SD = 0.55) for TAU + CFF.
Therapists participated in weekly, group supervision to discuss
individual cases and to maintain adherence to the CFF system.
Supervision was provided by a licensed clinical psychologist
with
over 25 years of clinical experience.
Both TAU and TAU + CFF entailed five 50-min individual
treatment sessions delivered once per week. The treatments
were
limited to five sessions because of department restrictions on
the
use of the subject pool. Five-session treatments have been
shown
to be effective in past research (e.g., McClintock, Anderson, &
Cranston, 2015). To increase external validity, therapists
selected
from a range of evidence-based treatment approaches based on
their
theoretical orientation, case conceptualization, and supervisor
input.
The following treatment approaches were used in TAU:
cognitive–
behavioral (50%), emotion-focused (22%),
mindfulness/acceptance-
based (13%), client-centered (13%), and interpersonal (3%).
The
following treatment approaches were used in TAU + CFF:
cognitive–
behavioral (46%), emotion-focused (31%),
mindfulness/acceptance-
based (12%), client-centered (8%), and interpersonal (4%). As
previ-
ously noted, the OEQ, BDI-II, and SOS-10 were administered at
pretreatment (i.e., before the first session). The OEQ, BLRI-E,
and
WAI-SR were administered to clients after the first session. The
OEQ,
BLRI-E, WAI-SR, BDI-II, and SOS-10 were administered to clients
clients
after sessions two through five.
In total, clients attended a mean of 4.13 sessions (SD = 1.48, range = 1–5 sessions). Clients dropped from the study for a variety of reasons (e.g., study too cumbersome, no longer interested in therapy, etc.); 69 (40 in TAU and 29 in TAU + CFF) completed at least two sessions, and 56 (32 in TAU and 24 in TAU + CFF)
completed all five sessions. Clients who completed all treatment
sessions were compensated with $10 and five course credits
(par-
tial credit was awarded for partial participation).
CFF System. The CFF system was a novel procedure devel-
oped for the current research. The CFF system monitors client
ratings of three common factors (i.e., outcome expectations,
em-
pathy, and alliance) and provides feedback on this information
to
clients and therapists in order to facilitate an open discussion
about
the therapeutic process and to help therapists to make
adjustments
when process is suboptimal.
In Session 1 or the beginning of Session 2 in TAU + CFF,
therapists described the three common factors (outcome
expecta-
tions, empathy, and alliance) and provided a jargon-free
rationale
for using the CFF system (e.g., “Each of these components is
strongly related to treatment success, and so by maximizing
these
components in our treatment, we might be able to maximize
your
improvement as well”). Clients were told that their ratings of
these
factors would be reviewed and discussed in session.
As mentioned, clients completed the OEQ, BLRI-E, and
WAI-SR after each session. For TAU + CFF clients, these ratings
ratings
were entered into an Excel spreadsheet (Microsoft, 2013) that
visually depicted the client’s ratings in a line graph relative to
percentile-based tracks. The tracks, derived from normative data
(Anderson, Patterson, McClintock, & Song, 2013; Barrett-
Lennard, 2015; Constantino et al., 2016), are color-coded green
(highest 33% of scores in normative data), yellow (middle 33%
of
scores), and red (lowest 33% of scores). High, middle, and low
tracks were created for each of the following variables: outcome
expectations (i.e., OEQ scores), empathy (i.e., BLRI-E scores),
alliance (i.e., WAI-SR-Total scores), goals facet of the alliance
(i.e., WAI-SR-Goals scores), tasks facet of the alliance (WAI-
SR-
Tasks scores), and the bond facet of the alliance (i.e., WAI-SR-
Bond scores). In this way, clients and therapists could view the
client’s common factors ratings over time (i.e., within-client
change) as well as relative to normative data (i.e., between
clients).
A screen shot of the Excel output is presented in Figure 2,
showing
a client’s alliance scores by the beginning of the fifth session.
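As an illustration of the track logic described above, the brief Python sketch below assigns a single session rating to a green, yellow, or red track by comparing it with normative tertiles. The normative scores used here are randomly generated placeholders; the study's actual tracks were derived from the published normative datasets cited above.

```python
# Sketch of the percentile-based track assignment; the normative sample below
# is a randomly generated placeholder, not the study's actual normative data.
import numpy as np

def track_color(score: float, normative_scores: np.ndarray) -> str:
    """Return 'green', 'yellow', or 'red' relative to normative tertiles."""
    lower, upper = np.percentile(normative_scores, [33.3, 66.7])
    if score >= upper:
        return "green"   # highest ~33% of normative scores
    if score >= lower:
        return "yellow"  # middle ~33%
    return "red"         # lowest ~33%

# Example: a hypothetical WAI-SR total of 38 against placeholder norms
norms = np.random.default_rng(0).normal(loc=43.6, scale=7.8, size=500)
print(track_color(38.0, norms))  # likely "red" for these placeholder norms
```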
In addition to the Excel graphs, a common factors enhancement
manual was created that details the general principles
underlying
the CFF system and the specific strategies that can be employed
to
enhance outcome expectations, empathy, and the alliance.1 In
creating this manual, we drew heavily from existing strategies
and
guidelines (e.g., Bohart & Greenberg, 1997; Bruce et al., 2010;
Constantino et al., 2012; Dowell & Berman, 2013; Safran &
Muran, 2000; Safran & Muran, 2006; Swift & Derthick, 2013).
Prior to study initiation, therapists were asked to read the
manual
and to role-play the discussion of client common factors ratings
and the delivery of common factors enhancement strategies. Im-
portantly, while the viewing of client data in the TAU condition
was forbidden, therapists were not forbidden from using the
com-
mon factors enhancement strategies with their TAU clients.
Therapists were instructed to review the outcome expectations,
empathy, and alliance graphs with their TAU + CFF clients at the
the
beginning of Sessions 2–5 and to initiate an exploration of the
client’s perspective, particularly when ratings were suboptimal
(i.e., in the yellow or red tracks). TAU + CFF data discussions
were designed to be collaborative between client and therapist;
clients were invited to share their perceptions of therapeutic
pro-
cesses, and therapists were instructed to validate the client’s
per-
ceptions while employing techniques— adapted to the
individual
needs of the client—to bolster outcome expectations, empathy,
and/or the therapeutic alliance. For example, a therapist could
intervene with a client reporting low WAI-SR scores by
exploring
1 For a copy of the common factors enhancement manual, please contact the first author.
and attempting to repair alliance ruptures (see Safran & Muran,
2000).
CFF system adherence. Near the end of the research project,
therapists were surveyed about their adherence to the CFF
system.
Therapists were first asked how frequently they discussed the
feedback with TAU + CFF clients; on a 5-point Likert scale (1 = never, 5 = always), therapists reported a mean rating of 4.67
(SD = 0.82). Therapists were also asked how much time they spent, in an average TAU + CFF session, discussing the feedback; with the options 0–1 min, 1–5 min, 5–10 min, 10–20 min, and >20 min, three therapists reported 1–5 min, and the other three therapists reported 5–10 min. Finally, therapists were asked about the extent to which the feedback influenced their intervention strategy; on a 5-point Likert scale (1 = not at all, 5 = substantially), therapists reported a mean rating of 3.67 (SD = 0.52).
Plan of Analysis
Correlations (r) were used to investigate associations between
study measures. To assess differences on demographic and pre-
treatment data between conditions, independent samples t tests
and
chi-square tests of independence were used. An independent
sam-
ples t test and a logistic regression were used to evaluate differ-
ences in drop out/number of sessions attended. To assess
clinically
significant change, we identified participants who evidenced a
20% reduction in BDI-II scores and fell in the nonclinical range
on
the BDI-II (i.e., score �13) by the end of treatment (see
Borkovec,
Newman, Pincus, & Lytle, 2002; McClintock et al., 2015;
Roemer,
Orsillo, & Salters-Pedneault, 2008). Satisfaction ratings were
an-
alyzed with descriptive statistics.
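For clarity, the clinically significant change criterion described above can be expressed as a short Python check; this is a sketch of the stated rule, not the study's analysis code.

```python
# Sketch of the clinically significant change rule stated above: at least a
# 20% reduction in BDI-II from pretreatment plus a final score of 13 or lower.
def clinically_significant_change(bdi_pre: float, bdi_end: float) -> bool:
    reduced_by_20_percent = bdi_end <= 0.80 * bdi_pre
    nonclinical_at_end = bdi_end <= 13
    return reduced_by_20_percent and nonclinical_at_end

# Example: a drop from 24 to 12 satisfies both parts of the criterion
print(clinically_significant_change(24, 12))  # True
```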
To model changes in process (i.e., WAI-SR, BLRI-E and OEQ)
and outcome (i.e., BDI-II and SOS-10) measures, a three-level
hierarchical linear model (HLM) was used for each measure
with
sessions nested within clients and clients nested within
therapists.
Thus, within-client variability was modeled at Level 1, the
between-client and within-therapist variability was modeled at
Level 2, and the between-therapist variability was modeled at
Level 3. Time/session variables were entered as Level 1
predictors.
The time/session variables were centered at the first session
(i.e.,
first session coded as 0, second session coded as 1, etc.).
Because
randomization to treatment conditions occurred at the client
level,
treatment condition was entered as a Level 2 predictor.
Treatment
condition was centered at TAU (i.e., TAU coded as 0, TAU +
CFF
coded as 1). Analyses did not include Level 3 predictors.
For each process/outcome measure, an unconditional growth
curve was fitted first to investigate whether scores changed sig-
nificantly over time. These unconditional growth curves only
included time/session variable(s) as predictor(s). If a
time/session
predictor was significant (i.e., significant change in scores over
time), then treatment condition was added as a Level 2 predictor
to
investigate whether the change over time differed between TAU
and TAU � CFF.
To account for the different shapes that the growth curve might
take, four different unconditional growth curves were fitted to
the
data, and the best model was obtained by comparing the
informa-
tion criteria (i.e., Akaike Information Criteria [AIC] and
Bayesian
Information Criteria [BIC]). The four unconditional growth
curves
were as follows: (a) a linear unconditional growth curve (i.e., a
model with only a linear term of session number included as the
Level 1 predictor) to assess the possibility that scores decrease
or
increase at a constant rate over time; (b) a log unconditional
growth curve (i.e., a model with only a log of session number
included as the Level 1 predictor) to assess the possibility that
scores decrease or increase at a faster rate during the early ses-
sions, then decrease or increase at a slower rate during the later
sessions; (c) a quadratic unconditional growth curve (i.e., a
model
with linear and quadratic terms of session number as the Level 1
predictors) to assess the possibility that scores first decrease
over
time then increase or first increase then decrease; and (d) a cubic unconditional growth curve (i.e., a model with linear,
quadratic, and cubic terms of session number as the Level 1
predictors) to assess the possibility that scores decrease first
over
Figure 2. Example of feedback graph. See the online article for
the color version of this figure.
T
hi
s
do
cu
m
en
t
is
co
py
ri
gh
te
d
by
th
e
A
m
er
ic
an
P
sy
ch
ol
og
ic
al
A
ss
oc
ia
ti
on
or
on
e
of
it
s
al
li
ed
pu
bl
is
he
rs
.
T
hi
s
ar
ti
cl
e
is
in
te
nd
ed
so
le
ly
fo
r
th
e
pe
rs
on
al
us
e
of
th
e
in
di
vi
du
al
us
er
an
d
is
no
t
to
be
di
ss
em
in
at
ed
br
oa
dl
y.
252 MCCLINTOCK ET AL.
time, then increase before decreasing again or increase first,
then
decrease before increasing again.
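To illustrate how these four candidate shapes differ, the sketch below codes the session variable under each shape (with the first session coded 0, as above) and compares hypothetical fitted models by AIC and BIC. The log-likelihoods, parameter counts, and sample size are placeholders standing in for the output of the multilevel-modeling software, and the natural log of the raw session number is an assumed form of the log coding.

```python
# Sketch of the four time codings and of an AIC/BIC comparison. The fit
# statistics below are placeholders, not estimates from the present study.
import math

def time_terms(session_number: int) -> dict:
    """Return the Level 1 time predictors implied by each candidate shape."""
    s = session_number - 1                   # first session coded 0
    return {
        "linear": (s,),
        "log": (math.log(session_number),),  # assumed form of the log coding
        "quadratic": (s, s ** 2),
        "cubic": (s, s ** 2, s ** 3),
    }

def aic(loglik: float, k: int) -> float:
    return 2 * k - 2 * loglik

def bic(loglik: float, k: int, n: int) -> float:
    return k * math.log(n) - 2 * loglik

# Placeholder results: {shape: (log-likelihood, number of estimated parameters)}
fits = {"linear": (-900.0, 6), "log": (-893.0, 6),
        "quadratic": (-892.0, 8), "cubic": (-891.5, 10)}
n_observations = 300  # placeholder
best_by_aic = min(fits, key=lambda shape: aic(*fits[shape]))
best_by_bic = min(fits, key=lambda shape: bic(*fits[shape], n_observations))
print(best_by_aic, best_by_bic)  # the best-fitting shape would be retained
```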
For illustration purposes, the model fitted for the linear unconditional growth curve is provided below:
Level 1:
(Measure)tij = π0ij + π1ij(Session)tij + etij
Level 2:
π0ij = β00j + r0ij
π1ij = β10j + r1ij
Level 3:
β00j = γ000 + u00j
β10j = γ100 + u10j
The complete model:
(Measure)tij = γ000 + γ100(Session)tij + [u00j + u10j(Session)tij + r0ij + r1ij(Session)tij + etij]
In the previous model, (Measure)tij is the process/outcome measure (i.e., BDI-II, SOS-10, WAI-SR, BLRI-E, or OEQ) at time t for client i seeing therapist j; because Session was centered at the first session (i.e., immediately before the first session for BDI-II, SOS-10, and OEQ and immediately after the first session for BLRI-E and WAI-SR), γ000 is the average of the scores at the first session; and γ100 is the rate of change of the scores over one unit of time (i.e., session). A significant γ000 means that the average of the scores at the first session is significantly different than zero. A significant γ100 means that the scores change significantly over time (i.e., the rate of change of the scores is significantly different than zero). The parameters inside the brackets are the random effects: etij is the session variability within a client; r0ij and r1ij are client variability within a therapist around γ000 and γ100, respectively; and u00j and u10j are therapist variability around γ000 and γ100, respectively. In the beginning of the model fitting, γ000 and γ100 were treated as random effects at both Levels 2 and 3. However, when there was an indication that the model was overspecified, these random effects were dropped one by one, starting from the highest level, until the model fit properly.
Also for illustration purposes, the linear model fitted with treatment condition (TC) as a Level 2 predictor is provided here:
Level 1:
(Measure)tij = π0ij + π1ij(Session)tij + etij
Level 2:
π0ij = β00j + β01j(TC)ij + r0ij
π1ij = β10j + β11j(TC)ij + r1ij
Level 3:
β00j = γ000 + u00j
β01j = γ010
β10j = γ100 + u10j
β11j = γ110
The complete model:
(Measure)tij = γ000 + γ010(TC)ij + γ100(Session)tij + γ110(TC)ij(Session)tij + [u00j + u10j(Session)tij + r0ij + r1ij(Session)tij + etij]
In the previous model, (Measure)tij is the process/outcome measure at time t for client i seeing therapist j; because Session was centered at the first session and Treatment Condition was centered at TAU, γ000 is the average of the TAU scores at the first session; γ010 is the effect of TAU + CFF on γ000; γ100 is the rate of change of TAU scores over one unit of time (i.e., session); and γ110 is the effect of TAU + CFF on γ100. A significant γ000 means that the average of the TAU scores at the first session is significantly different than zero; a significant γ010 means that the average of TAU + CFF scores at the first session is significantly different than that of TAU; a significant γ100 means that the rate of change of TAU scores is significantly different than zero; and a significant γ110 means that the rate of change of TAU + CFF scores is significantly different than that of TAU. The parameters inside the brackets are the random effects as described previously. In the beginning of the model fitting, γ000 and γ100 were also treated as random effects at both Levels 2 and 3. However, when there was an indication that this model was overspecified, these random effects were dropped one by one, starting from the highest level, until the model fit properly.
Because the main goal of the study was to investigate whether there was a significant difference in the rate of change of the process/outcome scores over time between clients in the TAU and TAU + CFF conditions, the focus of the study was γ110. Because clients were randomized into the two conditions, we did not expect that the two conditions would significantly differ in average scores at the first session (i.e., γ010).
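To make the role of γ110 concrete, the following sketch computes the fixed-effects (population-average) prediction from the conditional model. The coefficient values plugged in are the WAI-SR estimates reported in Table 3; treating the log time coding as the natural log of the session number is an assumption for illustration only.

```python
# Sketch of the fixed-effects part of the conditional model: gamma_110 is the
# additional rate of change per unit of time for TAU + CFF (TC = 1) over TAU.
import math

def expected_score(time_value: float, tc: int,
                   g000: float, g010: float, g100: float, g110: float) -> float:
    """Population-average prediction; random effects average to zero."""
    return g000 + g010 * tc + (g100 + g110 * tc) * time_value

# WAI-SR fixed effects from Table 3; log time coding assumed to be ln(session)
wai = dict(g000=43.05, g010=0.98, g100=4.63, g110=2.61)
for tc, label in [(0, "TAU"), (1, "TAU + CFF")]:
    fifth_session = expected_score(math.log(5), tc, **wai)
    print(label, round(fifth_session, 1))  # roughly 50.5 (TAU) vs. 55.7 (TAU + CFF)
```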
Results
Preliminary Analyses
Data were evaluated and found to be within normal limits in regard to outliers and degree of normality. Correlations (r) between study measures at the first session are presented in Table 1. Correlation size ranged from trivial (e.g., correlation between BLRI-E and SOS-10) to large (e.g., correlation between BLRI-E and WAI-SR), although even the large correlations were not so large as to suggest measure redundancy. At pretreatment, TAU participants and TAU + CFF participants did not significantly differ (p > .05) on any of the demographic and pretreatment data, implying that randomization was successful. An independent samples t test showed that TAU participants and TAU + CFF participants did not significantly differ (p > .05) on the number of sessions attended. Similarly, a logistic regression showed that TAU and TAU + CFF did not significantly differ (p > .05) in the number of participants who dropped out (i.e., did not complete all five sessions). Of the 79 enrolled participants (TAU n = 44, TAU + CFF n = 35), 45.6% (TAU n = 21, TAU + CFF n = 15) achieved clinically significant change (i.e., evidenced a 20% reduction in BDI-II scores and fell in the nonclinical range on the BDI-II by the end of treatment).
Client Satisfaction Ratings
At the end of treatment, clients in TAU + CFF were asked about the degree to which they enjoyed the completion of measures at the end of each session; clients reported a mean rating of 5.15 (SD = 1.35) on a 7-point Likert scale (1 = not at all, 7 = very much). Clients in TAU + CFF were also asked about the degree to which the feedback reports helped improve treatment; clients reported a mean
rating of 5.63 (SD = 1.15) on a 7-point Likert scale (1 = not at all, 7 = very much).
Therapist Satisfaction Ratings
At the end of the research project, therapists were asked about their level of satisfaction with the CFF system; therapists reported a mean rating of 4.17 (SD = 0.75) on a 5-point Likert scale (1 = dissatisfied, 5 = completely satisfied). Therapists were also asked about the degree to which they found the CFF system to be useful; therapists reported a mean rating of 4.00 (SD = 0.89) on a 5-point Likert scale (1 = not useful, 5 = very useful).
Between-Group Effects on Process and Outcome
A three-level HLM was fitted for each process and outcome
variable. Comparison of AIC and BIC showed that BDI-II, SOS-
10, and OEQ were best represented with linear unconditional
growth curves, while WAI-SR and BLRI-E were best
represented
with log unconditional growth curves. The final unconditional
growth curves were:
(1) BDI-II
(BDI-II)tij = γ000 + γ100(Session)tij + [u00j + u10j(Session)tij + r0ij + r1ij(Session)tij + etij]
(2) SOS-10
(SOS-10)tij = γ000 + γ100(Session)tij + [u10j(Session)tij + r0ij + r1ij(Session)tij + etij]
(3) WAI-SR
(WAI-SR)tij = γ000 + γ100(LogSession)tij + [r0ij + r1ij(LogSession)tij + etij]
(4) BLRI-E
(BLRI-E)tij = γ000 + γ100(LogSession)tij + [u10j(LogSession)tij + r0ij + r1ij(LogSession)tij + etij]
(5) OEQ
(OEQ)tij = γ000 + γ100(Session)tij + [r0ij + r1ij(Session)tij + etij]
Results indicated that the average of the scores at the first session (i.e., γ000) and the rate of change over time/session (i.e., γ100) were significantly different than zero. As expected, BDI-II scores decreased over time, while SOS-10, WAI-SR, BLRI-E, and OEQ scores increased over time. Over one unit of time/session, BDI-II scores decreased by 2.74 points, SOS-10 scores increased by 2.42 points, WAI-SR scores increased by 5.75 points, BLRI-E scores increased by 0.32 points, and OEQ scores increased by 2.21 points. A summary of the unconditional growth curve results is presented in Table 2.
In the next set of analyses, treatment condition was entered as a Level 2 predictor. The final conditional growth curves were:
(1) BDI-II
(BDI-II)tij = γ000 + γ010(TC)ij + γ100(Session)tij + γ110(TC)ij(Session)tij + [u00j + u10j(Session)tij + r0ij + r1ij(Session)tij + etij]
(2) SOS-10
(SOS-10)tij = γ000 + γ010(TC)ij + γ100(Session)tij + γ110(TC)ij(Session)tij + [u10j(Session)tij + r0ij + r1ij(Session)tij + etij]
(3) WAI-SR
(WAI-SR)tij = γ000 + γ010(TC)ij + γ100(LogSession)tij + γ110(TC)ij(LogSession)tij + [u10j(LogSession)tij + r0ij + r1ij(LogSession)tij + etij]
(4) BLRI-E
(BLRI-E)tij = γ000 + γ010(TC)ij + γ100(LogSession)tij + γ110(TC)ij(LogSession)tij + [u10j(LogSession)tij + r0ij + r1ij(LogSession)tij + etij]
(5) OEQ
(OEQ)tij = γ000 + γ010(TC)ij + γ100(Session)tij + γ110(TC)ij(Session)tij + [r0ij + r1ij(Session)tij + etij]
Results indicated that for each process/outcome measure, the average of TAU scores at the first session (i.e., γ000) was significantly different than zero. For each process/outcome measure, the effect of the TAU + CFF condition on γ000 (i.e., γ010) was not significant.
Table 1
Means (SDs) and Correlations for Study Measures at First Session (N = 79)

Study measure   M (SD)          BDI-II   SOS-10    WAI-SR   BLRI-E    OEQ
BDI-II          23.68 (8.21)             −.65***   −.17      .22      .03
SOS-10          31.26 (8.32)                        .19      −.02     .08
WAI-SR          43.60 (7.83)                                  .63***   .48***
BLRI-E           1.61 (.50)                                            .43***
OEQ             39.42 (11.23)

Note. BDI-II = Beck Depression Inventory-II (before first session); SOS-10 = Schwartz Outcome Scale-10 (before first session); WAI-SR = Working Alliance Inventory-Short Form Revised (after first session); BLRI-E = Barrett-Lennard Relationship Inventory-Empathy Scale (after first session); OEQ = Outcome Expectations Questionnaire (after first session).
*** p < .001.
This implies that, as would be expected given randomization, TAU and TAU + CFF did not significantly differ in process/outcome scores at the first session.
Results also indicated that for each process/outcome measure, the rate of change of TAU scores over time/session (i.e., γ100) was significantly different than zero; directions of change were as expected (i.e., BDI-II scores decreased and SOS-10, WAI-SR, BLRI-E, and OEQ scores increased). Over one unit of time/session in the TAU condition, BDI-II scores decreased by 2.46 points, SOS-10 scores increased by 2.18 points, WAI-SR scores increased by 4.63 points, BLRI-E scores increased by 0.22 points, and OEQ scores increased by 2.01 points.
The effect of the TAU + CFF condition on γ100 (i.e., γ110) was significant for WAI-SR and BLRI-E. That is, while TAU and TAU + CFF did not significantly differ in the rates of change of BDI-II, SOS-10, and OEQ scores, the two conditions significantly differed in the rates of change of WAI-SR and BLRI-E scores. Specifically, participants in TAU + CFF reported greater increases in WAI-SR and BLRI-E scores relative to TAU participants (over one unit of time/session, WAI-SR scores increased by 2.61 points more in TAU + CFF relative to TAU, and BLRI-E scores increased by 0.20 points more in TAU + CFF relative to TAU). A summary of the conditional growth curve results is presented in Table 3. Figures 3 and 4 depict mean BLRI-E and WAI-SR scores at each session for TAU and TAU + CFF.
We calculated the proportions of variance explained in the Level 1 coefficients (i.e., π0ij and π1ij) by treatment condition, above and beyond the time/session variable. Treatment condition accounted for 0.59%, 0.47%, 0.49%, 1.01%, and 0.36% of the variance in π0ij for BDI-II, SOS-10, WAI-SR, BLRI-E, and OEQ scores, respectively, and accounted for 2.77%, 0.81%, 9.01%, 13.67%, and 0.70% of the variance in π1ij for BDI-II, SOS-10, WAI-SR, BLRI-E, and OEQ scores, respectively.
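A standard way to obtain such percentages is the proportional reduction in the corresponding variance component when treatment condition is added to the model. The sketch below reproduces the WAI-SR slope figure from the client-level slope variances in Tables 2 and 3, on the assumption that this is the formula the reported percentages reflect.

```python
# Sketch of a proportional-reduction-in-variance (pseudo R-squared) computation,
# assumed to underlie the percentages above rather than taken from the authors' code.
def variance_explained(var_unconditional: float, var_conditional: float) -> float:
    return (var_unconditional - var_conditional) / var_unconditional

# WAI-SR client-level slope variance: 16.90 (Table 2) vs. 15.38 (Table 3)
print(round(100 * variance_explained(16.90, 15.38), 1))  # ~9.0, matching ~9.01%
```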
Discussion
The present research marks the first attempt to develop and
evaluate a common factors feedback (CFF) intervention. Re-
sults suggest that our CFF system holds promise. Clients and
therapists reported satisfaction with the CFF system and en-
dorsed its utility. Multilevel modeling showed that, while there
were no between-groups effects on client ratings of outcome
expectations, depressive symptoms, and psychological well-
being, treatment condition had a medium-to-large sized effect
(see Lambert, 2013) on empathy (accounted for about 13.7% of
variability) and alliance ratings (accounted for about 9.0% of
the variability). Specifically, clients who received treatment
with CFF reported greater increases in perceived empathy and
alliance over the course of treatment relative to clients who
received treatment as usual. These results imply that our brief
feedback intervention, which on average took less than 10 min
to implement per session, may have enhanced the process of
psychotherapy.
Although we did not assess how the CFF system produced
these results, we can speculate about potential mechanisms.
Outcome feedback systems, on which our CFF system is based,
are effective in part because they help to identify patients who
are at risk for treatment failure. Research shows that, unaided,
therapists are relatively poor in identifying off-track patients
Table 2
Estimates of the Unconditional Growth Curves Parameters With SEs

Fixed effects coefficients (SE) and random effects variance components (SE) for each measure:

BDI-II: γ000 = 23.49*** (1.041); γ100 = −2.74*** (.392); Var(u00j) = .67 (4.654); Var(u10j) = .43 (.463); Cov(u00j, u10j) = −.53 (1.035); Var(r0ij) = 66.11*** (12.798); Var(r1ij) = 3.51*** (.943); Cov(r0ij, r1ij) = −3.94 (2.586); etij = 13.65*** (1.443)
SOS-10: γ000 = 30.95*** (.921); γ100 = 2.42** (.361); Var(u10j) = .17 (.351); Var(r0ij) = 57.74*** (10.671); Var(r1ij) = 5.00*** (1.195); Cov(r0ij, r1ij) = −4.50 (2.643); etij = 13.14*** (1.395)
WAI-SR: γ000 = 43.48*** (.883); γ100 = 5.75*** (.607); Var(r0ij) = 51.14*** (9.827); Var(r1ij) = 16.90*** (4.582); Cov(r0ij, r1ij) = −11.48* (5.408); etij = 12.03*** (1.276)
BLRI-E: γ000 = 1.60*** (.056); γ100 = .32** (.066); Var(u10j) = .01 (.015); Var(r0ij) = .18*** (.041); Var(r1ij) = .07** (.024); Cov(r0ij, r1ij) = −.02 (.024); etij = .07*** (.008)
OEQ: γ000 = 36.06*** (1.253); γ100 = 2.21*** (.326); Var(r0ij) = 113.16*** (19.744); Var(r1ij) = 6.19*** (1.331); Cov(r0ij, r1ij) = −11.15** (3.975); etij = 19.10*** (1.691)

Note. BDI-II = Beck Depression Inventory-II; SOS-10 = Schwartz Outcome Scale-10; WAI-SR = Working Alliance Inventory-Short Form Revised; BLRI-E = Barrett-Lennard Relationship Inventory-Empathy Scale; OEQ = Outcome Expectations Questionnaire. For interpretation of γ000, γ100, u00j, u10j, r0ij, r1ij, and etij, please see the “Plan of Analysis” section.
** p < .01. *** p < .001.
(Hannan et al., 2005), and so any tool that detects these patients
serves a critical need. This same concept may translate to
process-based feedback systems. Specifically, given that
therapist-rated process is relatively weakly correlated with both
client-rated and observer-rated process (Cecero et al., 2001;
Greenberg et al., 2001), it stands to reason that poor process is
frequently missed. Our CFF system might thus be useful be-
cause it helps therapists perform an otherwise challenging
task—the identification of poor process.
Once poor process is recognized, therapists are presumably in
a better position to tailor their behavior to the specific needs of
the client (see therapist responsiveness; Stiles, Honos-Webb, &
Surko, 1998). For example, therapists could respond to low
empathy ratings by not only exploring areas of misunderstand-
ing but by also increasing reflective listening and validation.
This adaptation of behavior to match the client’s needs would
likely improve process and the client’s perceptions of that
process.
Although this discussion focuses on therapist behavior, it is
critical that we not lose sight of the important role that clients
can play in process development. Flückiger and colleagues
(2012) showed that clients who are explicitly encouraged to be
proactive participants in their treatment tend to report greater
improvements in the alliance relative to clients who do not
receive this encouragement. Thus, it could be that CFF facili-
tates increased client agency and engagement in treatment,
which in turn improves process (see also Ryan & Deci, 2008;
Zuroff et al., 2007). Examination of these and other mecha-
nisms should be a high priority for future research.
Our finding that CFF influenced empathy and alliance but not
treatment outcome ratings may, at first blush, appear inconsis-
tent with common factors theory. That is, if common factors are
therapeutic, then an improvement in the common factors should
coincide with an improvement in treatment outcomes. A num-
ber of factors could explain our nonsignificant effects on treat-
ment outcome (as well as on outcome expectations). First, our
study was underpowered for the purpose of detecting small
between-groups effects. A second explanation is that our short
treatment length (i.e., five sessions) may have precluded some
between-groups effects. Indeed, the benefits of outcome feed-
back are relatively minimal over the first few sessions (De Jong
et al., 2014). Not only could treatment length be an issue, but
the amount of time devoted to feedback discussion may have
been insufficient; feedback was discussed, on average, less than
10 min per session, and so a greater focus on feedback may be
needed to maximize its effects. Alternatively, the more time
spent on process leaves less time for the client’s presenting
issue, and so an increased focus on process may unwittingly
attenuate treatment outcome effects. Yet another explanation
for our nonsignificant effects pertains to our analog sample; by
enrolling undergraduates with only moderate depressive symp-
toms (M BDI-II score = 23.68), floor/ceiling effects could have
contributed to our nonsignificant findings. It could also be the
case that feedback effects are specific to the variables that are
monitored; outcome feedback may primarily affect outcome,
and process feedback may primarily affect process. Future
research is needed to determine whether our nonsignificant
findings reflect Type II errors or reflect inherent limitations of
the feedback system.
Table 3
Estimates of the Conditional Growth Curves Parameters With SEs

Fixed effects coefficients (SE) and random effects variance components (SE) for each measure:

BDI-II: γ000 = 23.08*** (1.360); γ010 = .91 (1.981); γ100 = −2.46** (.480); γ110 = −.60 (.567); Var(u00j) = .81 (4.702); Var(u10j) = .48 (.496); Cov(u00j, u10j) = −.64 (1.083); Var(r0ij) = 65.73*** (12.744); Var(r1ij) = 3.41*** (.927); Cov(r0ij, r1ij) = −3.67 (2.558); etij = 13.63*** (1.441)
SOS-10: γ000 = 30.55*** (1.227); γ010 = .91 (1.853); γ100 = 2.18** (.475); γ110 = .50 (.638); Var(u10j) = .25 (.409); Var(r0ij) = 57.47*** (10.617); Var(r1ij) = 4.96*** (1.186); Cov(r0ij, r1ij) = −4.69 (2.651); etij = 13.11*** (1.391)
WAI-SR: γ000 = 43.05*** (1.186); γ010 = .98 (1.772); γ100 = 4.63*** (.782); γ110 = 2.61* (1.183); Var(r0ij) = 50.89*** (9.776); Var(r1ij) = 15.38*** (4.282); Cov(r0ij, r1ij) = −11.88* (5.263); etij = 11.97*** (1.265)
BLRI-E: γ000 = 1.58*** (.075); γ010 = .03 (.112); γ100 = .22* (.084); γ110 = .20* (.087); Var(u10j) = .02 (.018); Var(r0ij) = .18*** (.040); Var(r1ij) = .06** (.022); Cov(r0ij, r1ij) = −.02 (.023); etij = .07*** (.008)
OEQ: γ000 = 35.56*** (1.676); γ010 = 1.13 (2.518); γ100 = 2.01*** (.435); γ110 = .45 (.655); Var(r0ij) = 112.75*** (19.675); Var(r1ij) = 6.15*** (1.322); Cov(r0ij, r1ij) = −11.20** (3.960); etij = 19.08*** (1.688)

Note. BDI-II = Beck Depression Inventory-II; SOS-10 = Schwartz Outcome Scale-10; WAI-SR = Working Alliance Inventory-Short Form Revised; BLRI-E = Barrett-Lennard Relationship Inventory-Empathy Scale; OEQ = Outcome Expectations Questionnaire. For interpretation of γ000, γ010, γ100, γ110, u00j, u10j, r0ij, r1ij, and etij, please see the “Plan of Analysis” section.
* p < .05. ** p < .01. *** p < .001.
There are additional points to be made regarding this study’s
limitations. Although the array of implemented evidence-based
therapies increases the generalizability of findings, generaliz-
ability is limited by the short therapy duration, participant
compensation, and use of a mostly White and female analog
sample. A different pattern of results could have emerged, for
instance, with clients who are more difficult to engage in
therapy (e.g., clients with personality disorders, chronic depres-
sion, etc.) and who require longer-term treatment (see De Jong
et al., 2014). It is also worth noting that study therapists were
involved in the design of the CFF system and, as such, may
have had a strong allegiance (see De Jong, van Sluis, Nugter,
Heiser, & Spinhoven, 2012; Falkenström, Markowitz, Jonker,
Philips, & Holmqvist, 2013) to the TAU + CFF condition.
Demand characteristics and social desirability bias may have
influenced the present results as well; TAU + CFF clients knew
their data would be reviewed and discussed with their therapist,
and so they may have overreported process quality. This con-
cern is somewhat mitigated, however, by Reese et al.’s (2013)
finding that alliance scores are not influenced by the presence
of a therapist or the knowledge that one’s scores would be
reviewed by the therapist. Nevertheless, in light of these short-
comings, we recommend that future investigations (a) enroll
demographically diverse, treatment-seeking participants; (b)
employ longer treatments; and (c) evaluate the CFF system
relative to an outcome-based feedback system.
The CFF system developed in the present study represents a
novel synthesis of outcome feedback systems and the common
factors literature. Our CFF system monitors three common
factors (i.e., outcome expectations, empathy, and alliance) over
the course of therapy, visually presents these ratings—relative
to normative data—to clients and therapists, and provides use-
ful, empirically based strategies for improving suboptimal pro-
cess. This approach has a number of strengths. First, since
identifying poor process is the first step in repairing process,
the
CFF system fills a vital role in providing both a signal for
off-track process and a context for collaboratively addressing
concerns. Second, the CFF system yields targeted, actionable
information that has direct implications for treatment planning.
For example, low ratings on the tasks component of the alliance
can be readily addressed by exploring discrepancies between
the implemented techniques and the client’s perceptions about
which techniques should be implemented in therapy. A third
strength of the CFF system is that it focuses on factors common
across treatments and thus could be useful in a wide range of
settings and contexts. Finally, whereas outcome feedback is
often met with fear and mistrust (Boswell, Kraus, Miller, &
Lambert, 2015), feedback about what is simply transpiring in
therapy might be more palatable for therapists and thus has the
potential to be widely implemented. We are hopeful that our
CFF system will advance outcome feedback and common factors literatures and will improve the care provided to psychotherapy clients.

Figure 3. Mean empathy scores over time for TAU and TAU + CFF. Note. BLRI-E = Barrett-Lennard Relationship Inventory-Empathy Scale; TAU = treatment as usual; TAU + CFF = treatment as usual plus common factors feedback. See the online article for the color version of this figure.
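To make the flagging step described above concrete, the following is a hypothetical sketch of the kind of logic a common factors feedback tool could use to surface off-track process. It is not the authors' implementation; the cutoff values and names are invented placeholders.

    # Hypothetical sketch: compare a session's common factors ratings with
    # normative cutoffs and flag any factor that falls below its cutoff.
    # The cutoffs here are placeholders, not the norms used in the study.
    NORMATIVE_CUTOFFS = {
        "outcome_expectations": 30.0,  # placeholder
        "empathy": 4.0,                # placeholder
        "alliance": 40.0,              # placeholder
    }

    def flag_off_track(session_ratings):
        """Return the common factors whose ratings fall below their cutoff."""
        return [
            factor
            for factor, cutoff in NORMATIVE_CUTOFFS.items()
            if session_ratings.get(factor, float("inf")) < cutoff
        ]

    # Example: the alliance rating falls below its cutoff, so it would be
    # surfaced for discussion at the start of the next session.
    print(flag_off_track({"outcome_expectations": 34, "empathy": 4.5, "alliance": 36}))
    # -> ['alliance']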
References
Anderson, T., Patterson, C. L., McClintock, A. S., & Song, X.
(2013).
Factorial and predictive validity of the expectations about
counseling-
brief (EAC-B) with clients seeking counseling. Journal of
Counseling
Psychology, 60, 496 –507. http://dx.doi.org/10.1037/a0034222
Barrett-Lennard, G. T. (1981). The empathy cycle: Refinement
of a nuclear
concept. Journal of Counseling Psychology, 28, 91–100.
http://dx.doi
.org/10.1037/0022-0167.28.2.91
Barrett-Lennard, G. T. (2015). The Relationship Inventory: A
complete
resource and guide. West Sussex, United Kingdom: Wiley.
Beck, A. T., Steer, R. A., & Brown, G. K. (1996). Beck
Depression
Inventory (2nd ed.). San Antonio, TX: The Psychological
Corporation.
Blais, M. A., Lenderking, W. R., Baer, L., deLorell, A., Peets,
K., Leahy,
L., & Burns, C. (1999). Development and initial validation of a
brief
mental health outcome measure. Journal of Personality
Assessment, 73,
359 –373. http://dx.doi.org/10.1207/S15327752JPA7303_5
Bohart, A. C., & Greenberg, L. S. (1997). Empathy
reconsidered: New
directions in psychotherapy. Washington, DC: American
Psychological
Association. http://dx.doi.org/10.1037/10226-000
Bordin, E. S. (1979). The generalizability of the psychoanalytic
concept of
the working alliance. Psychotherapy: Theory, Research, &
Practice, 16,
252–260. http://dx.doi.org/10.1037/h0085885
Borkovec, T. D., Newman, M. G., Pincus, A. L., & Lytle, R.
(2002). A
component analysis of cognitive-behavioral therapy for
generalized anx-
iety disorder and the role of interpersonal problems. Journal of
Consult-
ing and Clinical Psychology, 70, 288 –298.
http://dx.doi.org/10.1037/
0022-006X.70.2.288
Boswell, J. F., Kraus, D. R., Miller, S. D., & Lambert, M. J.
(2015).
Implementing routine outcome monitoring in clinical practice:
Benefits,
challenges, and solutions. Psychotherapy Research, 25, 6 –19.
http://dx
.doi.org/10.1080/10503307.2013.817696
Bruce, N., Shapiro, S. L., Constantino, M. J., & Manber, R.
(2010).
Psychotherapist mindfulness and the psychotherapy process.
Psycho-
therapy: Theory, Research, Practice, Training, 47, 83–97.
http://dx.doi
.org/10.1037/a0018842
Cecero, J. J., Fenton, L. R., Frankforter, T. L., Nich, C., &
Caroll, K. M.
(2001). Focus on therapeutic alliance: The psychometric
properties of
six measures across three instruments. Psychotherapy: Theory,
Research, Practice, Training, 38, 1–11.
http://dx.doi.org/10.1037/0033-
3204.38.1.1
Connolly Gibbons, M. B., Kurtz, J. E., Thompson, D. L., Mack,
R. A., Lee,
J. K., Rothbard, A., . . . Crits-Christoph, P. (2015). The
effectiveness of
clinician feedback in the treatment of depression in the
community
mental health system. Journal of Consulting and Clinical
Psychology,
83, 748 –759. http://dx.doi.org/10.1037/a0039302
Constantino, M. J., Ametrano, R. M., & Greenberg, R. P.
(2012). Clinician
interventions and participant characteristics that foster adaptive
patient
expectations for psychotherapy and psychotherapeutic change.
Psycho-
therapy, 49, 557–569. http://dx.doi.org/10.1037/a0029440
Constantino, M. J., Glass, C. R., Arnkoff, D. B., Ametrano, R.
M., &
Smith, J. Z. (2011). Expectations. In J. Norcross (Ed.),
Psychotherapy
relationships that work: Evidence-based responsiveness (2nd
ed., pp.
181–192). New York, NY: Oxford University Press.
http://dx.doi.org/
10.1093/acprof:oso/9780199737208.003.0018
Figure 4. Mean alliance scores over time for TAU and TAU + CFF. Note. WAI-SR = Working Alliance Inventory-Short Form Revised; TAU = treatment as usual; TAU + CFF = treatment as usual plus common factors feedback. See the online article for the color version of this figure.
Constantino, M. J., McClintock, A. S., McCarrick, S. M.,
Anderson, T., &
Himawan, L. (2016). Outcome Expectations Questionnaire.
Manuscript
in preparation.
Cuijpers, P., Driessen, E., Hollon, S. D., van Oppen, P., Barth,
J., &
Andersson, G. (2012). The efficacy of non-directive supportive
therapy
for adult depression: A meta-analysis. Clinical Psychology
Review, 32,
280 –291.
De Jong, K., Timman, R., Hakkaart-Van Roijen, L., Vermeulen,
P.,
Kooiman, K., Passchier, J., & Van Busschbach, J. (2014). The
effect of
outcome monitoring feedback to clinicians and patients in short
and
long-term psychotherapy: A randomized controlled trial.
Psychotherapy
Research, 24, 629 – 639.
http://dx.doi.org/10.1080/10503307.2013
.871079
De Jong, K., van Sluis, P., Nugter, M. A., Heiser, W. J., &
Spinhoven, P.
(2012). Understanding the differential impact of outcome
monitoring:
Therapist variables that moderate feedback effects in a
randomized
clinical trial. Psychotherapy Research, 22, 464 – 474.
http://dx.doi.org/
10.1080/10503307.2012.673023
Dowell, N. M., & Berman, J. S. (2013). Therapist nonverbal
behavior and
perceptions of empathy, alliance, and treatment credibility.
Journal of
Psychotherapy Integration, 23, 158 –165.
http://dx.doi.org/10.1037/
a0031421
Duncan, B. L. (2012). The partners for change outcome
management
system (PCOMS): The heart and soul of change project.
Canadian
Psychology, 53, 93–104. http://dx.doi.org/10.1037/a0027762
Elliott, R., Bohart, A. G., Watson, J. C., & Greenberg, L. S.
(2011).
Empathy. In J. Norcross (Ed.), Psychotherapy relationships that
work:
Evidence-based responsiveness (2nd ed., pp. 89 –108). New
York, NY:
Oxford University Press. http://dx.doi.org/10.1093/acprof:oso/
9780199737208.003.0006
Falkenström, F., Markowitz, J. C., Jonker, H., Philips, B., &
Holmqvist, R.
(2013). Can psychotherapists function as their own controls?
Meta-
analysis of the crossed therapist design in comparative
psychotherapy
trials. The Journal of Clinical Psychiatry, 74, 482– 491.
http://dx.doi
.org/10.4088/JCP.12r07848
Flückiger, C., Del Re, A. C., Wampold, B. E., Znoj, H., Caspar,
F., & Jörg,
U. (2012). Valuing clients’ perspective and the effects on the
therapeutic
alliance: A randomized controlled study of an adjunctive
instruction.
Journal of Counseling Psychology, 59, 18 –26.
http://dx.doi.org/10
.1037/a0023648
Greenberg, L. S., Watson, J. C., Elliott, R., & Bohart, A. C.
(2001).
Empathy. Psychotherapy: Theory, Research, Practice, Training,
38,
380 –384. http://dx.doi.org/10.1037/0033-3204.38.4.380
Haggerty, G., Blake, M., Naraine, M., Siefert, C., & Blais, M.
A. (2010).
Construct validity of the Schwartz Outcome Scale-10:
Comparisons to
interpersonal distress, adult attachment, alexithymia, the five-
factor
model, romantic relationship length and ratings of childhood
memories.
Clinical Psychology & Psychotherapy, 17, 44 –50.
Hannan, C., Lambert, M. J., Harmon, C., Nielsen, S. L., Smart,
D. W.,
Shimokawa, K., & Sutton, S. W. (2005). A lab test and
algorithms for
identifying clients at risk for treatment failure. Journal of
Clinical
Psychology, 61, 155–163. http://dx.doi.org/10.1002/jclp.20108
Hatcher, R. L., & Gillaspy, J. A. (2006). Development and
validation of a
revised short version of the Working Alliance Inventory.
Psychotherapy
Research, 16, 12–25.
http://dx.doi.org/10.1080/10503300500352500
Heppner, P. P., Wampold, B. E., Owen, J., Thompson, M. N., &
Wang,
K. T. (2016). Research design in counseling. Boston, MA:
Cengage
Learning.
Hill, C. E., & O’Brien, K. M. (1999). Helping skills:
Facilitating explo-
ration, insight, and action. Washington, DC: American
Psychological
Association.
Horvath, A. O., Del Re, A. C., Flückiger, C., & Symonds, D.
(2011).
Alliance in individual psychotherapy. In J. Norcross (Ed.),
Psychother-
apy relationships that work: Evidence-based responsiveness
(2nd ed.,
pp. 25– 69). New York, NY: Oxford University Press.
http://dx.doi.org/
10.1093/acprof:oso/9780199737208.003.0002
Lambert, M. J. (2007). Presidential address: What we have
learned from a
decade of research aimed at improving psychotherapy outcome
in rou-
tine care. Psychotherapy Research, 17, 1–14.
http://dx.doi.org/10.1080/
10503300601032506
Lambert, M. J. (2013). The efficacy and effectiveness of
psychotherapy. In
M. J. Lambert (Ed.), Bergin and Garfield’s handbook of
psychotherapy
and behavior change (pp. 169 –218). Oxford, England: Wiley.
Lambert, M. J., Whipple, J. L., Harmon, C., Shimokawa, K.,
Slade, K., &
Christofferson, C. (2004). Clinical support tools manual. Provo,
UT:
Department of Psychology, Brigham Young University.
Lutz, W., Lambert, M. J., Harmon, S. C., Tschitsaz, A.,
Schürch, E., &
Stulz, N. (2006). The probability of treatment success, failure
and
duration: What can be learned from empirical data to support
decision
making in clinical practice? Clinical Psychology &
Psychotherapy, 13,
223–232. http://dx.doi.org/10.1002/cpp.496
MacFarlane, P., Anderson, T., & McClintock, A. S. (2015).
Empathy from
the client’s perspective: A grounded theory analysis.
Psychotherapy
Research. Advance online publication.
http://dx.doi.org/10.1080/
10503307.2015.1090038
McClintock, A. S., Anderson, T., & Cranston, S. (2015).
Mindfulness
therapy for maladaptive interpersonal dependency: A
preliminary ran-
domized controlled trial. Behavior Therapy, 46, 856 – 868.
http://dx.doi
.org/10.1016/j.beth.2015.08.002
McClintock, A. S., Anderson, T., & Petrarca, A. (2015).
Treatment expec-
tations, alliance, session positivity, and outcome: An
investigation of a
three-path mediation model. Journal of Clinical Psychology, 71,
41– 49.
http://dx.doi.org/10.1002/jclp.22119
Microsoft. (2013). Microsoft Excel. Redmond, WA: The
Microsoft Cor-
poration.
Miller, S. D., Duncan, B. L., Sorrell, R., & Brown, G. S. (2005).
The
partners for change outcome management system. Journal of
Clinical
Psychology, 61, 199 –208. http://dx.doi.org/10.1002/jclp.20111
Newman, M. G., & Fisher, A. J. (2010). Expectancy/credibility
change as
a mediator of cognitive behavioral therapy for generalized
anxiety
disorder: Mechanism of action or proxy for symptom change?
Interna-
tional Journal of Cognitive Therapy, 3, 245–261.
http://dx.doi.org/10
.1521/ijct.2010.3.3.245
Reese, R. J., Gillaspy, J. A., Jr., Owen, J. J., Flora, K. L.,
Cunningham,
L. C., Archie, D., & Marsden, T. (2013). The influence of
demand
characteristics and social desirability on clients’ ratings of the
therapeu-
tic alliance. Journal of Clinical Psychology, 69, 696 –709.
http://dx.doi
.org/10.1002/jclp.21946
Roemer, L., Orsillo, S. M., & Salters-Pedneault, K. (2008).
Efficacy of an
acceptance-based behavior therapy for generalized anxiety
disorder:
Evaluation in a randomized controlled trial. Journal of
Consulting and
Clinical Psychology, 76, 1083–1089. http://dx.doi.org/10.1037/
a0012720
Ryan, R. M., & Deci, E. L. (2008). A self-determination theory
approach
to psychotherapy: The motivational basis for effective change.
Canadian
Psychology, 49, 186 –193. http://dx.doi.org/10.1037/a0012753
Safran, J. D., & Muran, J. C. (2000). Negotiating the
therapeutic alliance:
A relational treatment guide. New York, NY: Guilford Press.
Safran, J. D., & Muran, J. C. (2006). Resolving therapeutic
impasses: A
training DVD. Santa Cruz, CA: Custom-flix.com.
Shimokawa, K., Lambert, M. J., & Smart, D. W. (2010).
Enhancing
treatment outcome of patients at risk of treatment failure: Meta-
analytic
and mega-analytic review of a psychotherapy quality assurance
system.
Journal of Consulting and Clinical Psychology, 78, 298 –311.
http://dx
.doi.org/10.1037/a0019247
Stiles, W. B., Honos-Webb, L., & Surko, M. (1998).
Responsiveness in
psychotherapy. Clinical Psychology: Science and Practice, 5,
439 – 458.
http://dx.doi.org/10.1111/j.1468-2850.1998.tb00166.x
Swift, J. K., & Derthick, A. O. (2013). Increasing hope by
addressing
clients’ outcome expectations. Psychotherapy, 50, 284 –287.
http://dx
.doi.org/10.1037/a0031941
Wampold, B. E., & Imel, Z. E. (2015). The great psychotherapy
debate:
The evidence for what makes psychotherapy work (2nd ed.).
New York,
NY: Routledge.
Young, J. L., Waehler, C. A., Laux, J. M., McDaniel, P. S., &
Hilsenroth,
M. J. (2003). Four studies extending the utility of the Schwartz
Outcome
Scale (SOS-10). Journal of Personality Assessment, 80, 130 –
138. http://
dx.doi.org/10.1207/S15327752JPA8002_02
Zimmerman, B. J., & Kitsantas, A. (1997). Development phases
in self-
regulation: Shifting from process goals to outcome goals.
Journal of
Educational Psychology, 89, 29 –36.
http://dx.doi.org/10.1037/0022-
0663.89.1.29
Zuroff, D. C., Koestner, R., Moskowitz, D. S., McBride, C.,
Marshall, M.,
& Bagby, M. R. (2007). Autonomous motivation for therapy: A
new
common factor in brief treatments for depression.
Psychotherapy Re-
search, 17, 137–147.
http://dx.doi.org/10.1080/10503300600919380
Received August 20, 2016
Revision received November 7, 2016
Accepted November 7, 2016
Traumatology, 19(3), 171–178
© The Author(s) 2012
Reprints and permissions: sagepub.com/journalsPermissions.nav
DOI: 10.1177/1534765612459891
tmt.sagepub.com
Is Virtual Reality Exposure Therapy Effective for Service Members and Veterans Experiencing Combat-Related PTSD?

Rebekah J. Nelson
Florida State University, Tallahassee, FL, USA

Corresponding Author: Rebekah J. Nelson, Florida State University, 296 Champions Way, University Center, Building C, Tallahassee, FL 32306, USA. Email: [email protected]

Abstract
Purpose: Exposure therapy has been identified as an effective treatment for anxiety disorders, including posttraumatic stress disorder (PTSD). The use of virtual reality exposure therapy (VRET) has increased over the past decade due to improvements in virtual reality technology. VRET has been used to treat active duty service members and veterans experiencing posttraumatic stress symptoms by exposing them to a virtual environment patterned after the real-world environment in which the trauma occurred. This article is a systematic review of the effectiveness of using VRET with these two populations. Method: A search of 14 databases yielded 6 studies with experimental or quasi-experimental designs in which VRET was used with active duty service members or veterans diagnosed with combat-related PTSD. Results: Studies show positive results for the use of VRET in treating combat-related PTSD, though more trials are needed with both active duty service members and veterans. Conclusions: VRET is an effective treatment; however, more studies including random assignment are needed to show whether it is more effective than other treatments. There are still many barriers that the use of VRET with military populations would need to overcome in order to be widely used, including helping veterans become accustomed to the technology; assisting veterans who have spent a longer period of time avoiding anxiety-inducing stimuli in accepting an initial increase in anxiety; clinician concerns about the technology interfering with the therapeutic alliance, and clinician biases against the use of exposure therapy in general; and high treatment dropout rates.

Keywords
combat, posttraumatic stress disorder, service members, veterans, virtual reality exposure therapy

Concern over the best methods to prevent and treat combat-related posttraumatic stress disorder (PTSD) in military service members and military veterans has been of particular interest as large numbers of service members serve multiple tours in Iraq and Afghanistan. Preventative programs such as Comprehensive Soldier Fitness (Casey, 2011) acknowledge the need for the military services to have a more frank discussion with their service members about PTSD. The stigma within the military of a PTSD diagnosis prevents many service members from seeking treatment, even when they recognize the symptoms of PTSD in themselves. While military institutions, such as the Department of Veterans Affairs, have sought to educate service members and their families about the importance of seeking treatment for PTSD, the threat of having a diagnosis of PTSD on their service record stops many service members from seeking help. Some have been able to seek treatment outside of the military health care system, but such treatment can be costly. A related population is military veterans who, like their present-day counterparts, did not seek treatment or for whom no appropriate treatment was available.

The preferred treatment for anxiety disorders is exposure therapy (Powers & Emmelkamp, 2008), also known as prolonged or gradual exposure therapy. Exposure therapy is a type of behavior therapy in which the client is taught cognitive and behavioral techniques such as progressive muscle relaxation, breathing exercises, recognition of automatic thoughts and schemas, and cognitive restructuring (Pull, 2005). The client is taught to use these interventions while the therapist gradually exposes the client to the cause of anxiety, increasing the intensity of exposure as the client is able to tolerate it, in order to help the client become more accustomed to the anxiety-evoking stimuli. Clients also undertake self-conducted exposure-based exercises as homework between formal treatment sessions. Two types of exposure therapy have dominated the field: in vivo therapy, where the
therapist and client are able to experience exposure to anxi-
ety-evoking stimuli in increasingly naturalistic settings; and
imaginal exposure therapy, where the therapist leads the cli-
ent in imagining the cause of anxiety. Usually exposure ther-
apy (ET) in imagination is followed by real-life exposure.
These two types of exposure therapy are sometimes poorly
tolerated by service members and veterans who have com-
bat-related PTSD because of the distinctness of the settings
in which the trauma occurred, and because of the tendency of
clients to suppress thoughts that activate PTSD symptoms
(Riva et al., 2010).
A relatively new exposure-based treatment for PTSD that
has gained attention in the media and the therapeutic com-
munity is the use of virtual reality programs. Using virtual
reality in place of real-life or imaginal exposure therapy
allows clients to receive and process exposure to traumatic
events in a relatively safe environment. Virtual reality expo-
sure therapy (VRET) has been tested with persons experi-
encing PTSD symptoms in multiple trials and with many
different causes of anxiety (Gerardi, Cukor, Difede, Rizzo,
& Rothbaum, 2010; Pull, 2005). In their meta-analysis on the
use of VRET for anxiety disorders, Powers and Emmelkamp
(2008) found VRET to have a slightly more powerful effect
than did real-life exposure treatment.
This review will assess studies of the effectiveness of
VRET when used to treat service members and military vet-
erans diagnosed with combat-related PTSD. It will also con-
sider the practical use of the technology, including the cost
of treatment and the possible application of VRET in the
assessment and prevention of PTSD in active duty soldiers.
PTSD is a type of anxiety disorder brought on by experi-
encing or witnessing a traumatic event or events. Traumatic
events are defined by the Diagnostic and Statistical Manual
of Mental Disorders (4th ed., text rev.; DSM-IV-TR;
American Psychiatric Association, 2000) as events “that
involved actual or threatened death or serious injury, or a
threat to the physical integrity of self or others” (p. 467). The
response to the trauma also yields feelings of hopelessness,
fear, or horror. The traumatic event must be reexperienced in
some way, such as through nightmares or physical reactions
to events resembling the trauma. There must also be an
avoidance of stimuli that cause thoughts about the trauma
and increased arousal, such as hypervigilance. These symp-
toms must have lasted for more than 1 month, and must be
causing clinically significant distress for the individual.
Therapists using VRET to treat PTSD seek to simulate a
virtual world that is as similar as possible to the real-world
environment in which the traumatic event occurred. This is
referred to as a “sense of presence” in the virtual world, or
the level to which clients actually feel the virtual environ-
ment mirrors reality. In a qualitative study of clinician
perceptions about VRET (Kramer et al., 2010), clinicians
expressed concern that the virtual environment would not be
realistic enough in order to trigger and then reduce anxiety.
However, in their evaluation study of the realism of two vir-
tual Iraq scenarios, Reger, Gahm, Rizzo, Swanson, and
Duma (2009) conducted a convenience sample study with 93
soldiers not diagnosed with PTSD to see if the soldiers, who
had been deployed to Iraq one or more times, felt this sense
of presence. A majority of the soldiers rated the convoy sce-
nario (86%) and the city environment (82%) from adequate
to excellent.
VRET uses several technology-based methods to engage
all five senses of the client, making the exposure feel as real-
istic as possible. The technology used generally includes a
“controlled delivery of sensory stimulation via the therapist,
including visual, auditory, olfactory, and tactile cues”
(Gerardi et al., 2010, p. 299). Visual effects include the ability to change the time of day, the weather, and the number of pedestrians, vehicles, pieces of street debris, Humvees, planes, and helicopters that clients see within the virtual world. Most of the
machines include an orientation tracker, which allows clients
to move about the virtual environment via headgear that
responds to the movements of the participant. Olfactory
senses are also engaged using scent palettes, which blow
smells such as spices or burning rubber, and are controlled
by the therapist. The therapist can also include sounds such
as sirens, people crying, gunshots and mortars, helicopters,
improvised explosive devices, rocket-propelled grenades,
car bombs, and sounds of an insurgent attack. In their narra-
tive review of the many uses of VRET, Gerardi et al. (2010)
describe two scenarios available in their virtual Iraq:
The city incorporates scenes such as marketplaces,
security checkpoints, mosques, apartment buildings
that can be entered, and rooftops that can be accessed.
The Humvee scenario includes a desert setting with
overpasses, checkpoints, debris, broken-down struc-
tures, and ambushes that can be introduced. (p. 303)
Finally, clients’ seats are manipulated to create tactile
vibrations in order to mimic a car ride, helicopter ride, or an
explosion. An example of the equipment and virtual reality
scenarios utilized in VRET can be seen in many media
reports on the subject, such as a news report conducted by
the Canadian Broadcasting Corporation on the costs and
benefits of VRET (Virtual Iraq Afghanistan Media Story
CBC, video file).
Similar virtual settings can be created for veterans of wars in other areas of the world. Specific to this article are virtual environments that mimic settings in Vietnam, for veterans of the Vietnam War, and in Africa, for Portuguese veterans of the war fought there between 1963 and 1970.
As explained in Rothbaum, Hodges, Ready, Graap, and
Alarcon’s (2001) article on the use of VRET with Vietnam
veterans, VRET treatment spans several weeks of therapy,
generally meeting twice a week for 90 to 120 min each session. The first session of VRET treatment is spent in assessing clients and gathering information about the traumatic event they experienced. Sessions 2 and 3 are spent in acclimatizing clients to the virtual reality equipment and environment, and in teaching clients cognitive behavioral interventions to practice when their symptoms increase, such as breathing and relaxation techniques. Further therapy sessions are spent in using the virtual environment to expose clients to traumatic memories while they describe the events in detail. Homework assigned to clients generally includes listening to recordings of the therapy sessions while practicing cognitive behavioral interventions learned in therapy.

Table 1. VRET Equipment Needed to Set Up a VRET Environment.
Two Pentium 4 computers with 1 GB RAM each
DirectX 9
128 MB DirectX 9-compatible NVIDIA 3D graphics card
Ethernet cable
Head Mounted Display and Navigation Interface (eMagin z800)
Numerical Design Limited's Gamebryo rendering library
Alias' Maya 6 and Autodesk 3D Studio Max 7
Envirodine, Inc. Scent Palette
Logitech force-feedback game control pad and audio-tactile sound transducers from Aura Sound Inc.

Table 2. VRET Reviewed Studies.
Ready et al. (2006). Intervention: VRET. Study population: Vietnam veterans diagnosed with PTSD (n = 14). Study design: OXO. Primary outcome: Changes in CAPS scores were statistically significantly different at posttreatment, 3-month, and 6-month follow-up; BDI scores were statistically significantly different at posttreatment and 6-month follow-up, but not at 3-month follow-up.
Gamito et al. (2010). Intervention: VRET vs. exposure in imagination (EI) vs. waiting list. Study population: Portuguese war veterans (n = 10): VRET (n = 5), EI (n = 2), waiting list (n = 3). Study design: R OXO / R OYO / R O O. Primary outcome: CAPS scores were not statistically significantly different; IES-R, BDI, and SCL-90-R scores were collected only for the VRET group.
Ready et al. (2010). Intervention: VRET vs. present-centered therapy (PCT). Study population: Vietnam veterans with combat-related PTSD (n = 11). Study design: R OXO / R OYO. Primary outcome: Both VRET and PCT lowered mean CAPS scores at posttreatment and follow-up, with VRET yielding higher levels of improvement.
McLay et al. (2011). Intervention: VRET vs. treatment as usual. Study population: Active duty soldiers from two hospital sites with PTSD related to their duties in Iraq or Afghanistan (n = 19). Study design: R OXO / R OYO. Primary outcome: No significant difference between the two groups before or after treatment; however, there was a significant (p < .05) difference in the mean CAPS change score over the course of treatment.
Reger et al. (2011). Intervention: VRET, adapted from a prolonged exposure manual. Study population: Active duty soldiers (n = 24), diagnosed with PTSD (n = 18) or anxiety NOS (n = 6). Study design: OXO. Primary outcome: At posttreatment, 62% (n = 15) had reliably improved on the PCL-M.
McLay et al. (2012). Intervention: VRET. Study population: Active duty soldiers from a naval medical center and a marine corps base (n = 42), with multiple dropouts before Session 4 (n = 12) and after Session 4 (n = 10). Study design: OXO. Primary outcome: PCL-M scores between baseline and posttreatment were statistically significantly different (p < .0001), as were PHQ-9 and BAI scores at baseline and posttreatment; for n = 17 participants, scores at 3-month follow-up were also significantly different from baseline on the PCL-M, PHQ-9, and BAI.
Some who have been working on virtual reality technol-
ogy have anticipated the use of it as an assessment tool to
determine whether a soldier is emotionally and mentally fit
to return for another tour (McLay et al., 2012). Others (Kraft,
Amick, Barth, French, & Lew, 2010) also anticipate the use
of virtual reality in reassessing the driving ability of combat
service members returning from Iraq or Afghanistan who
have been diagnosed with PTSD or traumatic brain injury
(TBI), as these two disorders may critically affect returning
soldiers’ ability to drive. In these ways, virtual reality tech-
nology may benefit soldiers as an assessment tool, rather
than solely a treatment for PTSD.
VRET has also been looked at as a prevention tool. Stetz,
Long, Wiederhold, and Turner (2008) conducted a study in
which virtual reality and stress inoculation training were
used to try and prevent medics who would be serving in Iraq
or Afghanistan from later developing PTSD. Stress inocula-
tion training consists of exposing the participant through
virtual reality technology to traumatic events they may
encounter in their future service in hopes that when they
encounter similar traumatic events in reality they will be able
to use their practiced cognitive behavioral skills to lessen
their chances of developing PTSD in the future. Although Stetz et al. (2008) provide no distal measures of whether the stress inoculation training actually prevented later PTSD, their posttest results suggest that preemptively exposing military medics to stressful situations may harden them against trauma. Such a preventative effort
may be useful to all military personnel, as Reger et al. (2009)
report that 67% of a convenience sample (n = 93) of military
service members had provided aid to persons who were
wounded during their combat experience. Exposure to such
secondary trauma can sometimes serve as the initiating event
triggering the onset of PTSD; however, if service members
were given preventative virtual reality stress inoculation
training, their chances of developing PTSD due to this expo-
sure may decrease.
One of the concerns of implementing virtual reality ther-
apy on a wide-scale basis is the approximate cost of purchas-
ing and setting up the virtual reality equipment and in training
therapists to use the equipment effectively. In their prelimi-
nary results utilizing virtual reality technology with active
duty soldiers with PTSD, Rizzo, Reger, Gahm, Difede, and
Rothbaum (2009) approximate some of the costs of setting
up an adequate amount of virtual reality equipment in order
to make the virtual environment realistic enough to help
effect change. However, the authors only provide actual dol-
lar amounts regarding the Head Mounted Display (US$1500)
and the Logitech control pad (<US$120) that creates vibra-
tions in the seat of the participant. Table 1 is a list Rizzo
et al. (2009) give of equipment needed to set up a virtual
reality therapy environment. In an interview with CBC news,
Rizzo (Virtual Iraq Afghanistan Media Story CBC, video
file) estimates the total cost of virtual reality hardware to be
approximately US$15,000, stating that the computer soft-
ware for conducting VRET can be obtained through him at
no cost.
Wood et al. (2009) articulate the possible financial ben-
efits implementing virtual reality technology could have if
the military were saved the money of having to replace
service members who would have left the military due to
PTSD symptoms. They estimated that the training cost sav-
ings for the 12 participants in their study would be just
under US$330,000, whereas the training cost savings of
treating PTSD with treatment as usual would be close to
US$193,000.
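Taken at face value, these estimates imply the following incremental saving for VRET over treatment as usual; this is a back-of-the-envelope restatement of Wood et al.'s figures, not a number they report directly.

    \$330{,}000 - \$193{,}000 \approx \$137{,}000
    \qquad\Rightarrow\qquad
    \frac{\$137{,}000}{12\ \text{participants}} \approx \$11{,}400\ \text{per participant}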
Pull (2005), Riva et al. (2010), and Gerardi et al. (2010)
state that VRET may be more cost effective than imaginal or
real-life exposure therapy because it can be less time-
consuming. This may be because the technological equip-
ment allows the clinician to have greater control over the
magnitude of exposure in a virtual environment than they
would have in trying to help the client imagine graded
images of the trauma or feared object, thus taking less time
overall to treat clients. Using virtual technology may also be
less costly than trying to have a real-life experience with
the client. For example, Gerardi et al. (2010) cite the cost to the patient of a virtual flying experience versus the cost of paying for a genuine flight.
Method
Search Strategy
Academic Search Complete, JSTOR, Applied Social Sciences
Index and Abstracts (ASSIA), Computer and Information
Systems Abstracts, ERIC, ProQuest Dissertations, and
Theses (PQDT), PsycINFO, Social Services Abstracts,
Sociological Abstracts, Social Sciences Citation Index, Web
of Knowledge, Web of Science, Military and Government
Collection, and Dissertation Abstracts were searched in order
to find studies pertaining to the topic. While the gray litera-
ture was not specifically searched, efforts were made to
obtain copies of articles and conference proceedings that
were a result of the search strategy. Where possible, search
terms were limited to abstracts. The search terms used for
this review were virtual and realit* and (military or veteran*)
and (PTSD or posttraumatic or post-traumatic). A flow chart
(Figure 1) depicts the disposition of retrieved articles.
Data Collection and Analysis Methods
While multiple case studies were found on this topic, only
experimental and quasi-experimental studies looking at the
use of VRET as a treatment for military service members or
veterans experiencing combat-related PTSD will be included.
The literature search yielded 100 studies. Seventy-one were ineligible based on review of the title (including repeats of previously acquired studies), and a further 16 were excluded after reviewing abstracts. Following a full-text review, seven more studies were excluded: they were preliminary reports of studies already acquired, the text or pertinent information was unavailable, or the study was analyzed in more than one of the resulting articles (in which case the article with the most information was retained). This left a total of six studies to include in the review (Table 2).
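Because the flow chart itself is not reproduced here, the disposition of records implied by these counts reduces to a single tally:

    \underbrace{100}_{\text{retrieved}}
    - \underbrace{71}_{\text{excluded by title}}
    - \underbrace{16}_{\text{excluded by abstract}}
    - \underbrace{7}_{\text{excluded at full text}}
    = 6\ \text{included studies}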
Common Measures Used to
Assess PTSD in Military Service Members
The first, most common measure used in studying the effec-
tiveness of treatment for PTSD is the Clinician Administered
PTSD Scale (CAPS; Gamito et al., 2010; McLay et al.,
2011; Ready, Gerardi, Backsheider, Mascaro, & Rothbaum,
2010; Ready, Pollack, Rothbaum, & Alarcon, 2006). The
CAPS is a measure that assesses the frequency and intensity
of PTSD symptoms. Another measure commonly used with
military service members is self-report PTSD Checklist,
Military Version (PCL-M; McLay et al., 2012; Reger
et al., 2011). The Impact of Events Scale Revised (IES-R;
Gamito et al., 2010) is a self-report instrument that measures
PTSD symptoms of avoidance, intrusion, and hyperarousal,
and the Symptom Checklist-90-Revised (SCL-90-R; Gamito et al., 2010) is used to measure psychopathology. Finally,
the Patient Health Questionnaire-9 (PHQ-9; McLay et al.,
2012) and the Beck Depression Inventory (BDI; Gamito
et al., 2010; Ready et al., 2006) are used to measure depres-
sion levels in clients, while the Beck Anxiety Inventory
(BAI; McLay et al., 2012) is used to measure anxiety levels
in participants.
Therapists also use what is called the Subjective Units of Discomfort/Distress Scale (SUDS) when using VRET.
SUDS are generally not tracked or measured for experimen-
tal purposes, but are used to understand how the client is
responding in the moment to the level of exposure in the
virtual reality environment, and to decide if the level of
exposure should be increased or decreased based on partici-
pant reactivity. Physiological monitoring through biofeed-
back is also often used to monitor client response to the
virtual environment and, in one study in this review (Wood et al., 2008), was used to measure the effectiveness of VRET.
Results
VRET With Active Duty Service Members
In their randomized controlled trial of VRET with active
duty soldiers, McLay et al. (2011) used a convenience sam-
ple to locate potential patients. They assigned 20 service
members to VRET (n = 10) or to treatment as usual (n = 10,
with one participant not completing postassessment tests)
using random assignment, and used the CAPS as their outcome
measure. While the VRET intervention appeared to follow
the standard VRET treatment protocol, the treatment as
usual group was not assigned to any particular treatment.
Rather, they were assigned to receive one or more of the
treatments available for PTSD provided by the two hospital
locations, which included prolonged exposure (PE) therapy,
EMDR, group therapy, psychiatric medication management,
substance rehab, and inpatient services. Unfortunately, only
the number of mental health visits, and not what type of
treatment the TAU patients were receiving, was tracked. The
findings for this study may not, therefore, truly reflect the
comparison between a VRET group and a TAU group, as
we are unsure what type of treatment the TAU group spe-
cifically received. Also, the TAU group was, at some point,
changed to a waiting-list group. It is unclear at what point
this information was given to TAU group members, which
may have affected their confidence in the TAU treatment
they received. McLay et al. (2011) found seven out of ten of
the VRET patients improved at least 30% on their CAPS
scores from pretest to posttest. There was no significant dif-
ference between CAPS scores after treatment; however, the
authors found a significant difference (p < .05) in the change
from pretest to posttest mean scores between the VRET
group (M = 35.4, SD = 24.7) and the TAU (M = 9.4, SD = 26.6)
group, favoring VRET.
Reger et al. (2011) conducted a convenience sample study
with 24 active duty soldiers diagnosed with PTSD (n = 18) or
anxiety NOS (n = 6) who had been deployed at least once to
Iraq or Afghanistan. The service members had either requested
to receive VRET as a treatment or had received previous treat-
ment for their disorder that was unsuccessful. The VRET was
based on a training manual for the conduct of PE, delivered by
a clinical psychologist with formal training in both VRET and
PE. Patients received treatment a mean of 27.8 months
(SD = 17.3) after the trauma, and received an average of 7.4
(SD = 3.3) treatment sessions. Researchers used the self-report
PCL-M to measure treatment outcomes, and found patients
reported a significant (p < .001) improvement in PTSD symp-
toms from pretest (M = 60.92, SD = 11.03) to posttest (M =
47.08, SD = 12.7), with a large effect size (Cohen’s d = 1.17).
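As a rough check on the reported effect size, the conventional pooled-standard-deviation formula applied to these pre-post means gives a value close to the reported d = 1.17; this assumes a particular pooling formula, which may not be exactly the one the authors used.

    d \approx \frac{M_{\mathrm{pre}} - M_{\mathrm{post}}}
                  {\sqrt{\tfrac{1}{2}\left(SD_{\mathrm{pre}}^{2} + SD_{\mathrm{post}}^{2}\right)}}
      = \frac{60.92 - 47.08}{\sqrt{\tfrac{1}{2}\left(11.03^{2} + 12.7^{2}\right)}}
      \approx 1.16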
Figure 1. Search strategy results for VRET treatment for military service members and veterans with PTSD.
A quasi-experimental convenience sample study of
20 active duty service members (McLay et al., 2012) used
the PCL-M, PHQ-9, and BAI to measure PTSD symptoms,
depression, and anxiety. Their study revealed a large effect
size (Cohen’s d = 1.34) between baseline PCL-M scores
(n = 20, M = 53.8, SD = 9.6) and posttreatment (M = 35.6,
SD = 17.4) scores. For n = 17 participants, scores on the
PCL-M also showed a large effect size (Cohen’s d = 2.17)
between baseline (M = 53.8, SD = 9.6) and 3-month follow
up (M = 28.9, SD = 13.0). The difference in PHQ-9 scores between baseline (n = 20, M = 13.3, SD = 5.4) and posttreatment (M = 7.1, SD = 6.7) was statistically significant (p < .002), as was the difference between baseline (n = 17, M = 12.9, SD = 5.4) and 3-month follow-up (M = 5.7, SD = 6.1, p < .001). Scores on the BAI
showed a medium effect size (Cohen’s d = 0.56) between
baseline (n = 20, M = 18.1, SD = 10.6) and posttreatment
(M = 8.12, SD = 9.0), and a large effect size (Cohen’s d = 1.01)
between baseline (n = 17, M = 18.1, SD = 10.6) and 3-month
follow up (M = 8.12, SD = 9.0). One limitation of this study
was the large dropout rate between the intent to treat group
(n = 42) and the participants who completed treatment (n = 20).
VRET With Veterans
In their study comparing VRET with present-centered ther-
apy (PCT), Ready et al. (2010) recruited clients currently in
treatment at the Atlanta VA Medical Center’s Mental Health
Clinic (n = 11, VRET n = 6, PCT n = 5), with one participant
from each group dropping out. The clinician who inter-
viewed participants was a licensed clinical psychologist with
several years of experience working with this population and
was blind to participant assignment. Clinicians used the
Structured Clinical Interview for DSM-IV, the CAPS, and
the Beck Depression Inventory as measures. PCT as the
comparison group included psychoeducation about PTSD,
problem-solving techniques, and a focus on the “here and
now” problems clients experience. Both the VRET and PCT
groups experienced improvement in symptoms; however,
the authors report “there was not statistically significant
improvement in CAPS or BDI scores when individual treat-
ment conditions were isolated” (Ready et al., 2010, p. 52).
The authors state that the small sample size prevented significant differences between groups from being detected. The
VRET group seemed to have lower baseline CAPS scores
(M = 87.83, SD = 15.43) than the PCT group (M = 101.00,
SD = 9.51). This is likely an artifact of the random assign-
ment procedure used with a small sample. The authors cal-
culated effect sizes for the mean change in the CAPS and
BDI scores for each group. The mean change in CAPS
scores for the VRET treatment group yielded a small
Cohen’s d of 0.28 from pretest to posttest (n = 5, M = 31.8,
SD = 39.1) and a medium Cohen’s d of 0.56 from pretest to
follow up (n = 5, M = 25.0, SD = 28.1). Differences in the
mean improvement of BDI scores did not yield significant
results. It is unclear why the authors chose to combine the
treatment groups and use a dependent samples t test to com-
pare changes in CAPS scores on the entire sample between
baseline, post-treatment, and follow up. An independent
samples t test of the same data for the VRET group at baseline
(n = 5, M = 101.0, SD = 9.51), posttreatment (n = 4, M = 75.5,
SD = 22.22), and follow up (n = 5, M = 87.00, SD = 6.32),
compared to the PCT group at baseline (n = 6, M = 87.83,
SD = 15.43), posttreatment (n = 5, M = 59.2, SD = 32.24),
and follow up (n = 4, M = 64.75, SD = 34.08) did not reveal
any statistically significant differences.
Gamito et al. (2010) completed a randomized controlled
pilot study comparing VRET (n = 5), imaginal exposure
(n = 2), and waiting list control (n = 3) groups with Portuguese
war veterans (n = 10) who had fought in Africa between
1963 and 1970. Measures used to assess participants of the
VRET group included the CAPS, a structured interview
from the DSM-IV, the IES-R, the SCL-90-R, and the BDI. It
is unclear why, but the SCL-90-R and BDI were not admin-
istered to the imaginal exposure and waiting list groups at
baseline or posttreatment. The authors report that BDI scores
for the VRET group were significantly lower at posttreat-
ment. There were no statistically significant differences
between groups at posttreatment on the CAPS. The IES-R
scores for the VRET group were reduced, whereas these
scores for the imaginal group and the waiting list control
group increased, however the differences were not statisti-
cally significant. Due to the small sample size, this study was
statistically underpowered and therefore inadequate to val-
idly compare VRET with imaginal therapy and waiting list
groups.
Ready et al. (2006) describe a group of multiple case
studies (Rothbaum, 2006; Rothbaum et al., 2001) where
Vietnam veterans (n = 14) were treated with VRET. Mean
CAPS scores at posttreatment (n = 14, M = 59.64, SD = 17.77),
3-month follow-up (n = 8, M = 55.13, SD = 14.38), and at
6-month follow-up (n = 11, M = 50.91, SD = 17.24) were all
statistically significantly different (p < .05) than CAPS
scores at baseline (n = 14, M = 72.57, SD = 16.18). Scores on
the BDI at posttreatment (n = 14, M = 21.14, SD = 8.18) and
at the 6-month follow up (n = 11, M = 18.45, SD = 9.49) were
statistically significantly different (p < .05) than at baseline
(n = 14, M = 24.86, SD = 9.70). Three-month posttreatment
BDI scores (n = 8, M = 24.25, SD = 9.53), however, were not
statistically significantly different from baseline BDI scores.
Discussion
Studies using VRET report several difficulties. First, the
nature of the treatment itself appears to be difficult for vet-
erans to either comprehend or trust. It is suspected that the
current generation of service members may be reacting
more positively to using virtual reality as a method of treat-
ing PTSD because they were raised in a generation more
familiar with this type of technology. Ready et al. (2010)
describe the older veteran population as being tentative
about trusting the technology to actually help with their
PTSD symptoms.
Another difficulty in using VRET with a veteran popula-
tion is the amount of time that has lapsed between the trau-
matic events and the treatment. Authors suspect that the longer this time lapse, during which participants have worked to suppress their PTSD symptoms, the more difficult it is for participants to allow themselves to relive the traumatic event in the virtual environment. Because reliving the events multiple times is necessary in exposure therapy, this population has a much more difficult time succeeding with exposure therapy in general. Though
the clinicians explain to participants and their families that an
increase in symptoms is likely to occur at the beginning of
treatment, veterans seem to see this increase in symptoms as
evidence that the treatment is worsening their condition and
may cause many to terminate treatment. A qualitative study of
clinician perceptions of VRET found clinicians not trained in
the use of VRET expressed concerns about the safety of using
VRET with veterans, questioning whether the virtual environ-
ment would exacerbate the symptoms of veterans (Kramer et
al., 2010). However, in their meta-analysis on the use of
VRET with anxiety disorders, Powers and Emmelkamp
(2008) conducted a meta-regression analysis which showed
that an increase in the number of virtual reality treatment ses-
sions yielded larger effect sizes. These difficulties in recruiting and retaining veterans as participants in trials using VRET have perhaps stunted possible improvements that could be made to treat-
ment protocols that would benefit veterans. Case studies
determining how VRET can be better tailored specifically to
acclimatizing the veteran population to exposure therapy and
to virtual reality technology may be necessary. There have
also been high dropout rates in studies where participants are
active duty service members (McLay et al., 2012), which
could be attributed to difficulties in balancing treatment with
military duties, the time commitment of treatment sessions
(90-120 min twice weekly for 8-12 weeks), and the possibility
of transfers to other military bases occurring mid-treatment.
Kramer et al. (2010) also note that using virtual reality
technology as a form of treatment may cause the therapeutic
alliance to suffer. Therapists expressed concern that having to
conduct therapy while controlling complex computer software
would prevent the development of an effective therapeutic
relationship. Measuring how VRET affects the therapeutic
alliance, positively or negatively, would help clarify how a virtual
environment shapes the working relationship between therapist
and client.
Overall, the studies in this review found VRET to be beneficial
to both active duty service members and veterans experiencing
combat-related PTSD. Each group faces a different set of
difficulties in seeking or receiving treatment, as evidenced by high
levels of attrition. These difficulties may also explain why
experimental trials testing the efficacy of this treatment are hard
to set up. Because virtual reality is such a specialized field, and
because purchasing the equipment and training clinicians to use it
requires both financial and time resources, the use of VRET to
treat military service members and veterans for PTSD is not likely
to spread quickly. Although virtual reality technology is becoming
less expensive, hesitation in the field about using exposure therapy
in general, despite its positive results, will likely continue to
hinder this form of treatment.
One last area on which future VRET studies may want to focus
is the treatment's distal impact. Treatment providers want to
ensure that positive treatment results continue over time. It has
been suggested that studies follow veterans and service members
treated with VRET for up to 2 years after treatment in order to
measure its distal effects (Powers & Emmelkamp, 2008). If
positive distal effects can be more readily established, the benefits
of the treatment would perhaps balance out the difficulties of
implementing it on a wide scale.
Acknowledgment
The author thanks Dr. Bruce Thyer for his assistance in preparing this manuscript for publication.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
References
References marked with an asterisk indicate studies included in the systematic review. The in-text citations to studies selected for systematic review are not preceded by asterisks.
American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed., text revision). Arlington, VA: American Psychiatric Association.
Casey, G. W. (2011). Comprehensive soldier fitness: A vision for psychological resilience in the U.S. Army. American Psychologist, 66, 1-3.
*Gamito, P., Oliveira, J., Rosa, P., Morais, D., Duarte, N., Oliveira, S., & Saraiva, T. (2010). PTSD elderly war veterans: A clinical controlled pilot study. Cyberpsychology, Behavior, and Social Networking, 13(1), 43-48.
Gerardi, M., Cukor, J., Difede, J., Rizzo, A., & Rothbaum, B. O. (2010). Virtual reality exposure therapy for post-traumatic stress disorder and other anxiety disorders. Current Psychiatry Reports, 12, 298-305.
Kraft, M., Amick, M. M., Barth, J. T., French, L. M., & Lew, H. L. (2010). A review of driving simulator parameters relevant to the Operation Enduring Freedom/Operation Iraqi Freedom veteran population. American Journal of Physical Medicine & Rehabilitation, 89, 336-344.
Kramer, T. L., Pyne, J. M., Kimbrell, T. A., Savary, P. E., Smith, J. L., & Jegley, S. M. (2010). Clinician perceptions of virtual reality
to assess and treat returning veterans. Psychiatric Services, 61, 1153-1156.
McLay, R. N., Graap, K., Spira, J., Perlman, K., Johnston, S., Rothbaum, B. O., Difede, J. A., Deal, W., Oliver, D., Baird, A., Bordnick, P. S., Spitalnick, J., Pyne, J. M., & Rizzo, A. (2012). Development and testing of virtual reality exposure therapy for post-traumatic stress disorder in active duty service members who served in Iraq and Afghanistan. Military Medicine, 177(6), 635-642.
*McLay, R. N., Wood, D. P., Webb-Murphy, J. A., Spira, J. L., Wiederhold, M. D., Pyne, J. M., & Wiederhold, B. K. (2011). A randomized, controlled trial of virtual reality-graded exposure therapy for post-traumatic stress disorder in active duty service members with combat-related post-traumatic stress disorder. Cyberpsychology, Behavior, and Social Networking, 14, 223-229.
Powers, M. B., & Emmelkamp, P. M. (2008). Virtual reality exposure therapy for anxiety disorders: A meta-analysis. Journal of Anxiety Disorders, 22, 561-569.
Pull, C. B. (2005). Current status of virtual reality exposure therapy in anxiety disorders. Current Opinion in Psychiatry, 18, 7-14.
*Ready, D. J., Gerardi, R. J., Backscheider, A. G., Mascaro, N., & Rothbaum, B. O. (2010). Comparing virtual reality exposure therapy to present-centered therapy with 11 U.S. Vietnam veterans with PTSD. Cyberpsychology, Behavior, and Social Networking, 13(1), 49-54.
*Ready, D. J., Pollack, S., Rothbaum, B. O., & Alarcon, R. O. (2006). Virtual reality exposure for veterans with posttraumatic stress disorder. Journal of Aggression, Maltreatment & Trauma, 12, 199-220.
Reger, G. M., Gahm, G. A., Rizzo, A. A., Swanson, R., & Duma, S. (2009). Soldier evaluation of the virtual reality Iraq. Telemedicine and e-Health, 15(1), 101-104.
*Reger, G. M., Holloway, K. M., Candy, C., Rothbaum, B. O., Difede, J., Rizzo, A. A., & Gahm, G. A. (2011). Effectiveness of virtual reality exposure therapy for active duty soldiers in a military mental health clinic. Journal of Traumatic Stress, 24(1), 93-96.
Riva, G., Raspelli, S., Algeri, D., Pallavicini, F., Gorini, A., Wiederhold, B. K., & Gaggioli, A. (2010). Interreality in practice: Bridging virtual and real worlds in the treatment of posttraumatic stress disorders. Cyberpsychology, Behavior, and Social Networking, 13(1), 55-65.
Rizzo, A., Reger, G., Gahm, G., Difede, J., & Rothbaum, B. O. (2009). Virtual reality exposure therapy for combat related PTSD. Post-Traumatic Stress Disorder, 6, 375-399.
Rothbaum, B. O. (2006). Virtual Vietnam: Virtual reality exposure therapy. In M. Roy (Ed.), Novel approaches to the diagnosis and treatment of posttraumatic stress disorder (pp. 205-218). Amsterdam, The Netherlands: IOS Press.
Rothbaum, B. O., Hodges, L. F., Ready, D., Graap, K., & Alarcon, R. D. (2001). Virtual reality exposure therapy for Vietnam veterans with posttraumatic stress disorder. Journal of Clinical Psychiatry, 62, 617-622.
Stetz, M. C., Long, C. P., Wiederhold, B. K., & Turner, D. D. (2008). Combat scenarios and relaxation training to harden medics against stress. Journal of CyberTherapy & Rehabilitation, 1, 239-246.
Virtual Iraq Afghanistan Media Story CBC [video file]. Retrieved from http://www.youtube.com/watch?v=Ltl9zbDRZWY&feature=autoplay&list=UUQrbzaW3x9wWoZPl4-l4GSA&playnext=1
Wood, D. P., Murphy, J. A., Center, K. B., Russ, C., McLay, R. N., Reeves, D., . . . Wiederhold, B. K. (2008). Combat related post-traumatic stress disorder: A multiple case report using virtual reality graded exposure therapy with physiological monitoring. In J. Westwood, R. Haluck, H. Hoffman, G. Mogel, R. Phillips, R. Robb, & K. Vosburgh (Eds.), Medicine meets virtual reality 16 (pp. 556-561). Fairfax, VA: IOS Press.
Wood, D. P., Murphy, J., McLay, R., Koffman, R., Spira, J., Obrecht, R. E., . . . Wiederhold, B. K. (2009). Cost effectiveness of virtual reality graded exposure therapy with physiological monitoring for the treatment of combat related posttraumatic stress disorder. Studies in Health Technology & Informatics, 144, 223-229.

Enhancing Psychotherapy Process With Common Factors Feedback

    Enhancing Psychotherapy ProcessWith Common Factors Feedback: A Randomized, Clinical Trial Andrew S. McClintock, Matthew R. Perlman, Shannon M. McCarrick, Timothy Anderson, and Lina Himawan Ohio University In this study, we developed and tested a common factors feedback (CFF) system. The CFF system was designed to provide ongoing feedback to clients and therapists about client ratings of three common factors: (a) outcome expectations, (b) empathy, and (c) the therapeutic alliance. We evaluated the CFF system using randomized, clinical trial (RCT) methodology. Participants: Clients were 79 undergradu- ates who reported mild, moderate, or severe depressive symptoms at screening and pretreatment assessments. These clients were randomized to either: (a) treatment as usual (TAU) or (b) treatment as usual plus the CFF system (TAU � CFF). Both conditions entailed 5 weekly sessions of evidence-based therapy delivered by doctoral students in clinical psychology. Clients completed measures of common factors (i.e., outcome expectations, empathy, therapeutic alliance) and outcome at each session. Clients and therapists in TAU � CFF received feedback on client ratings of common factors at the beginning of Sessions 2 through 5. When surveyed, clients and therapists indicated that that they were satisfied with
    the CFF systemand found it useful. Multilevel modeling revealed that TAU � CFF clients reported larger gains in perceived empathy and alliance over the course of treatment compared with TAU clients. No between-groups effects were found for outcome expectations or treatment outcome. These results imply that our CFF system was well received and has the potential to improve therapy process for clients with depressive symptoms. Public Significance Statement In this study, we developed a system that provides ongoing feedback to clients and therapists about what is transpiring in therapy. Results suggest that the feedback system may help to improve the process of treatment for clients with depressive symptoms. Keywords: common factors, feedback, empathy, alliance, randomized clinical trial A growing body of research attests to the utility and effectiveness of outcome feedback (Connolly Gibbons et al., 2015; De Jong et al., 2014; Shimokawa, Lambert, & Smart, 2010). In outcome feedback systems, client progress is monitored and reviewed by therapists (and, in some cases, by clients as well) to guide ongoing treatment (Lam- bert, 2007). Specifically, these systems collect distress/symptomatol- ogy data from clients on a routine basis, and then compare these data with norms or expected treatment responses (see Lambert, 2007; Lutz
    et al., 2006).When a client is off-track (i.e., is projected to have a relatively poor treatment response), the therapist is alerted and is then typically provided with strategies for improving quality of care (Lambert et al., 2004; Miller, Duncan, Sorrell, & Brown, 2005). Although outcome feedback has demonstrated efficacy (e.g., Shimokawa et al., 2010), there is undoubtedly room for improve- ment. Effects for outcome feedback systems are often only small or medium in size and, in some samples, are nonsignificant (Con- nolly Gibbons et al., 2015; De Jong et al., 2014; Shimokawa et al., 2010). In a recent study, Connolly Gibbons et al. (2015) found that 64% of clients who received treatment with outcome feedback did not achieve clinically significant change. Clearly, modifications to these systems are warranted. One novel approach is to utilize process-based feedback. Pro- cess feedback may be advantageous for several reasons. First, there is evidence from educational psychology (e.g., Zimmerman & Kitsantas, 1997) that the development of a skill (e.g., consis- tently hitting a bull’s-eye on a dartboard) is enhanced through a focus on process (e.g., the mechanics of dart-throwing). From this, it stands to reason that the development of psychological well- being may be enhanced by focusing on the therapeutic processes that foster well-being. Second, certain treatment modalities
(e.g., humanistic and psychodynamic therapies) do not target symptoms per se and thus may be more compatible with a process feedback system than an outcome/symptom-based feedback system. Third, whereas therapists may view outcome feedback as evaluative and threatening (Boswell, Krauss, Miller, & Lambert, 2015), therapists may be more receptive to feedback about what is transpiring in therapy.
This article was published Online First January 23, 2017. Andrew S. McClintock, Matthew R. Perlman, Shannon M. McCarrick, Timothy Anderson, and Lina Himawan, Department of Psychology, Ohio University. The ideas and data reported in this article have not been previously disseminated. Correspondence concerning this article should be addressed to Andrew S. McClintock, 264 Porter Hall, Athens, OH 45701. E-mail: [email protected] ohio.edu. Journal of Counseling Psychology, 2017, Vol. 64, No. 3, 247-260. http://dx.doi.org/10.1037/cou0000188
Thus, process feedback has the potential to be more widely implemented. Fourth, a process feedback system could yield information that is actionable and immediately useful. For example, disagreement about treatment tasks could be readily addressed by exploring discrepancies between the implemented techniques and the client's perceptions about which techniques should be implemented.
An exemplary system that integrates process and outcome feedback is the Partners for Change Outcome Management System (PCOMS; Miller et al., 2005; Duncan, 2012). PCOMS monitors the therapeutic alliance (i.e., agreement on therapeutic goals and tasks in the context of a positive affective bond; Bordin, 1979) at every session, enabling therapists to identify and repair alliance ruptures on an ongoing basis. Although the effectiveness of PCOMS is well documented (e.g., Duncan, 2012), it is unclear
    whether PCOMS’ effectivenessis because of outcome feedback, process feedback, or both. Indeed, no research to date has exam- ined the efficacy of process feedback in and of itself. To build a process feedback system that could be widely im- plemented, it seems prudent to track processes that are common across treatment approaches (i.e., “common factors”). Common factors account for the lion’s share of outcome variance (�50%), more so than theory-specific techniques (�15%) and extrathera- peutic factors (�25%) (Cuijpers et al., 2012; Lambert, 2013). In a landmark text, Wampold and Imel (2015) highlighted three spe- cific common factors that drive change in psychotherapy: (a) client’s outcome expectations, (b) a genuinely empathic connec- tion between client and therapist, and (c) the therapeutic alliance. Outcome expectations, empathy, and the alliance are discussed in the following sections to highlight their suitability for inclusion in a process feedback system. Outcome Expectations Outcome expectations are anticipatory beliefs about a treat- ment’s personal efficacy (Constantino, Ametrano, & Greenberg, 2012). A recent meta-analysis (Constantino, Glass, Arnkoff, Ame- trano, & Smith, 2011) that included 8,016 clients across 46 inde- pendent samples revealed that client outcome expectations ac- counted for a significant, albeit modest, percentage (1.4%) of outcome variance. It is worth noting that this association was derived predominantly from studies that assessed outcome expec-
    tations before orvery early in treatment. An alternative approach is to conceptualize outcome expectations as a dynamic process, wherein the client’s expectations are influenced by the developing client-therapist relationship, the credibility of the treatment ratio- nale, the effectiveness of early treatment procedures, and so forth. That is, according to this approach, outcome expectations may evolve over the course of therapy and thus should be measured beyond the first few sessions. Underscoring the utility of monitor- ing outcome expectations over the course of treatment, Newman and Fisher (2010) found that a midtreatment assessment of expec- tancy/credibility accounted for nearly 40% of the variance in therapeutic change. Empathy Empathy is a complex, interactional process involving three temporal stages: (a) the therapist’s attunement to the client’s experience, (b) the therapist’s communication about the client’s experience, and (c) the client’s receipt of the empathic communi- cation (Barrett-Lennard, 1981; MacFarlane, Anderson, & Mc- Clintock, 2015). A focus on the third stage is particularly impor- tant because client’s perceptions of therapist empathy may have the largest effect on outcome (Elliott, Bohart, Watson, & Green- berg, 2011); a meta-analysis of 38 studies (Elliott et al., 2011) showed that client-perceived empathy accounted for over 10% of outcome variance.
    Alliance A related constructis the therapeutic alliance, which refers to the collaborative, working relationship between client and thera- pist. Bordin (1979) conceptualized the alliance as involving three components: goals, tasks, and bond. The goals component is the level of agreement between client and therapist on the objectives of treatment (e.g., anxiety reduction). The tasks component is the level of client–therapist agreement on the techniques (e.g., cogni- tive restructuring, dream interpretation) used to attain treatment goals. Finally, the bond is the degree of emotional connection (e.g., care, liking, trust) between client and therapist. In a meta- analysis of 112 studies, Horvath, Del Re, Flückiger, and Symonds (2011) found that client-rated alliance accounted for about 8% of outcome variance. Current Research In contrast to outcome feedback systems, we developed a sys- tem that focuses exclusively on psychotherapy process. We se- lected outcome expectations, empathy, and the alliance for routine monitoring because these processes: (a) are common across treat- ment approaches, (b) are emphasized in Wampold and Imel’s (2015) widely influential model of therapeutic change, and (c) are
among the strongest predictors of treatment success.
We anticipated that the provision of common factors feedback (CFF) would help therapists to identify poor process. Indeed, therapists do not always share their clients' perceptions of therapeutic process, as evidenced by relatively weak correlations between therapist-rated process and client-rated/observer-rated process (Cecero, Fenton, Frankforter, Nich, & Carroll, 2001; Greenberg, Watson, Elliott, & Bohart, 2001). Not only did we want to assist therapists in identifying poor process, but we also wanted to help therapists to intervene in ways that would improve that process. Therefore, we created a manual detailing evidence-based strategies for enhancing outcome expectations (e.g., Constantino et al., 2012; Swift & Derthick, 2013), empathy (e.g., Bohart & Greenberg, 1997; Bruce, Shapiro, Constantino, & Manber, 2010; Dowell & Berman, 2013), and the alliance (e.g., Hill & O'Brien, 1999; Safran & Muran, 2000; Safran & Muran, 2006), and, through prestudy training and ongoing supervision, encouraged study therapists to employ these strategies when common factor ratings were suboptimal.
The effects of the CFF system were tested using randomized, clinical trial (RCT) methodology. Given the exploratory nature of this research, we enrolled clinical analogues who reported at least a mild level of depressive symptoms on two separate occasions. These participants were randomly assigned to either treatment as usual (TAU) or TAU plus the CFF system (TAU + CFF). The CFF system monitored client ratings of outcome expectations, empathy, and the alliance and provided feedback on this information to clients and therapists in order to facilitate an open discussion about the therapeutic process. Our CFF system fed back information to both clients and therapists because there is some evidence that the provision of feedback to the client-therapist dyad is more effective than the provision of feedback to the therapist alone (De Jong et al., 2014) and because the provision of feedback to the client might increase the client's sense of agency in treatment (see De Jong et al., 2014; Flückiger et al., 2012; Zuroff et al., 2007). We hypothesized that clients in TAU + CFF would report greater increases in outcome expectations, empathy, and the alliance over the course of therapy, compared with clients in TAU. Because common factors are purportedly therapeutic (see
    Wampold & Imel,2015) and thus improvements in the common factors should lead to better outcomes, we further hypothesized that clients in TAU � CFF would report greater decreases in depressive symptoms and greater increase in psychological well- being over the course of therapy, compared with clients in TAU. Method Participants Clients. Seventy-nine undergraduates at a Midwestern university met inclusion criteria and were randomized to a treatment condition (see Procedure). These participants were either in their freshman (59.5%), sophomore (25.3%), junior (5.1%), or senior year (10.1%) of college, with a mean age of 19.3 years (SD � 3.0). Most (82.3%) identified as female. About 81.0% identified as White/Caucasian, 5.1% as Black or African American, 3.8% as American Indian or Alaska Native, 3.8% as Multiracial, 2.5% as Hispanic or Latino/ Latina, 2.5% as Asian or Asian American, and 1.3% as Middle Eastern. About 13.9% were currently receiving psychological or phar- macological treatment at the pretreatment assessment. Participants reported a mean BDI-II score at pretreatment (23.68; SD � 8.21) that fell in the moderate depression range (31 participants reported mild depression, 27 reported moderate depression, and 21 reported
    severe depression; see Becket al., 1996). Therapists. Client participants received treatment from one of six doctoral students in a clinical psychology training program. All therapist participants had completed graduate-level assessment and treatment courses and were involved in practicum/traineeship as- sociated with the training program. Therapists had acquired a mean of 313.17 face-to-face clinical hours (SD � 261.31) by the start of the study. Three therapists were male, and three were female. Therapists had a mean age of 26.00 years (SD � 2.19), and all identified as White/Caucasian. With regard to theoretical orienta- tion, three therapists identified as cognitive– behavioral, two iden- tified as integrative/eclectic, and one identified as humanistic. Measures Outcome expectations. The Outcome Expectations Question- naire (OEQ; Constantino, McClintock, McCarrick, Anderson, & Himawan, 2016) is a recently developed, 10-item measure of client outcome expectations. Each item reflects a facet of treatment outcome about which clients may form expectations (example item: “My self-esteem”). Items are rated on a 7-point Likert scale ranging from (0) “I expect no improvement,” to (6) “I expect very substantial improvements.” Exploratory and confirmatory factor
    analyses (Constantino etal., 2016) of the OEQ items supported a two-factor solution, with one factor pertaining to the specific problems that bring the client to treatment (example item: “My distress about the problems that brought me to treatment”), and the second factor pertaining to more global issues (example item: “My sense of purpose”). These two factors have been found to be strongly correlated (rs ranged from 0.60 to 0.71; Constantino et al., 2016). We used total OEQ scores (sum of all items) in the current study. Akin to the original research (Constantino et al., 2016), the OEQ demonstrated good internal consistency in the present study (Cronbach’s alpha � .93 at pretreatment). Empathy. The Barrett-Lennard Relationship Inventory- Empathy Scale (BLRI-E; Barrett-Lennard, 2015) is the most widely used client rated measure of empathy (Elliott et al., 2011). The 16 BLRI-E items (example item: “My counselor usually senses or realizes what I am feeling”) are rated on a 6-point Likert scale ranging from (�3) “No, I strongly feel that it is not true” to (3) “Yes, I strongly feel that it is true.” A total BLRI-E score is derived by taking the mean of all items (after reverse-scoring eight items). Past research has established the internal consistency, test–retest reliability, convergent/divergent validity, and predictive validity of the BLRI-E (see Barrett-Lennard, 2015). The BLRI- E
    exhibited acceptable internalconsistency in the current study (Cronbach’s alpha � .73 after Session 1). Therapeutic alliance. The Working Alliance Inventory-Short Form Revised (WAI-SR; Hatcher & Gillaspy, 2006) is a widely used 12-item measure of the therapeutic alliance. Each item (ex- ample item: “I feel that the things I do in therapy will help me to accomplish the changes that I want.”) is rated on a 5-point Likert scale ranging from 1 (seldom) to 5 (always) and loads onto one of three factors: goals (i.e., agreement on the goals of therapy), tasks (i.e., agreement on the tasks of therapy), and bond (i.e., the emotional connection between client and therapist). The measure has demonstrated excellent reliability, a clean factor structure, convergent validity, and predictive validity (Hatcher & Gillaspy, 2006; McClintock, Anderson, & Petrarca, 2015). The total score (used for analyses) is calculated by summing all items. The inter- nal consistency of the WAI-SR was high in the present sample (Cronbach’s alpha � .88 after Session 1). Depression. The Beck Depression Inventory-II (BDI-II; Beck, Steer, & Brown, 1996) is the most widely used measure of depressive symptoms. The measure features 21 items representing depressive symptoms. Respondents rate the presence of each symptom on a 4-point Likert scale. An example item is “Sadness” with response options (0) “I do not feel sad,” (1) “I feel sad much
of the time," (2) "I am sad all of the time," (3) "I am so sad or unhappy that I can't stand it." BDI-II total scores (sum of all items) can be categorized in the following ranges: minimal (0-13), mild (14-19), moderate (20-28), and severe (29-63). The BDI-II has sound psychometric properties in both clinical and nonclinical samples (Beck et al., 1996). The BDI-II demonstrated good internal consistency in the current study (Cronbach's alpha = .84 at pretreatment).
Psychological well-being. The Schwartz Outcome Scale-10 (SOS-10; Blais et al., 1999) is a 10-item self-report measure of psychological well-being. The SOS-10 was developed using
classical test theory and Rasch item analysis and has been employed extensively to assess the effectiveness of mental health treatments.
    Each item featuresa 7-point Likert scale ranging from 0 (never) to 6 (nearly all of the time). Sample items include “I have confidence in my ability to sustain important relationships” and “I am gener- ally satisfied with my psychological health.” The SOS-10 is scored by summing the 10 items (higher scores indicate better well- being). The measure has demonstrated good internal consistency, test–retest reliability, and convergent/discriminant validity (Hag- gerty, Blake, Naraine, Siefert, & Blais, 2010; Young, Waehler, Laux, McDaniel, & Hilsenroth, 2003). The SOS-10 exhibited high internal consistency in the present research (Cronbach’s alpha � .84 at pretreatment). Client satisfaction survey. We developed a brief client satis- faction survey to evaluate client perceptions about the CFF system. Because we were concerned that clients might find the completion of measures burdensome, we asked clients to rate the degree to which they enjoyed the completion of measures at the end of each session using a 7-point Likert scale (1 � not at all, 7 � very much). In addition, clients were asked about the degree to which the feedback reports helped improve treatment on a 7-point Likert scale (1 � not at all, 7 � very much). Clients completed the satisfaction survey at the
    end of theirtreatment. Therapist satisfaction survey. We also developed a brief therapist satisfaction survey to evaluate the degree to which ther- apists were satisfied with the CFF system and found it useful. Clinicians were asked to rate their satisfaction on a 5-point Likert scale (1 � dissatisfied, 5 � completely satisfied) and the utility of the CFF system on a 5-point Likert scale (1 � not useful, 5 � very useful). Therapists completed the satisfaction survey at the end of the research project. Procedures This study was conducted in the psychology department of a large Midwestern university during the 2015–2016 academic year. Institutional review board approval was obtained, and all ethical standards were followed; no adverse events were reported during the study. To be consistent with research on outcome feedback (e.g., see Connolly Gibbons et al., 2015), we aimed to recruit a sample of 75–100 participants. See Figure 1 for a procedure flowchart. We recruited undergraduates with depressive symptoms via the psychology department’s Web based screening system. Specifi- cally, we administered the BDI-II in the screening system (n � 1862) and recruited only those who scored in the mild range or higher (i.e., �14; see Beck et al., 1996). These students reporting
mild, moderate, or severe depressive symptoms (n = 463) were given a vague description of the study (titled "A Study of Psychotherapy") and were offered time slots; participation in the RCT was on a first-come, first-served basis. The RCT was conducted in a psychotherapy laboratory on the university's campus. Students arrived at the laboratory individually and all provided informed consent (n = 95). The OEQ, BDI-II, and SOS-10 were then administered for the pretreatment assessment. At this pretreatment assessment, 13 participants did not score in the mild range or higher on the BDI-II (i.e., score ≥14). These 13 participants were immediately deemed ineligible (and no additional data were collected) to maintain the integrity of a symptomatic sample. Therefore, to be eligible for the RCT, participants had to score in the mild range or higher on the BDI-II at both the screening assessment and pretreatment assessment (time between these assessments ranged from 3 days to 8 weeks). Participants were also excluded for exhibiting or reporting active suicidality, mania, or psychosis (n = 3). All excluded participants were referred to local mental health providers.
Figure 1. Procedure flow chart (enrollment, allocation, and analysis: 1,862 assessed at screening; 463 eligible; 95 assessed at pretreatment; 16 excluded; 79 randomized to TAU [n = 44] or TAU + CFF [n = 35]; 40 vs. 29 completed at least two sessions; 32 vs. 24 completed all five sessions).
The remaining 79 participants were randomly assigned to either TAU (n = 44) or TAU + CFF (n = 35); the first author determined condition assignment using a table of random numbers. Therapists were crossed with treatment condition (to balance therapist skill across conditions; see Heppner, Wampold, Owen,
    Thompson, & Wang,2016). Therapists were assigned clients based on mutual availability, but they did not know which condi- tion a client was assigned to until after the pretreatment assess- ment. Therapists’ beliefs about the effectiveness of each treatment condition were assessed after they were trained to use the CFF system; on a 5-point Likert scale (1 � ineffective, 5 � highly effective), therapists reported a mean rating of 3.83 (SD � 0.75) for TAU and a mean rating of 4.50 (SD � 0.55) for TAU � CFF. Therapists participated in weekly, group supervision to discuss individual cases and to maintain adherence to the CFF system. Supervision was provided by a licensed clinical psychologist with over 25 years of clinical experience. Both TAU and TAU � CFF entailed five, 50-min individual treatment sessions delivered once per week. The treatments were limited to five sessions because of department restrictions on the use of the subject pool. Five session treatments have been shown to be effective in past research (e.g., McClintock, Anderson, & Cranston, 2015). To increase external validity, therapists selected from a range of evidence-based treatment approaches based on their theoretical orientation, case conceptualization, and supervisor input. The following treatment approaches were used in TAU: cognitive– behavioral (50%), emotion-focused (22%), mindfulness/acceptance- based (13%), client-centered (13%), and interpersonal (3%).
    The following treatment approacheswere used in TAU � CFF: cognitive– behavioral (46%), emotion-focused (31%), mindfulness/acceptance- based (12%), client-centered (8%), and interpersonal (4%). As previ- ously noted, the OEQ, BDI-II, and SOS-10 were administered at pretreatment (i.e., before the first session). The OEQ, BLRI-E, and WAI-SR were administered to clients after the first session. The OEQ, BLRI-E, WAI-SR, BDI-II, and SOS-II were administered to clients after sessions two through five. In total, clients attended a mean of 4.13 sessions (SD � 1.48, range � 1–5 sessions). Clients dropped from the study for a variety of reasons (e.g., study too cumbersome, no longer interested in therapy, etc.); 69 (40 in TAU and 29 in TAU � CFF) completed at least two sessions, and 56 (32 in TAU and 24 in TAU � CFF) completed all five sessions. Clients who completed all treatment sessions were compensated with $10 and five course credits (par- tial credit was awarded for partial participation). CFF System. The CFF system was a novel procedure devel- oped for the current research. The CFF system monitors client ratings of three common factors (i.e., outcome expectations, em- pathy, and alliance) and provides feedback on this information to clients and therapists in order to facilitate an open discussion about
    the therapeutic processand to help therapists to make adjustments when process is suboptimal. In Session 1 or the beginning of Session 2 in TAU � CFF, therapists described the three common factors (outcome expecta- tions, empathy, and alliance) and provided a jargon-free rationale for using the CFF system (e.g., “Each of these components is strongly related to treatment success, and so by maximizing these components in our treatment, we might be able to maximize your improvement as well”). Clients were told that their ratings of these factors would be reviewed and discussed in session. As mentioned, clients completed the OEQ, BLRI-E, and WAI-SR after each session. For TAU � CFF clients, these ratings were entered into an Excel spreadsheet (Microsoft, 2013) that visually depicted the client’s ratings in a line graph relative to percentile-based tracks. The tracks, derived from normative data (Anderson, Patterson, McClintock, & Song, 2013; Barrett- Lennard, 2015; Constantino et al., 2016), are color-coded green (highest 33% of scores in normative data), yellow (middle 33% of scores), and red (lowest 33% of scores). High, middle, and low tracks were created for each of the following variables: outcome expectations (i.e., OEQ scores), empathy (i.e., BLRI-E scores), alliance (i.e., WAI-SR-Total scores), goals facet of the alliance (i.e., WAI-SR-Goals scores), tasks facet of the alliance (WAI- SR- Tasks scores), and the bond facet of the alliance (i.e., WAI-SR- Bond scores). In this way, clients and therapists could view the
    client’s common factorsratings over time (i.e., within-client change) as well as relative to normative data (i.e., between clients). A screen shot of the Excel output is presented in Figure 2, showing a client’s alliance scores by the beginning of the fifth session. In addition to the Excel graphs, a common factors enhancement manual was created that details the general principles underlying the CFF system and the specific strategies that can be employed to enhance outcome expectations, empathy, and the alliance.1 In creating this manual, we drew heavily from existing strategies and guidelines (e.g., Bohart & Greenberg, 1997; Bruce et al., 2010; Constantino et al., 2012; Dowell & Berman, 2013; Safran & Muran, 2000; Safran & Muran, 2006; Swift & Derthick, 2013). Prior to study initiation, therapists were asked to read the manual and to role-play the discussion of client common factors ratings and the delivery of common factors enhancement strategies. Im- portantly, while the viewing of client data in the TAU condition was forbidden, therapists were not forbidden from using the com- mon factors enhancement strategies with their TAU clients. Therapists were instructed to review the outcome expectations, empathy, and alliance graphs with their TAU � CFF clients at the beginning of Sessions 2–5 and to initiate an exploration of the client’s perspective, particularly when ratings were suboptimal (i.e., in the yellow or red tracks). TAU � CFF data discussions were designed to be collaborative between client and therapist; clients were invited to share their perceptions of therapeutic pro-
cesses, and therapists were instructed to validate the client's perceptions while employing techniques, adapted to the individual needs of the client, to bolster outcome expectations, empathy, and/or the therapeutic alliance. For example, a therapist could intervene with a client reporting low WAI-SR scores by exploring
and attempting to repair alliance ruptures (see Safran & Muran, 2000).
1 For a copy of the common factors enhancement manual, please contact the first author.
CFF system adherence. Near the end of the research project, therapists were surveyed about their adherence to the CFF system. Therapists were first asked how frequently they discussed the feedback with TAU + CFF clients; on a 5-point Likert scale (1 = never, 5 = always), therapists reported a mean rating of 4.67 (SD = 0.82). Therapists were also asked how much time they spent, in an average TAU + CFF session, discussing the feedback; with the options 0-1 min, 1-5 min, 5-10 min, 10-20 min, and >20 min, three therapists reported 1-5 min, and the other three therapists reported 5-10 min. Finally, therapists were asked about the extent to which the feedback influenced their intervention strategy; on a 5-point Likert scale (1 = not at all, 5 = substantially), therapists reported a mean rating of 3.67 (SD = 0.52).
Plan of Analysis
Correlations (r) were used to investigate associations between study measures. To assess differences on demographic and pretreatment data between conditions, independent samples t tests and chi-square tests of independence were used. An independent samples t test and a logistic regression were used to evaluate differences in dropout/number of sessions attended. To assess clinically significant change, we identified participants who evidenced a 20% reduction in BDI-II scores and fell in the nonclinical range on the BDI-II (i.e., score ≤13) by the end of treatment (see Borkovec, Newman, Pincus, & Lytle, 2002; McClintock et al., 2015; Roemer, Orsillo, & Salters-Pedneault, 2008). Satisfaction ratings were analyzed with descriptive statistics.
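The clinically significant change criterion just described can be expressed as a simple two-part rule. The sketch below is a minimal illustration of that rule (at least a 20% reduction in BDI-II scores plus an end-of-treatment score in the nonclinical range); the function and variable names are ours, not part of the study's materials.

# Minimal sketch of the clinically-significant-change rule described above:
# a client counts as improved if their BDI-II score dropped by at least 20%
# from pretreatment AND their final score is in the nonclinical range (<= 13).
# The argument names (pre_bdi, post_bdi) are hypothetical.

NONCLINICAL_CUTOFF = 13
MIN_REDUCTION = 0.20

def clinically_significant(pre_bdi: float, post_bdi: float) -> bool:
    reduction = (pre_bdi - post_bdi) / pre_bdi
    return reduction >= MIN_REDUCTION and post_bdi <= NONCLINICAL_CUTOFF

# Example: a client moving from 24 to 12 meets both parts of the criterion.
print(clinically_significant(24, 12))   # True
print(clinically_significant(24, 20))   # False (only a 17% reduction, still above 13)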
    To model changesin process (i.e., WAI-SR, BLRI-E and OEQ) and outcome (i.e., BDI-II and SOS-10) measures, a three-level hierarchical linear model (HLM) was used for each measure with sessions nested within clients and clients nested within therapists. Thus, within-client variability was modeled at Level 1, the between-client and within-therapist variability was modeled at Level 2, and the between-therapist variability was modeled at Level 3. Time/session variables were entered as Level 1 predictors. The time/session variables were centered at the first session (i.e., first session coded as 0, second session coded as 1, etc.). Because randomization to treatment conditions occurred at the client level, treatment condition was entered as a Level 2 predictor. Treatment condition was centered at TAU (i.e., TAU coded as 0, TAU � CFF coded as 1). Analyses did not include Level 3 predictors. For each process/outcome measure, an unconditional growth curve was fitted first to investigate whether scores changed sig- nificantly over time. These unconditional growth curves only included time/session variable(s) as predictor(s). If a time/session predictor was significant (i.e., significant change in scores over time), then treatment condition was added as a Level 2 predictor to investigate whether the change over time differed between TAU and TAU � CFF.
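As an illustration of the coding described in this paragraph, the sketch below shows one way the Level 1 time variables and the Level 2 treatment-condition variable might be constructed before model fitting. It assumes the log version is the natural log of the raw session number (the article does not specify the exact form), and the names used here are hypothetical rather than taken from the authors' analysis files.

# Sketch of the predictor coding implied above (not the authors' code).
# Session is centered at the first session (0, 1, 2, 3, 4); the log version is
# assumed to be log(session number); treatment condition is centered at TAU
# (TAU = 0, TAU + CFF = 1).
import math

def code_predictors(session_number: int, condition: str) -> dict:
    return {
        "session": session_number - 1,              # 0 at Session 1
        "log_session": math.log(session_number),    # also 0 at Session 1
        "tc": 0 if condition == "TAU" else 1,       # TAU + CFF coded 1
    }

print(code_predictors(1, "TAU"))       # {'session': 0, 'log_session': 0.0, 'tc': 0}
print(code_predictors(5, "TAU+CFF"))   # {'session': 4, 'log_session': 1.609..., 'tc': 1}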
To account for the different shapes that the growth curve might take, four different unconditional growth curves were fitted to the data, and the best model was obtained by comparing the information criteria (i.e., Akaike Information Criteria [AIC] and Bayesian Information Criteria [BIC]). The four unconditional growth curves were as follows: (a) a linear unconditional growth curve (i.e., a model with only a linear term of session number included as the Level 1 predictor) to assess the possibility that scores decrease or increase at a constant rate over time; (b) a log unconditional growth curve (i.e., a model with only a log of session number included as the Level 1 predictor) to assess the possibility that scores decrease or increase at a faster rate during the early sessions, then decrease or increase at a slower rate during the later sessions; (c) a quadratic unconditional growth curve (i.e., a model with linear and quadratic terms of session number as the Level 1 predictors) to assess the possibility that scores first decrease over time then increase, or first increase then decrease; and (d) a cubic unconditional growth curve (i.e., a model with linear, quadratic, and cubic terms of session number as the Level 1 predictors) to assess the possibility that scores decrease first over time, then increase before decreasing again, or increase first, then decrease before increasing again.
Figure 2. Example of feedback graph. See the online article for the color version of this figure.
For illustration purposes, the model fitted for the linear unconditional growth curve is provided below:
Level 1: (Measure)tij = π0ij + π1ij(Session)tij + etij
Level 2: π0ij = β00j + r0ij
         π1ij = β10j + r1ij
Level 3: β00j = γ000 + u00j
         β10j = γ100 + u10j
The complete model: (Measure)tij = γ000 + γ100(Session)tij + [u00j + u10j(Session)tij +
r0ij + r1ij(Session)tij + etij]
In the previous model, (Measure)tij is the process/outcome measure (i.e., BDI-II, SOS-10, WAI-SR, BLRI-E, or OEQ) at time t for client i seeing therapist j; because Session was centered at the first session (i.e., immediately before the first session for BDI-II, SOS-10, and OEQ and immediately after the first session for BLRI-E and WAI-SR), γ000 is the average of the scores at the first session, and γ100 is the rate of change of the scores over one unit of time (i.e., session). A significant γ000 means that the average of the scores at the first session is significantly different from zero. A significant γ100 means that the scores change significantly over time (i.e., the rate of change of the scores is significantly different from zero). The parameters inside the brackets are the random effects: etij is the session variability within a client; r0ij and r1ij are client variability within a therapist around γ000 and γ100, respectively; and u00j and u10j are therapist variability around γ000 and γ100, respectively. In the beginning of the model fitting, γ000 and γ100 were treated as random effects at both Levels 2 and 3. However, when there was an indication that the model was
overspecified, these random effects were dropped one by one, starting from the highest level, until the model fit properly.
Also for illustration purposes, the linear model fitted with treatment condition (TC) as a Level 2 predictor is provided here:
Level 1: (Measure)tij = π0ij + π1ij(Session)tij + etij
Level 2: π0ij = β00j + β01j(TC)ij + r0ij
         π1ij = β10j + β11j(TC)ij + r1ij
Level 3: β00j = γ000 + u00j
         β01j = γ010
         β10j = γ100 + u10j
         β11j = γ110
The complete model: (Measure)tij = γ000 + γ010(TC)ij + γ100(Session)tij + γ110(TC)ij(Session)tij + [u00j + u10j(Session)tij + r0ij + r1ij(Session)tij + etij]
In the previous model, (Measure)tij is the process/outcome measure at time t for client i seeing therapist j; because Session was centered at the first session and Treatment Condition was centered at
TAU, γ000 is the average of the TAU scores at the first session; γ010 is the effect of TAU + CFF on γ000; γ100 is the rate of change of TAU scores over one unit of time (i.e., session); and γ110 is the effect of TAU + CFF on γ100. A significant γ000 means that the average of the TAU scores at the first session is significantly different from zero; a significant γ010 means that the average of TAU + CFF scores at the first session is significantly different from that of TAU; a significant γ100 means that the rate of change of TAU scores is significantly different from zero; and a significant γ110 means that the rate of change of TAU + CFF scores is significantly different from that of TAU. The parameters inside the brackets are the random effects as described previously. In the beginning of the model fitting, γ000 and γ100 were also treated as random effects at both Levels 2 and 3. However, when there was an indication that this model was overspecified, these random effects were dropped one by one, starting from the highest level, until the model fit properly. Because the main goal of the study was to investigate whether there was a significant difference in the rate of change of the process/
outcome scores over time between clients in the TAU and TAU + CFF conditions, the focus of the study was γ110. Because clients were randomized into the two conditions, we did not expect that the two conditions would significantly differ in average scores at the first session (i.e., γ010).
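To make the role of these fixed effects concrete, the following sketch (a minimal illustration, not the authors' code) shows how γ000, γ010, γ100, and γ110 combine into model-implied condition means across sessions. The coefficient values are placeholders chosen for illustration, not estimates reported in this study, and the random effects are omitted.

# Sketch of how the fixed effects in the conditional model combine:
# gamma_000 + gamma_010*TC + (gamma_100 + gamma_110*TC) * Session.
# Coefficient values below are placeholders, not study estimates.

def implied_mean(g000, g010, g100, g110, session, tc):
    """Fixed-effects part of the conditional growth model (random effects omitted)."""
    return g000 + g010 * tc + (g100 + g110 * tc) * session

g000, g010, g100, g110 = 40.0, 0.0, 4.0, 2.5   # placeholder values

for session in range(5):                        # Session is centered at 0
    tau = implied_mean(g000, g010, g100, g110, session, tc=0)
    tau_cff = implied_mean(g000, g010, g100, g110, session, tc=1)
    print(f"Session {session + 1}: TAU = {tau:.1f}, TAU+CFF = {tau_cff:.1f}")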
Results
Preliminary Analyses
Data were evaluated and found to be within normal limits in regard to outliers and degree of normality. Correlations (r) between study measures at the first session are presented in Table 1. Correlation size ranged from trivial (e.g., the correlation between BLRI-E and SOS-10) to large (e.g., the correlation between BLRI-E and WAI-SR), although even the large correlations were not so large as to suggest measure redundancy. At pretreatment, TAU participants and TAU + CFF participants did not significantly differ (p > .05) on any of the demographic and pretreatment data, implying that randomization was successful. An independent samples t test showed that TAU participants and TAU + CFF participants did not significantly differ (p > .05) on the number of sessions attended. Similarly, a logistic regression showed that TAU and TAU + CFF did not significantly differ (p > .05) in the number of participants who dropped out (i.e., did not complete all five sessions). Of the 79 enrolled participants (TAU n = 44, TAU + CFF n = 35), 45.6% (TAU n = 21, TAU + CFF n = 15) achieved clinically significant change (i.e., evidenced a 20% reduction in BDI-II scores and fell in the nonclinical range on the BDI-II by the end of treatment).
Client Satisfaction Ratings
At the end of treatment, clients in TAU + CFF were asked about the degree to which they enjoyed the completion of measures at the end of each session; clients reported a mean rating of 5.15 (SD = 1.35) on a 7-point Likert scale (1 = not at all, 7 = very much). Clients in TAU + CFF were also asked about the degree to which the feedback reports helped improve treatment; clients reported a mean
rating of 5.63 (SD = 1.15) on a 7-point Likert scale (1 = not at all, 7 = very much).
Therapist Satisfaction Ratings
At the end of the research project, therapists were asked about their level of satisfaction with the CFF system; therapists reported a mean rating of 4.17 (SD = 0.75) on a 5-point Likert scale (1 = dissatisfied, 5 = completely satisfied). Therapists were also asked about the degree to which they found the CFF system to be useful; therapists reported a mean rating of 4.00 (SD = 0.89) on a 5-point Likert scale (1 = not useful, 5 = very useful).
Between-Group Effects on Process and Outcome
A three-level HLM was fitted for each process and outcome variable. Comparison of AIC and BIC showed that BDI-II, SOS-10, and OEQ were best represented with linear unconditional growth curves, while WAI-SR and BLRI-E were best represented with log unconditional growth curves. The final unconditional growth curves were:
(1) BDI-II
(BDI-II)tij = γ000 + γ100(Session)tij + [u00j + u10j(Session)tij + r0ij + r1ij(Session)tij + etij]
(2) SOS-10
(SOS-10)tij = γ000 + γ100(Session)tij + [u10j(Session)tij + r0ij + r1ij(Session)tij + etij]
(3) WAI-SR
(WAI-SR)tij = γ000 + γ100(LogSession)tij + [r0ij + r1ij(LogSession)tij + etij]
(4) BLRI-E
(BLRI-E)tij = γ000 + γ100(LogSession)tij + [u10j(LogSession)tij + r0ij + r1ij(LogSession)tij + etij]
(5) OEQ
(OEQ)tij = γ000 + γ100(Session)tij + [r0ij + r1ij(Session)tij + etij]
Results indicated that the average of the scores at the first session (i.e., γ000) and the rate of change over time/session (i.e., γ100) were
significantly different from zero. As expected, BDI-II scores decreased over time, while SOS-10, WAI-SR, BLRI-E, and OEQ scores increased over time. Over one unit of time/session, BDI-II scores decreased by 2.74 points, SOS-10 scores increased by 2.42 points, WAI-SR scores increased by 5.75 points, BLRI-E scores increased by 0.32 points, and OEQ scores increased by 2.21 points. A summary of the unconditional growth curve results is presented in Table 2.
In the next set of analyses, treatment condition was entered as a Level 2 predictor. The final conditional growth curves were:
(1) BDI-II
(BDI-II)tij = γ000 + γ010(TC)ij + γ100(Session)tij + γ110(TC)ij(Session)tij + [u00j + u10j(Session)tij + r0ij + r1ij(Session)tij + etij]
(2) SOS-10
(SOS-10)tij = γ000 + γ010(TC)ij + γ100(Session)tij + γ110(TC)ij(Session)tij + [u10j(Session)tij + r0ij + r1ij(Session)tij + etij]
(3) WAI-SR
(WAI-SR)tij = γ000 + γ010(TC)ij + γ100(LogSession)tij + γ110(TC)ij(LogSession)tij + [u10j(LogSession)tij
+ r0ij + r1ij(LogSession)tij + etij]
(4) BLRI-E
(BLRI-E)tij = γ000 + γ010(TC)ij + γ100(LogSession)tij + γ110(TC)ij(LogSession)tij + [u10j(LogSession)tij + r0ij + r1ij(LogSession)tij + etij]
(5) OEQ
(OEQ)tij = γ000 + γ010(TC)ij + γ100(Session)tij + γ110(TC)ij(Session)tij + [r0ij + r1ij(Session)tij + etij]
Results indicated that for each process/outcome measure, the average of TAU scores at the first session (i.e., γ000) was significantly different from zero. For each process/outcome measure, the effect of the TAU + CFF condition on γ000 (i.e., γ010) was not significant.
Table 1
Means (SDs) and Correlations for Study Measures at First Session (N = 79)
Study measures   M (SD)          BDI-II   SOS-10    WAI-SR   BLRI-E   OEQ
BDI-II           23.68 (8.21)    --       -.65***   -.17     .22      .03
SOS-10           31.26 (8.32)             --        .19      -.02     .08
WAI-SR           43.60 (7.83)                       --       .63***   .48***
BLRI-E           1.61 (.50)                                  --       .43***
OEQ              39.42 (11.23)                                        --
Note. BDI-II = Beck Depression Inventory-II (before first session); SOS-10 = Schwartz Outcome Scale-10 (before first session); WAI-SR = Working Alliance Inventory-Short Form Revised (after first session); BLRI-E = Barrett-Lennard Relationship Inventory-Empathy Scale (after first session); OEQ = Outcome Expectations Questionnaire (after first session).
*** p < .001.
This implies that, as would be expected given randomization,
TAU and TAU + CFF did not significantly differ in process/outcome scores at the first session.
Results also indicated that for each process/outcome measure, the rate of change of TAU scores over time/session (i.e., γ100) was significantly different from zero; directions of change were as expected (i.e., BDI-II scores decreased and SOS-10,
WAI-SR, BLRI-E, and OEQ scores increased). Over one unit of time/session in the TAU condition, BDI-II scores decreased by 2.46 points, SOS-10 scores increased by 2.18 points, WAI-SR scores increased by 4.63 points, BLRI-E scores increased by 0.22 points, and OEQ scores increased by 2.01 points. The effect of the TAU + CFF condition on γ100 (i.e., γ110) was significant for WAI-SR and BLRI-E. That is, while TAU and TAU + CFF did not significantly differ in the rates of change of BDI-II, SOS-10, and OEQ scores, the two conditions significantly differed in the rates of change of WAI-SR and BLRI-E scores. Specifically, participants in TAU + CFF reported greater increases in WAI-SR and BLRI-E scores relative to TAU participants (over one unit of time/session, WAI-SR scores increased by 2.61 points more in TAU + CFF relative to TAU, and BLRI-E scores increased by 0.20 points more in TAU + CFF relative to TAU). A summary of the conditional growth curve results is presented in Table 3. Figures 3 and 4 depict mean BLRI-E and WAI-SR scores at each session for TAU and TAU + CFF.
We calculated the proportions of variance explained at the Level 1 coefficients (i.e., π0ij and π1ij) by treatment condition, above and beyond the time/session variable. Treatment condition accounted for 0.59%, 0.47%, 0.49%, 1.01%, and 0.36% of the variance in π0ij for BDI-II, SOS-10, WAI-SR, BLRI-E, and OEQ scores, respectively, and accounted for 2.77%, 0.81%, 9.01%, 13.67%, and 0.70% of the variance in π1ij for BDI-II, SOS-10, WAI-SR, BLRI-E, and OEQ scores, respectively.
Discussion
The present research marks the first attempt to develop and evaluate a common factors feedback (CFF) intervention. Results suggest that our CFF system holds promise. Clients and
therapists reported satisfaction with the CFF system and endorsed its utility. Multilevel modeling showed that, while there were no between-groups effects on client ratings of outcome expectations, depressive symptoms, and psychological well-being, treatment condition had a medium-to-large sized effect (see Lambert, 2013) on empathy (accounting for about 13.7% of the variability) and alliance ratings (accounting for about 9.0% of the variability). Specifically, clients who received treatment with CFF reported greater increases in perceived empathy and alliance over the course of treatment relative to clients who received treatment as usual. These results imply that our brief feedback intervention, which on average took less than 10 min to implement per session, may have enhanced the process of psychotherapy.
Although we did not assess how the CFF system produced these results, we can speculate about potential mechanisms. Outcome feedback systems, on which our CFF system is based, are effective in part because they help to identify patients who are at risk for treatment failure. Research shows that, unaided, therapists are relatively poor in identifying off-track patients
serves a critical need. This same logic may translate to process-based feedback systems. Specifically, given that therapist-rated process is only weakly correlated with both client-rated and observer-rated process (Cecero et al., 2001; Greenberg et al., 2001), it stands to reason that poor process is frequently missed. Our CFF system might thus be useful because it helps therapists perform an otherwise challenging task: the identification of poor process.

Once poor process is recognized, therapists are presumably in a better position to tailor their behavior to the specific needs of the client (see therapist responsiveness; Stiles, Honos-Webb, & Surko, 1998). For example, therapists could respond to low empathy ratings not only by exploring areas of misunderstanding but also by increasing reflective listening and validation. This adaptation of behavior to match the client's needs would likely improve process and the client's perceptions of that process.

Although this discussion focuses on therapist behavior, it is critical that we not lose sight of the important role that clients can play in process development. Flückiger and colleagues (2012) showed that clients who are explicitly encouraged to be proactive participants in their treatment tend to report greater improvements in the alliance relative to clients who do not receive this encouragement. Thus, it could be that CFF facilitates increased client agency and engagement in treatment, which in turn improves process (see also Ryan & Deci, 2008; Zuroff et al., 2007). Examination of these and other mechanisms should be a high priority for future research.

Our finding that CFF influenced empathy and alliance but not treatment outcome ratings may, at first blush, appear inconsistent with common factors theory. That is, if common factors are therapeutic, then an improvement in the common factors should coincide with an improvement in treatment outcomes. A number of factors could explain our nonsignificant effects on treatment outcome (as well as on outcome expectations). First, our study was underpowered for detecting small between-groups effects. Second, our short treatment length (i.e., five sessions) may have precluded some between-groups effects; indeed, the benefits of outcome feedback are relatively minimal over the first few sessions (De Jong et al., 2014). Not only could treatment length be an issue, but the amount of time devoted to feedback discussion may have been insufficient; feedback was discussed for less than 10 min per session on average, so a greater focus on feedback may be needed to maximize its effects. Alternatively, more time spent on process leaves less time for the client's presenting issue, and so an increased focus on process may unwittingly attenuate treatment outcome effects. Yet another explanation for our nonsignificant effects pertains to our analog sample: by enrolling undergraduates with only moderate depressive symptoms (M BDI-II score = 23.68), floor/ceiling effects could have contributed to our nonsignificant findings. It could also be the case that feedback effects are specific to the variables that are monitored; outcome feedback may primarily affect outcome, and process feedback may primarily affect process. Future research is needed to determine whether our nonsignificant findings reflect Type II errors or inherent limitations of the feedback system.
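To make the underpowering point concrete, here is a rough sanity check rather than an analysis the authors report: treating the two-arm comparison as a simple independent-samples t test with roughly 40 clients per condition (and ignoring the multilevel structure), power can be approximated with statsmodels. The effect-size values are conventional benchmarks, not estimates from this study.

# Approximate power for a two-group comparison with ~40 clients per arm.
# Simplification of the study's multilevel analysis; effect sizes are
# Cohen's benchmarks, not values estimated from the data.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
for d in (0.2, 0.5, 0.8):  # small, medium, large
    power = power_calc.solve_power(effect_size=d, nobs1=40, alpha=0.05, ratio=1.0)
    print(f"d = {d}: power ≈ {power:.2f}")
# With n ≈ 40 per group, power for d = 0.2 falls well below the conventional .80,
# consistent with the authors' point about small between-groups effects.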
There are additional points to be made regarding this study's limitations. Although the array of implemented evidence-based therapies increases the generalizability of the findings, generalizability is limited by the short therapy duration, participant compensation, and use of a mostly White and female analog sample. A different pattern of results could have emerged, for instance, with clients who are more difficult to engage in therapy (e.g., clients with personality disorders, chronic depression, etc.) and who require longer-term treatment (see De Jong et al., 2014). It is also worth noting that study therapists were involved in the design of the CFF system and, as such, may have had a strong allegiance (see De Jong, van Sluis, Nugter, Heiser, & Spinhoven, 2012; Falkenström, Markowitz, Jonker, Philips, & Holmqvist, 2013) to the TAU + CFF condition. Demand characteristics and social desirability bias may have influenced the present results as well; TAU + CFF clients knew their data would be reviewed and discussed with their therapist, and so they may have overreported process quality. This concern is somewhat mitigated, however, by Reese et al.'s (2013) finding that alliance scores are not influenced by the presence of a therapist or the knowledge that one's scores will be reviewed by the therapist. Nevertheless, in light of these shortcomings, we recommend that future investigations (a) enroll demographically diverse, treatment-seeking participants; (b) employ longer treatments; and (c) evaluate the CFF system relative to an outcome-based feedback system.

The CFF system developed in the present study represents a novel synthesis of outcome feedback systems and the common
factors literature. Our CFF system monitors three common factors (i.e., outcome expectations, empathy, and alliance) over the course of therapy, visually presents these ratings (relative to normative data) to clients and therapists, and provides useful, empirically based strategies for improving suboptimal process. This approach has a number of strengths. First, because identifying poor process is the first step in repairing process, the CFF system fills a vital role in providing both a signal for off-track process and a context for collaboratively addressing concerns. Second, the CFF system yields targeted, actionable information that has direct implications for treatment planning. For example, low ratings on the tasks component of the alliance can be readily addressed by exploring discrepancies between the implemented techniques and the client's perceptions about which techniques should be implemented in therapy. A third strength of the CFF system is that it focuses on factors common across treatments and thus could be useful in a wide range of settings and contexts. Finally, whereas outcome feedback is often met with fear and mistrust (Boswell, Kraus, Miller, & Lambert, 2015), feedback about what is simply transpiring in therapy might be more palatable for therapists and thus has the potential to be widely implemented. We are hopeful that our CFF system will advance the outcome feedback and common factors literatures and will improve the care provided to psychotherapy clients.
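As an illustration only (the article does not describe its software implementation), the norm-referenced flagging at the heart of such a feedback system can be sketched in a few lines. The measure names, normative values, and cutoff rule below are hypothetical, not those used in the study.

# Hypothetical sketch of norm-referenced flagging for common factors feedback.
# Norms and the z-score cutoff are illustrative assumptions only.
NORMS = {  # (normative mean, normative SD) per common factor
    "outcome_expectations": (40.0, 10.0),
    "empathy": (1.5, 0.6),
    "alliance": (48.0, 9.0),
}

def flag_off_track(session_ratings: dict, z_cutoff: float = -1.0) -> list:
    """Return the common factors whose rating falls well below the norm."""
    flagged = []
    for factor, rating in session_ratings.items():
        mean, sd = NORMS[factor]
        z = (rating - mean) / sd
        if z <= z_cutoff:  # e.g., one SD or more below the normative mean
            flagged.append(factor)
    return flagged

# Example: low empathy and alliance ratings would be surfaced to the dyad,
# together with suggested repair strategies, before the next session.
print(flag_off_track({"outcome_expectations": 42, "empathy": 0.8, "alliance": 35}))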
[Figure 3. Mean empathy scores over time for TAU and TAU + CFF. Note. BLRI-E = Barrett-Lennard Relationship Inventory-Empathy Scale; TAU = treatment as usual; TAU + CFF = treatment as usual plus common factors feedback.]
[Figure 4. Mean alliance scores over time for TAU and TAU + CFF. Note. WAI-SR = Working Alliance Inventory-Short Form Revised; TAU = treatment as usual; TAU + CFF = treatment as usual plus common factors feedback.]

References

Anderson, T., Patterson, C. L., McClintock, A. S., & Song, X. (2013). Factorial and predictive validity of the expectations about counseling-brief (EAC-B) with clients seeking counseling. Journal of Counseling Psychology, 60, 496–507. http://dx.doi.org/10.1037/a0034222
Barrett-Lennard, G. T. (1981). The empathy cycle: Refinement of a nuclear concept. Journal of Counseling Psychology, 28, 91–100. http://dx.doi.org/10.1037/0022-0167.28.2.91
Barrett-Lennard, G. T. (2015). The Relationship Inventory: A complete resource and guide. West Sussex, United Kingdom: Wiley.
Beck, A. T., Steer, R. A., & Brown, G. K. (1996). Beck Depression
Inventory (2nd ed.). San Antonio, TX: The Psychological Corporation.
Blais, M. A., Lenderking, W. R., Baer, L., deLorell, A., Peets, K., Leahy, L., & Burns, C. (1999). Development and initial validation of a brief mental health outcome measure. Journal of Personality Assessment, 73, 359–373. http://dx.doi.org/10.1207/S15327752JPA7303_5
Bohart, A. C., & Greenberg, L. S. (1997). Empathy reconsidered: New directions in psychotherapy. Washington, DC: American Psychological Association. http://dx.doi.org/10.1037/10226-000
Bordin, E. S. (1979). The generalizability of the psychoanalytic concept of the working alliance. Psychotherapy: Theory, Research, & Practice, 16, 252–260. http://dx.doi.org/10.1037/h0085885
Borkovec, T. D., Newman, M. G., Pincus, A. L., & Lytle, R. (2002). A component analysis of cognitive-behavioral therapy for generalized anxiety disorder and the role of interpersonal problems. Journal of Consulting and Clinical Psychology, 70, 288–298. http://dx.doi.org/10.1037/0022-006X.70.2.288
Boswell, J. F., Kraus, D. R., Miller, S. D., & Lambert, M. J. (2015).
Implementing routine outcome monitoring in clinical practice: Benefits, challenges, and solutions. Psychotherapy Research, 25, 6–19. http://dx.doi.org/10.1080/10503307.2013.817696
Bruce, N., Shapiro, S. L., Constantino, M. J., & Manber, R. (2010). Psychotherapist mindfulness and the psychotherapy process. Psychotherapy: Theory, Research, Practice, Training, 47, 83–97. http://dx.doi.org/10.1037/a0018842
Cecero, J. J., Fenton, L. R., Frankforter, T. L., Nich, C., & Caroll, K. M. (2001). Focus on therapeutic alliance: The psychometric properties of six measures across three instruments. Psychotherapy: Theory, Research, Practice, Training, 38, 1–11. http://dx.doi.org/10.1037/0033-3204.38.1.1
Connolly Gibbons, M. B., Kurtz, J. E., Thompson, D. L., Mack, R. A., Lee, J. K., Rothbard, A., . . . Crits-Christoph, P. (2015). The effectiveness of clinician feedback in the treatment of depression in the community mental health system. Journal of Consulting and Clinical Psychology, 83, 748–759. http://dx.doi.org/10.1037/a0039302
Constantino, M. J., Ametrano, R. M., & Greenberg, R. P. (2012). Clinician interventions and participant characteristics that foster adaptive
patient expectations for psychotherapy and psychotherapeutic change. Psychotherapy, 49, 557–569. http://dx.doi.org/10.1037/a0029440
Constantino, M. J., Glass, C. R., Arnkoff, D. B., Ametrano, R. M., & Smith, J. Z. (2011). Expectations. In J. Norcross (Ed.), Psychotherapy relationships that work: Evidence-based responsiveness (2nd ed., pp. 181–192). New York, NY: Oxford University Press. http://dx.doi.org/10.1093/acprof:oso/9780199737208.003.0018
Constantino, M. J., McClintock, A. S., McCarrick, S. M., Anderson, T., & Himawan, L. (2016). Outcome Expectations Questionnaire. Manuscript in preparation.
Cuijpers, P., Driessen, E., Hollon, S. D., van Oppen, P., Barth, J., & Andersson, G. (2012). The efficacy of non-directive supportive therapy for adult depression: A meta-analysis. Clinical Psychology Review, 32, 280–291.
De Jong, K., Timman, R., Hakkaart-Van Roijen, L., Vermeulen, P., Kooiman, K., Passchier, J., & Van Busschbach, J. (2014). The
effect of outcome monitoring feedback to clinicians and patients in short and long-term psychotherapy: A randomized controlled trial. Psychotherapy Research, 24, 629–639. http://dx.doi.org/10.1080/10503307.2013.871079
De Jong, K., van Sluis, P., Nugter, M. A., Heiser, W. J., & Spinhoven, P. (2012). Understanding the differential impact of outcome monitoring: Therapist variables that moderate feedback effects in a randomized clinical trial. Psychotherapy Research, 22, 464–474. http://dx.doi.org/10.1080/10503307.2012.673023
Dowell, N. M., & Berman, J. S. (2013). Therapist nonverbal behavior and perceptions of empathy, alliance, and treatment credibility. Journal of Psychotherapy Integration, 23, 158–165. http://dx.doi.org/10.1037/a0031421
Duncan, B. L. (2012). The partners for change outcome management system (PCOMS): The heart and soul of change project. Canadian Psychology, 53, 93–104. http://dx.doi.org/10.1037/a0027762
Elliott, R., Bohart, A. G., Watson, J. C., & Greenberg, L. S. (2011). Empathy. In J. Norcross (Ed.), Psychotherapy relationships that
work: Evidence-based responsiveness (2nd ed., pp. 89–108). New York, NY: Oxford University Press. http://dx.doi.org/10.1093/acprof:oso/9780199737208.003.0006
Falkenström, F., Markowitz, J. C., Jonker, H., Philips, B., & Holmqvist, R. (2013). Can psychotherapists function as their own controls? Meta-analysis of the crossed therapist design in comparative psychotherapy trials. The Journal of Clinical Psychiatry, 74, 482–491. http://dx.doi.org/10.4088/JCP.12r07848
Flückiger, C., Del Re, A. C., Wampold, B. E., Znoj, H., Caspar, F., & Jörg, U. (2012). Valuing clients' perspective and the effects on the therapeutic alliance: A randomized controlled study of an adjunctive instruction. Journal of Counseling Psychology, 59, 18–26. http://dx.doi.org/10.1037/a0023648
Greenberg, L. S., Watson, J. C., Elliott, R., & Bohart, A. C. (2001). Empathy. Psychotherapy: Theory, Research, Practice, Training, 38, 380–384. http://dx.doi.org/10.1037/0033-3204.38.4.380
Haggerty, G., Blake, M., Naraine, M., Siefert, C., & Blais, M. A. (2010). Construct validity of the Schwartz Outcome Scale-10: Comparisons to
interpersonal distress, adult attachment, alexithymia, the five-factor model, romantic relationship length and ratings of childhood memories. Clinical Psychology & Psychotherapy, 17, 44–50.
Hannan, C., Lambert, M. J., Harmon, C., Nielsen, S. L., Smart, D. W., Shimokawa, K., & Sutton, S. W. (2005). A lab test and algorithms for identifying clients at risk for treatment failure. Journal of Clinical Psychology, 61, 155–163. http://dx.doi.org/10.1002/jclp.20108
Hatcher, R. L., & Gillaspy, J. A. (2006). Development and validation of a revised short version of the Working Alliance Inventory. Psychotherapy Research, 16, 12–25. http://dx.doi.org/10.1080/10503300500352500
Heppner, P. P., Wampold, B. E., Owen, J., Thompson, M. N., & Wang, K. T. (2016). Research design in counseling. Boston, MA: Cengage Learning.
Hill, C. E., & O'Brien, K. M. (1999). Helping skills: Facilitating exploration, insight, and action. Washington, DC: American Psychological Association.
Horvath, A. O., Del Re, A. C., Flückiger, C., & Symonds, D. (2011). Alliance in individual psychotherapy. In J. Norcross (Ed.),
Psychotherapy relationships that work: Evidence-based responsiveness (2nd ed., pp. 25–69). New York, NY: Oxford University Press. http://dx.doi.org/10.1093/acprof:oso/9780199737208.003.0002
Lambert, M. J. (2007). Presidential address: What we have learned from a decade of research aimed at improving psychotherapy outcome in routine care. Psychotherapy Research, 17, 1–14. http://dx.doi.org/10.1080/10503300601032506
Lambert, M. J. (2013). The efficacy and effectiveness of psychotherapy. In M. J. Lambert (Ed.), Bergin and Garfield's handbook of psychotherapy and behavior change (pp. 169–218). Oxford, England: Wiley.
Lambert, M. J., Whipple, J. L., Harmon, C., Shimokawa, K., Slade, K., & Christofferson, C. (2004). Clinical support tools manual. Provo, UT: Department of Psychology, Brigham Young University.
Lutz, W., Lambert, M. J., Harmon, S. C., Tschitsaz, A., Schürch, E., & Stulz, N. (2006). The probability of treatment success, failure and duration: What can be learned from empirical data to support decision making in clinical practice? Clinical Psychology & Psychotherapy, 13,
223–232. http://dx.doi.org/10.1002/cpp.496
MacFarlane, P., Anderson, T., & McClintock, A. S. (2015). Empathy from the client's perspective: A grounded theory analysis. Psychotherapy Research. Advance online publication. http://dx.doi.org/10.1080/10503307.2015.1090038
McClintock, A. S., Anderson, T., & Cranston, S. (2015). Mindfulness therapy for maladaptive interpersonal dependency: A preliminary randomized controlled trial. Behavior Therapy, 46, 856–868. http://dx.doi.org/10.1016/j.beth.2015.08.002
McClintock, A. S., Anderson, T., & Petrarca, A. (2015). Treatment expectations, alliance, session positivity, and outcome: An investigation of a three-path mediation model. Journal of Clinical Psychology, 71, 41–49. http://dx.doi.org/10.1002/jclp.22119
Microsoft. (2013). Microsoft Excel. Redmond, WA: The Microsoft Corporation.
Miller, S. D., Duncan, B. L., Sorrell, R., & Brown, G. S. (2005). The partners for change outcome management system. Journal of Clinical Psychology, 61, 199–208. http://dx.doi.org/10.1002/jclp.20111
Newman, M. G., & Fisher, A. J. (2010). Expectancy/credibility change as a mediator of cognitive behavioral therapy for generalized anxiety disorder: Mechanism of action or proxy for symptom change? International Journal of Cognitive Therapy, 3, 245–261. http://dx.doi.org/10.1521/ijct.2010.3.3.245
Reese, R. J., Gillaspy, J. A., Jr., Owen, J. J., Flora, K. L., Cunningham, L. C., Archie, D., & Marsden, T. (2013). The influence of demand characteristics and social desirability on clients' ratings of the therapeutic alliance. Journal of Clinical Psychology, 69, 696–709. http://dx.doi.org/10.1002/jclp.21946
Roemer, L., Orsillo, S. M., & Salters-Pedneault, K. (2008). Efficacy of an acceptance-based behavior therapy for generalized anxiety disorder: Evaluation in a randomized controlled trial. Journal of Consulting and Clinical Psychology, 76, 1083–1089. http://dx.doi.org/10.1037/a0012720
Ryan, R. M., & Deci, E. L. (2008). A self-determination theory approach to psychotherapy: The motivational basis for effective change. Canadian Psychology, 49, 186–193. http://dx.doi.org/10.1037/a0012753
Safran, J. D., & Muran, J. C. (2000). Negotiating the
therapeutic alliance: A relational treatment guide. New York, NY: Guilford Press.
Safran, J. D., & Muran, J. C. (2006). Resolving therapeutic impasses: A training DVD. Santa Cruz, CA: Custom-flix.com.
Shimokawa, K., Lambert, M. J., & Smart, D. W. (2010). Enhancing treatment outcome of patients at risk of treatment failure: Meta-analytic and mega-analytic review of a psychotherapy quality assurance system. Journal of Consulting and Clinical Psychology, 78, 298–311. http://dx.doi.org/10.1037/a0019247
Stiles, W. B., Honos-Webb, L., & Surko, M. (1998). Responsiveness in psychotherapy. Clinical Psychology: Science and Practice, 5, 439–458. http://dx.doi.org/10.1111/j.1468-2850.1998.tb00166.x
Swift, J. K., & Derthick, A. O. (2013). Increasing hope by addressing clients' outcome expectations. Psychotherapy, 50, 284–287. http://dx.doi.org/10.1037/a0031941
Wampold, B. E., & Imel, Z. E. (2015). The great psychotherapy debate: The evidence for what makes psychotherapy work (2nd ed.). New York, NY: Routledge.
Young, J. L., Waehler, C. A., Laux, J. M., McDaniel, P. S., & Hilsenroth, M. J. (2003). Four studies extending the utility of the Schwartz Outcome Scale (SOS-10). Journal of Personality Assessment, 80, 130–138. http://dx.doi.org/10.1207/S15327752JPA8002_02
Zimmerman, B. J., & Kitsantas, A. (1997). Development phases in self-regulation: Shifting from process goals to outcome goals. Journal of Educational Psychology, 89, 29–36. http://dx.doi.org/10.1037/0022-0663.89.1.29
Zuroff, D. C., Koestner, R., Moskowitz, D. S., McBride, C., Marshall, M., & Bagby, M. R. (2007). Autonomous motivation for therapy: A new
common factor in brief treatments for depression. Psychotherapy Research, 17, 137–147. http://dx.doi.org/10.1080/10503300600919380

Received August 20, 2016
Revision received November 7, 2016
Accepted November 7, 2016
Is Virtual Reality Exposure Therapy Effective for Service Members and Veterans Experiencing Combat-Related PTSD?

Rebekah J. Nelson
Florida State University, Tallahassee, FL, USA

Traumatology, 19(3), 171–178. © The Author(s) 2012. DOI: 10.1177/1534765612459891

Abstract
Purpose: Exposure therapy has been identified as an effective treatment for anxiety disorders, including posttraumatic stress disorder (PTSD). The use of virtual reality exposure therapy (VRET) in the past decade has increased due to improvements in virtual reality technology. VRET has been used to treat active duty service members and veterans experiencing posttraumatic stress symptoms by exposing them to a virtual environment patterned after the real-world environment in which the trauma occurred. This article is a systematic review of the effectiveness of using VRET with these two populations. Method: A search of 14 databases yielded 6 studies with experimental or quasi-experimental designs where VRET was used with active duty service members or veterans diagnosed with combat-related PTSD. Results: Studies show positive results for the use of VRET in treating combat-related PTSD, though more trials are needed with both active duty service members and veterans. Conclusions: VRET is an effective treatment; however, more studies including random assignment are needed in order to show whether it is more effective than other treatments. There are still many barriers that the use of VRET with military populations would need to overcome in order to be widely used, including helping veterans become accustomed to the technology; assisting veterans who have spent a longer period of time avoiding anxiety-inducing stimuli in accepting an initial increase in anxiety; clinician concerns about the technology interfering with the therapeutic alliance, and clinician biases against the use of exposure therapy in general; and high treatment dropout rates.

Keywords: combat, posttraumatic stress disorder, service members, veterans, virtual reality exposure therapy

Corresponding author: Rebekah J. Nelson, Florida State University, 296 Champions Way, University Center, Building C, Tallahassee, FL 32306, USA. Email: [email protected]

Concern over the best methods to prevent and treat
combat-related posttraumatic stress disorder (PTSD) in military service members and military veterans has been of particular interest with the resurgence of military service members who are serving multiple tours in Iraq and Afghanistan. Preventative programs such as comprehensive soldier fitness (Casey, 2011) acknowledge the need for the military services to have a more frank discussion with their service members about PTSD. The stigma within the military of a PTSD diagnosis prevents many service members from seeking treatment, even when they recognize the symptoms of PTSD in themselves. While military programs, such as the Department of Veterans Affairs, have sought to educate service members and their families about the importance of seeking treatment for PTSD, the threat of having a diagnosis of PTSD on their service record stops many service members from seeking help. Some have been able to seek treatment outside of the military health care system, but such treatment can be costly. Another related population is military veterans who, like their present-day counterparts, did not seek treatment or for whom no appropriate treatment was available.

The preferred treatment for anxiety disorders is exposure therapy (Powers & Emmelkamp, 2008), also known as prolonged or gradual exposure therapy. Exposure therapy is a type of behavior therapy in which the client is taught cognitive and behavioral techniques such as progressive muscle relaxation, breathing exercises, recognition of automatic thoughts and schemas, and cognitive restructuring (Pull, 2005). The client is taught to utilize these interventions while the therapist gradually exposes the client to the cause of anxiety, increasing the intensity of exposure as the client is able to tolerate it, in order to help the client become more accustomed to the anxiety-evoking stimuli. Clients also undertake self-conducted exposure-based exercises as homework between formal treatment sessions. Two types of exposure therapy have dominated the field: in vivo therapy, where the
therapist and client are able to experience exposure to anxiety-evoking stimuli in increasingly naturalistic settings; and imaginal exposure therapy, where the therapist leads the client in imagining the cause of anxiety. Usually, exposure therapy (ET) in imagination is followed by real-life exposure. These two types of exposure therapy are sometimes poorly tolerated by service members and veterans who have combat-related PTSD because of the distinctness of the settings in which the trauma occurred, and because of the tendency of clients to suppress thoughts that activate PTSD symptoms (Riva et al., 2010).

A relatively new exposure-based treatment for PTSD that has gained attention in the media and the therapeutic community is the use of virtual reality programs. Using virtual reality in place of real-life or imaginal exposure therapy allows clients to receive and process exposure to traumatic events in a relatively safe environment. Virtual reality exposure therapy (VRET) has been tested with persons experiencing PTSD symptoms in multiple trials and with many different causes of anxiety (Gerardi, Cukor, Difede, Rizzo, & Rothbaum, 2010; Pull, 2005). In their meta-analysis on the use of VRET for anxiety disorders, Powers and Emmelkamp (2008) found VRET to have a slightly more powerful effect than did real-life exposure treatment.

This review will assess studies of the effectiveness of VRET when used to treat service members and military veterans diagnosed with combat-related PTSD. It will also consider the practical use of the technology, including the cost of treatment and the possible application of VRET in the assessment and prevention of PTSD in active duty soldiers.

PTSD is a type of anxiety disorder brought on by experiencing or witnessing a traumatic event or events. Traumatic events are defined by the Diagnostic and Statistical Manual
of Mental Disorders (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000) as events "that involved actual or threatened death or serious injury, or a threat to the physical integrity of self or others" (p. 467). The response to the trauma also yields feelings of hopelessness, fear, or horror. The traumatic event must be reexperienced in some way, such as through nightmares or physical reactions to events resembling the trauma. There must also be an avoidance of stimuli that cause thoughts about the trauma, as well as increased arousal, such as hypervigilance. These symptoms must have lasted for more than 1 month and must be causing clinically significant distress for the individual.

Therapists using VRET to treat PTSD seek to simulate a virtual world that is as similar as possible to the real-world environment in which the traumatic event occurred. This is referred to as a "sense of presence" in the virtual world, or the level to which clients actually feel the virtual environment mirrors reality. In a qualitative study of clinician perceptions about VRET (Kramer et al., 2010), clinicians expressed concern that the virtual environment would not be realistic enough to trigger and then reduce anxiety. However, in their evaluation study of the realism of two virtual Iraq scenarios, Reger, Gahm, Rizzo, Swanson, and Duma (2009) conducted a convenience sample study with 93 soldiers not diagnosed with PTSD to see whether the soldiers, who had been deployed to Iraq one or more times, felt this sense of presence. A majority of the soldiers rated the convoy scenario (86%) and the city environment (82%) from adequate to excellent.

VRET uses several technology-based methods to engage all five senses of the client, making the exposure feel as realistic as possible. The technology used generally includes a "controlled delivery of sensory stimulation via the therapist,
including visual, auditory, olfactory, and tactile cues" (Gerardi et al., 2010, p. 299). Visual effects include being able to change the time of day, weather, number of pedestrians and vehicles, street debris, Humvees, planes, and helicopters clients see within the virtual world. Most of the machines include an orientation tracker, which allows clients to move about the virtual environment via headgear that responds to the movements of the participant. Olfactory senses are also engaged using scent palettes, which blow smells such as spices or burning rubber and are controlled by the therapist. The therapist can also include sounds such as sirens, people crying, gunshots and mortars, helicopters, improvised explosive devices, rocket-propelled grenades, car bombs, and sounds of an insurgent attack. In their narrative review of the many uses of VRET, Gerardi et al. (2010) describe two scenarios available in their virtual Iraq:

The city incorporates scenes such as marketplaces, security checkpoints, mosques, apartment buildings that can be entered, and rooftops that can be accessed. The Humvee scenario includes a desert setting with overpasses, checkpoints, debris, broken-down structures, and ambushes that can be introduced. (p. 303)

Finally, clients' seats are manipulated to create tactile vibrations in order to mimic a car ride, helicopter ride, or an explosion. An example of the equipment and virtual reality scenarios utilized in VRET can be seen in many media reports on the subject, such as a news report conducted by the Canadian Broadcasting Corporation on the costs and benefits of VRET (Virtual Iraq Afghanistan Media Story CBC, video file). Similar virtual settings can be created for veterans of wars in other areas of the world. Specific to this article are virtual environments that mimic settings in Vietnam for veterans of
the Vietnam War, and in Africa, where a war was fought by Portuguese soldiers between 1963 and 1970.
Table 1. VRET Equipment Needed to Set Up a VRET Environment.
- Two Pentium 4 computers with 1 GB RAM each
- DirectX 9
- 128 MB DirectX 9-compatible NVIDIA 3D graphics card
- Ethernet cable
- Head Mounted Display and Navigation Interface (eMagin z800)
- Numerical Design Limited's Gamebryo rendering library
- Alias' Maya 6 and Autodesk 3D Studio Max 7
- Envirodine, Inc. Scent Palette
- Logitech force-feedback game control pad and audio-tactile sound transducers from Aura Sound Inc.
Table 2. VRET Reviewed Studies.

Ready et al. (2006). Intervention: VRET. Population: Vietnam veterans diagnosed with PTSD (n = 14). Design: OXO. Primary outcome: Changes in CAPS scores were statistically significant at posttreatment and at 3-month and 6-month follow-up; BDI scores were statistically significantly different at posttreatment and 6-month follow-up, but not at 3-month follow-up.

Gamito et al. (2010). Intervention: VRET vs. exposure in imagination vs. waiting list. Population: Portuguese war veterans (n = 10); VRET (n = 5), EI (n = 2), waiting list (n = 3). Design: R OXO / R OYO / R O O. Primary outcome: CAPS scores were not statistically significantly different; IES-R, BDI, and SCL-90-R scores were collected only for the VRET group.

Ready et al. (2010). Intervention: VRET vs. present-centered therapy. Population: Vietnam veterans with combat-related PTSD (n = 11). Design: R OXO / R OYO. Primary outcome: Both VRET and PCT lowered mean CAPS scores at posttreatment and follow-up, with VRET yielding higher levels of improvement.

McLay et al. (2011). Intervention: VRET vs. treatment as usual. Population: Active duty soldiers from two hospital sites with PTSD related to their duties in Iraq or Afghanistan (n = 19). Design: R OXO / R OYO. Primary outcome: No significant difference between the two groups before or after treatment; however, there was a significant (p < .05) difference in the mean CAPS change score over the course of treatment.

Reger et al. (2011). Intervention: VRET, adapted from a prolonged exposure manual. Population: Active duty soldiers (n = 24) diagnosed with PTSD (n = 18) or anxiety NOS (n = 6). Design: OXO. Primary outcome: At posttreatment, 62% (n = 15) had reliably improved on the PCL-M.

McLay et al. (2012). Intervention: VRET. Population: Active duty soldiers from a naval medical center and a marine corps base (n = 42), with multiple dropouts before session 4 (n = 12) and after session 4 (n = 10). Design: OXO. Primary outcome: PCL-M scores between baseline and posttreatment were statistically significant (p < .0001), as were PHQ-9 and BAI scores at baseline and posttreatment. For n = 17 participants, scores at 3-month follow-up were also significantly different from baseline on the PCL-M, PHQ-9, and BAI.
As explained in Rothbaum, Hodges, Ready, Graap, and Alarcon's (2001) article on the use of VRET with Vietnam veterans, VRET treatment spans several weeks of therapy, generally meeting twice a week for 90 to 120 min per session. The first session of VRET treatment is spent assessing clients and gathering information about the traumatic event they experienced. Sessions 2 and 3 are spent acclimatizing clients to the virtual reality equipment and environment, and teaching clients cognitive behavioral interventions to practice when their symptoms increase, such as breathing and relaxation techniques. Further therapy sessions are spent using the virtual environment to expose clients to traumatic memories while they describe the events in detail. Homework assigned to clients generally includes listening to recordings of the therapy sessions while practicing cognitive behavioral interventions learned in therapy.

Some who have been working on virtual reality technology have anticipated its use as an assessment tool to determine whether a soldier is emotionally and mentally fit
to return for another tour (McLay et al., 2012). Others (Kraft,
Amick, Barth, French, & Lew, 2010) also anticipate the use of virtual reality in reassessing the driving ability of combat service members returning from Iraq or Afghanistan who have been diagnosed with PTSD or traumatic brain injury (TBI), as these two disorders may critically affect returning soldiers' ability to drive. In these ways, virtual reality technology may benefit soldiers as an assessment tool, rather than solely as a treatment for PTSD.

VRET has also been examined as a prevention tool. Stetz, Long, Wiederhold, and Turner (2008) conducted a study in which virtual reality and stress inoculation training were used to try to prevent medics who would be serving in Iraq or Afghanistan from later developing PTSD. Stress inoculation training consists of exposing the participant, through virtual reality technology, to traumatic events they may encounter in their future service, in the hope that when they encounter similar traumatic events in reality they will be able to use their practiced cognitive behavioral skills and thereby lessen their chances of developing PTSD. While Stetz et al. (2008) report no distal measures of whether the stress inoculation training was effective, posttests suggest that preemptively exposing military medics to stressful situations may harden them against trauma. Such a preventative effort may be useful to all military personnel, as Reger et al. (2009) report that 67% of a convenience sample (n = 93) of military service members had provided aid to persons who were wounded during their combat experience. Exposure to such secondary trauma can sometimes serve as the initiating event triggering the onset of PTSD; if service members were given preventative virtual reality stress inoculation training, their chances of developing PTSD due to this exposure might decrease.

One concern about implementing virtual reality therapy on a wide-scale basis is the approximate cost of
purchasing and setting up the virtual reality equipment and of training therapists to use the equipment effectively. In their preliminary results utilizing virtual reality technology with active duty soldiers with PTSD, Rizzo, Reger, Gahm, Difede, and Rothbaum (2009) approximate some of the costs of setting up an adequate amount of virtual reality equipment to make the virtual environment realistic enough to help effect change. However, the authors provide actual dollar amounts only for the Head Mounted Display (US$1,500) and the Logitech control pad (<US$120) that creates vibrations in the seat of the participant. Table 1 lists the equipment Rizzo et al. (2009) identify as needed to set up a virtual reality therapy environment. In an interview with CBC news, Rizzo (Virtual Iraq Afghanistan Media Story CBC, video file) estimates the total cost of virtual reality hardware to be approximately US$15,000, stating that the computer software for conducting VRET can be obtained through him at no cost.

Wood et al. (2009) articulate the possible financial benefits of implementing virtual reality technology if the military were saved the cost of having to replace service members who would otherwise have left the military due to PTSD symptoms. They estimated that the training cost savings for the 12 participants in their study would be just under US$330,000, whereas the training cost savings of treating PTSD with treatment as usual would be close to US$193,000. Pull (2005), Riva et al. (2010), and Gerardi et al. (2010) state that VRET may be more cost effective than imaginal or real-life exposure therapy because it can be less time-consuming. This may be because the technological equipment allows the clinician greater control over the magnitude of exposure in a virtual environment than they
would have in trying to help the client imagine graded images of the trauma or feared object, thus taking less time overall to treat clients. Using virtual technology may also be less costly than trying to provide a real-life experience with the client. For example, Gerardi et al. (2010) cite the cost to the patient of a virtual experience of flying versus the cost of paying for a genuine flight.

Method

Search Strategy

Academic Search Complete, JSTOR, Applied Social Sciences Index and Abstracts (ASSIA), Computer and Information Systems Abstracts, ERIC, ProQuest Dissertations and Theses (PQDT), PsycINFO, Social Services Abstracts, Sociological Abstracts, Social Sciences Citation Index, Web of Knowledge, Web of Science, Military and Government Collection, and Dissertation Abstracts were searched in order to find studies pertaining to the topic. While the gray literature was not specifically searched, efforts were made to obtain copies of articles and conference proceedings that resulted from the search strategy. Where possible, search terms were limited to abstracts. The search terms used for this review were: virtual and realit* and (military or veteran*) and (PTSD or posttraumatic or post-traumatic). A flow chart (Figure 1) depicts the disposition of retrieved articles.

Data Collection and Analysis Methods

While multiple case studies were found on this topic, only experimental and quasi-experimental studies examining the use of VRET as a treatment for military service members or veterans experiencing combat-related PTSD were included. The literature search yielded 100 studies. Seventy-one were ineligible based on review of the title (including repeats of previously acquired studies), and a further 16 were excluded after reviewing abstracts. Following a full-text review, seven
more studies were excluded because they were found to be preliminary results of studies already acquired, because the text or pertinent information was unavailable, or because the study was analyzed in more than one of the resulting articles (in which case the article with the most information was chosen), leaving a total of six studies to include in the review (Table 2).
Common Measures Used to Assess PTSD in Military Service Members

The first and most common measure used in studying the effectiveness of treatment for PTSD is the Clinician Administered PTSD Scale (CAPS; Gamito et al., 2010; McLay et al., 2011; Ready, Gerardi, Backsheider, Mascaro, & Rothbaum, 2010; Ready, Pollack, Rothbaum, & Alarcon, 2006). The CAPS assesses the frequency and intensity of PTSD symptoms. Another measure commonly used with military service members is the self-report PTSD Checklist, Military Version (PCL-M; McLay et al., 2012; Reger et al., 2011). The Impact of Event Scale-Revised (IES-R; Gamito et al., 2010) is a self-report instrument that measures
PTSD symptoms of avoidance, intrusion, and hyperarousal, and the Symptom Checklist-90-Revised (SCL-90-R; Gamito et al., 2010) is used to measure psychopathology. Finally, the Patient Health Questionnaire-9 (PHQ-9; McLay et al., 2012) and the Beck Depression Inventory (BDI; Gamito et al., 2010; Ready et al., 2006) are used to measure depression levels in clients, while the Beck Anxiety Inventory (BAI; McLay et al., 2012) is used to measure anxiety levels in participants.

Therapists also use the Subjective Units of Discomfort/Distress Scale (SUDS) when conducting VRET. SUDS ratings are generally not tracked or measured for experimental purposes, but are used to understand how the client is responding in the moment to the level of exposure in the virtual reality environment, and to decide whether the level of exposure should be increased or decreased based on participant reactivity. Physiological monitoring through biofeedback is also often used to monitor client response to the virtual environment and, in one study in this review (Wood et al., 2008), was used to measure the effectiveness of VRET.

Results

VRET With Active Duty Service Members

In their randomized controlled trial of VRET with active duty soldiers, McLay et al. (2011) used a convenience sample to locate potential patients. They assigned 20 service members to VRET (n = 10) or to treatment as usual (n = 10, with one participant not completing postassessment tests) using random assignment, and used the CAPS as their outcome measure. While the VRET intervention appeared to follow the standard VRET treatment protocol, the treatment as usual group was not assigned to any particular treatment. Rather, they were assigned to receive one or more of the
treatments available for PTSD provided by the two hospital locations, which included prolonged exposure (PE) therapy, EMDR, group therapy, psychiatric medication management, substance rehabilitation, and inpatient services. Unfortunately, only the number of mental health visits, and not the type of treatment the TAU patients were receiving, was tracked. The findings for this study may not, therefore, truly reflect a comparison between a VRET group and a TAU group, as we are unsure what type of treatment the TAU group specifically received. Also, the TAU group was, at some point, changed to a waiting-list group. It is unclear at what point this information was given to TAU group members, which may have affected their confidence in the TAU treatment they received. McLay et al. (2011) found that seven out of ten of the VRET patients improved at least 30% on their CAPS scores from pretest to posttest. There was no significant difference between CAPS scores after treatment; however, the authors found a significant difference (p < .05) in the change from pretest to posttest mean scores between the VRET group (M = 35.4, SD = 24.7) and the TAU group (M = 9.4, SD = 26.6), favoring VRET.

Reger et al. (2011) conducted a convenience sample study with 24 active duty soldiers diagnosed with PTSD (n = 18) or anxiety NOS (n = 6) who had been deployed at least once to Iraq or Afghanistan. The service members had either requested to receive VRET as a treatment or had received previous treatment for their disorder that was unsuccessful. The VRET was based on a training manual for the conduct of PE, delivered by a clinical psychologist with formal training in both VRET and PE. Patients received treatment a mean of 27.8 months (SD = 17.3) after the trauma, and received an average of 7.4 (SD = 3.3) treatment sessions. Researchers used the self-report PCL-M to measure treatment outcomes, and found that patients reported a significant (p < .001) improvement in PTSD symptoms from pretest (M = 60.92, SD = 11.03) to posttest (M =
47.08, SD = 12.7), with a large effect size (Cohen's d = 1.17).

[Figure 1. Search strategy results for VRET treatment for military service members and veterans with PTSD.]
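For readers who want to see how effect sizes of this kind follow from the reported summary statistics, the check below reproduces the Reger et al. (2011) value; the review does not state its exact formula, so a pooled-SD version of Cohen's d is assumed here.

# Reproduce the pre/post effect size for Reger et al. (2011) from the summary
# statistics in the text, assuming a pooled-SD version of Cohen's d.
from math import sqrt

def cohens_d(mean_pre, sd_pre, mean_post, sd_post):
    pooled_sd = sqrt((sd_pre**2 + sd_post**2) / 2)  # simple pooled SD
    return (mean_pre - mean_post) / pooled_sd

d = cohens_d(60.92, 11.03, 47.08, 12.7)
print(round(d, 2))  # ≈ 1.16, in line with the reported d = 1.17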
A quasi-experimental convenience sample study of 20 active duty service members (McLay et al., 2012) used the PCL-M, PHQ-9, and BAI to measure PTSD symptoms, depression, and anxiety. The study revealed a large effect size (Cohen's d = 1.34) between baseline PCL-M scores (n = 20, M = 53.8, SD = 9.6) and posttreatment scores (M = 35.6, SD = 17.4). For n = 17 participants, scores on the PCL-M also showed a large effect size (Cohen's d = 2.17) between baseline (M = 53.8, SD = 9.6) and 3-month follow-up (M = 28.9, SD = 13.0). PHQ-9 scores at baseline (n = 20, M = 13.3, SD = 5.4) and posttreatment (M = 7.1, SD = 6.7) differed significantly (p < .002), as did scores at baseline (n = 17, M = 12.9, SD = 5.4) and 3-month follow-up (M = 5.7, SD = 6.1, p < .001). Scores on the BAI
showed a medium effect size (Cohen's d = 0.56) between baseline (n = 20, M = 18.1, SD = 10.6) and posttreatment (M = 8.12, SD = 9.0), and a large effect size (Cohen's d = 1.01) between baseline (n = 17, M = 18.1, SD = 10.6) and 3-month follow-up (M = 8.12, SD = 9.0). One limitation of this study was the large dropout rate between the intent-to-treat group (n = 42) and the participants who completed treatment (n = 20).

VRET With Veterans

In their study comparing VRET with present-centered therapy (PCT), Ready et al. (2010) recruited clients currently in treatment at the Atlanta VA Medical Center's Mental Health Clinic (n = 11; VRET n = 6, PCT n = 5), with one participant from each group dropping out. The clinician who interviewed participants was a licensed clinical psychologist with several years of experience working with this population and was blind to participant assignment. Clinicians used the Structured Clinical Interview for DSM-IV, the CAPS, and the Beck Depression Inventory as measures. PCT, the comparison condition, included psychoeducation about PTSD, problem-solving techniques, and a focus on the "here and now" problems clients experience. Both the VRET and PCT groups experienced improvement in symptoms; however, the authors report that "there was not statistically significant improvement in CAPS or BDI scores when individual treatment conditions were isolated" (Ready et al., 2010, p. 52). The authors state that the small sample size impeded significant differences between groups from being found. The VRET group seemed to have lower baseline CAPS scores (M = 87.83, SD = 15.43) than the PCT group (M = 101.00, SD = 9.51). This is likely an artifact of the random assignment procedure used with a small sample. The authors calculated effect sizes for the mean change in the CAPS and BDI scores for each group. The mean change in CAPS scores for the VRET treatment group yielded a small Cohen's d of 0.28 from pretest to posttest (n = 5, M = 31.8,
SD = 39.1) and a medium Cohen's d of 0.56 from pretest to follow-up (n = 5, M = 25.0, SD = 28.1). Differences in the mean improvement of BDI scores did not yield significant results. It is unclear why the authors chose to combine the treatment groups and use a dependent samples t test to compare changes in CAPS scores for the entire sample between baseline, posttreatment, and follow-up. An independent samples t test of the same data for the VRET group at baseline (n = 5, M = 101.0, SD = 9.51), posttreatment (n = 4, M = 75.5, SD = 22.22), and follow-up (n = 5, M = 87.00, SD = 6.32), compared to the PCT group at baseline (n = 6, M = 87.83, SD = 15.43), posttreatment (n = 5, M = 59.2, SD = 32.24), and follow-up (n = 4, M = 64.75, SD = 34.08), did not reveal any statistically significant differences.

Gamito et al. (2010) completed a randomized controlled pilot study comparing VRET (n = 5), imaginal exposure (n = 2), and waiting list control (n = 3) groups of Portuguese war veterans (n = 10) who had fought in Africa between 1963 and 1970. Measures used to assess participants in the VRET group included the CAPS, a structured interview from the DSM-IV, the IES-R, the SCL-90-R, and the BDI. It is unclear why, but the SCL-90-R and BDI were not administered to the imaginal exposure and waiting list groups at baseline or posttreatment. The authors report that BDI scores for the VRET group were significantly lower at posttreatment. There were no statistically significant differences between groups at posttreatment on the CAPS. The IES-R scores for the VRET group were reduced, whereas these scores for the imaginal group and the waiting list control group increased; however, the differences were not statistically significant. Due to the small sample size, this study was statistically underpowered and therefore inadequate to validly compare VRET with the imaginal therapy and waiting list groups.
Ready et al. (2006) describe a group of multiple case studies (Rothbaum, 2006; Rothbaum et al., 2001) in which Vietnam veterans (n = 14) were treated with VRET. Mean CAPS scores at posttreatment (n = 14, M = 59.64, SD = 17.77), 3-month follow-up (n = 8, M = 55.13, SD = 14.38), and 6-month follow-up (n = 11, M = 50.91, SD = 17.24) were all statistically significantly different (p < .05) from CAPS scores at baseline (n = 14, M = 72.57, SD = 16.18). Scores on the BDI at posttreatment (n = 14, M = 21.14, SD = 8.18) and at the 6-month follow-up (n = 11, M = 18.45, SD = 9.49) were statistically significantly different (p < .05) from baseline (n = 14, M = 24.86, SD = 9.70). Three-month posttreatment BDI scores (n = 8, M = 24.25, SD = 9.53), however, were not statistically significantly different from baseline BDI scores.

Discussion

Studies using VRET report several difficulties. First, the nature of the treatment itself appears to be difficult for veterans to either comprehend or trust. It is suspected that the current generation of service members may be reacting more positively to using virtual reality as a method of treating PTSD because they were raised in a generation more familiar with this type of technology. Ready et al. (2010) describe the older veteran population as being tentative
Discussion

Studies using VRET report several difficulties. First, the nature of the treatment itself appears to be difficult for veterans to either comprehend or trust. It is suspected that the current generation of service members may be responding more positively to virtual reality as a method of treating PTSD because they were raised in a generation more familiar with this type of technology. Ready et al. (2010) describe the older veteran population as being tentative about trusting the technology to actually help with their PTSD symptoms.
Another difficulty in using VRET with a veteran population is the amount of time that has elapsed between the traumatic events and the treatment. The authors suspect that this longer time lapse, during which participants have worked harder and longer to suppress their PTSD symptoms, makes it more difficult for participants to allow themselves to relive the traumatic event in the virtual environment. Because exposure therapy requires reliving the events multiple times, this population has a much more difficult time succeeding with exposure therapy in general. Although clinicians explain to participants and their families that an increase in symptoms is likely to occur at the beginning of treatment, veterans seem to interpret this increase as evidence that the treatment is worsening their condition, which may cause many to terminate treatment. A qualitative study of clinician perceptions of VRET found that clinicians not trained in its use expressed concerns about the safety of using VRET with veterans, questioning whether the virtual environment would exacerbate veterans' symptoms (Kramer et al., 2010). However, in their meta-analysis of VRET for anxiety disorders, Powers and Emmelkamp (2008) conducted a meta-regression analysis showing that a greater number of virtual reality treatment sessions yielded larger effect sizes. This difficulty in recruiting veterans as participants in VRET trials has perhaps stunted improvements that could be made to treatment protocols that would benefit veterans.
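As a purely hypothetical illustration of the dose-response meta-regression mentioned in the preceding paragraph, the sketch below regresses study-level effect sizes on the number of virtual reality sessions using inverse-variance weights. All inputs are invented placeholders, and the simple fixed-effect weighting is an assumption for demonstration rather than a reproduction of the Powers and Emmelkamp (2008) analysis.

```python
# Hypothetical sketch of a dose-response meta-regression: effect size
# regressed on number of VR sessions, each study weighted by the inverse
# of its sampling variance. All values below are invented placeholders.
import numpy as np
import statsmodels.api as sm

effect_sizes = np.array([0.45, 0.60, 0.85, 1.10])   # hypothetical Cohen's d per study
variances    = np.array([0.10, 0.08, 0.12, 0.09])   # hypothetical sampling variances
n_sessions   = np.array([4, 6, 8, 12])              # hypothetical number of VR sessions

X = sm.add_constant(n_sessions)                     # intercept + moderator (sessions)
model = sm.WLS(effect_sizes, X, weights=1.0 / variances).fit()
print(model.params)   # a positive slope would indicate larger effects with more sessions
```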
Case studies determining how VRET can be better tailored to acclimate the veteran population to exposure therapy and to virtual reality technology may be necessary. There have also been high dropout rates in studies whose participants are active duty service members (McLay et al., 2012), which could be attributed to difficulties in balancing treatment with military duties, the time commitment of treatment sessions (90-120 min twice weekly for 8-12 weeks), and the possibility of transfers to other military bases occurring mid-treatment.

Kramer et al. (2010) also note that the use of virtual reality technology as a form of treatment may cause the therapeutic alliance to suffer. Therapists expressed concern that juggling the conduct of therapy with the control of complex computer software would prevent the development of an effective therapeutic relationship. Measuring how VRET affects the therapeutic alliance, positively or negatively, may be useful in understanding how a virtual environment shapes the working relationship between therapist and client.

Overall, the studies in this review found VRET to be beneficial to both active duty service members and veterans experiencing combat-related PTSD. Each group faces a different set of difficulties that keep its members from seeking or receiving treatment, as evidenced by high levels of attrition; these difficulties may also explain the challenge of setting up experimental trials to test the efficacy of this treatment. Because virtual reality is such a specialized field, and because purchasing the equipment and training clinicians to use it consume both financial and time resources, the use of VRET to treat military service members and veterans with PTSD is not likely to spread quickly.
While virtual reality technology is becoming less expensive, hesitation in the field over using exposure therapy in general, despite its positive results, will likely continue to hinder this form of treatment.

One final area on which future VRET studies may want to focus is the treatment's distal impact. Treatment providers want to ensure that positive treatment results continue over time. It has been suggested that studies follow veterans and service members treated with VRET for up to 2 years in order to measure the distal effects of the treatment (Powers & Emmelkamp, 2008). If positive distal effects can be more readily established, the benefits of the treatment would perhaps balance out the difficulties seen in implementing it on a wide-scale basis.

Acknowledgment

The author thanks Dr. Bruce Thyer for his assistance in preparing this manuscript for publication.

Declaration of Conflicting Interests

The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author received no financial support for the research, authorship, and/or publication of this article.

References
References marked with an asterisk indicate studies included in the systematic review. The in-text citations to studies selected for systematic review are not preceded by asterisks.

American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed., text revision). Arlington, VA: American Psychiatric Association.

Casey, G. W. (2011). Comprehensive soldier fitness: A vision for psychological resilience in the U.S. Army. American Psychologist, 66, 1-3.

*Gamito, P., Oliveira, J., Rosa, P., Morais, D., Duarte, N., Oliveira, S., & Saraiva, T. (2010). PTSD elderly war veterans: A clinical controlled pilot study. Cyberpsychology, Behavior, and Social Networking, 13(1), 43-48.

Gerardi, M., Cukor, J., Difede, J., Rizzo, A., & Rothbaum, B. O. (2010). Virtual reality exposure therapy for post-traumatic stress disorder and other anxiety disorders. Current Psychiatry Reports, 12, 298-305.

Kraft, M., Amick, M. M., Barth, J. T., French, L. M., & Lew, H. L. (2010). A review of driving simulator parameters relevant to the Operation Enduring Freedom/Operation Iraqi Freedom veteran population. American Journal of Physical Medicine & Rehabilitation, 89, 336-344.
Kramer, T. L., Pyne, J. M., Kimbrell, T. A., Savary, P. E., Smith, J. L., & Jegley, S. M. (2010). Clinician perceptions of virtual reality to assess and treat returning veterans. Psychiatric Services, 61, 1153-1156.
McLay, R. N., Graap, K., Spira, J., Perlman, K., Johnston, S., Rothbaum, B. O., Difede, J. A., Deal, W., Oliver, D., Baird, A., Bordnick, P. S., Spitalnick, J., Pyne, J. M., & Rizzo, A. (2012). Development and testing of virtual reality exposure therapy for post-traumatic stress disorder in active duty service members who served in Iraq and Afghanistan. Military Medicine, 177(6), 635-642.

*McLay, R. N., Wood, D. P., Webb-Murphy, J. A., Spira, J. L., Wiederhold, M. D., Pyne, J. M., & Wiederhold, B. K. (2011). A randomized, controlled trial of virtual reality-graded exposure therapy for post-traumatic stress disorder in active duty service members with combat-related post-traumatic stress disorder. Cyberpsychology, Behavior, and Social Networking, 14, 223-229.
Powers, M. B., & Emmelkamp, P. M. (2008). Virtual reality exposure therapy for anxiety disorders: A meta-analysis. Journal of Anxiety Disorders, 22, 561-569.

Pull, C. B. (2005). Current status of virtual reality exposure therapy in anxiety disorders. Current Opinion in Psychiatry, 18, 7-14.

*Ready, D. J., Gerardi, R. J., Backscheider, A. G., Mascaro, N., & Rothbaum, B. O. (2010). Comparing virtual reality exposure therapy to present-centered therapy with 11 U.S. Vietnam veterans with PTSD. Cyberpsychology, Behavior, and Social Networking, 13(1), 49-54.

*Ready, D. J., Pollack, S., Rothbaum, B. O., & Alarcon, R. O. (2006). Virtual reality exposure for veterans with posttraumatic stress disorder. Journal of Aggression, Maltreatment & Trauma, 12, 199-220.

Reger, G. M., Gahm, G. A., Rizzo, A. A., Swanson, R., & Duma, S. (2009). Soldier evaluation of the virtual reality Iraq. Telemedicine and e-Health, 15(1), 101-104.

*Reger, G. M., Holloway, K. M., Candy, C., Rothbaum, B. O., Difede, J., Rizzo, A. A., & Gahm, G. A. (2011). Effectiveness of virtual reality exposure therapy for active duty soldiers in a military mental health clinic. Journal of Traumatic Stress, 24(1), 93-96.
Riva, G., Raspelli, S., Algeri, D., Pallavicini, F., Gorini, A., Wiederhold, B. K., & Gaggioli, A. (2010). Interreality in practice: Bridging virtual and real worlds in the treatment of posttraumatic stress disorders. Cyberpsychology, Behavior, and Social Networking, 13(1), 55-65.

Rizzo, A., Reger, G., Gahm, G., Difede, J., & Rothbaum, B. O. (2009). Virtual reality exposure therapy for combat related PTSD. Post-Traumatic Stress Disorder, 6, 375-399.

Rothbaum, B. O. (2006). Virtual Vietnam: Virtual reality exposure therapy. In M. Roy (Ed.), Novel approaches to the diagnosis and treatment of posttraumatic stress disorder (pp. 205-218). Amsterdam, The Netherlands: IOS Press.

Rothbaum, B. O., Hodges, L. F., Ready, D., Graap, K., & Alarcon, R. D. (2001). Virtual reality exposure therapy for Vietnam veterans with posttraumatic stress disorder. Journal of Clinical Psychiatry, 62, 617-622.

Stetz, M. C., Long, C. P., Wiederhold, B. K., & Turner, D. D. (2008). Combat scenarios and relaxation training to harden medics against stress. Journal of CyberTherapy & Rehabilitation, 1, 239-246.

Virtual Iraq Afghanistan Media Story CBC [video file]. Retrieved from http://www.youtube.com/watch?v=Ltl9zbDRZWY&feature=autoplay&list=UUQrbzaW3x9wWoZPl4-l4GSA&playnext=1
Wood, D. P., Murphy, J. A., Center, K. B., Russ, C., McLay, R. N., Reeves, D., . . . Wiederhold, B. K. (2008). Combat related post-traumatic stress disorder: A multiple case report using virtual reality graded exposure therapy with physiological monitoring. In J. Westwood, R. Haluck, H. Hoffman, G. Mogel, R. Phillips, R. Robb, & K. Vosburgh (Eds.), Medicine meets virtual reality 16 (pp. 556-561). Fairfax, VA: IOS Press.

Wood, D. P., Murphy, J., McLay, R., Koffman, R., Spira, J., Obrecht, R. E., . . . Wiederhold, B. K. (2009). Cost effectiveness of virtual reality graded exposure therapy with physiological monitoring for the treatment of combat related posttraumatic stress disorder. Studies in Health Technology & Informatics, 144, 223-229.