The Meaningful Assessment of Therapy Outcomes:
Incorporating a Qualitative Study Into a Randomized Controlled Trial
Evaluating the Treatment of Adolescent Depression
Nick Midgley
University College London and Anna Freud Centre, London,
United Kingdom
Flavia Ansaldo
Southwark Targeted Services—CAMHS, London,
United Kingdom
Mary Target
University College London
For many years, there have been heated debates about the best way to evaluate the efficacy and effectiveness of psychological therapies. On the one hand, there are those who argue that the randomized controlled trial (RCT) is the only reliable and scientifically credible way to assess psychological interventions. On the other hand, there are those who have argued that psychological therapies cannot be meaningfully assessed using a methodology developed to evaluate the impact of drug treatments, and that the findings of RCTs lack “external validity” and are difficult to translate into routine clinical practice. In this article, we advocate the use of mixed-method research designs for RCTs, combining the rigor of quantitative data about patterns of change with the phenomenological, contextualized insights that can be derived from qualitative data. We argue that such an approach is especially important if we wish to understand more fully the impact of therapeutic interventions within complex clinical settings. To illustrate the value of a mixed-method approach, we describe a study currently underway in the United Kingdom, in which a qualitative study (IMPACT-My Experience [IMPACT-ME]) has been “nested” within an RCT (the Improving Mood With Psychoanalytic and Cognitive Behavioral Therapy [IMPACT] study) designed to evaluate the effectiveness of psychological therapies in the treatment of adolescent depression. We argue that such a mixed-methods approach can help us to evaluate the effectiveness of psychological therapies and support the real-world implementation of our findings within increasingly complex and multidisciplinary clinical contexts.
Keywords: adolescent depression, randomized controlled trials (RCTs), qualitative research, mixed-methods design, outcome research
For many years, there have been heated debates about the best way to evaluate the efficacy and effectiveness of psychological therapies. On the one hand, there are those who argue that the randomized controlled trial (RCT) (and meta-analyses of such trials) is the only reliable and scientifically credible way to assess psychological interventions. The RCT has long been considered the “gold standard” approach, placed at the top of the “hierarchy of evidence” and given almost exclusive credence by bodies such as the Cochrane Collaboration and guideline developers such as the National Institute for Health and Clinical Excellence (NICE). On the other hand, there are those who have argued that psychological therapies cannot be meaningfully assessed using a methodology developed to evaluate the impact of drug treatments, and that the findings of RCTs lack “external validity” and are difficult to translate into routine clinical practice. Those who have offered this critique of RCT approaches (McLeod, 2011) have often argued for the greater use of qualitative research methods, which can provide more “contextual knowledge” and allow a greater focus on meaning-making and the perspective of service users.

More recently, there has been a shift toward what some researchers call a “third research paradigm” (Johnson & Onwuegbuzie, 2004), in which qualitative and quantitative methods are combined in the form of mixed-methods research. Although such mixed-methods approaches have a long history, they are only recently beginning to be used systematically in the study of psychological therapies, and there are still a number of conceptual and pragmatic challenges to designing and carrying out such research.

In this article, we argue for a mixed-method approach to evaluating psychological therapies, suggesting that such an approach addresses some of the limitations of either a purely quantitative or qualitative design. To illustrate the value of such an approach, we will describe a study currently underway in the United Kingdom,
This article was published Online First December 30, 2013.

Nick Midgley, Research Department of Clinical, Educational and Health Psychology, University College London, London, United Kingdom, and Anna Freud Centre, London, United Kingdom; Flavia Ansaldo, Southwark Targeted Services—CAMHS and Carelink, Lister Heath Centre, London, United Kingdom; Mary Target, Research Department of Clinical, Educational and Health Psychology, University College London.

The IMPACT-ME study is funded by the Monument Trust.

Correspondence concerning this article should be addressed to Nick Midgley, Anna Freud Centre, 21 Maresfield Gardens, London NW3 5SD, United Kingdom. E-mail: [email protected]
of incidence during teenage years. In the United Kingdom, 1 in 10 young people referred to Child and Adolescent Mental Health Services receives a diagnosis of depression (Harrington, Fudge, Rutter, Pickles, & Hill, 1990), and around 80% of first episodes occur during the teenage period (Angold & Costello, 2001; Ford, Goodman, & Meltzer, 2003). The strong links between adolescent depression and recurrent depressive conditions and suicidal behavior later in life, as well as the subsequent emergence of personality disorders and substance misuse (Rudolph & Klein, 2009), suggest the importance of identifying and improving psychological treatments that are delivered early and that have long-term benefits in reducing the risk of relapse later in life.
Yet despite a considerable investment in studies that have evaluated psychological treatments, there are still major gaps in our understanding of what kind of treatment is most effective for young people, especially in terms of long-term prevention of relapse, and what it is that contributes to a successful (or unsuccessful) outcome. When the NICE guidelines on child depression were published in the United Kingdom in 2005, it was recommended that a range of psychological therapies (including cognitive–behavioral therapy [CBT] and short-term psychodynamic psychotherapy) could be helpful elements within a treatment package, but it was noted that the evidence available was still provisional and that some of the research findings were contradictory or inconclusive. In the “key research recommendations” section at the end of the guideline, it stated:

“An appropriately blinded, randomized controlled trial should be conducted to assess the efficacy (including measures of family and social functioning as well as depression) and the cost effectiveness of individual CBT, systemic family therapy and child psychodynamic psychotherapy compared with each other and treatment as usual in a broadly based sample of children and young people diagnosed with moderate to severe depression (using minimal exclusion criteria). The trial should be powered to examine the effect of treatment in children and young people separately and involve a follow-up of 12 to 18 months (but no less than 6 months)” (NICE, 2005, p. 40).
In the light of these recommendations, the Health Technology Assessment programme (a U.K. government-backed funding agency) put out a call for bids to conduct such a study, and in 2007 a joint application between the University of Cambridge, University of Manchester, and University College London successfully won this bid (Goodyer et al., 2011).

The IMPACT study, as it came to be called, is the largest clinical trial of the psychological treatment of adolescent depression ever to have taken place in Europe (Goodyer et al., 2011), with preliminary findings due to be published in 2014. The IMPACT study is a pragmatic, relapse-prevention, superiority RCT comparing the effectiveness of two specialist treatments—CBT and short-term psychoanalytic psychotherapy (STPP)—with clinical care without psychotherapy (Specialist Clinical Care) routinely delivered across Child and Adolescent Mental Health Services (CAMHS) in the United Kingdom. (The recommendation in the NICE guidelines that systemic family therapy also be investigated was not followed in this study.) During the course of the study, almost 500 young people who have been referred to 18 different CAMHS teams across the United Kingdom, and who meet the criteria for moderate to severe depression, are being randomized to one of the three treatment arms. The 5-year study (now in its third year) is designed to address some of the key questions that were left unanswered by previous research: chiefly, to identify the most effective treatment to reduce depressive symptoms among adolescents with moderate to severe depression both in the short term (6 and 12 weeks) and in the medium/long term (36, 52, and 86 weeks), thereby accounting for the long-term effects of different types of treatments in reducing risk of relapse and recurrence.
Advantages of Using an RCT to Evaluate the Effectiveness of Psychological Therapies

There are a number of advantages to addressing the issue of treatment effectiveness by means of an RCT. The core elements of RCTs include randomization to different treatments, control over treatment fidelity, and comparison of outcomes across treatment groups (Spillane et al., 2010). RCTs are especially important as a way of evaluating the efficacy of therapeutic treatments because they produce results that in most cases can be confidently explained in relation to controlled and carefully analyzed sets of variables. Different outcomes can be attributed with greater confidence to the impact of the different treatments being evaluated and compared, while minimizing the bias deriving from extraneous factors. A well-designed RCT ensures high levels of scientific rigor and validity by collecting data longitudinally from a large representative sample receiving treatments that are manualized and rated with fidelity measures.
The IMPACT study design (see Goodyer et al., 2011, for full details) aims to uphold a high level of empirical strength and scientific power through the careful implementation of randomization and blinding procedures (Blackwood, O’Halloran, & Porter, 2010) and full adherence to the CONSORT guidelines on the design of RCTs (Schulz, Altman, & Moher, 2010). The young people entering the IMPACT trial have an equal and unbiased chance of being randomized to any of the three treatment arms. During the course of the trial, the research assistants remain blind to participants’ treatment allocation, helping to ensure that researchers and clinicians maintain a more neutral attitude toward the outcomes of each particular case and type of intervention, whatever their theoretical and professional inclinations.
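The balanced allocation described above can be sketched in outline. The following is a minimal illustration of permuted-block randomization to three arms; the arm labels come from the article, but the block size and the permuted-block scheme itself are illustrative assumptions, not the documented IMPACT allocation procedure (for which see Goodyer et al., 2011):

```python
import random

# Treatment arms as described in the article; block size is an
# illustrative assumption, not the actual IMPACT procedure.
ARMS = ["CBT", "STPP", "Specialist Clinical Care"]

def permuted_block_randomization(n_participants, block_size=6, seed=None):
    """Allocate participants to the three arms in randomly shuffled blocks,
    keeping group sizes balanced as recruitment proceeds."""
    assert block_size % len(ARMS) == 0, "block must divide evenly across arms"
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_participants:
        # Each block contains every arm equally often, then is shuffled,
        # so no arm can drift far ahead of the others during recruitment.
        block = ARMS * (block_size // len(ARMS))
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_participants]

schedule = permuted_block_randomization(12, seed=42)
```

In a real trial the allocation sequence would be generated and concealed centrally, so that the research assistants who assess outcomes remain blind to it.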
14. Furthermore, the extent to which different treatments are more
or
less likely to prevent relapse and maintain therapeutic gains has been incorporated in the design by monitoring the patients’ symptomatology up to 18 months after receiving the intervention. This extended follow-up design begins to address the lack of evidence around the long-term effects of different psychological treatments.
Other key strengths of the IMPACT trial are the implementation of treatment manuals and measures of treatment adherence for the different treatment modalities, ensuring greater adherence to a standardized protocol across sites; this further reinforces the internal validity of the study by guaranteeing that the treatments being investigated are actually the treatments being delivered. Yet this is balanced by an emphasis on “external validity,” in so far as the study is a pragmatic one, with treatments delivered by qualified clinicians in real child and adolescent mental health services. Each treatment has at least preliminary evidence for effectiveness. In terms of generalizability of the findings, although many RCT studies have been criticized for excluding participants with multiple diagnoses, the IMPACT study includes young people with comorbid disorders alongside major depression—especially important, as comorbidity is the rule rather than the exception for young people with depression. This ensures that the sample is representative of the complexity and variety of the cases accessing CAMHS, reflecting more closely and accurately the reality of clinical settings. The large size of the sample also makes it possible to address some of the questions about moderators and mediators of effectiveness, such as the level of family support provided to the young people, or the role of the therapeutic alliance in supporting adherence to treatment.
Limitations in Using RCTs to Evaluate Psychological Therapies
Despite all of these strengths, there are clear limitations in the design of “gold standard” statistical methods such as RCT studies in so far as they aim to provide understanding of complex interventions and identify mechanisms of change in therapy, including mediators and moderators of treatment outcome (Dattilio, Edwards, & Fishman, 2010; Kazdin, 2009). Researchers and practitioners have identified an “implementation gap” (Britten, 2010) when it comes to translating RCTs’ findings effectively into changes in clinical practice (Hollon, 2006). As Jane Noyes, from the Cochrane Collaboration, has recently put it:

“Many aspects of treatment and care cannot be evaluated by randomized trials . . . a Cochrane review may provide clear evidence of the effectiveness of an intervention, but does not include evidence on how people experience the intervention or how it fits with their lifestyle or matches with their preferred choices or expectations. The latter evidence about views, experiences, lifestyles, concordance, attrition and undesired effects is more likely to be qualitative” (Noyes, 2010, p. 526).
Researchers are increasingly calling for an integrated approach that combines the “hard science” of quantifiable outcomes with qualitative data about the meaning of therapeutic interventions and processes of change (Hill, Chui, & Baumann, 2013). The “implementation gap” cannot be filled by data illustrating outcomes unless it also sheds light on the different paths that lead to those outcomes. As a stand-alone approach, the RCT design struggles to explain complex interventions, to unpick which aspect of the treatment is key to efficacy, and to isolate its effect from the broader context. As Blackwood et al. (2010) argue:

“The power of the RCT is dependent upon its capacity to use probability theory to approximate the closed system of the experiment where, all other things being equal, there is only one putative causal factor acting upon the intervention group, and this is absent from the control group . . . [However] interventions in clinical arenas not controlled by trial protocols are open systems, in which many factors additional to the intervention itself, including those relating to organization structure, cultural mores, economic capacity . . . will all affect the effectiveness of the intervention . . . So for example, if the trial shows no effect, one has to question whether or not the intervention itself was ineffective, inadequately applied, applied in an inappropriate setting or the comparison was unsuitable. In health care research, what stops us from seeing an ‘effect’ may have more to do with the system than the intervention per se” (pp. 517–518, 513–514).
Similarly, important “nonspecific factors,” which are thought to contribute to a large proportion of variation in treatment outcomes, often remain unexplained when RCT results suggest equal effects across treatment types (the so-called “dodo bird effect”). These factors are likely to include important characteristics of the patient–therapist relationship, the personality and skill of the therapist, the particular phase of the therapeutic process, and the individual client’s responsiveness to such processes (Fishman, 2002).
Like almost all RCTs, the IMPACT study relies on a battery of widely used and validated outcome measures, including structured interviews and questionnaires. Although this makes it possible to have confidence in the reliability and validity of the measures, and to compare findings across a range of different studies, it also means that the only outcomes being measured are ones that have been predetermined by mental health professionals and researchers. However, as the study by Morris (2005) demonstrated, many outcomes that were identified by clients themselves using open-ended interviews (qualitative data) might never have been identified using standardized questionnaires, especially the kind of changes that go beyond symptom relief and touch on the wider (but ultimately just as important) questions about quality of life and the meaning that people attribute to their experiences. Although RCTs need to limit the number of variables that are observed and measured, in reality we are unlikely to truly understand what factors promote or hinder recovery without an understanding of the broader social, cultural, and organizational context in which therapy is taking place.
Moreover, in light of the high rates of dropout in adolescent outpatient psychotherapy (Pelkonen, Marttunen, Laippala, & Lönnqvist, 2000), it can be predicted that a considerable number of young people in the IMPACT RCT will drop out of therapy despite the clinicians’ and researchers’ best efforts. Standardized outcome measures are not designed to capture the complex processes that lead to treatment dropout (and failure), suggesting that the IMPACT study on its own could struggle to provide answers to pressing questions around what helps to facilitate (or hinder) the adolescents’ engagement and retention in therapy. Although psychotherapy research is often driven by a professional need to show the effectiveness of treatment, the reality is that we often learn most by understanding treatment failure.
effective interventions through quantitative methods, it is increasingly just as important to implement RCT studies in which the findings can be “translated” into clinical practice more meaningfully. Nevertheless, the CONSORT criteria for assessing the quality of clinical trials still place much greater emphasis on “science” issues than on issues related to clinical relevance or transferability of findings, and as such, the limitations of randomized clinical trials as a way of moving the field forward continue to be felt.
The Advantage of Incorporating Qualitative Data Into an RCT Study
While the Health Technology Assessment (HTA) programme was going ahead with the funding for a high-profile RCT to evaluate the effectiveness of psychological therapies in the treatment of adolescent depression, a subsequent paragraph from the same guidelines was given less attention. In the 2005 report, an “additional research recommendations” section argued:

“A qualitative study should be conducted that examines the experiences in the care pathway of children and young people and their families (and perhaps professionals) in order to inform decisions about what the most appropriate care pathway should be” (p. 41).
The NICE guidelines did not imply that such a qualitative study should be incorporated into an RCT design, as qualitative and quantitative studies have traditionally been carried out relatively independently from each other, drawing on different research paradigms and different research skills. Nevertheless, at the time that the IMPACT Study was designed, some thought had been given to including some qualitative data collection, but this idea had been shelved because it was considered too time- and labor-intensive. There was perhaps also a question about the value of such qualitative data, and whether it could add anything meaningful to the findings of an RCT study.
Certainly, when reviewing the existing psychotherapy research literature, there is a striking absence of mixed-method studies evaluating treatment efficacy or effectiveness, reflecting the fact, perhaps, that qualitative data are still considered to have a low standing in the “hierarchy of evidence.” In a recent review, McLeod (2011) identified only 12 studies of the outcome of psychotherapy (all with adults) in which the client’s perspective was also examined using open-ended qualitative interviews. In the field of child and adolescent psychotherapy, the voices of young people are even more marginalized, as an important report by Young Minds has noted (Street & Herts, 2005).
Yet incorporating qualitative approaches into an RCT design (as opposed to having entirely separate qualitative and quantitative studies) has a number of key advantages, as has been recognized more broadly in the social sciences community, where mixed-methods research is increasingly favored (Creswell, 2003; Tashakkori & Teddlie, 2003). Mixed-methods research has been defined as “the collection or analysis of both quantitative and qualitative data in a single study in which the data are collected concurrently or sequentially, are given a priority, and involve the integration of the data at one or more stages in the process of research” (Creswell, Plano Clark, Gutmann, & Hanson, 2003, p. 212). Rather than advocating a “hierarchy of evidence,” mixed-methods researchers argue that different research methods are appropriate for different types of research questions, and that for complex questions such as the effectiveness of a psychological therapy, a range of methods is probably essential to reach any meaningful conclusions. In mixed-method designs, the synthesis of quantitative and qualitative data can lead to a scenario in which findings corroborate each other, thereby strengthening the validity and generalizability of the study, or contradict each other, thereby raising epistemological questions that require further investigation in light of new empirical and theoretical models.
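As a purely illustrative sketch (the participant IDs, scores, and themes below are invented, and any real mixed-methods analysis is far richer), the “integration of the data” that this definition refers to can be pictured as pairing each participant’s quantitative outcome with the themes coded from their interview, and then asking where the two strands corroborate or contradict each other:

```python
# Hypothetical data: symptom change scores (negative = improvement)
# and themes coded from post-treatment interviews, per participant.
quantitative = {"P01": -14, "P02": -2, "P03": -11}
qualitative = {
    "P01": "felt understood; symptoms improved",
    "P02": "dropped out; therapy felt irrelevant",
    "P03": "improved, attributes change to family support",
}

def integrate(quant, qual, improvement_threshold=-5):
    """Pair each participant's outcome score with their interview theme,
    flagging whether the quantitative result shows improvement so that
    convergence or contradiction between strands can be examined."""
    merged = []
    for pid in sorted(quant):
        merged.append({
            "participant": pid,
            "change_score": quant[pid],
            "quantitatively_improved": quant[pid] <= improvement_threshold,
            "interview_theme": qual.get(pid, "no interview"),
        })
    return merged

for row in integrate(quantitative, qualitative):
    print(row["participant"], row["quantitatively_improved"], "-", row["interview_theme"])
```

Rows where the numeric flag and the interview theme diverge are exactly the cases the article argues a stand-alone RCT would leave unexplained.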
In recent years, there has been a growing interest in using qualitative methods alongside RCTs to assess complex health care interventions, although when this is done qualitative elements are often poorly integrated with the overall trial findings (Lewin, Glenton, & Oxman, 2009). And yet incorporating qualitative methods, such as in-depth interviewing, observations, and artifacts, into an RCT study may provide crucial insight into aspects such as the feasibility or acceptability of a particular intervention. To date this has been the most common way in which qualitative data have been used within RCTs. For example, Kramer and Burns (2008) collected qualitative postintervention interview data with service providers to better understand the factors facilitating the implementation of a CBT intervention for depressed adolescents within two mental health units. Similarly, interviews with research participants in a group CBT treatment trial have been used to help establish that both the intervention and the trial methodology were acceptable (Cramer, Salisbury, Conrad, Eldred, & Araya, 2011). While these studies used qualitative interviews to explore the feasibility of an RCT among those both delivering and receiving therapy, other researchers have noted that qualitative data can be valuable at various stages of the RCT process (Vuckovic, 2002). For example, qualitative data can help to “triangulate” and validate the findings from the quantitative measures (Balmer, Gikundi, Nasio, Kihuho, & Plummer, 1998); to make predictions about who is most likely to respond to a given intervention (Philips, Wennberg, & Werbart, 2007); to throw light on the multidimensional factors that contribute to more or less positive treatment outcomes (Verhoef, Casebeer, & Hilsden, 2002); to explain why an intervention did not work for some patients (Schumacher et al., 2005); or to highlight differences in outcome between participants in different arms of a study that had not been captured by the quantitative rating scales used in the RCT (Berk et al., 2011; Lunn, Poulsen, & Daniels, 2012).
Incorporating patients’ perspectives into RCT evaluations makes it possible to understand how they make sense of the difficulties they are facing, as well as the meaning they make of receiving therapeutic help. This is likely to be crucial in understanding why some people do not seek help, or may refuse or drop out of treatment. For example, Down, Willner, Watts, and Griffiths (2011) describe a mixed-method RCT of anger management interventions for adolescents. The researchers collected pre- and postintervention questionnaire (quantitative) and interview (qualitative) data to establish the effectiveness of the different treatments as well as to identify factors related to improved outcomes. This helped them to understand more about what factors contributed to effective treatment. In one of the only studies of its kind with young adults, von Below, Werbart, and Rehnberg (2010) have demonstrated that combining a clinical trial with in-depth interviews with clients allows for the development of important “process models” for the way out of
depression and that the integration of both approaches offers something that neither a clinical trial nor qualitative data can offer when looked at separately.
IMPACT-ME—A Qualitative Study “Nested” Within an RCT
In the light of the issues set out above, we would like to describe how we have established a qualitative longitudinal study—IMPACT-ME—as a “nested” study within the IMPACT RCT, aiming to interweave qualitative data based on the client’s own perspective with the more traditional quantitative data collected in randomized clinical trials, in a “double helix” design (Miller & Crabtree, 2008). We will first set out the overall design of the IMPACT-ME study in relation to the main RCT, before discussing some of the methodological issues that have arisen in attempting this kind of mixed-methods evaluation of psychological therapy as a means of reducing relapse among young people suffering from depression.
The primary aim of the IMPACT-ME study is to explore the experience of overcoming depression in adolescents (and their parents) who have undergone a course of psychological therapy within a CAMHS setting as part of the IMPACT RCT. The IMPACT-ME study is spread across three time points of data collection, involving in-depth interviews with young people and their parents (when applicable) before the start of treatment (Time 1), at the end of treatment (Time 2), and 1 year after the end of treatment (Time 3).
In the pretreatment phase of data collection, all young people and parents entering the IMPACT RCT will take part in a qualitative interview (the “Expectation of Therapy Interview”; Midgley et al., 2011a) investigating the way young people and their parents understand the difficulties that brought them to CAMHS and their hopes and expectations about therapy. The interview schedule is an adaptation of Elliott’s Change Interview (Elliott, Slatick, & Urman, 2001) and Werbart’s Private Theories Interview (Werbart & Levander, 2005). It can be used by the interviewer in a flexible way (in keeping with the principles of qualitative interviewing, e.g., Kvale, 1996) but covers: (a) what brought the young person to treatment and how these difficulties have been affecting the lives of the young person and those around them; (b) the interviewee’s understanding of those difficulties (how things came to be like this); (c) hopes for change and ideas about what could lead to meaningful change; and (d) ideas and expectations about therapy itself.
In the posttreatment phase of data collection (Times 2 and 3), all families in one of the regions (London) taking part in the main IMPACT study will be interviewed using the "Experience of Therapy Interview" (Midgley et al., 2011b), which revisits the topics explored in the earlier interview but adds an exploration of the young people's and their families' experiences of therapy and change over time, with a particular focus on the processes that led to positive or negative treatment outcomes and the broader cultural and contextual factors affecting those outcomes, as well as an exploration of the participant's experience of being involved in the research study (e.g., the process of randomization, research assessment meetings, audio-recording, etc.). A sample of therapists will also be interviewed at Time 2, subject to the consent of the young people.
Although the primary aim of the study is to explore young people's experiences of therapy and their own understanding of the process of change, a number of subsidiary questions will also be addressed, both at specific time points (e.g., young people's expectations of therapy, based on Time 1) and longitudinally. Specific attention will be paid to certain subgroups, such as those young people who withdrew from, or dropped out of, therapy, and those who appeared to benefit from therapy but who subsequently "relapsed" by the 1-year follow-up point. In an RCT looking at the role of psychological therapy in preventing relapse, and given the high levels of remission among those with depression who are untreated, it will be especially important to have this kind of interview data at the 1-year follow-up stage.
Pragmatic and Scientific Issues in the Design of the
IMPACT-ME Study
Despite the obvious advantages of incorporating qualitative data into an RCT in this way, there were a number of significant obstacles, both pragmatic and scientific, to making such a study possible. Not least of these was the issue of funding, with major funding bodies notoriously reluctant to fund qualitative studies in the field of psychotherapy research, despite the increasing emphasis in medical services more generally on service-user involvement and a recognition of the value of taking into account the user's perspective and experience. (Indeed, NICE published their first guidelines specifically on service-user experience in adult mental health settings in 2011.) We were extremely fortunate, however, that the Monument Trust, part of the Sainsbury Family Charitable Trusts, agreed to provide funds for an additional study, to be conducted alongside (and incorporated within) the main IMPACT Study. One of the co-principal investigators of the IMPACT-ME study (M.T.) is also a PI on the IMPACT Study, while the other (N.M.) is a Senior Research Fellow on the main RCT.
The first step toward setting up the IMPACT-ME study within the broader frame of the IMPACT RCT was to create a "third research community" among the research team, within which quantitative and qualitative researchers could embrace and value an integrated and flexible approach combining different aims, methods, and procedures. This challenge was heightened by the fact that the IMPACT-ME study officially started 2 years after the RCT was up and running, with approximately 100 of the 540 young people already recruited to the main study and with the plans for data collection and analysis already clearly set out (Goodyer et al., 2011). This meant that the IMPACT-ME qualitative researchers were given the challenging yet stimulating task of integrating themselves into a quantitative research "community," which needed molding and adaptation to embrace new objectives and methodologies that felt both complementary and contrasting.
This first phase of theoretical and philosophical integration, which entailed the adoption of a common set of lenses through which shared research aims could be understood and contextualized, required a delicate process of negotiation. The aim of this process was not only to reach a democratic acceptance of both paradigms under one roof, but rather to achieve a shared conviction that a mixed-method approach could best address the multilayered complexity of our investigation. In practice this also involved deciding on data-collection procedures as well as data analysis and integration procedures (Hanson, Creswell, Clark, Petska, & Creswell, 2005). After a meeting between the principal investigators of both studies, a presentation about the qualitative study was made to a national meeting of the IMPACT study, after which two of the authors (N.M. and F.A.) visited each of the study sites and offered a half-day training in qualitative research interviewing, followed up three months later with a review meeting once qualitative interviewing had begun. The focus at this stage, perhaps inevitably, was more on the collection of data than on how best to integrate the two strands at the stage of data analysis. It was agreed that each research assistant would contribute, at least to some extent, to both quantitative and qualitative data collection. This meant that each research assistant in the main IMPACT team had to learn a new set of skills, which needed to be integrated into their existing repertoire of skills and experience. Regular supervision sessions helped the researchers to adopt a "pragmatic" approach aimed at collecting both quantitative and qualitative data that fulfilled high standards of scientific rigor while addressing the complexity of individual experiences within ever-changing contexts. In practice, research assistants often spoke about needing to shift their focus in moving between qualitative and quantitative data collection, as more open-ended and fluid interviewing gave way to a style that was more structured and focused on gathering specific information.
When deciding on data-collection procedures, the key decisions revolve around the order in which the qualitative and quantitative data are collected (concurrently or sequentially) and the priority or emphasis given to each type of data (equal or unequal). For example, when setting up a mixed-method psychotherapy evaluation study, the researchers can decide whether quantitative data (such as standardized questionnaires) and qualitative data (such as in-depth interviews) are to be collected simultaneously or at different points in time, before, during, and after the intervention. As far as the priority of the data is concerned, it has to be decided whether equal status is given to the two sets of data or whether one type of data is used to inform or support the findings of the other.
In line with Creswell and colleagues' classification system (Creswell et al., 2003; as cited by Hanson et al., 2005), the combined IMPACT/IMPACT-ME design can be described both as a concurrent triangulation design and as a concurrent nested design. In a concurrent triangulation design, qualitative and quantitative data are collected simultaneously and equal priority is given to both sets of data, which are analyzed separately and integrated at the interpretation stage. The data sets will be "triangulated" to establish the degree of convergence or divergence of the findings, with the aim of addressing the main topics of investigation: namely, which treatments of adolescent depression have the most sustained therapeutic benefits over time, and how these treatments achieve what they appear to achieve (or fail to achieve). However, it could also be argued that the IMPACT-ME study is "nested" within the main RCT, in that the embedded qualitative data are collected to help address a subset of questions, such as exploring the experiences of stakeholders and carrying out case-study interviews with young people, parents, and clinicians involved in the RCT. On this view, the IMPACT-ME qualitative data do not hold equal status to the quantitative data, as they are aimed at addressing a subset of questions that branch off the main research questions.
One immediate concern, when considering how best to nest the qualitative element within the clinical trial, was the issue of "assessment burden" on the young people and their families. It was agreed that adding further assessment time points would be burdensome, so the qualitative data should be collected alongside the quantitative data at the same time points. Three time points were considered of particular importance: pretherapy (to explore expectations of therapy and young people's experience of depression), posttherapy (to explore the nature of change and the experience of treatment), and 1-year follow-up (to explore what factors contribute to relapse or to the maintenance of treatment gains). However, the principal investigators of the IMPACT study were concerned that adding a further qualitative interview to the heavy assessment load would be unmanageable for these vulnerable families and could lead to families withdrawing their participation from the study. It was therefore agreed to pilot the baseline qualitative interview with a small number of families in one of the study sites (London), in order to assess its impact.
When this pilot period came to an end, we were interested to discover that young people, their parents, and the research assistants were all extremely positive about the qualitative interview, which we had decided to use right at the start of the baseline assessment. Young people and their families reported that it made them feel as if they were being listened to and that their "story" mattered; research assistants said that it helped them to engage families and get to know them better before launching into a more structured psychiatric assessment and/or a set of questionnaires. Although the overall length of the baseline assessment slightly increased, the feedback was so positive in terms of the establishment of rapport between researcher and participants that it was decided to build the qualitative interview into all baseline assessments (rather than a subgroup, as had originally been proposed). This greatly increased the sample size for the qualitative study at baseline, with implications for how data were to be analyzed (see below).
Some concern was raised at this stage about the way in which the qualitative interviews could affect the outcome of the interventions: would they have a "therapeutic" effect in their own right? It can also be assumed (and the families themselves confirmed this) that in many cases the interviews were experienced by the families as a place where their thoughts and ideas about the treatment were especially valued and desired by the research staff, as opposed to any feeling that they were just the subject of observation, and that this is likely to have had an impact on their involvement in the study. Similar effects have been reported in studies that have looked at the impact of "therapeutic assessments" on treatment alliance (Hilsenroth, Cromer, & Ackerman, 2012). Although this might raise concerns about the way in which the research affects the findings, it is almost certain that any involvement with data collection as part of an RCT influences a family's experience of treatment. At least in the IMPACT study we hope to explore this impact, by examining the experience of participating in a research study as part of the IMPACT-ME interview and asking whether families themselves felt that this contributed to change.
Slightly different concerns were raised about the qualitative interviews at Times 2 and 3 (posttherapy). The research team discussed whether IMPACT-ME data collection at the end of treatment and at the 1-year follow-up could also be embedded in the existing IMPACT follow-up meetings, but concluded that the "Experience of Therapy Interview" required a separate (and optional) research meeting to ensure that enough space was given to achieve an in-depth understanding of each young person's (and family's) experience of receiving therapy as part of a large clinical trial. (Using the research assistants working on the RCT to conduct interviews about the experience of therapy would also have "unblinded" them, which would have seriously undermined the credibility of the findings of the RCT.) The current stage of IMPACT-ME posttreatment data collection is set up to closely follow the IMPACT participants' 36-week follow-up (which roughly coincides with the end of their treatment) and 86-week follow-up research meeting (roughly 1 year after the end of treatment). This way the "Experience of Therapy Interview" is timed to capture the young people's narratives about their journey through depression and therapy both soon after the end of treatment and 1 year later, but is conducted separately from the other research assessments, and by a different research assistant (with special training in qualitative research interviewing).
The way in which the qualitative component was "nested" within the RCT design also had implications for the sampling strategy to be used. It quickly became clear that the initial sampling strategy for the qualitative posttherapy component (45 families, equally divided between the three treatment arms and stratified according to age, gender, and treatment outcome) was not going to work. In particular, there would not be enough information available in advance to be able to sample families according to outcome of treatment, as this would only be known at a later stage. A decision was therefore made to invite all families in one of the three sites (London) to take part in the IMPACT-ME study, and to use a retrospective sampling strategy, in which participants would be sampled from among the "pool" of those who had been interviewed, depending on the particular research question. Overall this would increase the size of the posttherapy IMPACT-ME sample to about 80 families, rather than the 45 originally proposed, but it would allow for the possibility of using retrospective purposive sampling of the IMPACT-ME data in response to specific issues that arose during the course of the study. For example, a study looking at those who had benefited from therapy but relapsed by follow-up (Time 3) would sample retrospectively from all those who had been interviewed and whose trajectory of change matched the research question. Or, if there turned out to be an especially high level of dropout in one arm of treatment (although those who stayed in treatment appeared to benefit from it), then it would be possible to retrospectively sample those who had dropped out from that arm and explore what factors led to their withdrawal, an issue with important clinical implications that the quantitative data could not by themselves address.
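To make the retrospective purposive sampling strategy concrete, the selection step can be sketched as a simple filter over the pool of interviewed families. This is an illustrative sketch only: the record fields, arm labels, and criteria below are hypothetical stand-ins, not variables from the IMPACT-ME protocol.

```python
from dataclasses import dataclass

@dataclass
class Family:
    """One interviewed family in the qualitative 'pool' (hypothetical fields)."""
    family_id: int
    arm: str                 # treatment arm the young person was randomized to
    completed: bool          # stayed in treatment vs. withdrew or dropped out
    improved_post: bool      # met an improvement criterion at end of treatment
    relapsed_followup: bool  # met a relapse criterion by the 1-year follow-up

def sample_relapsers(pool):
    """Select those who appeared to benefit from therapy but later relapsed."""
    return [f for f in pool if f.improved_post and f.relapsed_followup]

def sample_dropouts(pool, arm):
    """Select those who dropped out of one particular treatment arm."""
    return [f for f in pool if f.arm == arm and not f.completed]

# Example: a toy pool of four interviewed families (arm labels illustrative).
pool = [
    Family(1, "CBT", True, True, False),
    Family(2, "CBT", False, False, False),
    Family(3, "STPP", True, True, True),
    Family(4, "BPI", True, False, False),
]

relapsers = sample_relapsers(pool)           # selects family 3 only
cbt_dropouts = sample_dropouts(pool, "CBT")  # selects family 2 only
```

Because sampling happens after interviewing, the same pool can be re-filtered against any trajectory of change that a later research question specifies.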
Data Analysis Issues for IMPACT-ME in the Context
of the IMPACT Study
Alongside the sampling strategy and the method of incorporating the qualitative interviews into the RCT design, the other major question was how best to manage such a large qualitative data set and how to address the question of data analysis and integration procedures. In particular, there was the challenge of how to transform data from one study (IMPACT-ME) in such a way that they could be integrated with findings from the other study (IMPACT). As defined by Caracelli and Greene (1993), "data transformation" refers to the process of translating one data type into another. For example, themes that emerge from analyzing qualitative interviews can be ordered into numerical codes and then compared with quantitative questionnaire data. Alternatively, results from a quantitative analysis could help identify a subsample to follow up with further qualitative investigation.
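As a concrete illustration of the first kind of data transformation, a qualitative theme can be coded as present (1) or absent (0) for each interview and then related to a quantitative change score. The theme, the scores, and all numbers below are invented for illustration; only the Python standard library is used.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical transformed data: one qualitative theme coded 1 (present in the
# interview) or 0 (absent), alongside each participant's symptom-change score.
theme_present = [1, 0, 1, 1, 0, 0, 1, 0]
change_score = [12, 3, 10, 9, 4, 2, 11, 5]

# With a binary code this Pearson r is the point-biserial correlation between
# theme presence and outcome.
r = pearson_r(theme_present, change_score)
```

The same quantified codes could equally be exported to a statistics package and entered into regression models alongside the questionnaire data.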
In the case of the IMPACT study and IMPACT-ME, we are aiming to analyze the data at a number of different levels, some separately and some in a combined way. For example, the baseline qualitative interviews will be analyzed alone as a way of investigating young people's experiences of depression and their expectations about therapy, but the qualitative data on expectations can also be coded quantitatively to look at correlations between expectations and outcome. Alongside this, a study examining a subsample of young people (such as those who dropped out of therapy) can draw on the quantitative data (looking at moderators or mediators of dropout) as well as sampling from the qualitative interviews to carry out an in-depth analysis of a small number of young people who dropped out of treatment. Over the course of time, we would anticipate a series of studies, some of which will report independently on findings from the qualitative or quantitative data, and others which will incorporate both, whether in the form of systematic case studies or studies focused on specific subgroups (e.g., those who responded well to CBT, or those whom therapists had initially thought were not "suitable" for the type of treatment they were randomized to, but who subsequently made good use of therapy).
At a practical level, the integration of qualitative and quantitative data, especially in studies that involve collecting large amounts of data, as with the IMPACT/IMPACT-ME study, has long been complicated by the fact that different types of data-analytic software are used for quantitative and qualitative analysis. Whereas software packages such as SPSS are widely used for quantitative data analysis, qualitative researchers have made use of software such as Atlas.ti (http://www.atlasti.com/index.html) and NVivo (http://www.qsrinternational.com/default.aspx), which until recently were purely textual. However, recent developments in qualitative data software mean that it is now much easier to integrate data from a package such as NVivo with statistical packages such as SPSS. A wider range of methods has also been developed to allow different "levels" of qualitative analysis. For example, within IMPACT-ME we plan to analyze large data sets using Framework Analysis (Ritchie & Spencer, 1994), which was developed to work both as a "data management tool" to systematically organize large data sets into sizable "chunks" and as an analytical process whereby the researchers maintain a creative and interpretative stance throughout. Framework Analysis (which is compatible with the NVivo software) can either pave the way toward developing a (quantitative) coding system, or can be the basis for a more in-depth exploratory form of qualitative analysis, such as Interpretative Phenomenological Analysis (IPA; Smith, Flowers, & Larkin, 2009), which explores the lived experience of participants in an idiographic way. IPA is especially helpful for studies such as this, focusing as it does on trying to "explore in detail the participant's view of the topic under investigation . . . an individual's perception or account of an object or event as opposed to an attempt to produce an objective statement of the object or event itself" (1999, p. 218). The method allows the researcher to build up the analysis from the reading of individual cases to the theorizing of themes at a group level, while retaining the focus on personal perceptions. (For further details, see Smith et al., 2009.)
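The "charting" step of Framework Analysis organizes summarized extracts into a case-by-theme matrix, which supports both within-case and cross-case reading. A minimal sketch of such a matrix follows; the case IDs, theme labels, and cell summaries are invented, and a real framework would typically live in NVivo rather than in code.

```python
# A framework matrix: rows are cases (interviews), columns are themes, and each
# cell holds a short summary of the relevant material from that interview.
framework_matrix = {
    "case_01": {
        "experience_of_depression": "Describes withdrawal from friends.",
        "expectations_of_therapy": "Hopes for someone who will listen.",
    },
    "case_02": {
        "experience_of_depression": "Reports low mood around exams.",
        "expectations_of_therapy": "Unsure what therapy will involve.",
    },
}

def column(matrix, theme):
    """Read one theme across all cases (the cross-case comparison step)."""
    return {case: cells[theme] for case, cells in matrix.items()}

# Cross-case view of a single theme, ready for thematic interpretation.
expectations = column(framework_matrix, "expectations_of_therapy")
```

Reading down a column in this way is what allows a charted data set either to feed a quantitative coding system or to anchor a more interpretative analysis such as IPA.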
The challenges (and opportunities) of a mixed-methods approach in this particular case are also increased by the fact that data are collected longitudinally, across a period of almost 2 years, and that for IMPACT-ME we will be integrating data from different perspectives (i.e., the young person, the parent, and the therapist). Despite a growing interest among social science researchers in using qualitative longitudinal research designs (Holland, Thomson, & Henderson, 2006), the analysis of longitudinal qualitative data still navigates uncharted waters. Researchers are faced with the challenge of developing innovative and experimental strategies to integrate cross-sectional analysis, capturing the nature of the sample at a particular data-collection point, with longitudinal analysis following individual trajectories over time. The ultimate aim is to bring together time, change, and process within complex multidimensional data sets (Holland et al., 2006).
Our research team decided to embrace a flexible approach, shifting between cross-sectional analyses, which focus on identifying the key theoretical framework and themes around particular research questions (such as the mechanisms of change in good-outcome adolescent patients), and individual longitudinal analyses, which hold the individual as the core unit of analysis within their unique psychosocial context. Pragmatic case studies (Dattilio et al., 2010; Fishman, 2002) have been shown to be useful in examining unexpected outcomes as well as in throwing light on the mechanisms of change. In conducting multiperspective case study research, bringing together detailed narratives from young people, parents, and therapists, there are ethical issues around the privacy and confidentiality of the individual cases, but there are now helpful guidelines on "best practice" in relation to systematic case study designs (e.g., McLeod, 2011), and several journals, including this one, now have special sections or regular features on "Evidence-Based Case Studies." Studies such as the one by Lunn et al. (2012) suggest that there is an increasing interest in undertaking case studies as part of RCTs.

Despite the considerable challenges involved in the analysis of such a complex multimethod data set, it is within our team ethos to welcome a research endeavor that contains the potential for methodological and analytical development and innovation throughout the entire research process. How successfully we are able to achieve this within the IMPACT-ME study remains to be seen.
Conclusion
In this article, we have outlined the conceptual advantages of a mixed-methods approach to the evaluation of psychotherapy, specifically, incorporating a qualitative study within an RCT. We have also described the scientific and pragmatic challenges of design and data analysis we have faced to date in the context of an ongoing study of the effectiveness of psychological treatment of adolescent depression currently underway in the United Kingdom (IMPACT-ME and IMPACT).
As yet, there have been few attempts to integrate qualitative data and data from RCTs in evaluating a psychological therapy (Lunn et al., 2012). As recently as 2005, Hanson et al. were lamenting that "virtually nothing has been written about mixed methods research designs in applied psychology generally" (p. 224). Although there are numerous and broadly recognized advantages in implementing mixed methodologies as part of mental health research (Hanson et al., 2005) and social science research at large (Creswell, 2003), in this article we have aimed specifically to illustrate the benefits of combining quantitative and qualitative research as part of an RCT of psychological and psychiatric interventions.
Such a mixed-methods approach, we have argued, has several advantages. The "triangulation" of the findings combines quantitative outcome data about the effectiveness of treatment with a deeper understanding of the therapeutic process and the mechanisms of change that lead to such outcomes. The collection of in-depth interview data alongside the battery of standardized outcome measures will shed light on important questions about the factors facilitating or hindering the young people's engagement and retention in therapy, including dropout and treatment failure. In this way we can go beyond a set of predefined outcomes to include unexpected broader social, cultural, and contextual factors, and so build a more complex, reality-based model of adolescent depression and of the process of change inside and outside therapy. Furthermore, qualitative data allow us to "zoom in" on individual differences in the beliefs and preferences of young people, families, and therapists that affect treatment alliance, retention, and outcome. In this way, combining the scientific rigor of quantitative data about patterns of change with the phenomenological, contextualized strength of qualitative data, which can help us to understand the meaning of therapeutic interventions, will increase the transferability of the findings into improvements in clinical practice, thereby addressing some elements of the implementation gap in psychotherapy research.
Nevertheless, we want to acknowledge that there are real challenges, both conceptual and pragmatic, to nesting qualitative research within an RCT. In setting up the IMPACT-ME study as part of the IMPACT RCT, we first encountered a conceptual challenge in integrating the theoretical and philosophical framework of a newly formed research team including both quantitative and qualitative researchers. The creation of a collaborative and effective "mixed-method" research team required the careful negotiation of a shared system of values in order to embrace a pragmatic, flexible stance in approaching our epistemological query. The classical positivist paradigm of most quantitative researchers, relying on the "hard science" of statistical and numerical assessments, had to be integrated with the rather different epistemology of qualitative research. Although this has been successfully done at the level of data collection, through a series of meetings and trainings, it remains to be seen how successfully it can be achieved at the level of data analysis. Given that the main RCT and the qualitative study were set up and funded separately, it is likely that the initial stages of data analysis will be independent of each other, and that true integration at the level of data analysis will come at the stage of secondary data analyses and substudies. Whether this would have been different if the qualitative component had been established from the start is an important question.
Certainly there is now much greater common ground between qualitative and quantitative researchers, and there are an increasing number of researchers who have been trained in, and are comfortable working with, both types of data. It is now widely accepted that the most fitting philosophical basis to support mixed-method research is found in pragmatism (Tashakkori & Teddlie, 2003) or critical realism (Blackwood et al., 2010), although some argue that mixed-methods approaches sit most comfortably with a "postparadigm" generation of researchers for whom the "paradigm wars" of previous generations have been replaced by a view that there are multiple paths toward knowledge, and that no one approach can address all the questions we wish to address (Wheeldon, 2010). The "third research community" (Teddlie & Tashakkori, 2009) argues that the understanding of reality is provisional and ever changing, and that equal value should be given to both objective and subjective knowledge; different methods, techniques, and procedures, flexibly tailored to the purposes of each epistemological query, can lead to a more balanced and complete view of social phenomena by drawing on the strengths of both approaches and increasing the internal and external validity of findings (Dures, Rumsey, Morris, & Gleeson, 2010).
Alongside these conceptual challenges, the research team has had to sustain a continuous self-reflective and monitoring process to ensure a smooth and flexible transition between more structured, form-filling data-collection procedures (quantitative) and an open-ended, in-depth style of interviewing in which the aim is less to categorize and more to explore the meaning of human experience. Other aspects that required careful consideration include the implementation of longitudinal qualitative data-collection procedures as part of an RCT whose plans for data collection and analysis had already been set out (Goodyer et al., 2011). This process has entailed pragmatic decisions about sampling and about the timing of the different waves of data collection to minimize the "assessment burden" on the participants, as well as creative attempts to manage the integration of large quantitative and qualitative data sets and the analysis of a large set of qualitative longitudinal data. The management and analysis of complex multidimensional data will require a flexible approach bringing together mixed-method statistical analysis, including the process of "quantifying" qualitative data and vice versa, as well as cross-sectional and individual longitudinal analysis of large qualitative data sets.
To conclude, there are ongoing challenges involved in incorporating qualitative data within RCTs focusing on the effectiveness of complex health care interventions. Yet it is our view that only multimethod research can truly help us to evaluate the effectiveness of psychological therapies in such a way that such studies can also support the effective implementation of our findings within increasingly complex and multidisciplinary clinical contexts. In psychotherapy research, we suggest, the "gold standard" can no longer be identified as the use of one methodology in isolation, whether that methodology is the RCT or even the meta-analysis of RCTs. For modern psychotherapy researchers, the term "gold standard" (if we wish to retain it at all) can only refer to the reflective and critical integration of a range of methods (Dattilio et al., 2010). The conception of RCTs as belonging to the highest tier of the hierarchy of evidence, while single-case studies, expert opinion, and qualitative investigations are placed in the lowest ranks, is not only out of date but also potentially harmful and misleading; if we wish to address real-world issues such as how best to help depressed young people whose difficulties can potentially have long-term consequences, we need to see more mixed-methods studies in which qualitative data are nested within RCT designs.
References
Angold, A., & Costello, E. J. (2001). The epidemiology of depression in children and adolescents. In I. Goodyer (Ed.), The depressed child and adolescent (pp. 143–178). Cambridge: Cambridge University Press.
Balmer, D. H., Gikundi, E., Nasio, J., Kihuho, F., & Plummer, F. A. (1998). A clinical trial of group counselling for changing high-risk sexual behaviour in men. Counselling Psychology Quarterly, 11, 33–43.
Berk, M., Munib, A., Dean, O., Malhi, G. S., Kohlmann, K., Schapkaitz, I., . . . Bush, A. I. (2011). Qualitative methods in early-phase drug trials: Broadening the scope of data and methods from an RCT of N-acetylcysteine in schizophrenia. Journal of Clinical Psychiatry, 72, 909–913. doi:10.4088/JCP.09m05741yel
Blackwood, B., O'Halloran, P., & Porter, S. (2010). On the problems of mixing RCTs with qualitative research: The case of the MRC framework for the evaluation of complex healthcare interventions. Journal of Research in Nursing, 15, 511–521.
Britten, N. (2010). Qualitative research and the take-up of evidence-based practice. Journal of Research in Nursing, 15, 537–544.
Caracelli, V. J., & Greene, J. C. (1993). Data analysis strategies for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 15, 195–207.
Cramer, H., Salisbury, C., Conrad, J., Eldred, J., & Araya, R. (2011). Group cognitive behavioural therapy for women with depression: Pilot and feasibility study for a randomized controlled trial using mixed methods. BMC Psychiatry, 11, 82.
Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches (2nd ed.). Thousand Oaks, CA: Sage.
Creswell, J. W., Plano Clark, V. L., Gutmann, M. L., & Hanson, W. E. (2003). Advanced mixed methods research designs. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 209–240). Thousand Oaks, CA: Sage.
Dattilio, F. M., Edwards, D. J. A., & Fishman, D. B. (2010). Case studies within a mixed-method paradigm: Toward a resolution of the alienation between researchers and practitioners in psychotherapy research. Psychotherapy: Theory, Research, Practice, Training, 47, 427–441.
Down, R., Willner, P., Watts, L., & Griffiths, J. (2010). Anger management groups for adolescents: A mixed-methods study of the efficacy and treatment preferences. Clinical Child Psychology and Psychiatry, 16, 33.
Dures, E., Rumsey, N., Morris, M., & Gleeson, K. (2010). Mixed methods in health psychology: Theoretical and practical considerations of the third paradigm. Journal of Health Psychology, 16, 332.
Elliott, R., Slatick, E., & Urman, M. (2001). Qualitative change process research on psychotherapy: Alternative strategies. In J. Frommer & D. L. Rennie (Eds.), Qualitative psychotherapy research: Methods and methodology (pp. 69–111). Lengerich, Germany: Pabst Science.
Fishman, D. B. (2002). From single case to database: A new method for enhancing psychotherapy, forensic, and other psychological practice. Applied & Preventive Psychology, 10, 275–304.
Ford, T., Goodman, R., & Meltzer, H. (2003). The British child and adolescent mental health survey 1999: The prevalence of DSM–IV disorders. Journal of the American Academy of Child & Adolescent Psychiatry, 42, 1203–1211.
Goodyer, I. M., Tsancheva, S., Byford, S., Dubicka, B., Hill, J., Kelvin, R., . . . Fonagy, P. (2011). Improving mood with psychoanalytic and cognitive therapies (IMPACT): A pragmatic effectiveness superiority trial to investigate whether specialized psychological treatment reduces the risk for relapse in adolescents with moderate to severe unipolar depression: Study protocol for a randomized controlled trial. Trials, 12, 175. Retrieved from http://www.trialsjournal.com/content/12/1/175
Hanson, W. E., Creswell, J. W., Clark, V. P., Petska, K. S., & Creswell, J. D. (2005). Mixed-method research in counselling psychology. Journal of Counseling Psychology, 52, 224–235.
Harrington, R., Fudge, H., Rutter, M., Pickles, A., & Hill, J. (1990). Adult outcomes of childhood and adolescent depression: I. Psychiatric status. Archives of General Psychiatry, 47, 465–473.
Hill, C., Chui, H., & Baumann, E. (2013). Revisiting and reenvisioning the outcome problem in psychotherapy: An argument to include individualized and qualitative measurement. Psychotherapy, 50, 68–76. doi:10.1037/a0030571
Hilsenroth, M., Cromer, T., & Ackerman, S. (2012). How to make practical use of therapeutic alliance research in your clinical work. In R. A. Levy, J. S. Ablon, & H. Kächele (Eds.), Psychodynamic psychotherapy research: Evidence-based practice and practice-based evidence, current clinical psychiatry. New York: Springer. doi:10.1007/978-1-60761-792-1_22
Holland, J., Thomson, R., & Henderson, S. (2006). Qualitative longitudinal research: A discussion paper. Retrieved from http://www.Isbu.ac.uk
Hollon, S. D. (2006). Randomized clinical trials. In J. C. Norcross, L. E. Beutler, & R. F. Levant (Eds.), Evidence-based practices in mental health: Debate and dialogue on the fundamental questions (pp. 96–105). Washington, DC: American Psychological Association.
Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33, 14–26.
Kazdin, A. E. (2009). Understanding how and why psychotherapy leads to change. Psychotherapy Research, 19, 418–428.
Kramer, T. F., & Burns, B. J. (2008). Implementing cognitive behavioural therapy in the real world: A case study of two mental health centres. Implementation Science, 3, 14.
Kvale, S. (1996). Interviews: An introduction to qualitative research interviewing. Thousand Oaks, CA: Sage.
Lewin, S., Glenton, C., & Oxman, A. D. (2009). Use of qualitative methods alongside randomised controlled trials of complex healthcare interventions: Methodological study. British Medical Journal, 339, b3496.
Lunn, S., Poulsen, S., & Daniel, S. I. F. (2012). A multiple case study of psychoanalytic therapies for clients with bulimia nervosa. Nordic Psychology, 64, 87–102.
McLeod, J. (2011). The role of qualitative methods in outcome research. In J. McLeod (Ed.), Qualitative research in counselling and psychotherapy (2nd ed., pp. 161–180). London: Sage.
Midgley, N., Ansaldo, F., Parkinson, S., Holmes, J., Stapley, E., & Target, M. (2011a). Expectations of therapy interview (Young Person and Parent Versions). Unpublished manuscript, Anna Freud Centre, London.
Midgley, N., Ansaldo, F., Parkinson, S., Holmes, J., Stapley, E., & Target, M. (2011b). Experience of therapy interview (Young Person, Parent and Therapist Versions). Unpublished manuscript, Anna Freud Centre, London.
Miller, W., & Crabtree, B. (2008). Clinical research. In N. Denzin & Y. Lincoln (Eds.), Strategies of qualitative inquiry (3rd ed.). Newbury Park, CA: Sage.
Morris, B. (2005). Discovering bits and pieces of me: Research exploring women’s experiences of psychoanalytic psychotherapy. London: Women’s Therapy Centre. Retrieved from www.womenstherapycentre.co.uk/news/news/html
NICE. (2005, September). Depression in children and young people: Identification and management in primary, community and secondary care. London: National Institute for Health and Clinical Excellence.
Noyes, J. (2010). Never mind the qualitative feel the depth! The evolving role of qualitative research in Cochrane intervention reviews. Journal of Research in Nursing, 15, 525.
Pelkonen, M., Marttunen, M., Laippala, P., & Lönnqvist, J. (2000). Factors associated with early dropout from adolescent psychiatric outpatient treatment. Journal of the American Academy of Child and Adolescent Psychiatry, 39, 329–336.
Philips, B., Wennberg, P., & Werbart, A. (2007). Ideas of cure as a predictor of premature termination, early alliance and outcome in psychoanalytic psychotherapy. Psychology and Psychotherapy: Theory, Research and Practice, 80, 229–245.
Ritchie, J., & Spencer, L. (1994). Qualitative data analysis for applied policy research. In A. Bryman & R. G. Burgess (Eds.), Analyzing qualitative data (pp. 173–194). London: Routledge.
Rudolph, K. D., & Klein, D. N. (2009). Exploring depressive personality traits in youth: Origins, correlates, and developmental consequences. Development and Psychopathology, 21, 1155–1180.
Schulz, K. F., Altman, D. G., & Moher, D. (2010). CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. British Medical Journal, 340, c332.
Schumacher, K. L., Koresawa, S., West, C., Dodd, M., Paul, S. M., Tripathy, D., . . . Miaskowski, C. (2005). Focus on research methods: Qualitative research contribution to a randomized clinical trial. Research in Nursing & Health, 28, 268–280.
Smith, J. A., Flowers, P., & Larkin, M. (2009). Interpretative phenomenological analysis: Theory, method and research. London: Sage.
Spillane, P. S., Pareja, A. S., Dorner, L., Barnes, C., May, H., Huff, J., & Camburn, E. (2010). Mixing methods in randomized controlled trials (RCTs): Validation, contextualization, triangulation, and control. Educational Assessment, Evaluation and Accountability, 22, 5–28.
Street, C., & Herts, B. (2005). Putting participation into practice. London: Young Minds.
Tashakkori, A., & Teddlie, C. (Eds.). (2003). Handbook of mixed methods in social and behavioral research. Thousand Oaks, CA: Sage.
Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research. Thousand Oaks, CA: Sage.
Verhoef, M. J., Casebeer, A. L., & Hilsden, R. J. (2002). Assessing efficacy of complementary medicine: Adding qualitative research methods to the “Gold Standard”. Journal of Alternative and Complementary Medicine, 8, 275–281.
von Below, C., Werbart, A., & Rehnberg, S. (2010). Experiences of overcoming depression in young adults in psychoanalytic psychotherapy. European Journal of Psychotherapy and Counseling, 12, 129–147.
Vuckovic, N. (2002). Integrating qualitative methods in randomized controlled trials: The experience of the Oregon Center for Complementary and Alternative Medicine. Journal of Alternative and Complementary Medicine, 8, 225–227. doi:10.1089/10755530260127916
Werbart, A., & Levander, S. (2005). Understanding the incomprehensible: Private theories of first-episode psychotic patients and their therapists. The Bulletin of the Menninger Clinic, 69, 103–136.
Wheeldon, J. (2010). Mapping mixed methods research: Methods, measures, and meaning. Journal of Mixed Methods Research, 4, 87–102. doi:10.1177/1558689809358755
Received June 16, 2013
Accepted June 18, 2013