This is the latest from the research team of the Heart and Soul of Change Project, published in the Journal of Consulting and Clinical Psychology. This study demonstrated not only that PCOMS is a viable quality improvement strategy but also that services to the poor and disenfranchised provided in a public behavioral health setting, contrary to earlier research, can be as effective as those delivered in randomized clinical trials.
Slone, N. C., Reese, R. J., Mathews-Duvall, S., & Kodet, J. (2015). Evaluating the efficacy of client feedback in group psychotherapy. Group Dynamics: Theory, Research, and Practice, 19, 122-136. doi:10.1037/gdn0000026
The feedback condition achieved nearly four times the number of clients reaching reliable or clinically significant change, and a nearly 50% lower separation/divorce rate at follow-up.
This study examined the effects of using client feedback, known as the Partners for Change Outcome Management System (PCOMS), with couples undergoing psychotherapy. Forty-six heterosexual couples were randomly assigned to either a treatment as usual (TAU) condition or to a feedback condition in which therapists received feedback on client progress and the therapeutic alliance at each session via the Outcome Rating Scale (ORS) and Session Rating Scale (SRS). It was hypothesized that couples receiving feedback would have better outcomes, improve more quickly, and be more likely to meet the criteria for clinically significant change. Results from this study aimed to replicate previous research finding client feedback beneficial for couples therapy.
Duncan & Sparks, Ch. 5 of Cooper & Dryden (Barry Duncan)
THIS CHAPTER DISCUSSES
• Systematic feedback and the Partners for Change Outcome Management System (PCOMS)
• PCOMS as a way to truly privilege clients, include them as full partners in decision-making, and operationalize social justice and a pluralistic approach
This two-page article, which appeared in The Iowa Psychologist, provides an ultra-brief summary of what makes therapy effective (the common factors) and how we can get better at what we do: namely, add PCOMS, harvest clients' existing resources, and rely on that neglected old friend, the therapeutic alliance.
The original validation of the CORS for kids and the ORS for adolescents. It allowed the benefits of client-based outcome feedback to expand to youth and families, and paved the way to the current RCT with kids in the schools.
The Partners for Change Outcome Management System: Duncan & Reese, 2015 (Barry Duncan)
Despite overall psychotherapy efficacy (Lambert, 2013), many clients do not benefit (Reese, Duncan, Bohanske, Owen, & Minami, 2014), dropouts are a problem (Swift & Greenberg, 2012), and therapists vary significantly in success rates (Baldwin & Imel, 2013), are poor judges of negative outcomes (Chapman et al., 2012), and grossly overestimate their effectiveness (Walfish, McAlister, O'Donnell, & Lambert, 2012). Systematic client feedback offers one solution (Duncan, 2014). Several feedback systems have emerged (Castonguay, Barkham, Lutz, & McAleavey, 2013), but only two have randomized clinical trial support and are included in the Substance Abuse and Mental Health Services Administration's National Registry of Evidence-Based Programs and Practices: the Outcome Questionnaire-45.2 System (Lambert, 2010) and the Partners for Change Outcome Management System (PCOMS; Duncan, 2012). This article presents the current status of the Partners for Change Outcome Management System, the psychometrics of the PCOMS measures, its empirical support, and its clinical and training applications. Future directions and implications of PCOMS research, training, and practice are detailed. Finally, we propose that systematic feedback offers a way, via large-scale data collection, to re-prioritize what matters to psychotherapy outcome, reclaim our empirically validated core values and identity, and change the conversation from a medical-model-dominated discourse to a more scientific, relational perspective.
PCOMS works with kids too!
Cooper, M., Stewart, D., Sparks, J., & Bunting, L. (2013). School-based counseling using systematic feedback: A cohort study evaluating outcomes and predictors of change. Psychotherapy Research, 23, 474-488.
This article, "Casting a Wider Net in Behavioral Health Screening in Primary Care" found that the ORS identified more clients for behavioral healthcare consultation than the PHQ-9. A first step toward the upcoming RCT with PCOMS in an integrated setting.
This document summarizes a study that analyzed written responses from clients who had completed couple therapy. The study explored how clients experienced therapy through their responses to open-ended questions about therapy at a 6-month follow-up. The responses were analyzed thematically and compared between clients whose therapists did or did not use systematic feedback. Most clients found personable, active therapists who maintained neutrality to be helpful. Some expressed dissatisfaction with lack of structure or challenge from therapists. Lack of flexibility in scheduling was also problematic. Clients who used feedback generally found it very helpful.
PCOMS as an Alternative to Psychiatric Diagnosis (Duncan, Sparks, & Timimi, 2...) (Barry Duncan)
Part of an incredible series about diagnostic alternatives by the Journal of Humanistic Psychology edited by Sarah Kamens, Brent Dean Robbins, & Elizabeth Flanagan
The Norway Couple Project: Lessons Learned (Barry Duncan)
The document discusses lessons learned from studies on using client feedback to improve outcomes in couple therapy. A large randomized clinical trial in Norway found that routinely collecting and discussing client feedback on progress and the therapeutic alliance using brief measures led to better outcomes compared to treatment as usual. Specifically, couples receiving feedback showed greater improvement in their relationships and were less likely to deteriorate over time. The findings suggest incorporating systematic client feedback into routine practice can help therapists improve outcomes for couples across different therapy approaches.
When children and teens present with behaviour and emotional problems, the lure of a quick fix is understandable, and drugs present a ready-made solution. Therapists are often hesitant to talk about medication and defer to medical professionals. In this paper, Duncan, Sparks, Murphy, and Miller highlight the explosion in the use of psychotropic medications for children and teens. This trend flies in the face of the American Psychological Association's recommendation of psychosocial interventions as the first intervention of choice with children and teens. The reliability and validity of psychiatric diagnoses are questioned, in particular against a background of fluctuations in child development and social adaptation, and a compelling critique is provided of the current research findings on the effectiveness of psychotropic medications, including antidepressants and ADHD medications. Therapists are urged to shed their timidity and openly discuss the risks and benefits of medication, with the knowledge that there is empirical support for psychosocial interventions as a first-line approach. Recommendations are offered to engage clients as central partners in developing solutions, medical or non-medical, that fit each child and each situation.
Our recent article about therapist effects in couple therapy. So what distinguished one therapist from another? Demographics didn't matter, but two other things did. First, that tried-and-true but neglected old friend, the alliance, accounted for 50% of the differences among therapists: those who formed better alliances across clients got better outcomes. And therapist-specific experience with couples accounted for 25% of the differences. So experienced therapists can take some solace that getting older does have its advantages, as long as the experience is specific to the task at hand.
The first quasi-experimental study of the ORS/SRS in a telephonic EAP company. It doubled outcomes and improved retention, setting the stage for the RCTs that followed.
Although many of you may not be interested in the psychometric details of the ORS and SRS, the topic bears importantly on whether they are seen as credible. Jeff Reese and I (Duncan & Reese, 2013) recently exchanged views with Halstead, Youn, and Armijo (2013), debating when a measure is too brief and when it is too long. Here is our paper. First, regarding when a measure is too brief: there is no doubt that 45 items, 30 items, or even 19 items is psychometrically better than 4 items, and that the increased reliability and validity of longer measures likely result in better detection, prediction, and ultimate measurement of outcome. But how much better is really the question. Are these differences clinically meaningful, and do they offset the low compliance rates and resulting data integrity issues from missing data? These are the questions that require empirical investigation to determine how brief is too brief, although from my experience, the verdict has already been rendered. But when is a measure too long? The answer is simple: when clinicians won't use it.
The article discusses the development and research supporting the Partners for Change Outcome Management System (PCOMS). PCOMS uses two brief measures - the Outcome Rating Scale (ORS) and Session Rating Scale (SRS) - to collect feedback from clients at each session on their progress and the therapeutic alliance. The ORS and SRS were developed to be brief and feasible for routine use. Research shows providing therapists feedback based on these measures improves client outcomes compared to treatment as usual. The article outlines how PCOMS was developed and refined, presents supporting research on the measures' psychometrics and clinical usefulness, and discusses examples of implementing PCOMS in behavioral health settings.
This article discusses applying research on psychotherapy outcomes, which has shown that common factors like the therapeutic relationship are more influential than theoretical approach or techniques. The article proposes intentionally using the client's frame of reference to enhance common factors and collaboration. It suggests emphasizing the client's perceptions of their relationship with the therapist and understanding of their issues over theoretical perspectives. A client-directed process is outlined that de-emphasizes theory and maximizes common factors and the client's involvement.
This study examined the psychometric properties of Dutch translations of the Outcome Rating Scale (ORS) and Session Rating Scale (SRS). Data was collected from 126 clients who completed a total of 1005 ORS and SRS assessments over multiple therapy sessions. Results found the Dutch translations had good internal consistency and test-retest reliability, similar to previous American studies. Scores on the ORS and SRS also converged with therapist satisfaction ratings. Additionally, SRS scores predicted later ORS scores, supporting the validity of both measures. Overall, the study provides preliminary support for using the Dutch ORS and SRS in cross-cultural settings.
This is the validation study of the Group Session Rating Scale (GSRS). In a nutshell, this study found more than acceptable reliability and validity, with the GSRS converging not only with an alliance measure but also with group climate and cohesiveness scales. The GSRS was also predictive of last-session outcomes. An RCT comparing PCOMS to TAU in group therapy has been submitted.
Overview of PCOMS and couple and family therapy.
Duncan, B., & Sparks, J. (2017). The Partners for Change Outcome Management System. In J. L. Lebow, A. L. Chambers, & D. C. Breunlin (Eds.), Encyclopedia of Couple and Family Therapy (pp. 1-10). New York: Springer.
This document discusses what makes an effective or "master" therapist. It begins by arguing that psychotherapy is a relational endeavor dependent on the client and therapist's connection, not just evidence-based treatments. The most important thing a therapist can do is identify clients who are not benefiting and change course.
It then discusses four questions about what makes an effective therapist. In response to the first question about what they do, the author emphasizes routinely measuring outcomes and the therapeutic alliance to ensure client perspectives are central. For the second question about who they are, the author believes their belief in clients and psychotherapy's ability to create change is important.
In response to the third question about what defines an extraordinary therapist, the author argues
This document summarizes a study on implementing a systematic client feedback protocol into a marriage and family therapy training program to improve trainee competence and accountability. The study describes how the program integrated continuous client feedback into coursework, clinical training, and supervision using an Outcome Management system. Research shows that incorporating client feedback improves client outcomes and therapist effectiveness. The program believes this approach will train therapists to be more accountable to clients and enhance services provided at their family therapy clinic.
Article by Dr. Mary Haynes about her agency's journey to a recovery orientation via CDOI and PCOMS, published in the SAMHSA Recovery to Practice Newsletter.
What Is Client Directed Outcome Informed? (Scott Miller)
Client Directed Outcome Informed (CDOI) clinical work privileges the client's perspective and uses their feedback to guide treatment in a partnership between client and provider. Several mental health and substance abuse treatment organizations that have implemented CDOI report improved outcomes like higher retention rates and lower costs from reduced sessions and cancellations. Research shows involving clients in decisions about their treatment and focusing on whether treatment is working improves success rates by an average of 65%.
Most therapists want to improve their skills and help more clients. However, research shows that factors like personal therapy, specific treatment approaches, training, or experience do not necessarily correlate with better outcomes. After studying thousands of therapists over 15 years, one key factor was identified - "Healing Involvement", where therapists are fully engaged with clients through empathy, skills, efficacy, and handling difficulties constructively. This state can be achieved through career development improving skills over time, self-care reducing burnout, and connection to purpose and values in their work.
The fourth RCT demonstrating the power of feedback to improve outcomes, this time with group therapy for soldiers with substance abuse problems. All of the RCTs of PCOMS have been conducted by the Heart and Soul of Change Project.
The document discusses research on whether using a continuous feedback system called the Partners for Change Outcome Management System (PCOMS) can improve psychotherapy outcomes. PCOMS involves clients completing brief measures after each session to assess treatment progress and the therapeutic relationship. Studies found that clients who used PCOMS with their therapists demonstrated statistically significant treatment gains compared to those receiving usual treatment and were more likely to experience reliable change in fewer sessions.
The Partners for Change Outcome Management System (PCOMS) uses brief scales completed by clients at each session to provide feedback on client progress and the therapeutic alliance. This allows clinicians to identify clients at risk for negative outcomes early. Five randomized clinical trials have shown that PCOMS significantly improves treatment outcomes and reduces costs by shortening treatment length and increasing provider productivity. Hundreds of organizations in the U.S. and other countries have implemented PCOMS, which involves clients in their care while respecting clinicians' time.
THIS CHAPTER DISCUSSES
• The empirical evidence supporting a strengths-based approach
• Specific practice guidelines for recruiting client resources to promote change
• The link between pluralistic counselling and a focus on client strengths
This article argues that client perspectives have been overlooked in psychotherapy integration efforts. It proposes conducting therapy within the context of the client's own theory of change, which privileges the client's voice as the source of wisdom and solution. The client should be seen as the heroic driver of the therapeutic process, not just as an object of assessment and intervention by the therapist. Research shows that client factors such as strengths, perceptions of the therapeutic relationship, and resources account for the majority of improvement in therapy. Therefore, integration approaches should focus on understanding and incorporating the client's own ideas about the problem and how change occurs.
Barry's standard handouts providing a narrative description of what he presents. Includes a discussion of the common factors and the Partners for Change Outcome Management System
PCOMS and an Acute Care Inpatient Unit: Quality Improvement and Reduced Readm... (Barry Duncan)
High psychiatric readmission rates continue while evidence suggests that care is not perceived by patients as "patient centered." Research has focused on aftercare strategies with little attention to the inpatient treatment itself as an intervention to reduce readmission rates. Quality improvement strategies based on patient-centered care may offer an alternative. We evaluated outcomes and readmission rates using a benchmarking methodology with a naturalistic data set from an inpatient psychiatric facility (N = 2,247) that used a quality-improvement strategy called systematic patient feedback. A systematic patient feedback system, the Partners for Change Outcome Management System (PCOMS), was used. Overall pre-post effect sizes were d = 1.33 and d = 1.38 for patients diagnosed with a mood disorder. These effect sizes were statistically equivalent to RCT benchmarks for feedback and depression. Readmission rates were 6.1% (30 days), 9.5% (60 days), and 16.4% (180 days), all lower than national benchmarks. We also found that patients who achieved clinically significant treatment outcomes were less likely to be readmitted. We tentatively suggest that a focus on real-time patient outcomes as well as care that is "patient centered" may provide lower readmission rates.
No evidence for demand characteristics or social desirability with the Session Rating Scale.
Reese, R. J., Gillaspy, J. A., Owen, J. J., Flora, K. L., Cunningham, L. E., Archie, D., & Marsden, T. (2013). The influence of demand characteristics and social desirability on clients’ ratings of the therapeutic alliance. Journal of Clinical Psychology, 69, 696-709.
Do people fill out the SRS differently if the therapist is in the room? - Scott Miller
This study examined how demand characteristics and social desirability may influence clients' ratings of the therapeutic alliance. 102 clients from two university counseling centers were randomly assigned to one of three conditions for providing alliance feedback: immediate feedback where ratings were discussed with the therapist, next session feedback where ratings were private and discussed later, and no feedback where ratings were private and not shared. The study found no significant differences in alliance scores across the feedback conditions and scores were not correlated with social desirability but were correlated with an established alliance measure, providing evidence that scores were not inflated due to demand characteristics.
Summary of SAMHSA's review and listing of Feedback Informed Treatment as an evidence-based practice. The International Center for Clinical Excellence received perfect scores for readiness-for-dissemination materials.
Benchmarking the Effectiveness of Psychotherapy Treatment for Adult
Depression in a Managed Care Environment: A Preliminary Study
Takuya Minami
University of Utah
Bruce E. Wampold and Ronald C. Serlin
University of Wisconsin–Madison
Eric G. Hamilton
PacifiCare Behavioral Health
George S. (Jeb) Brown
Center for Clinical Informatics
John C. Kircher
University of Utah
This preliminary study evaluated the effectiveness of psychotherapy treatment for adult clinical depression provided in a natural setting by benchmarking the clinical outcomes in a managed care environment against effect size estimates observed in published clinical trials. Overall results suggest that effect size estimates of effectiveness in a managed care context were comparable to effect size estimates of efficacy observed in clinical trials. Relative to the 1-tailed 95th-percentile critical effect size estimates, effectiveness of treatment provided in this setting was observed to be between 80% (patients with comorbidity and without antidepressants) and 112% (patients without comorbidity concurrently on antidepressants) as compared to the benchmarks. Because the nature of the treatments delivered in the managed care environment was unknown, it was not possible to draw conclusions about treatments. However, while replications are warranted, concerns that psychotherapy delivered in a naturalistic setting is inferior to treatments delivered in clinical trials appear unjustified.
Keywords: benchmarking, effectiveness, managed care, clinical trials, depression
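The percent-of-benchmark comparison described in the abstract can be sketched in a few lines. This is an illustrative computation only, using hypothetical numbers rather than the study's data: a naturalistic pre-post effect size is divided by an efficacy benchmark aggregated from clinical trials.

```python
# Sketch of the benchmarking idea: compare a naturalistic sample's
# pre-post effect size against an efficacy benchmark from clinical trials.
# All numbers below are hypothetical, not taken from the Minami et al. study.

def cohens_d_pre_post(pre_mean, post_mean, pre_sd):
    """Pre-post effect size: symptom reduction in pre-treatment SD units."""
    return (pre_mean - post_mean) / pre_sd

# Hypothetical depression-scale scores (lower scores = less distress).
d_effectiveness = cohens_d_pre_post(pre_mean=70.0, post_mean=55.0, pre_sd=18.0)

# Hypothetical efficacy benchmark aggregated from published trials.
d_benchmark = 0.92

# Effectiveness expressed as a percentage of the benchmark, analogous
# to the 80%-112% range reported in the abstract.
percent_of_benchmark = 100.0 * d_effectiveness / d_benchmark
print(f"d = {d_effectiveness:.2f}, {percent_of_benchmark:.0f}% of benchmark")
```

A value near or above 100% would indicate naturalistic outcomes on par with trial benchmarks; the study's formal test additionally used critical effect size thresholds rather than a raw ratio.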
More than a decade has passed since estimating the effect of psychotherapy as it is delivered in natural settings was identified as a critical issue in psychotherapy research (e.g., Barlow, 1981; Cohen, 1965; Luborsky, 1972; Seligman, 1995; Strupp, 1989; Weisz, Donenberg, Han, & Weiss, 1995). Although the benefits of psychotherapy have been investigated in laboratory environments with randomized clinical trials (RCTs) and found to be substantial as early as the late 1970s (Smith & Glass, 1977; also Smith, Glass, & Miller, 1980), surprisingly little is known about the effects of psychotherapy in natural settings. The dichotomy of laboratory and natural settings was emphasized by Seligman (1995), who discriminated between efficacy, which is now used to denote the effects of psychotherapy in RCTs, and effectiveness, which is used to denote the effects of psychotherapy in clinical practice.
The few studies that have investigated effectiveness over the years have provided mixed results, attributed in part to the variety of methodologies used to investigate effectiveness because of the difficulty of using a randomized control group design in natural settings. Notably, three methods have been used to estimate the effects of psychotherapy in natural settings: clinical representativeness, direct comparison, and benchmarking. Clinical representativeness studies, including some of the analyses conducted by Smith et al ...
Supervisor variance in psychotherapy outcome in routine practice (psychothera... - Daryl Chow
Objective: Although supervision has long been considered as a means for helping trainees develop competencies in their clinical work, little empirical research has been conducted examining the influence of supervision on client treatment outcomes. Specifically, one might ask whether differences in supervisors can predict/explain whether clients will make a positive or negative change through psychotherapy. Method: In this naturalistic study, we used a large (6521 clients seen by 175 trainee therapists who were supervised by 23 supervisors) 5-year archival data-set of psychotherapy outcomes from a private nonprofit mental health center to test whether client treatment outcomes (as measured by the OQ-45.2) differed depending on who was providing the supervision. Hierarchical linear modeling was used with clients (Level 1) nested within therapists (Level 2) who were nested within supervisors (Level 3). Results: In the main analysis, supervisors explained less than 1% of the variance in client psychotherapy outcomes. Conclusions: Possible reasons for the lack of variability between supervisors are discussed.
Feedback informed treatment (fit) achieving (apa ip miller hubble seidel chow ... - Scott Miller
1) The document discusses Feedback Informed Treatment (FIT), which uses routine monitoring of a client's progress and the therapeutic alliance to improve outcomes. Short scales like the Session Rating Scale and Outcome Rating Scale are used to gather feedback from clients.
2) Research shows that formal collection and discussion of client feedback doubles rates of reliable change, decreases dropout rates by 50%, and cuts deterioration rates by a third compared to treatment without feedback.
3) The feedback allows therapists to adjust their approach if a client is not progressing well or the alliance is weakening, in order to maximize the fit between client, therapist, and treatment for that individual.
“Evidenced based” behavioral medicine as bad as bad pharma - James Coyne
Introduction to a symposium held at the International Congress of Behavioral Medicine, Groningen, August 2014. Discusses the shortcomings of evidence-based behavioral medicine in light of efforts to reform the pharma literature.
1. What is the question the authors are asking?
They asked whether reductions in judgmental biases regarding the cost and probability of adverse social events, which are presumed mechanisms of treatment for Social Anxiety Disorder (SAD), explain the effects of cognitive-behavioral therapy for SAD. The authors also noted that methodological limitations of extant studies leave open the possibility that, rather than causing symptom relief, reductions in judgmental biases are consequences or correlates of it. They further expected cost bias at mid-treatment to predict treatment outcome.
2. Why do the authors believe this question is important?
According to the authors, this question is important because methodological limitations of existing studies leave open the possibility that, instead of causing symptom relief, a reduction in judgmental biases may be a consequence or correlate of it. They also sought to compare judgmental bias between treated and untreated participants, and to determine the impact of pre- to post-treatment changes in cost and probability bias on treatment outcomes. Notably, probability bias at mid-treatment predicted treatment outcome, whereas cost bias at mid-treatment was not identified as a significant predictor.
3. How do they try to answer this question?
They conducted a study evaluating changes in judgmental bias as mechanisms of cognitive-behavioral therapy for social anxiety disorder. To do this, they used data from two treatment studies: an uncontrolled trial observing amygdala activity in response to Virtual Reality Exposure Therapy (VRE) using functional magnetic resonance imaging, and a randomized controlled trial comparing Virtual Reality Exposure Therapy with Exposure Group Therapy for SAD. A total of 86 individuals who met DSM-IV-TR criteria for a diagnosis of non-generalized (n = 46) or generalized (n = 40) SAD participated. Participants who identified public speaking as their most feared social situation were included after completing eight weeks of the treatment protocol. The SCID (Structured Clinical Interview for the DSM-IV) was used to ascertain diagnostic and eligibility status for Axis I conditions within the substance abuse, mood, and anxiety disorder modules. Social anxiety was measured with the BFNE (Brief Fear of Negative Evaluation), a self-report questionnaire that examines the degree to which people fear being evaluated by others across different social settings. Additionally, the OPQ (Outcome Probability Questionnaire), a self-report questionnaire, was used to evaluate individuals' estimates of the probability that adverse, threatening events will occur at t ...
Outcomes from 45 Years of Clinical Practice (Paul Clement) - Scott Miller
Paul Clement is one of my heroes. He has been tracking the outcomes of his clinical services for decades. I was stunned when, in 1994, he published results from two decades of his private work. Now we have the data from 45 years. Read it!
This document discusses issues related to evaluating the effectiveness of psychotherapy. It addresses questions around who should be asked to evaluate outcomes, when they should be asked, and how outcomes should be measured. It describes Hans Eysenck's early controversial claim that psychotherapy is ineffective and how subsequent meta-analyses found psychotherapy to be consistently effective. The document also distinguishes between efficacy studies conducted in controlled research settings and effectiveness studies conducted in real-world clinical practice settings. It reviews findings from efficacy studies on transdiagnostic therapies targeting underlying pathology. Finally, it discusses challenges in disseminating evidence-based therapies to practitioners and strategies like practice-oriented research to better bridge the gap between research and practice.
A Model For Pharmacological Research Treatment Of Cocaine Dependence - Richard Hogue
This document presents a model for conducting pharmacological research on treatments for cocaine dependence. The model aims to standardize procedures across clinical trials to improve validity and allow for comparison of results. Key aspects of the model include comprehensive intake evaluations, 8-12 week double-blind placebo-controlled treatment periods, frequent monitoring of drug use and symptoms, and collection of standardized outcome data. The goal is to generate more reliable evidence on potential medications while accounting for issues like heterogeneous populations and variable study methodologies that have challenged interpretation of past research.
This document describes a strengths-based approach to integrating assessment of mind-body wellness into the client intake process. It involves exploring six evidence-based categories related to exercise, nutrition, sleep, relaxation, hobbies, and relationships. Research supports benefits of these areas for mental health. Briefly addressing client assets in these categories at intake may improve motivation and reduce dropout rates by focusing on self-directed wellness strategies within the client's control.
Assignment Description
A reputable hospital has high quality ratings from patient satisfaction surveys but is still losing market share. For many years, health care organizations, as well as traditional businesses, have been frustrated that high customer satisfaction scores do not necessarily lead to higher levels of profitability or sales.
Prepare a report examining this phenomenon that address the following elements:
Evaluate and explain inconsistency between customer satisfaction scores and profitability and why it tends to exist in health care organizations.
Apply the statistical procedures discussed in class to support (or refute) the inconsistency.
Assess price vs. quality of services as well as the impact of insurance or managed care contracts on a hospital's market share, regardless of patient satisfaction levels.
Explain how you could use high patient satisfaction results to your advantage when negotiating a new managed care contract for the hospital. Discuss ethical issues involved when presenting results.
Discuss how qualitative and quantitative data can be used to help this hospital improve market share.
The body of the resultant report should be 5–7 pages and include at least 5 relevant peer-reviewed academic or professional references published within the past 5 years.
Library Resources:
Statistical Analysis 1. Below is a list of articles and summary descriptions on effective communication in health care. Article 1: The increased use of meta-analysis in systematic reviews of health care interventions has highlighted several types of bias that can arise during the completion of a randomized controlled trial. Study publication bias and outcome reporting bias have been recognized as potential threats to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. This update reviews and summarizes the evidence from cohort studies that have assessed study publication bias or outcome reporting bias in randomized controlled trials. Twenty studies were eligible, of which four were newly identified in this update. Only two followed the cohort all the way through from protocol approval to information regarding the publication of outcomes. Fifteen of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported compared to nonsignificant outcomes (range of odds ratios: 2.2–4.7). In comparing trial publications to protocols, it was found that 40–62% of studies had at least one primary outcome that was changed, introduced, or omitted. It was decided not to undertake meta-analysis because of the differences between studies. This update does not change the conclusions of the review in which 16 studies were included. Direct empirical evidence for the existence of study publica ...
Behavioral Health Care - Issues in Management 2014 Report of Results Final 3 ... - Richard Thoune
- The document discusses behavioral health care issues and management in Jackson County, Michigan based on a provider survey conducted by the Jackson Health Network.
- The survey found that most providers routinely screen for common issues like depression and substance abuse, but less so for others like bipolar disorder and PTSD. Management approaches varied by diagnosis, with providers more comfortable treating minor issues themselves and preferring co-management or referral for more serious conditions.
- Screening rates among pediatric patients were generally lower than adults for similar conditions. Management of pediatric patients also differed from adults, with providers more reliant on co-management or referral across most behavioral health diagnoses in children.
Enhancing Psychotherapy Process With Common Factors Feedback:
A Randomized, Clinical Trial
Andrew S. McClintock, Matthew R. Perlman, Shannon M. McCarrick,
Timothy Anderson, and Lina Himawan
Ohio University
In this study, we developed and tested a common factors feedback (CFF) system. The CFF system was designed to provide ongoing feedback to clients and therapists about client ratings of three common factors: (a) outcome expectations, (b) empathy, and (c) the therapeutic alliance. We evaluated the CFF system using randomized clinical trial (RCT) methodology. Participants: Clients were 79 undergraduates who reported mild, moderate, or severe depressive symptoms at screening and pretreatment assessments. These clients were randomized to either: (a) treatment as usual (TAU) or (b) treatment as usual plus the CFF system (TAU + CFF). Both conditions entailed 5 weekly sessions of evidence-based therapy delivered by doctoral students in clinical psychology. Clients completed measures of common factors (i.e., outcome expectations, empathy, therapeutic alliance) and outcome at each session. Clients and therapists in TAU + CFF received feedback on client ratings of common factors at the beginning of Sessions 2 through 5. When surveyed, clients and therapists indicated that they were satisfied with the CFF system and found it useful. Multilevel modeling revealed that TAU + CFF clients reported larger gains in perceived empathy and alliance over the course of treatment compared with TAU clients. No between-groups effects were found for outcome expectations or treatment outcome. These results imply that our CFF system was well received and has the potential to improve therapy process for clients with depressive symptoms.
Public Significance Statement
In this study, we developed a system that provides ongoing feedback to clients and therapists about
what is transpiring in therapy. Results suggest that the feedback system may help to improve the
process of treatment for clients with depressive symptoms.
Keywords: common factors, feedback, empathy, alliance, randomized clinical trial
A growing body of research attests to the utility and effectiveness of outcome feedback (Connolly Gibbons et al., 2015; De Jong et al., 2014; Shimokawa, Lambert, & Smart, 2010). In outcome feedback systems, client progress is monitored and reviewed by therapists (and, in some cases, by clients as well) to guide ongoing treatment (Lambert, 2007). Specifically, these systems collect distress/symptomatology data from clients on a routine basis, and then compare these data with norms or expected treatment responses (see Lambert, 2007; Lutz et al., 2006). When a client is off-track (i.e., is projected to have a relatively poor treatment response), the therapist is alerted and is then typically provided with strategies for improving quality of care (Lambert et al., 2004; Miller, Duncan, Sorrell, & Brown, 2005).
Although outcome feedback has demonstrated efficacy (e..
Background: Behavioral health conditions are prevalent among patients in inpatient medical settings and, when not adequately treated, contribute to diminished treatment outcomes and quality of life. Substantial evidence has demonstrated the effectiveness of psychological interventions in addressing behavioral health conditions in a range of settings, but to a lesser extent for psychologically based interventions delivered in inpatient medical settings. Purpose: The purpose of this paper is to increase attention on psychological interventions delivered to patients across a broad spectrum of medical specialties in inpatient medical settings, to support the implementation of interventions that address increasing patient needs. Methods: This selected, brief review of the literature sought to describe published psychologically based interventions delivered in inpatient medical settings. Studies catalogued on PubMed from 2007 to 2016 were examined and included in the review if the interventions were delivered within inpatient medical settings. Two reviewers independently assessed relevant studies against the criteria. Results: A total of ten articles met the inclusion criteria, with interventions targeting outcomes across four primary domains: 1) pain and fatigue; 2) cognition; 3) affective/emotional; and 4) self-harm. Several articles support interventions grounded in Cognitive-Behavioral Therapy and brief psychological interventions. Most studies reported favorable outcomes for the interventions relative to controls. Conclusions: Psychologically based interventions, especially those that integrate components of cognitive-behavioral therapy and a multidisciplinary approach, can be implemented in inpatient medical settings and may promote improved patient outcomes. However, the quality of this evidence requires formal assessment, and more comprehensive reviews are needed to replicate findings and clarify the effectiveness of interventions.
This document discusses group intervention for substance use disorders. It covers characteristics of individuals with SUDs, factors that influence group interventions, mutual self-help groups like 12-step programs, the role of social workers, and integrating therapeutic treatment with mutual self-help groups. Research shows that attending groups like AA during and after treatment can improve outcomes compared to not attending. Social reinforcement of group attendance also predicts continued abstinence. Integrating 12-step principles and mutual aid groups can provide clients with ongoing peer support.
American Journal of Psychotherapy, 2013, vol. 67, pp. 23-46 (2), by Paul Clement - Scott Miller
This summarizes a study that analyzed outcome data from 1,599 psychotherapy patients seen by a private practitioner over 45 years. It found that 65.15% of patients were rated as improved or much improved after treatment, with a mean pre-/post-treatment effect size of 1.90. Patients and their parents rated outcomes more positively than the therapist. There was a positive relationship between length of treatment and better outcomes.
a CMHC. This study seemed to confirm the conclusion reached by the President’s New Freedom Commission on Mental Health (2002): “America’s mental health service delivery system is in shambles [and] . . . incapable of efficiently delivering . . . effective treatments” (p. ii). Hansen et al. (2002) also reported a 35% reliable and clinically significant change rate across six different types of outpatient settings. In other words, almost two-thirds of the 6,072 clients did not report benefit from psychotherapy.
Given these findings, quality improvement strategies have garnered interest as research has moved from establishing efficacy in RCTs to demonstrating effectiveness in natural settings. A primary approach to improving the quality of mental health care is to transport evidence-based treatments into practice settings (Laska, Gurman, & Wampold, 2013; McHugh & Barlow, 2012). For example, researchers who applied evidence-based cognitive treatments for panic and depression to public behavioral health settings found that pre–post treatment effects were similar to those obtained in RCTs (Merrill, Tolbert, & Wade, 2003; Wade, Treat, & Stuart, 1998).
Some researchers have recommended that transporting evidence-based treatments should not be the only quality improvement strategy. For example, Laska et al. (2013) suggested that the utility of transporting evidence-based treatments is partially contradicted by findings from comparisons of treatment-as-usual (TAU) to benchmarks of treatments in RCTs; i.e., clients who received TAU psychotherapy in managed care and university counseling center settings likely received treatment as effective as clients receiving treatments in clinical trials (Minami et al., 2009; Minami, Wampold, et al., 2008). Practicing therapists in these settings appear to be achieving, on average, outcomes similar to RCTs, arguing against the utility and cost of transporting evidence-based treatments as a sole method to improve outcome (Laska et al., 2013).
Another strategy of quality improvement is continuous outcome
feedback (Lambert, 2010). Two continuous monitoring and feedback
interventions have demonstrated gains in RCTs and are
included in the Substance Abuse and Mental Health Services Administration's
National Registry of Evidence-based Programs and Practices
(NREPP). The first, Lambert and colleagues' Outcome Questionnaire
(OQ) System, has demonstrated significant gains over TAU
in six RCTs with clients at risk for negative outcome or dropout
(Harmon et al., 2007; Hawkins, Lambert, Vermeersch, Slade, &
Tuttle, 2004; Lambert et al., 2001, 2002; Slade, Lambert, Harmon,
Smart, & Bailey, 2008; Whipple et al., 2003). A meta-analytic
review of the six studies (N = 6,151) using the OQ System
revealed that clients in a feedback condition had less than half the
odds of experiencing deterioration and approximately 2.6 times
higher odds of attaining reliable improvement than did those in a
TAU condition (Shimokawa, Lambert, & Smart, 2010).
The second NREPP-listed method of using continuous client
feedback to improve outcomes, the Partners for Change Outcome
Management System (PCOMS; Duncan, 2012, 2014; Duncan &
Sparks, 2010), has demonstrated significant treatment gains for
feedback over TAU in three RCTs (Anker, Duncan, & Sparks,
2009; Reese, Norsworthy, & Rowlands, 2009; Reese, Toland,
Slone, & Norsworthy, 2010). Anker et al. (2009) randomized 205
couples seeking couple therapy to feedback or TAU. Compared to
couples who received TAU, nearly four times as many couples in
the feedback condition reached clinically significant change. Reese
et al. (2009) found significant treatment gains for individual clients
in the feedback condition when compared to those receiving TAU
in both a university counseling center (N = 74) and a graduate
training clinic (N = 74). In addition, clients in the feedback
condition achieved reliable change in significantly fewer sessions
than did those receiving TAU. The last RCT (Reese et al., 2010)
replicated the Anker et al. study with couples and found nearly the
same results (N = 92). In a recent meta-analysis of PCOMS
studies (N = 558), Lambert and Shimokawa (2011) reported that
clients in the feedback group had 3.5 times higher odds of experiencing
reliable change and less than half the odds of experiencing
deterioration.
Although promising as a quality improvement strategy, PCOMS
has not been systematically evaluated in a public behavioral health
setting. In addition, the findings highlighted by Laska et al. (2013)
of comparable results of clinical trial treatment benchmarks and
treatment as usual in university or managed care settings may or
may not apply to PBH settings, given the differences in client
populations. The current study, therefore, adopted a benchmarking
methodology to evaluate the effectiveness of services provided to
racially and ethnically diverse clients at or below the federal
poverty line at a large, public behavioral health agency that implemented
a continuous outcome management system, PCOMS, as
a quality improvement strategy (see Bohanske & Franczak, 2010).
Benchmarking permits comparison of treatments delivered in noncontrolled
settings against a reliably determined effect size in
clinical trials or meta-analyses of clinical trials (McFall, 1996;
Merrill et al., 2003; Minami, Wampold, Serlin, Kircher, & Brown,
2007; Wade et al., 1998; Weersing & Weisz, 2002).
We used the benchmarking methodology from Weersing and
Weisz (2002) and Minami, Serlin, Wampold, Kircher, and Brown
(2008) for the current study. Weersing and Weisz (2002) advanced
previous benchmarking efforts in four ways (Minami, Serlin, et al.,
2008). First, they did not alter the treatment being evaluated in the
naturalistic setting which permits the results to be generalized to
TAU in the same setting. Second, they used a meta-analytically
determined benchmark rather than a few studies to provide a more
rigorous, comprehensive comparison. Third, they included a waitlist
control condition benchmark to also compare treatment to the
natural remission of symptoms. Last, they evaluated whether the
effect size of interest fell within the two-tailed 95% confidence
interval of the benchmark effect size rather than subjectively
comparing the effect sizes. Further advancing this methodology,
Minami, Serlin, et al. (2008) included the “good enough principle”
(Serlin & Lapsley, 1985, 1993), which establishes a clinically
relevant margin between the observed effect size and benchmark
to avoid obtaining statistical significance with differences that are
clinically trivial. These improved benchmarking approaches have
not only enabled a better understanding of the effectiveness of
clinical services in community mental health care (Weersing &
Weisz, 2002), managed care (Minami, Wampold, et al., 2008), and
university counseling settings (Minami et al., 2009), but have also
provided a detailed methodology to allow other data sets to be
similarly analyzed.
Two related questions guided our analyses. First, does continuous
outcome feedback as a quality improvement strategy offer a
viable alternative to the dissemination of evidence-based treatments?
Second, is psychotherapy effective in a public behavioral
setting serving individuals who are impoverished and/or designated as disabled?
This document is copyrighted by the American Psychological Association or one of its allied publishers.
This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.
REESE, DUNCAN, BOHANSKE, OWEN, AND MINAMI
This second question arises from two findings
in the literature: (a) the noted poor outcomes of services provided
in PBH settings (e.g., Hansen et al., 2002) and (b) the finding that
individuals from low socioeconomic backgrounds have a higher
risk for psychological dysfunction and limited access to resources
(e.g., Jokela, Batty, Vahtera, Elovainio, & Kivimäki, 2013;
McLaughlin, Costello, Leblanc, Sampson, & Kessler, 2012; Pan,
Stewart, & Chang, 2013; Reiss, 2013).
We enlisted two benchmarking strategies to address these questions.
First, following the standards set by earlier studies of managed
care and university counseling settings, we evaluated the
effectiveness of psychotherapy provided to clients with depressive
disorders at a PBH agency by comparing the observed pre–post
effect size estimate against treatment efficacy benchmarks constructed
from treatments in clinical trials of major depression
(Minami et al., 2007). We also compared this sample of clients
with depressive disorders to a benchmark of clients diagnosed with
major depression who did not receive treatment. We hypothesized
that the treatment offered in the PBH setting for depressive disorders
would be equivalent to treatment efficacy observed in clinical
trials of major depression and superior to waitlist controls. Second,
given that the current PBH agency had implemented a continuous
outcome management system, the overall sample was compared
against benchmarks derived from clinical trials evaluating outcome
feedback compared to TAU (Lambert & Shimokawa, 2011).
Here we used the benchmarks derived from feedback and TAU
conditions. We hypothesized that results attained in the PBH
agency would be similar to benchmarks reported in RCTs of
outcome feedback and superior to benchmarks derived from TAU.
Method
Participants
Participants included in this study were drawn from an archival
data set of therapy outcomes at a large PBH agency, Southwest
Behavioral Health Services (SBHS), a non-profit, comprehensive
community behavioral health organization providing services to
people living in Maricopa (Phoenix), Mohave, Yavapai, Coconino,
and Gila counties in Arizona. SBHS provides clinical services to a
diverse group of Medicaid insured clients at or below 100% of the
federal poverty level through a wide variety of programs, including
mental health and substance abuse treatments for youth and adults.
The data for this study were collected from adult discharged cases
between January 2007 and December 2011.
Clients (N = 5,186) were predominantly female (60.7%) and
Caucasian (67.8%), ranging in age from 18 to 87 (M = 36.7,
Mdn = 47.6, SD = 12.3). Most ranged in age from 18 to 40
(61.8%) or 41 to 64 (37.3%). As can be seen in Table 1, Hispanics
were the largest minority (17.7%), followed by African Americans
(9.3%), Native Americans (2.8%), and other ethnic groups (2.4%).
Clients attended a mean of 8.86 sessions (Mdn = 5.00, SD =
10.85). Regarding primary diagnosis, depression, mood, and anxiety
disorders (excluding Bipolar Disorder) were the most common
(46.0%), followed by substance abuse disorders (18.8%), Bipolar
Disorder and Schizophrenia (14.4%), and Adjustment Disorder
(10.0%). A mix of other diagnostic categories accounted for the
remainder (see Table 2 for a full list). Therapists conducted semi-structured
intakes and determined a primary diagnosis by the third
session. Information about comorbidity and medication use was
not available.
Therapists
Therapists (N = 86) were predominantly female (84.2%) and
were Caucasian (88.1%), Hispanic (9.8%), and African American
(2.1%). Providers were licensed and had a master’s degree or
higher in one of the following fields: counseling (68.2%), clinical
social work (12.7%), substance abuse counseling (11.3%), and
psychology (9.4%).
The Outcome Rating Scale (ORS)
Psychological functioning and distress were assessed pre and
post treatment using the Outcome Rating Scale (ORS; Duncan,
2011; Miller & Duncan, 2004), a self-report instrument designed
to measure client progress repeatedly (at the beginning of each
session although only first and last session data were available in
the data set) throughout the course of therapy. The ORS assesses
four dimensions: (1) Individual—personal or symptomatic distress
or well-being, (2) Interpersonal—relational distress or how well
the client is getting along in intimate relationships, (3) Social—the
client’s view of satisfaction with work/school and relationships
outside of the home, and (4) Overall—general sense of well-being.
The ORS translates these four dimensions into a visual analog
format of four 10-cm lines, with instructions to place a mark on
each line with low estimates to the left and high to the right. The
four 10-cm lines add to a total score of 40. The score is the
summation of the marks made by the client to the nearest millimeter
on each of the four lines, measured by a centimeter ruler or
template. Lower scores reflect more distress.
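The scoring procedure just described can be sketched in a few lines of Python. This is purely illustrative (the function name and input format are our own, not part of the PCOMS materials):

```python
def ors_total(marks_cm):
    """Total ORS score from four visual-analog marks.

    marks_cm: four mark positions in centimeters (0-10), one per
    dimension (Individual, Interpersonal, Social, Overall), measured
    to the nearest millimeter (0.1 cm).
    Returns the 0-40 total; lower scores reflect more distress.
    """
    if len(marks_cm) != 4:
        raise ValueError("the ORS has exactly four 10-cm lines")
    if any(m < 0 or m > 10 for m in marks_cm):
        raise ValueError("each mark must fall on its 10-cm line")
    # Round each mark to the nearest millimeter before summing.
    return round(sum(round(m, 1) for m in marks_cm), 1)
```

For example, marks at 4.3, 5.0, 6.2, and 5.5 cm yield a total ORS score of 21.0.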
Table 1
Therapy Outcomes by Race/Ethnicity

Race/ethnicity               Pre ORS M (SD)   Post ORS M (SD)   Change M (SD)   Within-group d [95% CI]
Hispanic (N = 914)           20.27 (8.56)     26.34 (8.92)      6.07 (9.24)     0.71 [0.32, 1.10]
African American (N = 478)   19.53 (8.55)     25.50 (9.38)      5.97 (9.36)     0.70 [0.16, 1.24]
Native American (N = 143)    21.07 (8.40)     26.85 (9.16)      5.78 (9.41)     0.69 [−0.28, 1.66]
Asian American (N = 22)      18.91 (7.02)     25.59 (9.96)      6.68 (9.53)     0.97 [−1.05, 3.00]
Euro-American (N = 3,503)    19.01 (7.97)     24.71 (8.97)      5.71 (9.08)     0.72 [0.53, 0.90]
Other (N = 104)              21.26 (8.43)     26.79 (9.81)      5.53 (9.76)     0.66 [−0.48, 1.80]

Note. N = 5,164; 22 clients did not indicate race/ethnicity. ORS = Outcome Rating Scale; CI = confidence interval.
PSYCHOTHERAPY EFFECTIVENESS IN PBH
In addition to the PCOMS manual (Duncan, 2011; Miller &
Duncan, 2004), four validation studies of the ORS have been
published (Bringhurst, Watson, Miller, & Duncan, 2006; Campbell
& Hemsley, 2009; Duncan, Sparks, Miller, Bohanske, & Claud,
2006; Miller, Duncan, Brown, Sparks, & Claud, 2003). Across
studies, average Cronbach's alpha coefficients for ORS scores
were .85 (clinical samples) and .95 (nonclinical samples; Gillaspy
& Murphy, 2011). As an indicator of treatment progress, ORS
scores have been found to be sensitive to change for clinical
samples yet stable over time for nonclinical samples (Bringhurst et
al., 2006; Duncan et al., 2006; Miller et al., 2003). The concurrent
validity of ORS scores has been examined through correlations
with established outcome measures. For example, the average
bivariate correlation between the ORS and the OQ-45 across three
studies (Bringhurst et al., 2006; Campbell & Hemsley, 2009;
Miller et al., 2003) was .62 (range = .53–.74), indicating moderately
strong concurrent validity (Gillaspy & Murphy, 2011).
Jacobson and Truax's (1991) formulas were used to determine
the ORS clinical cutoff and the reliable change index for evaluating
clinically significant change. Miller et al. (2003) used a nonclinical
community sample (n = 86) and a clinical sample (n =
435) to establish a cut score of 25.¹ The reliable change index for
the ORS was computed using a diverse sample of 34,790 participants
who were primarily of low socioeconomic status; the reliable
change index was determined to be 5 points (Duncan, 2011;
Miller & Duncan, 2004). Therefore, to achieve clinically significant
change a client must begin treatment with an ORS score < 25,
improve by at least 5 points, and finish treatment with an ORS
score > 25.
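Under these criteria (cutoff = 25, reliable change index = 5 ORS points), classifying a completed case is mechanical. A minimal Python sketch (the function names are our own):

```python
CLINICAL_CUTOFF = 25   # ORS cut score from Miller et al. (2003)
RCI = 5                # reliable change index in ORS points

def reliable_change(pre, post):
    """Client improved by at least the reliable change index."""
    return (post - pre) >= RCI

def clinically_significant_change(pre, post):
    """Client began below the clinical cutoff, improved reliably,
    and finished above the cutoff (Jacobson & Truax, 1991)."""
    return (pre < CLINICAL_CUTOFF
            and reliable_change(pre, post)
            and post > CLINICAL_CUTOFF)
```

For example, a client moving from 18 to 26 meets all three conditions; a client moving from 18 to 24 shows reliable change (6 points) but ends below the cutoff, so the change is not clinically significant.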
Procedures
Feedback process. SBHS implemented PCOMS beginning in
2007, eventually rolling it out across all clinical services (Bohanske
& Franczak, 2010). PCOMS involves ongoing assessment of
outcome using the Outcome Rating Scale (Miller et al., 2003) and
the therapeutic alliance using the Session Rating Scale (SRS;
Duncan et al., 2003). PCOMS is designed to identify clients who
are not responding to therapy so that the lack of progress can be
addressed and new approaches collaboratively developed.
Clients complete the ORS at intake and prior to each session and
the SRS toward the end of each session. In the first meeting, the
ORS assesses where the client sees him or herself, allowing for an
ongoing comparison in later sessions. The SRS allows for routine
discussion of the therapeutic alliance. The therapist and client
review the client’s responses on the SRS and discuss any potential
alliance ruptures and how the service may be improved. At second
and subsequent sessions, interpretation of the ORS depends on
both the amount and rate of change that has occurred since the
prior visit(s). The longer therapy continues without measurable
change, the greater the likelihood of dropout and/or poor outcome.
ORS scores are used to engage the client in a conversation about
progress, and more important, what, if anything, should be done
differently if progress is not occurring. PCOMS is designed to
directly involve clients in all decisions affecting their care (Duncan,
2014).
Therapists received two days (12 hr) of PCOMS training plus
annual one-day booster trainings. Although there were no fidelity
checks, therapists were expected to collect outcome data, and
at-risk clients identified by the data were routinely discussed in
regular agency supervision. SBHS did not mandate or monitor the
treatment approach used by the providers but required that they use
PCOMS.
Participant inclusion criteria for depression and complete
samples. The data set initially consisted of 8,224 adult clients.
To answer the research questions of interest, only clients with pre
and post treatment scores were included (clients must have attended
at least two sessions). Given that clients at SBHS were
¹ Jacobson and Truax's (1991) cutoff formula was used: c = (SD₀M₁ +
SD₁M₀)/(SD₁ + SD₀); 0 = nonclinical sample, 1 = clinical sample.
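The footnoted cutoff formula weights each sample's mean by the other sample's standard deviation, placing the cutoff between the two means. A sketch in Python (the example values below are invented for illustration, not taken from Miller et al., 2003):

```python
def jt_cutoff(m_nonclinical, sd_nonclinical, m_clinical, sd_clinical):
    """Jacobson & Truax (1991) cutoff c:
    c = (SD0*M1 + SD1*M0) / (SD1 + SD0),
    where 0 = nonclinical sample and 1 = clinical sample.
    For the ORS (lower = more distress), the clinical mean is the lower one.
    """
    return (sd_nonclinical * m_clinical + sd_clinical * m_nonclinical) / \
        (sd_clinical + sd_nonclinical)

# Hypothetical samples: nonclinical M = 28, SD = 6; clinical M = 18, SD = 8.
c = jt_cutoff(28.0, 6.0, 18.0, 8.0)  # falls between the two means
```

The cutoff always lands between the clinical and nonclinical means, closer to the mean of the less variable sample.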
Table 2
Therapy Outcomes by Diagnosis (Dx)

Dx                                 N      Pre ORS M (SD)   Post ORS M (SD)   Within-group d [95% CI]
Substance Dx                       957    23.24 (8.26)     28.59 (8.55)      0.65 [0.28, 1.02]
Mood Dx NOS                        633    17.59 (7.61)     24.01 (8.66)      0.84 [0.47, 1.22]
Anxiety Dx                         503    19.56 (7.20)     25.15 (8.78)      0.78 [0.33, 1.22]
Schizophrenia Dx                   171    19.23 (8.32)     23.75 (9.62)      0.55 [−0.33, 1.42]
Bipolar Dx                         562    17.79 (8.14)     23.59 (9.05)      0.71 [0.24, 1.19]
Major depression/depression NOS    1,129  17.06 (7.84)     23.16 (9.25)      0.78 [0.46, 1.10]
Dysthymic Dx                       71     19.02 (6.76)     24.95 (8.59)      0.88 [−0.22, 1.99]
Adjustment Dx                      506    19.93 (7.79)     26.24 (8.29)      0.81 [0.33, 1.29]
PTSD                               170    17.90 (8.36)     24.41 (9.67)      0.78 [−0.11, 1.67]
Impulse Dx                         29     21.33 (8.38)     27.09 (8.39)      0.70 [−1.42, 2.82]
ADHD                               73     21.84 (8.22)     25.86 (8.51)      0.49 [−0.83, 1.82]
V-codes                            275    21.30 (7.22)     26.57 (8.43)      0.73 [0.13, 1.33]

Note. d = [1 − 3/(4n − 5)][(Mpost − Mpre)/SDpre]. ORS = Outcome Rating Scale; CI = confidence interval;
Substance Dx = any substance abuse/dependency diagnosis; NOS = not otherwise specified; Anxiety Dx =
diagnosis of panic, panic with agoraphobia, anxiety NOS, phobia, obsessive-compulsive disorder, or generalized
anxiety disorder; Adjustment Dx = any adjustment diagnosis; PTSD = posttraumatic stress disorder; ADHD =
attention-deficit/hyperactivity disorder; V-codes = any V-code diagnosis. Diagnoses reflect the primary diagnosis.
N = 5,079. There were some missing data based on diagnoses being infrequently diagnosed (e.g., learning
and communication disorders, autism, and deferred diagnoses).
asked to complete the Outcome Rating Scale at the beginning of
each session, a larger inclusion rate was obtained than typical in
other naturalistic data sets (e.g., Minami, Wampold, et al., 2008;
Stiles, Barkham, Connell, & Mellor-Clark, 2008). However, 2,152
(26.2%) participants were eliminated because they only attended
one session. Clients who were absent from services for 90 days
were considered closed cases. If such clients re-entered services,
only the first encounter was included. Consequently, another 293
(3.6%) clients were eliminated. Although some researchers only
include clients functioning in the clinical range at intake (e.g.,
Minami et al., 2009; Stiles et al., 2008), we included clients whose
functioning at intake was in the non-clinical range. Including this
group allowed our data set to be more representative of individuals
served in PBH and accounted for 27% of the final sample. Using
these criteria, we identified 5,168 clients seen by 86 therapists. This
complete sample was used for comparison to the feedback benchmarks.
Although the total sample is likely more representative of typical
agency practice and was used to compare to the benchmarks
for feedback and TAU, to address the first hypothesis and approximate
the methodology used in the depression benchmarking studies
of managed care and university counseling settings (Minami et
al., 2009; Minami, Wampold, et al., 2008), the data were trimmed
by eliminating those clients who scored over the clinical cutoff
(initial score in the nonclinical range) and who had a diagnosis of
any disorder other than a depressive disorder. We included clients
with a primary diagnosis of major depressive disorder, dysthymia,
depressive disorder not otherwise specified (NOS), and adjustment
disorder with depressed mood. We did not further differentiate the
depressive diagnoses for two reasons. First was the concern for the
accuracy of a differential diagnosis (e.g., major depressive disorder
vs. depressed mood NOS) without a formalized, structured
assessment process. Second and more pragmatically, the effect
sizes varied little across depressive diagnoses and did not warrant
further differentiation. This reduced the sample to 1,589 clients for
the first benchmark comparison.
Benchmarking Strategy
Depression benchmarks. The effectiveness of treatment for
SBHS clients diagnosed with a depressive disorder (n = 1,589)
was evaluated by comparing it to two sets of benchmarks. The first
set of benchmarks was developed using the results of RCTs
focused on treating adult major depression. We selected Minami et
al.'s (2007) benchmarks that provided aggregated clinical trial
effect sizes derived from pre–post treatment scores for adult major
depression (i.e., intent-to-treat samples [dDEPitt = 0.80] and completer
samples [dDEPc = 0.93]) and waitlist control conditions for
depression (dWLC = 0.15).
These three benchmarks were selected for two reasons. First,
given how little is known about general effectiveness in PBH, we
believed that having additional benchmarks for a common presenting
issue would help contextualize our findings. Second, we selected
Minami et al.'s (2007) benchmarks because the treatment
efficacy studies utilized general distress outcome measures that
were likely comparable in terms of sensitivity and reactivity.
Given that the ORS is a general distress outcome measure, it likely
has lower sensitivity and reactivity in comparison to outcome
measures for an identified issue (e.g., Beck Depression Inventory
or Hamilton Depression Rating Scale) that are likely to be higher
on both sensitivity and reactivity, resulting in higher effect sizes
(Minami et al., 2007). Consistent with the adult depression treatment
benchmarks, we only analyzed the pre–post data of clients
who began treatment in the clinical range (ORS < 25) and were
diagnosed with major depressive disorder, dysthymic disorder,
depressed mood NOS, or adjustment disorder with depressed
mood.
Feedback (complete sample) benchmarks. The second set
of benchmarks was aggregated from RCT studies using continuous
outcome feedback systems. We focused on studies that utilized the
OQ System or PCOMS because the OQ System has the most
research support among feedback systems, PCOMS research permitted
a direct comparison to the SBHS sample, and these are the
only two systems designated as evidence based. We conducted a
thorough search of the peer-reviewed literature using the search
terms “patient focused research,” “client feedback and outcome,”
“OQ45,” and “patient level feedback,” which resulted in a total of
186 hits. We also consulted previous client feedback meta-analyses
(Lambert & Shimokawa, 2011; Shimokawa et al., 2010).
Studies were excluded if they did not use a RCT design, an
outpatient sample, or the OQ System or PCOMS, or if means and
standard deviations were not provided for the entire sample. For
example, Simon, Lambert, Harris, Busath, and Vazquez (2012)
only provided descriptive statistics for clients who were not-on-track,
and another study (Murphy, Rashleigh, & Timulak, 2012)
only utilized the ORS and not the complete PCOMS method. This
process resulted in nine studies selected, six for the OQ System
and three for PCOMS (see Lambert & Shimokawa, 2011, for a
review of the studies). To construct the feedback benchmarks, we
used the formulas outlined by Minami, Serlin, et al. (2008, pp.
517–518, Formulas 1 and 2)² to compute unbiased, standardized
effect sizes and to aggregate the effect sizes across the feedback
studies (p. 518, Formula 3). We constructed four feedback benchmarks:
Treatment effect sizes were calculated for the feedback
(dFTall = 0.60) and TAU (dTAUall = 0.41) samples from the nine
OQ/PCOMS studies and for the three PCOMS (dFTors = 1.13) and
TAU (dTAUors = 0.47) samples. All clients (N = 5,168) were used
in the SBHS sample irrespective of pretreatment score or diagnosis,
as consistent with the feedback benchmarks.
Analytical Strategy
To compare the pre–post treatment effects to the selected benchmarks,
we followed the formulas and procedures highlighted in
previous benchmarking studies (Minami et al., 2009; Minami,
Wampold, et al., 2008; Minami et al., 2007). We used the same
formula used to construct the benchmarks (Minami, Serlin, et al.,
2008) to compute the observed treatment effects, d = [1 − 3/(4n −
5)][(Mpost − Mpre)/SDpre]. Next, we statistically evaluated equivalence
or superiority to the selected benchmarks using an a priori
margin of differences between the benchmark and treatment effect
sizes. Serlin and Lapsley (1985, 1993) recommend using a predetermined
margin considered to be clinically trivial to resolve the
dilemma of rejecting the null hypothesis with small differences
² Formula 2 requires the pre–post measure correlation. We used r = .50
for the OQ studies (Minami, Wampold, et al., 2008) and r = .43 for the
ORS studies based on the current data set.
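The small-sample-corrected effect size formula can be computed directly. Plugging in the depressed-subsample values reported later in the Results (n = 1,589, Mpre = 14.73, SDpre = 5.86, Mpost = 22.59) reproduces the paper's d = 1.34 (the function name is our own):

```python
def pre_post_d(n, m_pre, m_post, sd_pre):
    """Standardized pre-post effect size with the small-sample bias
    correction used by Minami, Serlin, et al. (2008):
    d = [1 - 3/(4n - 5)] * (M_post - M_pre) / SD_pre.
    """
    correction = 1 - 3 / (4 * n - 5)
    return correction * (m_post - m_pre) / sd_pre
```

For example, `round(pre_post_d(1589, 14.73, 22.59, 5.86), 2)` gives 1.34, and the full-sample values (n = 5,168, pre M = 19.38, SD = 8.17, post M = 25.18) give 0.71, matching the two observed effect sizes reported in the Results.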
due to statistical power (i.e., sample size). Given our large sample
size, we determined that differences between the SBHS effect sizes
and the respective benchmarks that were within 10% of the benchmarks
could be considered clinically negligible (Minami et al.,
2009; Minami, Wampold, et al., 2008). For example, comparing
against the depression intent-to-treat effect size dDEPitt = 0.80,
differences within 10% of this effect size (90%–110%, i.e.,
0.72–0.88) were considered to be clinically negligible. In other
words, if the treatment effect size estimate was statistically within
this range or larger given a Type I error rate of α = .05 (i.e., reject
the null that the treatment effect size estimate is smaller than the
lower bound of d = 0.72), we can conclude that the treatment
effect appears to be at or above the depression intent-to-treat
benchmark. Conversely, comparing against the waitlist control
benchmark dWLC = 0.15, the treatment effect size estimate must
statistically exceed 110% of this benchmark (i.e., d = 0.17) in
order to conclude that the treatment effect is larger than the waitlist
control benchmark.
SBHS effect sizes were compared against the benchmarks
plus the 10% margin (range-null hypotheses), which follows a
noncentral t statistic (Serlin & Lapsley, 1985, 1993). Specifically,
if the SBHS sample effect size falls at or above 90% of
the treatment benchmarks (i.e., benchmark minus 10%), the
SBHS effect size can be considered clinically equivalent to the
benchmarks. For the comparison against the TAU and waitlist
benchmarks, the 10% margin was used in the opposite direction.
In other words, if the SBHS sample effect size fell within
110% of the TAU and waitlist benchmarks (i.e., benchmark
plus 10%), the SBHS effect size was considered clinically
equivalent to the TAU and waitlist. Therefore, to claim that the
effect size estimate was superior to the TAU condition, the
estimate needed to exceed 110% of the TAU and waitlist
benchmarks under the specified Type I error rate.
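Because the observed statistic here is t = d√n and the range-null noncentrality parameter is λ = δ₀√n (where δ₀ is the benchmark effect size times the margin), the critical effect size can be approximated for large samples without special software. The sketch below uses a normal approximation to the noncentral t distribution and is our own illustration of the logic, not the exact procedure of Serlin and Lapsley:

```python
import math

Z_95 = 1.6449  # 95th percentile of the standard normal distribution

def critical_d(benchmark, n, margin=0.90, z=Z_95):
    """Approximate 95th-percentile critical effect size for testing
    whether an observed pre-post d is at least `margin` (e.g., 90%)
    of a treatment benchmark.

    The observed statistic is t = d * sqrt(n); under the range-null
    the reference distribution is noncentral t with df = n - 1 and
    noncentrality lambda = (margin * benchmark) * sqrt(n). For large
    df, this distribution is approximately normal with mean lambda
    and variance 1 + lambda**2 / (2 * df).
    """
    lam = margin * benchmark * math.sqrt(n)
    df = n - 1
    t_crit = lam + z * math.sqrt(1 + lam ** 2 / (2 * df))
    return t_crit / math.sqrt(n)
```

With n = 5,168 and the nine-study feedback benchmark of 0.60, `critical_d(0.60, 5168)` is approximately 0.56, close to the critical d reported in the Results, which the observed d = 0.71 exceeds.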
Results
Preliminary Analyses
We screened the data for disparities in outcomes based on client
gender and race/ethnicity. First, we tested whether men and
women had similar outcomes via an ANOVA with ORS pre–post
change scores as the DV and gender as the IV. The results for
client gender were not statistically significant, F(1, 5167) = 3.65,
p = .06, partial η² = .001. Second, we tested whether therapy
outcomes varied by client race/ethnicity via an ANOVA with ORS
pre–post change scores as the DV and race/ethnicity as the IV. The
results for client race/ethnicity were not statistically significant,
F(5, 5158) = 0.28, p = .95, partial η² = .000. Table 1 shows the
pre and post ORS scores by race/ethnicity.
Next, we tested whether therapy outcomes varied by diagnosis
via an ANOVA with ORS pre–post change scores as the DV
and primary diagnosis as the IV. The results for primary diagnosis
were not statistically significant, F(11, 5067) = 1.48, p =
.13, partial η² = .003. Table 2 shows the pre and post ORS
scores by primary diagnosis. Finally, we inspected the rates of
reliable and clinically significant change to provide additional
context for our benchmarking results given the PBH rates reported
in Hansen et al. (2002). In the total SBHS sample (N =
5,168), 65.6% achieved reliable change and 42.9% achieved
clinically significant change. Table 3 presents a comparison of
clinically significant change by session of the current SBHS
data set and a university counseling center reported in Baldwin,
Berkeljon, Atkins, Olsen, and Nielsen (2009). An inspection of
Table 3 reveals a surprising similarity of the two data sets,
measured by different outcome instruments (the ORS and OQ-45),
in the rates of clinically significant change by session as
well as the overall clinically significant rate (42.9% in the
Table 3
Clinically Significant Change by Total Number of Sessions for the SBHS Data Set and Baldwin
et al. (2009)

No. of      Total N           n in clinical range    % CSC (n) of eligible
sessions    SBHS     UCC      SBHS     UCC           SBHS          UCC
2           550      N/A      420      N/A           26.2 (110)    N/A
3           702      1,195    527      706           32.8 (173)    35.8 (253)
4           549      843      401      520           38.2 (153)    40.4 (210)
5           467      597      370      381           47.3 (175)    40.4 (154)
6           360      418      251      270           43.8 (110)    42.2 (114)
7           317      311      226      208           46.9 (106)    43.3 (90)
8           280      257      186      182           51.6 (96)     46.5 (80)
9           260      229      181      153           49.7 (90)     47.7 (73)
10          213      152      155      100           51.6 (80)     50.0 (50)
11          160      128      111      92            41.4 (46)     46.7 (43)
12          144      110      101      76            54.5 (55)     47.4 (36)
13          114      93       81       60            58.0 (47)     41.7 (25)
14          107      82       68       63            45.6 (31)     49.2 (31)
15          87       43       63       32            54.0 (34)     53.1 (17)
16          91       41       63       34            50.8 (32)     47.1 (16)
17          77       32       56       23            48.2 (27)     31.1 (9)
18–40       586      145      435      95            49.7 (216)    43.2 (41)
≥41         104      N/A      79       N/A           46.8 (37)     N/A
TOTAL       5,168    4,676    3,774    2,985         42.9 (1,618)  41.6 (1,242)

Note. SBHS = Southwest Behavioral Health Services sample; UCC = University Counseling Center sample
from Baldwin et al. (2009); CSC = clinically significant change.
SBHS sample vs. 41.6% in the university counseling center
sample). The mean number of sessions in the university counseling
center study and the current study was 6.5 and 8.9,
respectively (roughly 75% of the clients in the university counseling
center study attended 8 sessions or less, while approximately
75% of the SBHS sample attended 12 sessions or less).
Regarding clients entering therapy in the clinical range, 63.8%
of the clients in the university counseling center study entered
in the clinical range compared with 72.9% in the SBHS sample.
Benchmark Comparisons
Depression benchmarks. The mean pre–post treatment ORS
scores for the SBHS depressed sample (n = 1,589) were Mpre =
14.73 (SD = 5.86) and Mpost = 22.59 (SD = 8.86), respectively,
resulting in a standardized effect size of d = 1.34. This effect was
statistically compared to Minami et al.'s (2007) adult major depression
ITT and completer treatment efficacy benchmarks. Given
the sample size, the 95th percentile critical effect size for the ITT
depression benchmark (dDEPitt = 0.80) minus 10% (dDEPitt[90%] =
0.72) was dCV = 0.76, which was easily surpassed by the observed
effect size of the treated SBHS sample (i.e., d = 1.34, t = 53.42,
df = 1,588, λ = 28.70, p < .001; see Table 4 for comparisons and
critical d for each benchmark). Therefore, the pre–post treatment
effect size of the SBHS subsample with depressive symptoms can
be considered clinically equivalent to the pre–post treatment effect
size observed in RCTs with clients who have depressive symptoms.
Compared against the completer benchmark (dDEPc = 0.93)
minus 10% (dDEPc[90%] = 0.84), the SBHS pre–post treatment
effect size was also statistically significant (t = 53.42, df = 1,588,
λ = 33.36, p < .001). These findings suggest that the treatment
outcomes in the SBHS sample were comparable in effectiveness to
the outcomes in the clinical trials for depressed clients who completed
treatment. In both cases, the SBHS effect sizes were substantially
larger than the effect sizes of the clinical trial studies for
depression.
Last, we compared the SBHS depressed sample effect size to the
waitlist control benchmark effect size (dDWLC 0.15; plus 10%
for comparison of superiority, dDWLC[110%] 0.17) reported in
Minami et al.’s (2007) study, which was statistically significant
(t 53.42, df 1,588, 6.58, p .001). Again, the observed
effect size was much larger than the designated benchmark. These
results support the first hypothesis and suggest that treatment
delivered in this PBH setting is at least comparable to treatment
efficacy studies treating major depression and superior to de-pressed
clients in a waitlist control condition.
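The benchmark test above follows Minami et al.'s (2007) procedure: compute the pre–post effect size standardized by the pretreatment SD, shrink the benchmark by 10% to form the equivalence margin, convert that margin to a noncentrality parameter, and compare the observed t against a noncentral t critical value. The following is a minimal sketch of that logic under those assumptions, using a large-df normal approximation to the noncentral t rather than the exact distribution used in the paper, so the critical value differs slightly in the second decimal:

```python
import math

# Depressed SBHS subsample values reported above
n = 1589
m_pre, sd_pre, m_post = 14.73, 5.86, 22.59

# Pre-post effect size, standardized by the pretreatment SD
d = (m_post - m_pre) / sd_pre                 # ~1.34

# ITT depression benchmark (0.80) shrunk by 10% for the equivalence margin
d_margin = 0.80 * 0.90                        # 0.72

df = n - 1
lam = d_margin * math.sqrt(n)                 # noncentrality parameter, ~28.70

# Large-df normal approximation to the 95th percentile of the noncentral t;
# the paper uses the exact distribution (reported critical d = 0.76)
t_crit = lam + 1.645 * math.sqrt(1 + lam**2 / (2 * df))
d_cv = t_crit / math.sqrt(n)                  # ~0.77 under this approximation

t_obs = d * math.sqrt(n)                      # ~53.5 (paper reports 53.42)
print(d > d_cv)                               # equivalence margin surpassed
```

The same recipe yields the completer and waitlist comparisons by swapping in those benchmarks (0.93 minus 10%, and 0.15 plus 10% for superiority).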
Feedback benchmarks. The effect size estimate for the entire SBHS sample (N = 5,168), pre ORS (M = 19.38, SD = 8.17) and post ORS (M = 25.18, SD = 9.05), was dSBHS = 0.71. The Feedback treatment benchmark using all nine RCT studies had an overall estimated effect size of dFTall = 0.60, but there was substantial variability in effect sizes depending upon the feedback procedure and measure used, as can be observed in Table 5. Therefore, we also constructed benchmarks using only the three PCOMS studies to provide a more direct comparison. To evaluate equivalence, the SBHS sample was compared to the Feedback treatment benchmark effect size of all nine RCTs minus 10% (dFT(90%) = 0.54). The observed sample effect size exceeded the critical d = 0.56 and yielded a statistically significant difference (t = 51.04, df = 5,167, λ = 38.82, p < .001), suggesting that treatment provided at SBHS was at least equivalent to the treatment conditions in the nine feedback studies. However, the observed sample effect size fell short of the benchmark margin when compared to the PCOMS Feedback treatment benchmark minus 10% (dFTORS(90%) = 1.02; t = 51.04, df = 5,167, λ = 73.11, p > .999). This finding suggests that the treatment received in the SBHS sample was not equivalent to the benchmark and did not achieve the standard of treatment found in the three PCOMS studies.
We also compared the SBHS sample to the TAU conditions from the feedback studies. Specifically, we compared the TAU benchmark for all nine feedback sample effect sizes plus 10% (dTAUall(110%) = 0.45) to the SBHS sample and found that the SBHS effect size was superior (t = 51.04, df = 5,167, λ = 32.42, p < .001). For the TAU benchmark from the three PCOMS studies (dTAUors(110%) = 0.52), we also found that the SBHS effect size was significantly larger (t = 51.04, df = 5,167, λ = 37.17, p < .001). Table 6 shows the critical d required to obtain statistical significance for each of the comparisons. These results partially support hypothesis two and suggest that treatment delivered at SBHS was comparable to the RCT feedback studies overall and superior to TAU from the same RCT feedback studies as well as PCOMS TAU, but not to the PCOMS Feedback condition of the RCTs.
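The margin logic for these full-sample comparisons mirrors the depression benchmarks: equivalence is tested against a benchmark shrunk by 10%, superiority against a TAU benchmark raised by 10%, and each margin is converted to a noncentrality parameter by multiplying by the square root of N. A sketch of that arithmetic with the values reported above (the p values themselves come from the noncentral t distribution, as in the paper):

```python
import math

n = 5168
sqrt_n = math.sqrt(n)

# Full-sample pre-post ORS effect size
d_sbhs = (25.18 - 19.38) / 8.17          # ~0.71
t_obs = d_sbhs * sqrt_n                   # ~51.0 (paper reports 51.04)

# Equivalence margin: all-feedback benchmark (0.60) minus 10%
lam_equiv = (0.60 * 0.90) * sqrt_n        # ~38.82, matching the reported value

# Superiority margin: TAU benchmark plus 10% (reported, rounded, as 0.45)
lam_sup = 0.45 * sqrt_n                   # ~32.4; rounding of the margin
                                          # accounts for the paper's 32.42
print(round(d_sbhs, 2), round(lam_equiv, 2))
```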
3 λ = noncentrality parameter used to estimate critical t.

Table 4
Effect Size Comparisons to Depression Benchmark RCT Studies

SBHS d   ITT benchmark    Completers benchmark   Waitlist control benchmark
         dcv      p       dcv      p             dcv      p
1.34     0.76     <.001   0.89     <.001         0.20     <.001

Note. Clients diagnosed with a depressive disorder in the Southwest Behavioral Health Services (SBHS) sample (n = 1,589) were compared to Minami et al.'s (2007) intent-to-treat (ITT) efficacy, completers, and waitlist control benchmarks. RCT = randomized clinical trial; dcv = critical effect size value required to attain statistical significance.

Table 5
Effect Size Comparisons for Continuous Assessment Studies

Study                   N      Outcome     d     95% CI
Current study           5,168  ORS         0.71  [0.67, 0.75]
All feedback studies    4,676  OQ-45/ORS   0.60  [0.56, 0.64]
OQ-45 studies only      4,268  OQ-45       0.57  [0.53, 0.61]
ORS studies only        408    ORS         1.13  [1.00, 1.26]

Note. Nine feedback studies were evaluated; six that utilized the Outcome Questionnaire 45.2 (OQ-45) and three that utilized the Outcome Rating Scale (ORS). CI = confidence interval.

PSYCHOTHERAPY EFFECTIVENESS IN PBH

Discussion
Given that the majority of mental health and substance abuse services occur in the public sector, there has been a surprising lack of information available about the effectiveness of psychotherapy
conducted in these settings. To our knowledge, this is the first benchmarking study of treatment outcomes at a PBH agency not limited to the transportation of an evidence-based treatment provided by a limited number of therapists to clients with a specific diagnosis. One goal of this study was to evaluate how a public behavioral system of care fared using one quality improvement strategy, continuous outcome feedback, by comparing it both to standards generated by another quality improvement strategy (benchmarks determined from meta-analyses of clinical trials) and to standards arising from the quality improvement strategy employed in the current study itself (benchmarks generated from both OQ System and PCOMS feedback studies). The use of continuous outcome feedback in the public health agency was shown to meet the standards of both strategies.
A comparison of effect size estimates revealed that psychotherapy for adult depression provided in a particular PBH setting is likely effective; providers in this study generated effect size estimates that were similar to those observed in treatment in clinical trials for major depression. In addition, the total sample effect size estimates of the PBH agency were also comparable to RCTs evaluating systematic client feedback (OQ System and PCOMS combined) but not to RCTs of PCOMS alone.
Preliminary analyses were conducted on client demographic variables such as race/ethnicity, gender, and diagnoses. The current study found that demographic variables had little impact on effectiveness. An interesting "non-finding" was that diagnosis also had little impact on differential outcome. This should be interpreted with caution, however, given that the diagnoses were based on unsystematic clinical interviews rather than structured diagnostic interviews.
Comparisons to the two noted benchmarking studies (Minami et al., 2009; Minami, Wampold, et al., 2008) of clients in the clinical range, specific to depression, revealed similar effect size estimates. The similarity is noteworthy given the representative nature of the current sample. Both of the other benchmarking studies lost considerable portions of data. For example, Minami, Wampold, et al. (2008), who conducted their study in a managed care setting in which the OQ was administered by 65% of therapists and was required only at the first, third, fifth, and every fifth session thereafter, lost over 55% of the data for lack of two data points. Recall, however, that 26% of the current data set was lost because of attrition from the first to the second session.
Evaluation of the observed effect size estimates of depressed clients compared to the depressed client waitlist control benchmark suggested that approximately 87% of the clients treated for two or more sessions at this agency were likely better off after receiving treatment than is the average client randomized into a waitlist control condition. Therefore, despite differences in clinical and demographic characteristics between the agency and the clinical trials included in the benchmark, it is reasonable to conclude that psychotherapy services that include continuous outcome feedback provided at this agency are effective.
Very few studies have systematically investigated large naturalistic data sets. We were able to identify only two other studies in addition to the benchmarking studies discussed above. Table 3 presents the comparison of the rates of clinically significant change in the current data set to those reported by Baldwin et al. (2009). The prevailing assumptions regarding the two sites may be that university counseling clients are likely to be more functional than PBH clients (more available resources, education, etc.) and therefore more likely to achieve better outcomes. While there is some support for the first assumption given the percentage of clients entering in the clinical range, the difference (9.1%) may be less than expected. The second assumption was not borne out by this study.
In the second study, even more impressive results were reported from a large U.K. sample (N = 9,703). Stiles et al. (2008) found a reliable change rate of 81.4% and a clinically significant change rate of 62%, but meaningful comparisons are prevented because they included only completers and those who had planned terminations: only 9,703 clients were included from a database of over 33,000 (Stiles et al., 2008).
The current study demonstrated outcomes superior to previous reports of outcomes in PBH settings (Hansen et al., 2002; Weersing & Weisz, 2002) and largely comparable to estimates of both benchmarks for major depression and overall feedback. Perhaps the most obvious explanation is the dose of treatment, the issue highlighted by Hansen et al. (2002), who argued that the dose of treatment (4.1 sessions in the State CMHC sample and 4.3 overall) was inadequate exposure to psychotherapy for improvement to occur. The current study provides some support for their argument given that the average was 8.9 sessions. Not supportive of the dose explanation, however, is that in as few as three sessions in the current sample (see Table 3), over 50% of clients achieved either reliable (21.7%) or clinically significant change (32.8%).
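The reliable and clinically significant change categories used throughout this literature follow Jacobson and Truax (1991): change must exceed what measurement error alone could produce (the reliable change index), and the client must cross a clinical cutoff. A minimal sketch of that classification; the reliability, normative SD, and cutoff below are illustrative placeholders, not the ORS parameters used in the study:

```python
import math

def classify_change(pre, post, sd_norm, reliability, cutoff):
    """Jacobson & Truax (1991) change classification (higher scores = better)."""
    se_meas = sd_norm * math.sqrt(1 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2) * se_meas                  # SE of a difference score
    rci = (post - pre) / s_diff                      # reliable change index
    reliable = rci > 1.96                            # change beyond measurement error
    clinically_significant = reliable and pre < cutoff <= post
    return rci, reliable, clinically_significant

# Illustrative values only (not the study's ORS parameters)
rci, reliable, csc = classify_change(pre=18.0, post=29.0,
                                     sd_norm=8.0, reliability=0.85, cutoff=25.0)
print(round(rci, 2), reliable, csc)               # 2.51 True True
```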
The addition of continuous client feedback as a quality improvement strategy provides a more likely explanation. Considering both the OQ System and PCOMS, identifying clients at risk via the routine use of outcome measures has now been shown in nine RCTs to improve outcomes. Southwest Behavioral Health Services started implementation of continuous client feedback in 2007 and now integrates PCOMS in all services (Bohanske & Franczak, 2010).
Table 6
Effect Size Comparisons to Feedback Benchmark RCT Studies

d      Feedback benchmark (all)   Feedback benchmark (ORS)   TAU benchmark (all)   TAU benchmark (ORS)
       dcv     p                  dcv     p                  dcv     p             dcv     p
0.71   0.56    <.001              1.05    >.999              0.39    <.001         0.45    <.001

Note. RCT = randomized clinical trial; ORS = Outcome Rating Scale; TAU = treatment-as-usual; dcv = critical effect size value required to attain statistical significance.

Although not addressed as a quality improvement strategy, the managed care and university counseling center benchmark studies (Minami et al., 2009; Minami, Wampold, et al., 2008) as well as Baldwin et al.'s (2009) study discussed above also routinely monitored outcomes with the OQ. The reported comparable
results to the clinical trial treatment benchmarks could be argued to be partially due to continuous outcome monitoring. This is of course an empirical question; we are only speculating given the absence of any direct comparison of continuous outcome monitoring to either TAU or a transported evidence-based treatment. Future investigations could conduct such comparisons as well as quasi-experimental or cluster randomization research on the implementation of outcome feedback in other PBH settings.
A limitation of the current study is the use of one brief outcome measure, the ORS. The ORS is by design brief and therefore feasible for routine clinical use. Its feasibility, however, is also a drawback. Although psychometrically acceptable, it does not yield the breadth or depth of information found in longer measures like the OQ-45. A major question highlighted by this study is the difference between the effect sizes of the ORS and OQ-45 found in RCTs. There are at least three possible explanations. First, the higher effect sizes of the ORS may be an artifact of the ORS itself. It may be more sensitive to, or over-represent, change compared to the OQ-45. Although the ORS and OQ-45 are moderately correlated and seem to result in similar expected treatment trajectories as well as similar clinically significant change rates in the comparison noted above, the two measures may nevertheless differ in sensitivity to change. This is currently being empirically investigated.
Second, given that the ORS is administered and discussed with clients, it may, via demand characteristics, lead to inflated scores. Follow-up results in Anker et al.'s (2009) trial included client ratings of the ORS administered via mail 6 months post treatment. The feedback effect was maintained, which suggests that demand characteristics were not responsible. Finally, the effect sizes of the ORS may be related to differences in the clinical processes associated with the two measures. The PCOMS process of discussing both outcome and the alliance at every session may explain the larger treatment gains (Anker et al., 2009; Duncan, 2012). Partial support for this possibility is provided by OQ System feedback studies in which a higher effect size occurs when clinical support tools, like alliance measures, are incorporated with the OQ-45 (Shimokawa et al., 2010). This too is an empirical question, and further research will hopefully shed light here as well.
A related issue arises from our comparison of the current total sample and the PCOMS Feedback condition of the RCTs. Although equivalent to the combined OQ System and PCOMS treatment conditions and superior to the combined TAU conditions, the current total sample was not equivalent to the PCOMS Feedback benchmark. Although we again cannot definitively explain this finding, the most obvious possibility is the differences in the samples. The RCTs were conducted in settings with clients who likely had more available resources and problems of less severity and chronicity. We are also uncertain of the adherence to the PCOMS protocol by the Southwest Behavioral Health Services therapists; it is possible that therapists in the RCTs were more compliant because they knew PCOMS was being evaluated.
The limitations of benchmarking detailed by Minami, Wampold, et al. (2008) are applicable here and also call for caution in interpreting the results: (1) although the benchmarks constructed from the feedback studies partially included the same outcome measure (ORS), and although the ORS matches the low reactivity/low specificity of the benchmark instruments in general, the efficacy and natural history benchmarks taken from Minami et al. (2007) were constructed from different measures; (2) comparisons against clinical trials are not ideal given that treatment provided in a PBH setting is drastically different from that in RCTs, with no random assignment, set number of sessions, or control of diagnostic validity, comorbidity, or therapeutic environment (although the feedback RCT studies were likely more comparable to treatment in a PBH setting, with the exception of random assignment); (3) the characteristics of the psychotherapy delivered by the therapists in this study are unknown, including their orientation or use of evidence-based treatments; (4) the context of a PBH setting and use of feedback limits the generalizability to other settings, and the extent to which there was therapist fidelity to the feedback intervention is unknown; (5) therapist effects were not modeled in this study or in the clinical trials used to set the benchmarks, except in Anker et al.'s (2009) and Reese et al.'s (2010) studies; and (6) benchmarking cannot explain why clinical trials and natural settings are similar or different given the profound differences in client and therapist factors. As Minami, Wampold, et al. (2008) concluded, so do we: although not perfect, and given that there are no benchmarks for effectiveness in PBH settings, benchmarks constructed from efficacy in clinical trials are the best currently available and provide some preliminary evidence of effectiveness in public settings.
Another limitation of the current study is that data on medication use were unavailable. Agency estimates, however, suggest that approximately 35% of clients were on psychotropic medication. Given that Minami, Wampold, et al. (2008) reported an increase in effect (d = 0.15) by use of psychotropic medication in a managed care setting (although severity was not controlled), it is possible that the agency's observed effect size calculated with only depressed clients who were not on medication could have been as low as d = 1.19 (which is still above the ITT and completer benchmarks). Replication with medication information is therefore necessary.
In light of the limitations, the current study provides preliminary evidence that one PBH agency using a continuous client feedback system has routinely been providing effective psychotherapy services. Given previous studies of PBH settings and the serious concerns raised by the President's New Freedom Commission on Mental Health (Hansen et al., 2002; Weersing & Weisz, 2002), this study offers a tentative empirical counter to those concerns. Perhaps more importantly, our results also suggest that adding routine outcome management as a quality improvement strategy may be a viable alternative to transporting evidence-based treatments to natural settings. Bohanske and Franczak (2010) make a similar argument based on what they called "efficiency variables." They reviewed PCOMS at several public agencies and reported substantial improvements in client retention, therapist productivity, and length of stay.
Laska et al. (2013) call for an integrated quality improvement strategy consisting of the dissemination of evidence-based treatments and a "common factors" approach that includes outcome feedback. Continuous feedback can be easily integrated with any quality improvement strategy, including evidence-based treatments. The OQ System and PCOMS are not specific treatment approaches for particular diagnoses and instead are atheoretical and can be applied to clients of all diagnostic categories (Duncan, 2012). Both feedback systems are congruent with the American
Psychological Association Presidential Task Force on Evidence-Based Practice (2006; Duncan & Reese, 2012) definition of evidence-based practice in psychology. Continuous outcome feedback enables the identification of clients who are not benefiting from any given treatment so that clinicians may collaboratively design different interventions. This approach to clinical practice does not prioritize evidence-based treatments as a quality improvement strategy. Rather, it calls for a more sophisticated and empirically informed clinician who chooses from a variety of orientations and methods to best fit client preferences and cultural values. Although there has not been convincing evidence for differential efficacy among approaches (Duncan et al., 2010), there is indeed differential effectiveness for a particular approach with a particular client; therapists need expertise in a broad range of intervention options, including evidence-based treatments, but measurable client response to treatment must be the ultimate goal.
Psychotherapy science continues to move from a sole focus on the RCT and efficacy to the study of effectiveness in clinical practice. Benchmarking studies have provided the methodology to further narrow the split between research and practice. Wolfe (2012) suggests that feasible outcome tools for everyday clinical practice, like the OQ and ORS, can serve to build the bridge between research and practice. We hope that our study offers a demonstration of this possibility and encourages other looks at practice in natural settings enabled by routine outcome management (Lambert, 2010). Everyday data collection could allow many research possibilities. One possibility may be a more routine use of RCT methodology in PBH settings as well as the examination of clients of diverse ethnicity and race. Current feedback studies are unfortunately quite restricted in this area. Another compelling research question yet to be addressed is why feedback results in improved outcomes. Routine outcome management using the OQ System or PCOMS could enable dismantling strategies to address this important topic. Finally, continuous outcome monitoring could allow an ongoing evaluation of quality improvement strategies.
On October 31, 1963, President John F. Kennedy (JFK) signed into law the Community Mental Health Act (also known as the Mental Retardation and Community Mental Health Centers Construction Act of 1963). It was the last piece of legislation JFK signed before his assassination. For millions of Americans, JFK's final legislation opened the door to a new era of hope and recovery, to a life in the community. With the 50th anniversary of the Community Mental Health Act of 1963 now passed, this study presents a preliminary but more hopeful picture of outcomes in PBH. While replications are necessary, our results are reassuring to those who receive, provide, or pay for services in the public sector, suggesting that therapists in a PBH setting, when given systematic outcome feedback, are effectively treating not only depression but also a range of psychological problems.
As outcome measures become more readily available to frontline practitioners and PBH agencies, a more accurate picture will likely emerge about the effectiveness of psychotherapy with those who arguably need the services most. Routine collection of outcome data, provision of individualized, responsive services, and involvement of consumers in decisions about their care hold promise not only to inform us about the effectiveness of PBH care and the classic question of what works for whom, but also to offer a viable strategy for ensuring quality for those who are often not considered in discussions of psychotherapy.
References
American Psychological Association Presidential Task Force on Evidence-Based Practice. (2006). Evidence-based practice in psychology. American Psychologist, 61, 271–285. doi:10.1037/0003-066X.61.4.271
Anker, M. G., Duncan, B. L., & Sparks, J. A. (2009). Using client feedback to improve couple therapy outcomes: A randomized clinical trial in a naturalistic setting. Journal of Consulting and Clinical Psychology, 77, 693–704. doi:10.1037/a0016062
Baldwin, S. A., Berkeljon, A., Atkins, D. C., Olsen, J. A., & Nielsen, S. L. (2009). Rates of change in naturalistic psychotherapy: Contrasting dose-effect and good-enough level models of change. Journal of Consulting and Clinical Psychology, 77, 203–211. doi:10.1037/a0015235
Bohanske, R., & Franczak, M. (2010). Transforming public behavioral health care: A case example of consumer directed services, recovery, and the common factors. In B. Duncan, S. Miller, B. Wampold, & M. Hubble (Eds.), The heart and soul of change: Delivering what works (2nd ed., pp. 299–322). doi:10.1037/12075-010
Bringhurst, D. L., Watson, C. W., Miller, S. D., & Duncan, B. L. (2006). The reliability and validity of the Outcome Rating Scale: A replication study of a brief clinical measure. Journal of Brief Therapy, 5, 23–30.
Campbell, A., & Hemsley, S. (2009). Outcome Rating Scale and Session Rating Scale in psychological practice: Clinical utility of ultra-brief measures. Clinical Psychologist, 13, 1–9. doi:10.1080/13284200802676391
Duncan, B. (2011). The Partners for Change Outcome Management System (PCOMS): Administration, scoring, interpreting update for the Outcome and Session Ratings Scale. Jensen Beach, FL: Author.
Duncan, B. (2012). The Partners for Change Outcome Management System (PCOMS): The Heart and Soul of Change Project. Canadian Psychology/Psychologie canadienne, 53, 93–104. doi:10.1037/a0027762
Duncan, B. (2014). On becoming a better therapist: Evidence-based practice one client at a time (2nd ed.). Washington, DC: American Psychological Association.
Duncan, B., Miller, S., Sparks, J., Claud, D., Reynolds, L., Brown, J., & Johnson, L. (2003). The Session Rating Scale: Preliminary psychometric properties of a "working" alliance measure. Journal of Brief Therapy, 3, 3–12.
Duncan, B., Miller, S., Wampold, B., & Hubble, M. (Eds.). (2010). The heart and soul of change: Delivering what works in therapy (2nd ed.). Washington, DC: American Psychological Association. doi:10.1037/12075-000
Duncan, B. L., & Reese, R. J. (2012). Empirically supported treatments, evidence based treatments, and evidence based practice. In G. Stricker & T. Widiger (Eds.), Handbook of psychology: Clinical psychology (2nd ed., pp. 977–1023). doi:10.1002/9781118133880.hop208021
Duncan, B., & Sparks, J. (2010). Heroic clients, heroic agencies: Partners for change (2nd ed.). Jensen Beach, FL: Author.
Duncan, B., Sparks, J., Miller, S., Bohanske, R., & Claud, D. (2006). Giving youth a voice: A preliminary study of the reliability and validity of a brief outcome measure for children, adolescents, and caretakers. Journal of Brief Therapy, 5, 71–87.
Gillaspy, J. A., & Murphy, J. J. (2011). The use of ultra-brief client feedback tools in SFBT. In C. W. Franklin, T. Trepper, E. McCollum, & W. Gingerich (Eds.), Solution-focused brief therapy (pp. 73–94). doi:10.1093/acprof:oso/9780195385724.003.0034
Hansen, N., Lambert, M., & Forman, E. (2002). The psychotherapy dose-effect and its implications for treatment delivery services. Clinical Psychology: Science and Practice, 9, 329–343. doi:10.1093/clipsy.9.3.329
This document is copyrighted by the American Psychological Association or one of its allied publishers.
This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.
10 REESE, DUNCAN, BOHANSKE, OWEN, AND MINAMI
11. Harmon, S. C., Lambert, M. J., Smart, D. W., Hawkins, E. J., Nielson, S. L.,
Slade, K., Lutz, W. (2007). Enhancing outcome for potential treatment
failures: Therapist/client feedback and clinical support tools. Psychotherapy
Research, 17, 379–392. doi:10.1080/10503300600702331
Hawkins, E. J., Lambert, M. J., Vermeersch, D. A., Slade, K., Tuttle, K.
(2004). The effects of providing patient progress information to thera-pists
and patients. Psychotherapy Research, 14, 308–327. doi:10.1093/
ptr/kph027
Jacobson, N. S., Truax, P. (1991). Clinical significance: A statistical
approach to defining meaningful change in psychotherapy research.
Journal of Consulting and Clinical Psychology, 59, 12–19. doi:10.1037/
0022-006X.59.1.12
Jokela, M., Batty, G. D., Vahtera, J., Elovainio, M., Kivimäki, M.
(2013). Socioeconomic inequalities in common mental disorders and
psychotherapy treatment in the UK between 1991 and 2009. The British
Journal of Psychiatry, 202, 115–120. doi:10.1192/bjp.bp.111.098863
Kaiser Commission on Medicaid and the Uninsured. (2011). Mental health
financing in the United States: A primer. Menlo Park, CA: The Henry J.
Kaiser Family Foundation. Retrieved from http://www.kff.org
Lambert, M. J. (2010). “Yes, it is time for clinicians to monitor treatment
outcome”. In B. L. Duncan, S. C. Miller, B. E. Wampold, M. A.
Hubble (Eds.), Heart and soul of change: Delivering what works in
therapy (2nd ed., pp. 239–266). Washington, DC: American Psycho-logical
Association.
Lambert, M. J. (2013). The efficacy and effectiveness of psychotherapy. In
M. J. Lambert (Ed.), Bergin and Garfield’s handbook of psychotherapy
and behavior change (6th ed., pp. 169–218). Hoboken, NJ: Wiley.
Lambert, M. J., Shimokawa, K. (2011). Collecting client feedback.
Psychotherapy, 48, 72–79. doi:10.1037/a0022238
Lambert, M. J., Whipple, J. L., Smart, D. W., Vermeersch, D. A., Nielsen,
S. L., Hawkins, E. J. (2001). The effects of providing therapists with
feedback on patient progress during psychotherapy: Are outcomes en-hanced?
Psychotherapy Research, 11, 49–68. doi:10.1080/713663852
Lambert, M. J., Whipple, J. L., Vermeersch, D. A., Smart, D. W., Hawkins,
E. J., Nielson, S. L., Goates, M. (2002). Enhancing psychotherapy
outcomes via providing feedback on client progress: A replication.
Clinical Psychology and Psychotherapy, 9, 91–103. doi:10.1002/cpp
.324
Laska, K. M., Gurman, A. S., Wampold, B. E. (2013, December 30).
Expanding the lens of evidence-based practice in psychotherapy: A
common factors perspective. Psychotherapy. Advance online publica-tion.
doi:10.1037/a0034332
McFall, R. M. (1996). Making psychology incorruptible. Applied and
Preventive Psychology, 5, 9–15. doi:10.1016/S0962-1849(96)80021-7
McHugh, K. R., Barlow, D. H. (2012). Dissemination and implemen-tation
of evidence-based psychological interventions. New York, NY:
Oxford University Press.
McLaughlin, K. A., Costello, E. J., Leblanc, W., Sampson, N. A., Kessler,
R. C. (2012). Socioeconomic status and adolescent mental disorders. Amer-ican
Journal of Public Health, 102, 1742–1750. doi:10.2105/AJPH.2011
.300477
Merrill, K. A., Tolbert, V. E., Wade, W. A. (2003). Effectiveness of
cognitive therapy for depression in a community mental health center: A
benchmarking study. Journal of Consulting and Clinical Psychology,
71, 404–409. doi:10.1037/0022-006X.71.2.404
Miller, S. D., Duncan, B. L. (2004). The Outcome and Session Rating
Scales: Administration and scoring manual. Jensen Beach, FL: Authors.
Miller, S. D., Duncan, B. L., Brown, J., Sparks, J., Claud, D. (2003). The
Outcome Rating Scale: A preliminary study of the reliability, validity,
and feasibility of a brief visual analog measure. Journal of Brief Ther-apy,
2, 91–100.
Minami, T., Davies, D. R., Tierney, S. C., Bettmann, J. E., McAward,
S. M., Averill, L. A., . . . Wampold, B. E. (2009). Preliminary evidence
on the effectiveness of psychological treatments delivered at a university
counseling center. Journal of Counseling Psychology, 56, 309–320.
doi:10.1037/a0015398
Minami, T., Serlin, R. C., Wampold, B. E., Kircher, J. C., Brown, G. S.
(2008). Using clinical trials to benchmark effects produced in clinical
practice. Quality Quantity, 42, 513–525. doi:10.1007/s11135-006-
9057-z
Minami, T., Wampold, B. E., Serlin, R. C., Hamilton, E. G., Brown, G. S.,
Kircher, J. C. (2008). Benchmarking the effectiveness of psychother-apy
treatment for adult depression in a managed care environment: A
preliminary study. Journal of Consulting and Clinical Psychology, 76,
116–124. doi:10.1037/0022-006X.76.1.116
Minami, T., Wampold, B. E., Serlin, R. C., Kircher, J. C., Brown, G. S.
(2007). Benchmarks for psychotherapy efficacy in adult major depres-sion.
Journal of Consulting and Clinical Psychology, 75, 232–243.
doi:10.1037/0022-006X.75.2.232
Murphy, K. P., Rashleigh, C. M., Timulak, L. (2012). The relationship
between progress feedback and therapeutic outcome in student counsel-ling:
A randomised control trial. Counselling Psychology Quarterly, 25,
1–18. doi:10.1080/09515070.2012.662349
Pan, Y. J., Stewart, R., Chang, C. K. (2013). Socioeconomic disadvan-tage,
mental disorders and risk of 12-month suicide ideation and attempt
in the National Comorbidity Survey Replication (NCS-R) in US. Social
Psychiatry and Psychiatric Epidemiology, 48, 71–79. doi:10.1007/
s00127-012-0591-9
President’s New Freedom Commission on Mental Health. (2002). Interim
report (DHHS Pub. No. SMA-03-3932). Retrieved from http://www
.mentalhealthcommission.gov/reports/interim_toc.htm
Reese, R. J., Norsworthy, L. A., Rowlands, S. R. (2009). Does a continuous
feedback system improve psychotherapy outcome? Psychotherapy: Theory,
Research, Practice, Training, 46, 418–431. doi:10.1037/a0017901
Reese, R. J., Toland, M. D., Slone, N. C., & Norsworthy, L. A. (2010). Effect
of client feedback on couple psychotherapy outcomes. Psychotherapy:
Theory, Research, Practice, Training, 47, 616–630. doi:10.1037/a0021182
Reiss, F. (2013). Socioeconomic inequalities and mental health problems in
children and adolescents: A systematic review. Social Science & Medicine,
90, 24–31. doi:10.1016/j.socscimed.2013.04.026
Serlin, R. C., & Lapsley, D. K. (1985). Rationality in psychological
research: The good-enough principle. American Psychologist, 40, 73–83.
doi:10.1037/0003-066X.40.1.73
Serlin, R. C., & Lapsley, D. K. (1993). Rational appraisal of psychological
research and the good-enough principle. In G. Keren & C. Lewis (Eds.),
A handbook for data analysis in the behavioral sciences: Methodological
issues (pp. 199–228). Hillsdale, NJ: Erlbaum.
Shimokawa, K., Lambert, M., & Smart, D. (2010). Enhancing treatment
outcome of patients at risk of treatment failure: Meta-analytic and
mega-analytic review of a psychotherapy quality assurance system. Journal of
Consulting and Clinical Psychology, 78, 298–311. doi:10.1037/a0019247
Simon, W., Lambert, M. J., Harris, M. W., Busath, G., & Vazquez, A.
(2012). Providing patient progress information and clinical support tools
to therapists: Effects on patients at risk of treatment failure. Psychotherapy
Research, 22, 638–647. doi:10.1080/10503307.2012.698918
Slade, K., Lambert, M. J., Harmon, S., Smart, D. W., & Bailey, R. (2008).
Improving psychotherapy outcome: The use of immediate electronic
feedback and revised clinical support tools. Clinical Psychology &
Psychotherapy, 15, 287–303. doi:10.1002/cpp.594
Stiles, W. B., Barkham, M., Connell, J., & Mellor-Clark, J. (2008). Responsive
regulation of treatment duration in routine practice in United
Kingdom primary care settings: Replication in a larger sample. Journal
of Consulting and Clinical Psychology, 76, 298–305.
doi:10.1037/0022-006X.76.2.298
Wade, W. A., Treat, T. A., & Stuart, G. L. (1998). Transporting an
empirically supported treatment for panic disorder to a service clinic
setting: A benchmarking strategy. Journal of Consulting and Clinical
Psychology, 66, 231–239. doi:10.1037/0022-006X.66.2.231
Weersing, V. R., & Weisz, J. R. (2002). Community clinic treatment of
depressed youth: Benchmarking usual care against CBT clinical trials.
Journal of Consulting and Clinical Psychology, 70, 299–310.
doi:10.1037/0022-006X.70.2.299
Whipple, J. L., Lambert, M. J., Vermeersch, D. A., Smart, D. W., Nielsen,
S. L., & Hawkins, E. J. (2003). Improving the effects of psychotherapy:
The use of early identification of treatment failure and problem-solving
strategies in routine practice. Journal of Counseling Psychology, 50,
59–68. doi:10.1037/0022-0167.50.1.59
Wolfe, B. E. (2012). Healing the research-practice split: Let’s start with
me. Psychotherapy, 49, 101–108. doi:10.1037/a0027114
Received March 25, 2013
Revision received March 28, 2014
Accepted April 8, 2014
This document is copyrighted by the American Psychological Association or one of its allied publishers.
This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.
REESE, DUNCAN, BOHANSKE, OWEN, AND MINAMI